Interactive Log Severity Filtering Explorer
See log filtering in action! Use the interactive explorer below to step through three stages of severity-based filtering, and watch DEBUG logs get dropped at the edge, cutting storage costs by up to 90% while improving query performance.
How to Use This Explorer
- Navigate using arrow keys (← →) or click the numbered stage buttons
- Compare the Input (left) and Output (right) showing filtering at each stage
- Observe how DEBUG logs are removed (highlighted in red)
- Inspect the YAML code showing the filtering and routing logic
- Learn from the stage description explaining the cost and performance benefits
All Log Levels Mixed
Raw logs arrive with all severity levels mixed together. DEBUG logs often outnumber ERROR logs 10-100x in production, consuming storage and hiding critical issues.
📥 Input

```json
{"level":"DEBUG","msg":"Cache hit","user_id":42}
{"level":"DEBUG","msg":"SQL query: SELECT *","user_id":42}
{"level":"INFO","msg":"User login","user_id":42}
{"level":"DEBUG","msg":"Memory usage: 45%","user_id":42}
{"level":"WARN","msg":"Slow query 2.3s","user_id":42}
{"level":"ERROR","msg":"Payment failed","user_id":42}
{"level":"DEBUG","msg":"Request finished","user_id":42}
```
📤 Output

```json
{"level":"DEBUG","msg":"Cache hit","user_id":42}
{"level":"DEBUG","msg":"SQL query: SELECT *","user_id":42}
{"level":"INFO","msg":"User login","user_id":42}
{"level":"DEBUG","msg":"Memory usage: 45%","user_id":42}
{"level":"WARN","msg":"Slow query 2.3s","user_id":42}
{"level":"ERROR","msg":"Payment failed","user_id":42}
{"level":"DEBUG","msg":"Request finished","user_id":42}
```
📄 New Pipeline Step: step-0-unfiltered.yaml

```yaml
input:
  http_server:
    address: 0.0.0.0:8080
    path: /logs
# No filtering - all logs flow through
```

Try It Yourself
Ready to build cost-optimized log pipelines? Follow the step-by-step tutorial:
Deep Dive into Each Step
Want to understand each filtering technique in depth? Each step has its own detailed guide, and minimal config sketches follow the list below.
- Step 1: Parse JSON & Add Metadata - Extract severity and classify priority
- Step 2: Filter by Severity - Drop DEBUG/TRACE logs at the edge
- Step 3: Route by Severity - Send critical logs to Elasticsearch, info to S3
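For quick reference, here are minimal sketches of each step. They assume a Benthos / Redpanda Connect-style pipeline, matching the `http_server` input above; any URLs, index names, and bucket names are illustrative placeholders rather than values from the tutorial. Step 1 parses the raw payload as JSON and lifts severity into metadata so later stages can classify and route on it:

```yaml
pipeline:
  processors:
    - mapping: |
        # Parse the raw payload as structured JSON
        root = content().parse_json()
        # Promote severity into metadata for cheap downstream checks
        meta severity = root.level
        # Classify priority: ERROR/WARN are high, everything else is low
        meta priority = if root.level == "ERROR" || root.level == "WARN" {
          "high"
        } else {
          "low"
        }
```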
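Step 2 drops low-value severities at the edge. In Bloblang, assigning `deleted()` to the root removes the message from the pipeline entirely, so DEBUG and TRACE logs never reach an output:

```yaml
pipeline:
  processors:
    - mapping: |
        # Drop DEBUG/TRACE before they consume storage or bandwidth
        root = if ["DEBUG", "TRACE"].contains(this.level) { deleted() }
```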
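Step 3 fans messages out with a `switch` output: high-severity logs go to a fast, queryable store while everything else lands in cheap object storage. The Elasticsearch URL and S3 bucket below are placeholder values:

```yaml
output:
  switch:
    cases:
      # Critical logs: index in Elasticsearch for fast querying
      - check: this.level == "ERROR" || this.level == "WARN"
        output:
          elasticsearch:
            urls: ["http://localhost:9200"]  # placeholder cluster address
            index: critical-logs             # placeholder index name
      # Everything else: archive in S3 (an empty check matches all)
      - output:
          aws_s3:
            bucket: info-logs-archive        # placeholder bucket name
            path: 'logs/${! timestamp_unix() }.json'
```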
Next: Set up your environment to build log filtering pipelines yourself