Interactive Splunk Edge Processing Explorer
See Splunk edge processing in action! Use the interactive explorer below to step through 5 stages of log processing. Watch as raw syslog messages are progressively parsed, filtered, enriched, and prepared for efficient Splunk HEC ingestion.
How to Use This Explorer
- Navigate using arrow keys (← →) or click the numbered stage buttons
- Compare the Input (left) and Output (right) JSON at each stage
- Observe how fields are added (highlighted in green) or filtered out (marked as removed)
- Inspect the YAML code showing exactly what processor was added
- Learn from the stage description explaining the technique and business benefit
Raw Syslog Data
Unprocessed syslog messages from application servers. In traditional Splunk, ALL of this data gets indexed at $200/TB, including verbose DEBUG messages and noise.
📥 Input
2024-01-15 10:30:15 INFO [main] Application started successfully
2024-01-15 10:30:16 DEBUG [worker-1] Initializing connection pool
2024-01-15 10:30:16 DEBUG [worker-1] Pool size: 10, timeout: 30s
2024-01-15 10:30:17 WARN [auth] Failed login attempt: user=admin ip=192.168.1.100
2024-01-15 10:30:18 ERROR [db] Connection timeout to database server
2024-01-15 10:30:19 DEBUG [health] Health check passed - all services OK
📤 Output
2024-01-15 10:30:15 INFO [main] Application started successfully
2024-01-15 10:30:16 DEBUG [worker-1] Initializing connection pool
2024-01-15 10:30:16 DEBUG [worker-1] Pool size: 10, timeout: 30s
2024-01-15 10:30:17 WARN [auth] Failed login attempt: user=admin ip=192.168.1.100
2024-01-15 10:30:18 ERROR [db] Connection timeout to database server
2024-01-15 10:30:19 DEBUG [health] Health check passed - all services OK
📄 New Pipeline Step: splunk-input.yaml
input:
  file:
    paths: [ "/var/log/app/*.log" ]
    multiline:
      # Lines that do NOT start with a date are treated as continuations
      # (e.g. stack traces) and appended to the previous event
      pattern: '^\d{4}-\d{2}-\d{2}'
      negate: true
      match: after

Try It Yourself
Ready to build this Splunk edge processing pipeline? Follow the step-by-step tutorial below.
Deep Dive into Each Step
Want to understand each transformation in depth?
- Step 1: Collect Like inputs.conf - Set up file monitoring and multiline parsing
- Step 2: Parse Like props.conf - Extract fields with Bloblang transformations
- Step 3: Filter Before Indexing - Drop noise and reduce costs
- Step 4: Route to Splunk HEC - Configure HEC output with proper tagging
- Step 5: Advanced Splunk Patterns - Multi-destination routing and compliance
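The parse, filter, and route steps above can be sketched in a single pipeline. This is a minimal illustration, not the tutorial's exact configuration: it assumes a Benthos/Redpanda Connect-style pipeline (suggested by the Bloblang reference in Step 2), and the regex, endpoint URL, and token variable are placeholders you would adapt to your environment.

```yaml
pipeline:
  processors:
    # Step 2: extract structured fields from the raw line with Bloblang
    - mapping: |
        let parts = content().string().re_find_object("^(?P<ts>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<level>\\w+) \\[(?P<component>[^\\]]+)\\] (?P<message>.*)$")
        root.time = $parts.ts
        root.level = $parts.level
        root.component = $parts.component
        root.message = $parts.message
    # Step 3: drop DEBUG noise at the edge, before it is ever indexed
    - mapping: |
        root = if this.level == "DEBUG" { deleted() }

output:
  # Step 4: ship the surviving events to Splunk HEC
  splunk_hec:
    url: "https://splunk.example.com:8088"   # placeholder endpoint
    token: "${SPLUNK_HEC_TOKEN}"             # read from the environment
```

Applied to the six sample events above, a sketch like this would drop the three DEBUG lines and forward the INFO, WARN, and ERROR events as structured JSON, which is the cost-reduction effect the stages demonstrate.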
Next: Set up your environment to build this pipeline yourself