Interactive Log Enrichment Explorer
See log enrichment in action! Use the interactive explorer below to step through 5 stages of log processing. Watch as raw application logs are progressively enriched with metadata, restructured for analytics, and prepared for efficient S3 storage.
How to Use This Explorer
- Navigate using arrow keys (← →) or click the numbered stage buttons
- Compare the Input (left) and Output (right) JSON at each stage
- Observe how fields are added (green highlight) or restructured (organizational changes)
- Inspect the YAML code showing exactly what processor was added
- Learn from the stage description explaining the technique and business benefit
Original Log Data
Raw application logs generated by your services. No enrichment has been applied yet: just basic event information that still needs lineage tracking and restructuring for analytics.
📥 Input
{
"id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"timestamp": "2024-01-15T10:30:00Z",
"level": "INFO",
"service": "demo-service",
"message": "Demo log message from edge",
"user_id": "user_123",
"request_id": "b2c3d4e5-f6a7-8901-bcde-f12345678901"
}
📤 Output
{
"id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"timestamp": "2024-01-15T10:30:00Z",
"level": "INFO",
"service": "demo-service",
"message": "Demo log message from edge",
"user_id": "user_123",
"request_id": "b2c3d4e5-f6a7-8901-bcde-f12345678901"
}
📄 New Pipeline Step: input.yaml
input:
  generate:
    interval: 2s
    mapping: |
      root.id = uuid_v4()
      root.timestamp = now()
      root.level = "INFO"
      root.service = "demo-service"
      root.message = "Demo log message from edge"
      root.user_id = "user_123"
      root.request_id = uuid_v4()
Try It Yourself
Ready to build this log enrichment pipeline? Follow the step-by-step tutorial:
Deep Dive into Each Step
Want to understand each transformation in depth?
- Step 1: Generate Test Data - Set up synthetic log generation for testing
- Step 2: Add Lineage Metadata - Track processing history and context
- Step 3: Restructure Format - Organize for analytics efficiency
- Step 4: Configure Batching - Optimize cloud storage costs
- Step 5: Export to S3 - Integrate with cloud storage
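Taken together, steps 2 through 5 map onto a processor section and an output block in the same Benthos-style configuration used for the input step. The sketch below is illustrative only and not the tutorial's exact config: the metadata field names, bucket name, and batch sizes are assumptions chosen for demonstration.

```yaml
# Sketch of the remaining pipeline stages (field names, bucket,
# and batch settings are illustrative assumptions).
pipeline:
  processors:
    - mapping: |
        # Step 2: add lineage metadata to each log record
        root = this
        root.metadata.processed_at = now()
        root.metadata.pipeline = "log-enrichment-demo"
        # Step 3: restructure for analytics by grouping event fields
        root.event = {"level": this.level, "message": this.message}
        root.level = deleted()
        root.message = deleted()

output:
  aws_s3:
    bucket: my-log-archive              # assumed bucket name
    path: logs/${! timestamp_unix() }.json.gz
    # Step 4: batch records so S3 receives fewer, larger objects
    batching:
      count: 100
      period: 10s
      processors:
        - archive:
            format: json_array          # one JSON array per object
        - compress:
            algorithm: gzip
```

Batching before upload is what keeps S3 costs down: each PUT request is billed individually, so writing one compressed 100-record object costs far less than 100 single-record writes.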
Next: Set up your environment to build this pipeline yourself