
Step 3: Multi-Level Fallback

Sometimes, even a single fallback isn't enough. What if your primary database fails, and your secondary option (like writing to a log file) also fails due to a disk space issue? For critical data, you need a multi-level fallback strategy.

In this step, you will use the fallback output to create a chain of destinations, ensuring that if one fails, the pipeline automatically tries the next.

The fallback Output

The fallback output takes a list of outputs and tries each one in order until one succeeds.

output:
  fallback:
    - # 1. Try this output first
    - # 2. If the first one fails, try this one
    - # 3. And so on...
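For example, a minimal two-level chain might send events to an HTTP service and buffer them locally whenever that write fails. This is a sketch, not part of the tutorial pipeline; the http_client URL is a placeholder:

output:
  fallback:
    # 1. Try a downstream HTTP service first.
    - http_client:
        url: http://localhost:4195/ingest   # placeholder endpoint
        verb: POST

    # 2. If the HTTP write fails, buffer the event locally instead.
    - file:
        path: /tmp/events-buffer.jsonl
        codec: lines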

Implementation

You will now modify the circuit breaker pipeline from the previous step (http-circuit-breaker.yaml) to add a multi-level fallback.

Goal:

  1. Primary: Try to write the enriched data to a PostgreSQL database.

  2. Secondary (Fallback): If the database fails, write the event to a local file for later recovery.

  3. Tertiary (Dead-Letter Queue): If writing to the local file also fails, log a critical error and drop the event to prevent the pipeline from crashing.

  1. Copy the Previous Step's Pipeline:

    cp http-circuit-breaker.yaml multi-level-fallback.yaml
  2. Modify the Output: Open multi-level-fallback.yaml and replace the entire output section with the fallback block below.

    Insert this into multi-level-fallback.yaml

    output:
      fallback:
        # PRIMARY: Try to insert into the database first.
        # The insert is only attempted if enrichment in the pipeline succeeded.
        - switch:
            cases:
              - check: this.db_enriched == true
                output:
                  sql_insert:
                    driver: postgres
                    data_source_name: ${DB_CONNECTION_STRING}
                    table: "enriched_events"
                    columns:
                      - event_id
                      - user_id
                      - event_type
                      - enriched_payload
                    args_mapping: |
                      root = [
                        this.event_id,
                        this.user_id,
                        this.event_type,
                        this.format_json()
                      ]

        # SECONDARY: If the database insert fails (e.g., DB is down),
        # write the event to a local buffer file. The file name is
        # date-stamped so the buffer rotates daily.
        - file:
            path: /tmp/fallback-buffer-${!now().ts_format("2006-01-02")}.jsonl
            codec: lines

        # TERTIARY (DLQ): If writing to the file fails (e.g., disk full),
        # log a critical error and drop the message to prevent a crash.
        - processors:
            - log:
                level: FATAL
                message: "All fallbacks failed. Dropping event: ${!this.event_id}"
          drop: {}
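
    Before deploying, it helps to validate the edited config. Assuming you are running this tutorial with the Benthos CLI (Redpanda Connect ships an equivalent lint command), the built-in linter catches indentation and field-name mistakes:

    benthos lint multi-level-fallback.yaml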

3. Deploy and Test
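If the pipeline is not already running, start it first (again assuming the Benthos CLI):

    benthos -c multi-level-fallback.yaml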

  1. Test the Primary Path (Database Online): Make sure your PostgreSQL container is running and send a request. The event should be written to the database.

    # Ensure Postgres is running
    docker compose -f services/postgres.yml start

    # Send a request
    curl -X POST http://localhost:8084/user-events \
    -H "Content-Type: application/json" \
    -d '{"user_id": "user_001", "event_type": "login"}'
  2. Test the Secondary Path (Database Offline): Stop the database and send another request. The event should now be written to a file in /tmp/.

    # Stop Postgres
    docker compose -f services/postgres.yml stop

    # Send a request
    curl -X POST http://localhost:8084/user-events \
    -H "Content-Type: application/json" \
    -d '{"user_id": "user_002", "event_type": "logout"}'

    # Check the buffer file
    ls /tmp/fallback-buffer-*.jsonl
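
    Each line in the buffer file is one JSON event, ready to be replayed later:

    # Inspect the buffered events
    cat /tmp/fallback-buffer-*.jsonl

    The tertiary path is harder to trigger on demand. One option, purely as an experiment and not part of the original pipeline, is to temporarily point the file output's path at an unwritable location (for example, a read-only directory), resend a request, and watch for the FATAL log line as the event is dropped.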

You have now built a resilient pipeline that can survive multiple downstream failures.