## Step 1: Configure Broker for Fan-Out
In this step, you will learn the fundamental fan-out pattern: how to take a pipeline that sends data to a single destination and modify it to send copies of each message to multiple destinations concurrently.
### The Goal
You will convert a pipeline that writes to a single file into a pipeline that writes to both a file and to your console (stdout) at the same time.
### The `broker` Output

To achieve this, you will use the `broker` output with `pattern: fan_out`. The broker acts as a distributor, and `fan_out` tells it to send a copy of every message to every output listed in its `outputs` array.
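Abstracted from this tutorial's concrete destinations, the general shape is sketched below. This is a minimal sketch in Benthos-style syntax; the file paths are placeholders, and the array can hold any number of outputs of any type.

```yaml
# Minimal fan-out skeleton (placeholder paths; any output types work here).
output:
  broker:
    pattern: fan_out        # copy every message to every output listed below
    outputs:
      - file:
          path: /tmp/copy-a.jsonl
          codec: lines
      - file:
          path: /tmp/copy-b.jsonl
          codec: lines
      # ...add further outputs here; each one receives its own copy.
```

In Benthos-style brokers, a message is typically acknowledged only after every output confirms delivery, so one slow destination can apply back-pressure to the whole pipeline.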
### Implementation
- **Start with the Foundation:** Copy the `fan-out-foundation.yaml` file to a new file named `fan-out.yaml`. This foundation file contains a simple pipeline that sends all messages to a single file (a plausible sketch of its contents appears after this list).

  ```bash
  cp examples/data-routing/fan-out-foundation.yaml fan-out.yaml
  ```
- **Modify the Output:** Open `fan-out.yaml` and replace the entire `output` section with the `broker` block below.

  ```yaml
  # Insert this into fan-out.yaml
  output:
    broker:
      pattern: fan_out
      # The outputs array lists all the destinations.
      outputs:
        # The first destination is the original file output.
        - file:
            path: /tmp/events.jsonl
            codec: lines
        # The second destination is stdout.
        - stdout:
            codec: lines
  ```

  You have now configured the pipeline to send every message to two places.
- **Deploy and Test:** Restart your pipeline with the updated configuration (a run-command sketch appears after this list), then send a test event:

  ```bash
  # Send a test event
  curl -X POST http://localhost:8080/events \
    -H "Content-Type: application/json" \
    -d '{"message": "This goes to two places at once"}'
  ```
- **Verify:** Check your console. You should see the JSON message printed directly to `stdout` by the second output. Now check the contents of the file:

  ```bash
  cat /tmp/events.jsonl
  ```

  You will see the exact same message in the file, written by the first output.
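For reference, this step does not reproduce the foundation file itself. A plausible sketch of its contents, assuming a Benthos-style `http_server` input that matches the `curl` command above, would be:

```yaml
# fan-out-foundation.yaml — plausible sketch, not the actual file.
input:
  http_server:
    address: 0.0.0.0:8080   # assumed; matches http://localhost:8080
    path: /events           # assumed; matches the POST path used above

output:
  file:
    path: /tmp/events.jsonl
    codec: lines
```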
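Likewise, the run command depends on your runtime. Assuming the standard Benthos CLI (or the compatible Redpanda Connect binary), restarting the pipeline with the new config looks like:

```bash
# Assumes the Benthos CLI; substitute your runner's equivalent command.
benthos -c fan-out.yaml
```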
You have now successfully configured a fan-out pipeline. In the next steps, you will replace these simple file and stdout outputs with real-world destinations like Kafka, S3, and Elasticsearch.
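As a preview of that swap, a third destination is just one more entry in the `outputs` array. The `kafka` fields below are illustrative, assuming a Benthos-style output; the broker itself does not change:

```yaml
outputs:
  - file:
      path: /tmp/events.jsonl
      codec: lines
  - stdout:
      codec: lines
  # Illustrative third destination (assumed Benthos-style kafka output).
  - kafka:
      addresses: [ "localhost:9092" ]
      topic: events
```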