
O-RAN Edge Telemetry: Multi-Destination Data Pipeline

Streamline O-RAN network observability with a single edge pipeline that collects telemetry from DU/RU/CU nodes and routes data simultaneously to multiple destinations: real-time dashboards, analytics platforms, and long-term storage.

The Problem

Telco operators managing O-RAN networks face a complex observability challenge:

  • Thousands of distributed edge nodes generate continuous telemetry (DU, RU, CU units)
  • Multiple consumption patterns require the same data:
    • Real-time dashboards (Grafana) for NOC operations
    • Long-term storage (Parquet) for capacity planning
    • Analytics platforms (Cloudera) for ML/AI insights
    • Compliance reporting for PTP timing requirements
  • The edge gap: Data needs to be collected, shaped, and enriched before it reaches your Cloudera/Kafka/Grafana backends — but there's no lightweight, Red Hat-integrated way to do it at the edge

The result: 3-5x network overhead from shipping raw data, inconsistent schemas across destinations, and no edge-native processing before data hits your backends.

The Solution

Single collection, multiple destinations: Expanso Edge pipelines run on Single Node OpenShift (SNO) directly alongside RAN workloads, collecting O-RAN telemetry once and routing it to all destinations simultaneously.

DU Telemetry (PTP, CPU, PRB, RSRP)
        │
        ▼
Expanso Edge Pipeline
(parse, normalize, enrich, filter anomalies, fan-out)
        │
        ├──► 📊 Grafana / Prometheus
        ├──► 📦 Parquet Storage
        ├──► ☁️ Cloudera CDP
        └──► 💾 Local Buffer

Where Expanso Fits In Your Stack

Your backend stays exactly as-is — Cloudera CDP, Kafka, Grafana, HDFS/Ozone. Expanso Edge handles what happens before data reaches those systems: collect, shape, augment, and deliver to multiple destinations simultaneously.

Layer   | What                                                      | How Expanso Helps
--------|-----------------------------------------------------------|------------------------------------------------------------------------
Collect | MQTT, OPC-UA, syslog, file tailing from DU/RU/CU nodes    | Single lightweight agent on OpenShift SNO — no NiFi deployment at the edge
Shape   | Parse messy telco logs, normalize schemas, filter noise   | Bloblang processors run at the edge — only clean data leaves the site
Augment | Enrich with cell site metadata, geo coordinates, compliance zones | Lookups and enrichment happen before transmission, not after
Deliver | Fan-out to Kafka, Grafana, Parquet, Cloudera — all at once | One pipeline, multiple destinations. Your backends receive ready-to-use data
Tower → Expanso Edge (collect, shape, augment) → Kafka / Cloudera / Grafana / S3
├── runs on OpenShift SNO (your existing backends, unchanged)
├── buffers locally during outages
└── managed via Red Hat-integrated operator
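The collect → shape → augment → deliver flow above can be sketched as a single pipeline configuration. This is a minimal illustration only, written in Benthos-style YAML (the configuration format Bloblang processors come from); the file paths, topic names, bucket, and endpoint URLs are placeholder assumptions, not values from your environment:

```yaml
# Illustrative sketch only — paths, topics, and endpoints are placeholders.
input:
  file:
    paths: [ "/var/log/du/telemetry/*.json" ]   # Collect: tail DU telemetry files
    codec: lines

pipeline:
  processors:
    - bloblang: |
        # Shape + Augment: normalize the record and attach site metadata
        root = this
        root.site_id = env("SITE_ID")           # cell-site identity from the node
        root.collected_at = now()

output:
  broker:
    pattern: fan_out                            # Deliver: every output gets each message
    outputs:
      - kafka:
          addresses: [ "kafka.example.internal:9092" ]
          topic: oran-telemetry                 # feeds Cloudera CDP via Kafka
      - aws_s3:
          bucket: oran-telemetry-archive        # long-term object storage
          path: 'du/${! timestamp_unix() }.json'
      - http_client:
          url: "http://otel-collector.example.internal:4318/v1/logs"
```

Each output in a `fan_out` broker receives every message independently, so a slow or unavailable destination does not block the others.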

Key Benefits

  • Edge-native processing: Collect and transform at the source, not in the datacenter
  • Up to 99% bandwidth reduction: Only shaped, filtered data leaves the edge site
  • Zero data loss: Local buffering handles burst traffic and connectivity drops
  • Unified data model: Every destination receives consistent, enriched schemas
  • Multi-destination fan-out: Same data to Kafka, Grafana, Parquet, and Cloudera simultaneously
  • Red Hat integrated: Runs as certified OpenShift operator on existing SNO infrastructure

What You'll Build

This guide walks through creating a production-ready O-RAN telemetry pipeline that:

  1. Collects PTP timing, CPU, PRB utilization, and RF metrics from DU nodes
  2. Transforms raw telemetry with Bloblang processing (compliance classification, enrichment)
  3. Routes to multiple destinations using fan-out broker pattern
  4. Monitors pipeline health with built-in observability
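For step 4, pipeline health can be exposed without extra tooling. The fragment below is a sketch of the Benthos-style observability settings the Bloblang runtime supports; the port is an assumed default, not a value from this guide:

```yaml
# Sketch: exposing pipeline health and metrics for the Grafana/Prometheus stack.
http:
  address: "0.0.0.0:4195"   # serves /ping, /ready, and /metrics endpoints
metrics:
  prometheus: {}            # Prometheus scrapes pipeline throughput and error counters
```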

Key Metrics Processed

Metric       | Source       | Purpose                | Compliance Threshold
-------------|--------------|------------------------|-----------------------------------------------
PTP4L Offset | DU Timing    | 5G sync compliance     | < ±100 ns (compliant), > ±1000 ns (critical)
PRB DL/UL %  | DU Scheduler | Resource utilization   | > 90% (congested)
CPU %        | DU System    | Performance monitoring | > 80% (alert)
RSRP/SINR    | UE Reports   | RF quality             | RSRP < -120 dBm (poor coverage)
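The thresholds above map naturally onto a Bloblang classification step. This is a hedged sketch: the input field names (`ptp4l_offset_ns`, `prb_dl_pct`, etc.) are assumptions about the DU telemetry schema, not names defined in this guide:

```coffeescript
# Bloblang sketch — input field names are assumed, adjust to your DU schema.
root = this

# PTP timing compliance per the table thresholds
root.ptp_compliance = match {
  this.ptp4l_offset_ns.abs() < 100  => "compliant",
  this.ptp4l_offset_ns.abs() < 1000 => "warning",
  _                                 => "critical",
}

# Resource, performance, and RF-quality flags
root.prb_congested  = this.prb_dl_pct > 90 || this.prb_ul_pct > 90
root.cpu_alert      = this.cpu_pct > 80
root.poor_coverage  = this.rsrp_dbm < -120
```

Running this classification at the edge means downstream dashboards and Parquet files already carry compliance labels, instead of recomputing them per destination.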

Prerequisites

  • Expanso Edge running on OpenShift SNO nodes
  • Access to DU telemetry endpoints or files
  • Grafana + OTEL Collector + Prometheus stack
  • Parquet writer capability
  • Cloudera Data Platform (CDP) or Kafka endpoint

Get Started

Choose your path:

Interactive Explorer

See each O-RAN telemetry processing technique with side-by-side transformations

Step-by-Step Tutorial

Build the pipeline incrementally:

  1. Collect O-RAN Metrics
  2. Transform and Enrich
  3. Multi-Destination Routing
  4. Grafana Dashboards
  5. Parquet and Cloudera

Complete Pipeline

Download the production-ready solution