In this article, we will cover best practices for migrating from Jaeger, the deprecated OpenShift distributed tracing platform, to the OpenShift distributed tracing platform (Tempo). Support for the former product will be dropped toward the end of 2025.
These two distributed tracing backends are fundamentally different in how they store data. However, they support the same ingestion protocol, the OpenTelemetry Protocol (OTLP), and both can expose the Jaeger user interface (UI) for visualization. These capabilities allow us to provide a smooth migration path.
Tempo
The OpenShift distributed tracing platform Tempo is based on the upstream Grafana Tempo project. For persistence, Tempo can use either an object store (e.g., S3) or local storage. For production deployments, we recommend an object store. The OpenShift distributed tracing platform Tempo can be deployed with two custom resource definitions (CRDs): TempoStack and TempoMonolithic. Local storage is supported only with TempoMonolithic, which deploys a single pod with all services accessing the same local filesystem. On the other hand, TempoStack deploys Tempo services in individual pods.
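For illustration, a minimal TempoStack backed by an S3-compatible object store might look like the following sketch. It assumes a storage secret named minio already exists in the namespace; the exact fields are defined by the Tempo operator CRDs, and the manifests used later in this article are the authoritative configuration.
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storage:
    # Secret with the object store credentials and endpoint (assumed to exist).
    secret:
      name: minio
      type: s3
  storageSize: 1Gi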
For visualization, TempoStack and TempoMonolithic can be configured to expose the Jaeger UI or the new OpenShift distributed tracing UI plug-in provided by the cluster observability operator. The UI plug-in is based on the upstream CNCF Perses project and is the preferred distributed tracing UI on OpenShift. In the future, the plug-in will support fine-grained query role-based access control (RBAC) to allow users to retrieve spans only from services and namespaces they are allowed to access.
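For example, once the cluster observability operator is installed, the tracing UI plug-in can typically be enabled by creating a UIPlugin resource; a minimal sketch might look like this:
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: distributed-tracing
spec:
  # Enables the distributed tracing view in the OpenShift console.
  type: DistributedTracing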
The query API is another innovation. Tempo supports the TraceQL query language, which allows users to construct more complex queries, for instance, selecting spans by a range of HTTP response codes or using structural operators to define relationships between spans.
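For example, assuming spans carry the conventional http.status_code attribute, a TraceQL query for server errors and a structural query matching frontend spans that have a descendant span in the backend service could look like this:
{ span.http.status_code >= 500 }
{ resource.service.name = "frontend" } >> { resource.service.name = "backend" }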
Tempo versus Jaeger
The OpenShift distributed tracing platform Jaeger uses Elasticsearch 6 as a backend persistent store. Compared to Tempo, it does not require an object store; however, it relies on fast persistent volumes. The Jaeger backend implementation also does not support multi-tenancy, which is natively implemented in Tempo.
Migrate from Jaeger to Tempo
The foundational idea when migrating from Jaeger to Tempo is to run both systems simultaneously for a period of time until the data from Jaeger is no longer needed or is removed due to retention. Therefore, the migration configuration has to ensure the trace data is sent to both systems simultaneously for some time, or only to Tempo.
Before we look into the migration, let's first deploy Tempo into a tempo-observability namespace by applying the manifests from migrate-from-jaeger-to-tempo/tempo-observability. They deploy a multi-tenant Tempo instance and an OpenTelemetry collector that pushes data to the dev tenant. In this new setup, applications should send data to the OpenTelemetry collector and not directly to the Tempo backend. This flexible architecture enables users to make important changes to their setup in the OpenTelemetry collector. For instance, users can send all data or a subset of it to another backend, filter out sensitive data, or apply additional downsampling.
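For illustration, here is a hypothetical fragment of the collector configuration that scrubs a sensitive span attribute and downsamples traces; the attribute name and sampling percentage are made-up examples, and the processors would also have to be added to the traces pipeline.
processors:
  attributes/scrub:
    actions:
      # Remove a potentially sensitive span attribute (hypothetical attribute name).
      - key: http.request.header.authorization
        action: delete
  probabilistic_sampler:
    # Keep only a percentage of traces (illustrative value).
    sampling_percentage: 25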
As mentioned above, the data in Tempo can be visualized in the Jaeger UI deployed alongside Tempo or via the new OpenShift distributed tracing UI plug-in. You can access the Jaeger UI and the OpenShift UI plug-in from the Observe menu in the OpenShift console.
Reconfiguring applications
The first approach we will explore is changing the instrumentation to report data to the newly deployed Tempo instance. There are two ways the application can send data to the tracing backend: it either configures the exporter in the software development kit (SDK) to send data directly to the backend, or it sends data to a sidecar.
Reconfigure SDK exporter
Configuring the SDK exporter depends on how the application is built and configured. Some applications might hardcode the exporter in the source code or define it as an environment variable.
In this case, the workloads should be configured to send data to the OpenTelemetry collector in the tempo-observability namespace. The dev-collector service exposes all protocols the Jaeger agent and collector support.
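With an OpenTelemetry SDK, this usually means pointing the standard OTLP environment variables at the collector service. For example, a fragment of the application's container spec could look like this (the service name is a made-up placeholder):
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://dev-collector.tempo-observability.svc.cluster.local:4318
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: http/protobuf
  - name: OTEL_SERVICE_NAME
    value: my-app # hypothetical service name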
Switch from the Jaeger sidecar to OpenTelemetry
You can configure the application to send data to the Jaeger sidecar, which runs as an additional container in the application pod. In this scenario, the OpenTelemetry collector sidecar can replace the Jaeger sidecar. The sidecar is injected into the pod by adding the annotation sidecar.opentelemetry.io/inject=otel-sidecar to the pod annotations in the deployment (an example fragment is shown after the collector configuration). The following OpenTelemetry collector CR configures a sidecar that can receive data in all Jaeger-supported protocols, as well as OTLP and Zipkin:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-sidecar
spec:
  mode: sidecar
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      jaeger:
        protocols:
          thrift_compact:
            endpoint: 0.0.0.0:6831
          thrift_binary:
            endpoint: 0.0.0.0:6832
          thrift_http:
            endpoint: 0.0.0.0:14268
          grpc:
            endpoint: 0.0.0.0:14250
      zipkin:
        endpoint: 0.0.0.0:9411
    processors:
      resourcedetection/env:
        detectors: [ env ]
        timeout: 2s
        override: false
    exporters:
      # Forward traces to the new Tempo instance via the dev-collector service.
      otlphttp/tempo:
        endpoint: http://dev-collector.tempo-observability.svc.cluster.local:4318
      # Optionally keep forwarding traces to the existing Jaeger collector during the migration.
      otlphttp/jaeger:
        endpoint: http://jaeger-collector.ploffay.svc.cluster.local:4318
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp, jaeger, zipkin]
          processors: [resourcedetection/env]
          exporters: [debug, otlphttp/tempo]
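For reference, the injection annotation belongs on the pod template of the workload. A fragment of a hypothetical application deployment could look like this:
spec:
  template:
    metadata:
      annotations:
        # Tells the OpenTelemetry operator to inject the otel-sidecar collector into this pod.
        sidecar.opentelemetry.io/inject: "otel-sidecar"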
If you wish to forward data to Jaeger as well, configure the otlphttp/jaeger exporter endpoint and add the exporter to the traces pipeline.
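The amended traces pipeline would then look like this:
service:
  pipelines:
    traces:
      receivers: [otlp, jaeger, zipkin]
      processors: [resourcedetection/env]
      exporters: [debug, otlphttp/tempo, otlphttp/jaeger]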
Switch OpenShift Service Mesh from Jaeger to Tempo with the OpenTelemetry collector
You can configure Red Hat OpenShift Service Mesh to send trace data directly to the trace backend (Jaeger) and optionally provision it, or to send data to an OpenTelemetry collector. For migration, the latter case is easy to handle: we can simply reconfigure the collector by adding an exporter that sends data to the newly provisioned Tempo backend.
If OpenShift Service Mesh (OSSM) was configured to provision the Jaeger backend (as shown next), we can reconfigure it to send trace data to an OpenTelemetry collector first, where we can easily configure an additional exporter to the newly deployed Tempo backend.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.6
  mode: ClusterWide
  tracing:
    type: Jaeger
    sampling: 10000
  addons:
    kiali:
      enabled: false
      name: kiali
    grafana:
      enabled: false
    jaeger:
      name: jaeger
      install:
        storage:
          type: Memory
        ingress:
          enabled: true
The following manifests deploy an OpenTelemetry collector in the OpenShift Service Mesh namespace that forwards trace data to both Tempo and the old Jaeger instance. They also reconfigure OSSM to send the data to the collector, as follows:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: istio-system
spec:
  mode: deployment
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    exporters:
      debug:
        verbosity: detailed
      otlphttp/tempo:
        endpoint: http://dev-collector.tempo-observability.svc.cluster.local:4318
      otlphttp/jaeger:
        endpoint: http://jaeger-collector.istio-system.svc.cluster.local:4318
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [debug, otlphttp/tempo, otlphttp/jaeger]
---
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.6
  mode: ClusterWide
  tracing:
    type: Jaeger
    sampling: 10000
  addons:
    kiali:
      enabled: false
      name: kiali
    grafana:
      enabled: false
    jaeger:
      name: jaeger
      install:
        storage:
          type: Memory
        ingress:
          enabled: true
  meshConfig:
    extensionProviders:
      - name: otel
        opentelemetry:
          port: 4317
          service: otel-collector.istio-system.svc.cluster.local
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  tracing:
    - providers:
        - name: otel
      randomSamplingPercentage: 100
Next steps
In this article, we covered possible approaches for migrating from the Jaeger to the Tempo trace backend. The right approach ultimately depends on how the tracing infrastructure and instrumentation are set up. However, given the flexibility of the OpenTelemetry collector, which supports all Jaeger and Zipkin ingestion protocols, the migration is straightforward.
You can find all the manifests from this article on GitHub.