OpenTelemetry Operator for Kubernetes

Get started

OpenTelemetry Operator for Kubernetes is an implementation of a Kubernetes Operator that manages collectors and auto-instrumentation of workloads using OpenTelemetry instrumentation libraries.

Before exploring the chart characteristics, let's start by deploying the default configuration. The default configuration requires cert-manager to be installed in your cluster:

helm install <release-name> oci://dp.apps.rancher.io/charts/opentelemetry-operator \
  --set global.imagePullSecrets={application-collection}

Please check our authentication guide if you need to configure Application Collection OCI credentials in your Kubernetes cluster.
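If the application-collection pull secret does not exist yet in the target namespace, it can be created as a standard Docker registry secret. A minimal sketch, assuming your own Application Collection username and access token:

kubectl create secret docker-registry application-collection \
  --docker-server=dp.apps.rancher.io \
  --docker-username=<your-username> \
  --docker-password=<your-access-token>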

Chart overview

The OpenTelemetry Operator for Kubernetes Helm chart distributed in Application Collection is based on the OpenTelemetry Operator Helm chart and adapted to include our best practices. As such, any chart-related documentation provided upstream will work out of the box with our chart. In addition to the upstream chart repository, you can check the official OpenTelemetry Operator for Kubernetes documentation.

The Helm chart will deploy the Operator controller component ready to handle managed instances of these Custom Resources (CR):

  • OpenTelemetryCollector: Creates an instance of the Collector that the Operator manages.
  • TargetAllocator: A tool to distribute targets of the PrometheusReceiver across all deployed Collector instances.
  • OpAMPBridge: An optional component of the OpenTelemetry Operator that can be used to report and manage the state of OpenTelemetry Collectors in Kubernetes. It implements the agent side of the OpAMP protocol and communicates with an OpAMP server.
  • Instrumentation: Defines the configuration for automatic instrumentation so the Operator knows what pods to instrument and which automatic instrumentation to use for those pods.

Check them by running kubectl api-resources --api-group=opentelemetry.io.

The default configuration expects cert-manager to be installed in your Kubernetes cluster. See the section below for more details.

Chart configuration

To view the supported configuration options and documentation, run:

helm show values oci://dp.apps.rancher.io/charts/opentelemetry-operator
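If you plan to customize several options at once, a common Helm workflow is to dump the default values into a file, edit it, and pass it back at install or upgrade time (the file name my-values.yaml below is arbitrary):

helm show values oci://dp.apps.rancher.io/charts/opentelemetry-operator > my-values.yaml
helm install <release-name> oci://dp.apps.rancher.io/charts/opentelemetry-operator -f my-values.yaml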

Use auto-generated certificates

As an alternative to using cert-manager, a self-signed certificate for the webhook component can be generated by Helm itself by installing the Helm chart with the following parameters:

helm install <release-name> oci://dp.apps.rancher.io/charts/opentelemetry-operator \
  --set global.imagePullSecrets={application-collection} \
  --set admissionWebhooks.certManager.enabled=false \
  --set admissionWebhooks.autoGenerateCert.enabled=true

RBAC permissions for the Collector

The Collector instances may need specific RBAC permissions to work. Manual configuration, by defining specific Roles or ClusterRoles, is recommended for granular control (a minimal sketch follows the command below). As an alternative, the setup can be delegated to the Operator so that the RBAC configuration for the Collectors is managed automatically. This automatic configuration can be enabled via Helm chart parameters:

helm install <release-name> oci://dp.apps.rancher.io/charts/opentelemetry-operator \
  --set global.imagePullSecrets={application-collection} \
  --set manager.createRbacPermissions=true
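If you opt for the manual approach instead, the following is a minimal sketch (names are illustrative) of a ClusterRole and ClusterRoleBinding that grant a Collector service account read access to Kubernetes events, as needed by the Kubernetes Objects Receiver used later in this guide:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-events
rules:
  - apiGroups: ["events.k8s.io"]
    resources: ["events"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-events
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector-events
subjects:
  - kind: ServiceAccount
    # Service account used by the Collector; created in the installation section below
    name: image-puller
    namespace: default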

Operations

Install cert-manager

By default, OpenTelemetry Operator for Kubernetes depends on cert-manager to issue and renew the self-signed certificate for the webhook component. Read more on this here: TLS Certificate Requirement.

You need to have cert-manager installed in your cluster, which you can install from Application Collection:

helm install <cert-manager-release-name> oci://dp.apps.rancher.io/charts/cert-manager \
  --namespace cert-manager --create-namespace \
  --set global.imagePullSecrets={application-collection} \
  --set crds.enabled=true

Once cert-manager is deployed and its Custom Resource Definitions (CRDs) are installed (you can check it by running kubectl api-resources --api-group=cert-manager.io), the default configuration of the OpenTelemetry Operator for Kubernetes Helm chart can be applied on installation.

Install OpenTelemetry Collector

For demonstration purposes and simplicity, let the Helm chart create the RBAC configuration of the Collector as explained in the section above. Once the OpenTelemetry Operator controller is up and running along with its CRDs, a Collector instance can be defined as a CR.

To be able to pull the Collector image from Application Collection, you first need to define a service account with the imagePullSecrets attribute and attach it to the Collector CR via the spec.serviceAccount attribute:

kubectl create serviceaccount image-puller
kubectl patch serviceaccount image-puller --patch '{"imagePullSecrets": [{"name": "application-collection"}]}'
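Equivalently, the same service account can be defined declaratively in a manifest:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: image-puller
imagePullSecrets:
  - name: application-collection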

The following OpenTelemetryCollector CR (my-collector.yaml) is an example of a simple Collector that configures the Kubernetes Objects Receiver and an OTLP endpoint.

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: my-collector
spec:
  serviceAccount: image-puller
  mode: deployment
  config:
    receivers:
      k8sobjects:
        objects:
          - group: events.k8s.io
            mode: watch
            name: events
      otlp:
        protocols:
          http:
            endpoint: 0.0.0.0:4318
    processors: {}
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        logs:
          receivers: [k8sobjects]
          processors: []
          exporters: [debug]
        traces:
          receivers: [otlp]
          processors: []
          exporters: [debug]
kubectl apply -f my-collector.yaml

This OpenTelemetry Collector instance, configured with the Kubernetes Objects Receiver, can collect events from the cluster. The following commands demonstrate this: they create a test Kubernetes event named otel-col-test, and the logs show how the OpenTelemetry Collector collects that event:

$ kubectl create -f - <<<'{"apiVersion": "v1", "kind": "Event", "metadata": {"name": "otel-col-test"}}'
$ kubectl logs deploy/my-collector-collector | grep event.name | grep otel-col-test
     -> event.name: Str(otel-col-test)

Install Target Allocator

The optional Target Allocator component can be deployed within the OpenTelemetryCollector CR, as the example below (collector-with-ta.yaml) illustrates.

It is important to note that the Target Allocator runs in its own workload, and a service account with permissions to pull images from Application Collection needs to be specified via spec.targetAllocator.serviceAccount. For simplicity, in the example below the same service account is reused for both the Collector and the Target Allocator, but ideally they should be separate so that narrowly scoped RBAC permissions can be granted to each component.

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta
spec:
  serviceAccount: image-puller
  mode: statefulset
  targetAllocator:
    enabled: true
    serviceAccount: image-puller
  config:
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'otel-collector'
              scrape_interval: 10s
              static_configs:
                - targets: [ '0.0.0.0:8888' ]
    exporters:
      debug: {}
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]
kubectl apply -f collector-with-ta.yaml

Once the resource is applied, the Operator controller launches the corresponding workloads:

$ kubectl get opentelemetrycollector/collector-with-ta targetallocator/collector-with-ta
NAME                                                        MODE          VERSION   READY   AGE     IMAGE                                                                     MANAGEMENT
opentelemetrycollector.opentelemetry.io/collector-with-ta   statefulset   0.127.0   1/1     3m11s   dp.apps.rancher.io/containers/opentelemetry-collector:0.127.0-k8s-1.10    managed

NAME                                                 AGE     IMAGE   MANAGEMENT
targetallocator.opentelemetry.io/collector-with-ta   2m32s           managed

$ kubectl get pod --selector app.kubernetes.io/instance=default.collector-with-ta
NAME                                                 READY   STATUS    RESTARTS   AGE
collector-with-ta-collector-0                        1/1     Running   0          2m58s
collector-with-ta-targetallocator-74765648b9-fktnl   1/1     Running   0          2m58s

To allow more customization of the Target Allocator component, the standalone TargetAllocator CR can be used, as sketched below.
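A minimal, hypothetical sketch of a standalone TargetAllocator CR, assuming the v1alpha1 API; consult the Operator documentation for the full specification and for how Collector instances are associated with it (in recent Operator versions, via the opentelemetry.io/target-allocator label on the OpenTelemetryCollector):

apiVersion: opentelemetry.io/v1alpha1
kind: TargetAllocator
metadata:
  name: my-targetallocator
spec:
  # Service account with pull access to Application Collection, as before
  serviceAccount: image-puller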

Install OpAMP Bridge

The OpAMP Bridge optional component can be deployed by creating an OpAMPBridge CR (opamp-bridge.yaml). As in the previous examples, remember to associate a proper service account that allows pulling images from Application Collection:

apiVersion: opentelemetry.io/v1alpha1
kind: OpAMPBridge
metadata:
  name: opamp-bridge
spec:
  serviceAccount: image-puller
  endpoint: "<OPAMP_SERVER_ENDPOINT>"
  capabilities:
    AcceptsRemoteConfig: true
    ReportsEffectiveConfig: true
    ReportsHealth: true
    ReportsRemoteConfig: true
  componentsAllowed:
    receivers:
      - otlp
    processors:
      - memory_limiter
      - batch
    exporters:
      - otlphttp
kubectl apply -f opamp-bridge.yaml

Once the resource is applied, the Operator controller launches the corresponding workloads:

$ kubectl get opampbridge opamp-bridge
NAME           AGE   VERSION   ENDPOINT
opamp-bridge   60s   0.127.0   <OPAMP_SERVER_ENDPOINT>

$ kubectl get pod --selector app.kubernetes.io/instance=default.opamp-bridge
NAME                                        READY   STATUS    RESTARTS   AGE
opamp-bridge-opamp-bridge-d465b94f6-r9jkt   1/1     Running   0          2m1s

Read more on its usage here.

Use auto-instrumentation

The OpenTelemetry Operator supports injecting and configuring auto-instrumentation libraries for services based on several supported programming languages. The steps below show how to auto-instrument a Go service and how traces are collected by a previously deployed OpenTelemetry Collector.

To start, make sure that the OpenTelemetry Operator is allowed to automatically instrument Go applications by passing the --set manager.autoInstrumentation.go.enabled=true Helm chart parameter:

helm upgrade --install <release-name> oci://dp.apps.rancher.io/charts/opentelemetry-operator --reuse-values \
  --set manager.autoInstrumentation.go.enabled=true

Then, instruct the Operator to send the traces to the Collector created before by defining an Instrumentation CR (instrumentation.yaml):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://my-collector-collector:4318
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"
kubectl apply -f instrumentation.yaml

Confirm that the Instrumentation CR is created:

$ kubectl get instrumentation my-instrumentation
NAME                 AGE   ENDPOINT                             SAMPLER                    SAMPLER ARG
my-instrumentation   79s   http://my-collector-collector:4318   parentbased_traceidratio   1

To finish, run a Go application to auto-instrument. Remember to set the proper imagePullSecrets to pull the OpenTelemetry Instrumentation for Go image from Application Collection, and to set the expected auto-instrumentation annotations.

kubectl run alertmanager --image quay.io/prometheus/alertmanager --port 9093 \
  --overrides='{"spec": {"imagePullSecrets":[{"name": "application-collection"}]}}' \
  --annotations=instrumentation.opentelemetry.io/inject-go=true \
  --annotations=instrumentation.opentelemetry.io/otel-go-auto-target-exe=/bin/alertmanager

Observe that, by setting the annotations on the pod, a sidecar container is automatically instantiated and the HTTP request traces to the service are sent to the Collector:

$ kubectl get pod alertmanager -o jsonpath="{.spec.containers[0]['name', 'image']}"
alertmanager quay.io/prometheus/alertmanager
$ kubectl get pod alertmanager -o jsonpath="{.spec.containers[1]['name', 'image']}"
opentelemetry-auto-instrumentation dp.apps.rancher.io/containers/opentelemetry-autoinstrumentation-go:0.21.0-1.1
$ kubectl port-forward alertmanager 9093
$ curl -I localhost:9093/-/healthy
$ kubectl logs deploy/my-collector-collector | grep -A1 '/-/healthy'
     -> url.path: Str(/-/healthy)
     -> http.response.status_code: Int(200)
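Outside of quick tests with kubectl run, the same annotations would typically be set on the pod template of a workload manifest. A minimal, hypothetical Deployment sketch equivalent to the command above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alertmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
      annotations:
        # Auto-instrumentation annotations, as in the kubectl run example
        instrumentation.opentelemetry.io/inject-go: "true"
        instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/bin/alertmanager"
    spec:
      imagePullSecrets:
        # Needed to pull the instrumentation sidecar image from Application Collection
        - name: application-collection
      containers:
        - name: alertmanager
          image: quay.io/prometheus/alertmanager
          ports:
            - containerPort: 9093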

Upgrade the chart

In general, an in-place upgrade of your OpenTelemetry Operator for Kubernetes installation can be performed using the built-in Helm upgrade workflow:

helm upgrade <release-name> oci://dp.apps.rancher.io/charts/opentelemetry-operator

Be aware that changes from version to version may include breaking changes in OpenTelemetry Operator itself or in the Helm chart templates. In other cases, the upgrade process may require additional steps to be performed. Refer to the official release notes and always check the UPGRADING.md notes before proceeding with an upgrade.
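To keep upgrades deliberate, you can pin the exact chart version to roll out after reviewing its release notes:

helm upgrade <release-name> oci://dp.apps.rancher.io/charts/opentelemetry-operator --version <chart-version>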

Uninstall the chart

Removing an installed OpenTelemetry Operator for Kubernetes is simple:

helm uninstall <release-name>

OpenTelemetry Collector instances and related resources managed by the Operator are left in place when the Helm chart is uninstalled. If you don't need those instances, remove them by deleting the CRs before uninstalling the Helm chart, so that their deletion is gracefully handled by the Operator.

Remember to remove any other resources you deployed during this guide.

kubectl delete pod/alertmanager
kubectl delete instrumentation my-instrumentation
kubectl delete opampbridge opamp-bridge
kubectl delete opentelemetrycollector collector-with-ta
kubectl delete event otel-col-test
kubectl delete opentelemetrycollector my-collector
kubectl delete serviceaccount image-puller
helm uninstall <release-name>

In case you have installed cert-manager as part of this guide, remember to uninstall it as well:

helm uninstall <cert-manager-release-name>
kubectl delete namespace cert-manager