Understanding OpenTelemetry Part 2: Main Components of OpenTelemetry



The OpenTelemetry project consists of application programming interfaces (APIs), software development kits (SDKs), tools, a specification of a data model for telemetry data (metrics, logs, and traces), and semantic conventions for that data. It also defines a centralized collector and exporters that send telemetry data to the major platforms, so you can gain visibility into your applications' performance with whatever observability platform you choose. To understand how all of these components work together, let's look at an architecture diagram and then examine each of the components in more detail:

OpenTelemetry 101 architecture

This video tour introduces OpenTelemetry concepts and components, including the API, the SDK, the OpenTelemetry protocol, and the semantic conventions that let you work across different programming languages. You can read the sections below for a summary of each. It all starts with how the OpenTelemetry API is decoupled from its implementation in the SDK, allowing you to consume the OpenTelemetry API even without adopting OpenTelemetry into your stack.

OpenTelemetry API

The OpenTelemetry API is used by application developers and library authors to instrument their code and generate telemetry data: traces, metrics, and logs. OpenTelemetry has an API implementation for each of the popular programming languages, and the API is completely decoupled from its implementation (the SDK). The API ships with a minimal (no-op) implementation, which means you can use the OpenTelemetry API without needing to adopt OpenTelemetry into your stack.

OpenTelemetry SDK

The OpenTelemetry SDK is an implementation of the OpenTelemetry API. Like the API, it also has an implementation for popular programming languages. Application developers use it to configure OpenTelemetry for their environment.

OpenTelemetry Protocol (OTLP)

The OTLP specification defines the encoding of telemetry data and the protocol used to exchange data between the client and the server. The specification defines how OTLP is implemented over the open source gRPC protocol and HTTP 1.1 transport, and it specifies the Protocol Buffers scheme used for payloads.
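To make the wire format concrete, here is a hand-built sketch of the JSON encoding that OTLP/HTTP uses for a single span (the field names follow the OTLP Protocol Buffers schema; the IDs, timestamps, and service name are invented for illustration):

```python
import json

# Minimal OTLP/HTTP JSON trace payload: resource -> scope -> spans.
# In practice an SDK exporter builds this from the Protocol Buffers
# schema and sends it to the server (e.g., a collector's /v1/traces path).
payload = {
    "resourceSpans": [{
        "resource": {
            "attributes": [
                {"key": "service.name", "value": {"stringValue": "checkout"}}
            ]
        },
        "scopeSpans": [{
            "scope": {"name": "example.library"},
            "spans": [{
                "traceId": "5b8efff798038103d269b633813fc60c",
                "spanId": "eee19b7ec3c1b174",
                "name": "handle_request",
                "kind": 2,  # SPAN_KIND_SERVER
                "startTimeUnixNano": "1626800000000000000",
                "endTimeUnixNano": "1626800000150000000",
            }]
        }]
    }]
}

encoded = json.dumps(payload)
```

The same logical model is carried over gRPC using the binary Protocol Buffers encoding; the JSON form above is simply easier to read.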

OpenTelemetry semantic conventions

OpenTelemetry defines semantic conventions for common operations performed by software, so that, for example, the attributes describing HTTP calls, database calls, and resources are consistent regardless of the platform or language used.

For example, if a Python application calls a .NET application, you can be assured that both applications conform to the HTTP conventions, such as the http.method attribute, which specifies how the call was made (e.g., GET, POST, or PUT). These attributes will be consistent across the Python and .NET applications.

Resources are primarily attributes that describe the environment your application is running in, such as the name of the service, the Kubernetes node, or the name of the pod.
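As an illustrative sketch (the attribute values are invented), the span attributes and resource attributes for such a service might look like this:

```python
# Span attributes following the HTTP semantic conventions: the same
# keys are emitted whether the instrumented service is Python or .NET.
span_attributes = {
    "http.method": "GET",          # how the call was made
    "http.status_code": 200,
    "http.route": "/users/{id}",
}

# Resource attributes describing the environment the service runs in.
resource_attributes = {
    "service.name": "user-service",
    "k8s.node.name": "node-1",
    "k8s.pod.name": "user-service-7d9f",
}
```

Because both sets of keys come from the shared conventions, a back end can correlate and query telemetry from mixed-language systems uniformly.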

The OpenTelemetry collector

So, now that you know how the API and SDK work, imagine a location, like a customs office in an international airport, through which all telemetry data can flow between the various telemetry tools and observability platforms. The OpenTelemetry Collector is an implementation that serves as that central hub, receiving, processing, and exporting telemetry data regardless of which tools send it. This video introduces the architecture of the OpenTelemetry Collector and explains how to deploy it in your environment.

The OpenTelemetry Collector supports many popular open formats of telemetry data and has three main components:

  • Receivers to ingest data, such as the OTLP receiver (which supports the native OpenTelemetry format) and receivers for other popular open source formats, such as Jaeger for trace data and Prometheus for metric data.
  • Processors to configure a pipeline for each of the data sources (metrics, traces, and logs) so you can filter, sample, and enrich your telemetry data in a centralized location.
  • Exporters to send data to a back-end observability tool, such as New Relic.
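Tying the three components together, a collector configuration sketch might look like the following (the endpoints, scrape target, and back-end address are illustrative assumptions, not a complete production configuration):

```yaml
receivers:
  otlp:                      # native OpenTelemetry format
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:                    # open source trace format
    protocols:
      grpc:
  prometheus:                # open source metric format
    config:
      scrape_configs:
        - job_name: otel
          static_configs:
            - targets: ["localhost:8888"]

processors:
  batch:                     # batch telemetry before export

exporters:
  otlp:
    endpoint: backend.example.com:4317   # illustrative back-end endpoint

service:
  pipelines:
    traces:
      receivers: [otlp, jaeger]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [otlp]
```

Each pipeline in the `service` section wires receivers, processors, and exporters together for one data source, which is what lets the collector act as that central customs office.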

OpenTelemetry Collector can be deployed as a gateway or as an agent.

  • If deployed as a gateway, all services report telemetry data to a centralized location, where it is then exported to a back-end observability tool.

  • If deployed as an agent, the collector runs on the same host as the service that reports data to it. In this case, the collector can also gather telemetry data about the host itself and act as an infrastructure agent. A collector deployed as an agent can send data directly to a back end, or to another collector that then exports it to a back end.

OpenTelemetry and OTLP exporters

Before telemetry data can reach a back end, it must be translated into the destination's format and transported there. Exporters translate telemetry data into the particular format a back-end observability system requires, and they also handle transmitting the data to that system. You can use an exporter directly in your service, or proxy the data through the collector, so that you can deploy a new exporter without redeploying the service.

To take part in New Relic's native OTLP ingestion program, you can run an OpenTelemetry Collector in your environment and configure it to use the OTLP exporter. The collector can then export data to New Relic's native OTLP endpoint in the native OpenTelemetry data format. Learn more in the following video:

Next steps

  • Sign up for the New Relic free tier and start sending your OpenTelemetry data today.

  • Watch for an upcoming blog post on OpenTelemetry data sources.

Warning

New Relic Inc. published this content on July 20, 2021 and is solely responsible for the information it contains. Distributed by Public, unedited and unmodified, on July 20, 2021 04:15:05 PM UTC.
