This repository has been archived by the owner on May 8, 2024. It is now read-only.

Final touches (#11)
* fixed

* How to perform the exercise

* add python alias

* changed images to latest

* how to use this lab

* changed fail to RT exception

* updated intro to lab

* add intro for OTLP and Collector, basic spell checks

---------

Co-authored-by: Jens Plüddemann <[email protected]>
Co-authored-by: maeddes <[email protected]>
Co-authored-by: Jan-Niklas Tille <[email protected]>
4 people authored Apr 16, 2024
1 parent 802ecf7 commit 39de853
Showing 8 changed files with 64 additions and 74 deletions.
1 change: 1 addition & 0 deletions Dockerfile
@@ -23,6 +23,7 @@ RUN apt-get -qq update
RUN apt-get -qq install \
python3 \
python3-pip \
python-is-python3 \
docker-ce \
docker-ce-cli \
containerd.io \
4 changes: 2 additions & 2 deletions docker-compose.yml
@@ -1,6 +1,6 @@
services:
application:
image: ghcr.io/jensereal/otel-getting-started-application:adds-github-workflow
image: ghcr.io/jensereal/otel-getting-started-application:latest
build:
context: .
dockerfile: Dockerfile
@@ -14,7 +14,7 @@ services:
image: kennethreitz/httpbin

tutorial:
image: ghcr.io/jensereal/otel-getting-started-tutorial:adds-github-workflow
image: ghcr.io/jensereal/otel-getting-started-tutorial:latest
build:
context: tutorial/
dockerfile: Dockerfile
2 changes: 1 addition & 1 deletion tutorial/content/labs/instrumentation/_index.md
@@ -23,7 +23,7 @@ This includes the concept of semantic conventions, which aim to standardize mean
Ensuring that telemetry is interpreted consistently regardless of the vendors involved fosters interoperability.
Finally, the specification also defines the OpenTelemetry Protocol (OTLP).

Using the SDK, telemetry data can be generated within applications. This can be accomplished in two ways: automatic and manual. With automatic instrumentation, predefined metrics, traces and logs are collected within a library or framework. This yields a standard set of telemetry data that can be used to get started quickly with observability. Auto-instrumentation is either already added to a library or framework by the authors or can be added using agents, but we will learn about this later. With manual instrumentation, more specific telemetry data can be generated. To use manual instrumentation, the source code usually has to be modified, except when you are using an agent like [inspectIT Ocelot](https://inspectit.rocks/) that can inject manual instrumentation code into your application. This allows for greater control to collect telemetry data that is tailored to your needs. Manual instrumentation is a big part of the following labs chapter.

The benefit of instrumenting code with OpenTelemetry to collect telemetry data is that the correlation of the previously mentioned signals is simplified since all signals carry metadata. Correlating telemetry data enables you to connect and analyze data from various sources, providing a comprehensive view of your system's behavior. By setting a unique correlation ID for each telemetry item and propagating it across network boundaries, you can track the flow of data and identify dependencies between different components. OpenTelemetry's trace ID can also be leveraged for correlation, ensuring that telemetry data from the same request or transaction is associated with the same trace. Correlation engines can further enhance this process by matching data based on correlation IDs, trace IDs, or other attributes like timestamps, allowing for efficient aggregation and analysis. Correlated telemetry data provides valuable insights for troubleshooting, performance monitoring, optimization, and gaining a holistic understanding of your system's behavior. In the labs chapter you will see what correlated data looks like. Traditionally this had to be done by hand or just by timestamps, which was a tedious task.
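To make the idea concrete, here is a deliberately simplified, library-free sketch (the field names are hypothetical, not the OpenTelemetry data model) of how a correlation engine might group telemetry items from different signals by a shared trace ID:

```python
from collections import defaultdict

def correlate_by_trace_id(items):
    """Group telemetry items (spans, logs, ...) by their trace ID."""
    groups = defaultdict(list)
    for item in items:
        groups[item["trace_id"]].append(item)
    return dict(groups)

# Items from different signals belonging to two separate requests.
telemetry = [
    {"signal": "span", "trace_id": "abc", "name": "GET /todos"},
    {"signal": "log",  "trace_id": "abc", "body": "fetched 3 items"},
    {"signal": "span", "trace_id": "def", "name": "POST /todos"},
]

correlated = correlate_by_trace_id(telemetry)
# correlated["abc"] now holds both the span and the log of the first request.
```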

23 changes: 8 additions & 15 deletions tutorial/content/labs/instrumentation/manual/logs/logs.md
@@ -5,21 +5,14 @@ draft: false
weight: 4
---

### setup

#### GitHub Codespaces
If you want to use **VS Code**, you have two options.
One option is to use GitHub codespaces.
Open the [repository](https://github.com/JenSeReal/otel-getting-started/) in your browser, click on `Code` and `Create Codespaces on main`.

#### local VS Code
```sh
git clone https://github.com/JenSeReal/otel-getting-started/
```

If you want to run the lab with your local VS Code, install Microsoft's Dev Containers extension.
Open the repository folder and hit `Ctrl` + `Shift` + `P` to open the command palette.
Run `Dev Container: Reopen in Container` to attach to the development container.
### How to perform the exercise
* You need to either start the [repository](https://github.com/JenSeReal/otel-getting-started/) with Codespaces or Gitpod, or clone the repository with git and run it locally with dev containers or docker compose
* Initial directory: `labs/manual-instrumentation-logs/initial`
* Solution directory: `labs/manual-instrumentation-logs/solution`
* Source code: `labs/manual-instrumentation-logs/initial/src`
* How to run the application, either:
  * Run the task for the application: `Run manual-instrumentation-logs initial application` (runs the Python application)
  * Run the application with terminal commands: `python3 src/app.py` (runs the Python application)

---

25 changes: 9 additions & 16 deletions tutorial/content/labs/instrumentation/manual/metrics/index.md
@@ -5,29 +5,22 @@ draft: false
weight: 3
---

### metrics in OpenTelemetry
### Metrics in OpenTelemetry

A `MetricReader` in OpenTelemetry is an interface that defines how to read metrics from the SDK. It is responsible for collecting metrics data from the SDK and exporting it to a backend system for storage and analysis. There are different types of `MetricReader` implementations, such as `PeriodicExportingMetricReader`, which collects metrics at regular intervals and exports them to a backend.

The metric data model in OpenTelemetry defines the structure of the data that is collected and exported by the SDK. It includes information about the resource, instrumentation library, and the actual metrics data. Here's an example of what the metric data model might look like in JSON format:
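A simplified, illustrative sketch of that structure (expressed here as a Python dict mirroring the JSON shape; field names loosely follow OTLP and are neither exhaustive nor normative):

```python
# Illustrative sketch only: field names loosely mirror the OTLP JSON
# structure; the metric name and attribute values are made up.
metric_payload = {
    "resource": {"attributes": {"service.name": "todo-app"}},
    "scope": {"name": "example.meter", "version": "1.0.0"},
    "metrics": [
        {
            "name": "http.server.request.count",
            "unit": "1",
            "sum": {
                "is_monotonic": True,
                "data_points": [
                    {"attributes": {"http.route": "/todos"}, "value": 42}
                ],
            },
        }
    ],
}
```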

The lab environment is deliberately designed to be as minimal as possible. It aims to teach fundamental concepts of OpenTelemetry over providing a highly realistic deployment scenario.

### setup

#### GitHub Codespaces
If you want to use **VS Code**, you have two options.
One option is to use GitHub codespaces.
Open the [repository](https://github.com/JenSeReal/otel-getting-started/) in your browser, click on `Code` and `Create Codespaces on main`.

#### local VS Code
```sh
git clone https://github.com/JenSeReal/otel-getting-started/
```

If you want to run the lab with your local VS Code, install Microsoft's Dev Containers extension.
Open the repository folder and hit `Ctrl` + `Shift` + `P` to open the command palette.
Run `Dev Container: Reopen in Container` to attach to the development container.
### How to perform the exercise
* You need to either start the [repository](https://github.com/JenSeReal/otel-getting-started/) with Codespaces or Gitpod, or clone the repository with git and run it locally with dev containers or docker compose
* Initial directory: `labs/manual-instrumentation-metrics/initial`
* Solution directory: `labs/manual-instrumentation-metrics/solution`
* Source code: `labs/manual-instrumentation-metrics/initial/src`
* How to run the application, either:
  * Run the task for the application: `Run manual-instrumentation-metrics initial application` (runs the Python application)
  * Run the application with terminal commands: `python3 src/app.py` (runs the Python application)

---

33 changes: 10 additions & 23 deletions tutorial/content/labs/instrumentation/manual/traces/index.md
@@ -18,27 +18,14 @@ resourcedetector

The lab environment is deliberately designed to be as minimal as possible. It aims to teach fundamental concepts of OpenTelemetry over providing a highly realistic deployment scenario.

### setup

#### GitHub Codespaces
If you want to use **VS Code**, you have two options.
One option is to use GitHub codespaces.
Open the [repository](https://github.com/JenSeReal/otel-getting-started/) in your browser, click on `Code` and `Create Codespaces on main`.

#### local VS Code
```sh
git clone https://github.com/JenSeReal/otel-getting-started/
```

If you want to run the lab with your local VS Code, install Microsoft's Dev Containers extension.
Open the repository folder and hit `Ctrl` + `Shift` + `P` to open the command palette.
Run `Dev Container: Reopen in Container` to attach to the development container.

---

### Where to find the code
You can find the code inside `manual-instrumentation-traces/initial`.
You can run the application with the task `Run manual-instrumentation-traces initial application` or with `python3 manual-instrumentation-traces/initial/src/app.py`
### How to perform the exercise
* You need to either start the [repository](https://github.com/JenSeReal/otel-getting-started/) with Codespaces or Gitpod, or clone the repository with git and run it locally with dev containers or docker compose
* Initial directory: `labs/manual-instrumentation-traces/initial`
* Solution directory: `labs/manual-instrumentation-traces/solution`
* Source code: `labs/manual-instrumentation-traces/initial/src`
* How to run the application, either:
  * Run the task for the application: `Run manual-instrumentation-traces initial application` (runs the Python application)
  * Run the application with terminal commands: `python3 src/app.py` (runs the Python application)

---

Expand All @@ -58,7 +45,7 @@ The output reveals that OpenTelemetry's API and SDK packages have already been i

### configure tracing pipeline and obtain a tracer

{{< figure src="images/tracer.drawio_pipeline.svg" width=600 caption="tracing signal" >}}
{{< figure src="images/tracer.drawio_pipeline.svg" width=600 caption="Overview of how the tracing signal is created and processed" >}}

```py { title="trace_utils.py" }
# OTel SDK
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

def create_tracing_pipeline() -> BatchSpanProcessor:
    span_processor = BatchSpanProcessor(ConsoleSpanExporter())
    return span_processor
```

Inside the `src` directory, create a new file `trace_utils.py`.
Inside the `src` directory, create a new file `trace_utils.py` with the code displayed above.
We'll use it to separate tracing-related configuration from the main application.
At the top of the file, specify the imports as shown above.
Create a new function `create_tracing_pipeline`.
21 changes: 15 additions & 6 deletions tutorial/content/labs/telemetry_pipelines/collector/_index.md
@@ -4,6 +4,15 @@ draft: false
weight: 1
---

### How to perform the exercise
* You need to either start the [repository](https://github.com/JenSeReal/otel-getting-started/) with Codespaces or Gitpod, or clone the repository with git and run it locally with dev containers or docker compose
* Initial directory: `labs/collector/initial`
* Solution directory: `labs/collector/solution`
* Source code: `labs/collector/initial/src`
* How to run the application, either:
  * Run the tasks for the application: `Run collector initial application` (runs the Python application) and `Run collector initial` (runs the OpenTelemetry Collector in a Docker container)
  * Run the application with terminal commands: `python3 src/app.py` (runs the Python application) and `docker compose up` (runs the OpenTelemetry Collector in a Docker container)

### Why do we need Collectors?

Over the previous labs, we have seen how OpenTelemetry's SDK implements the instrumentation which produces the telemetry data.
@@ -20,18 +29,18 @@ Deploying a Collector has many advantages.
Most importantly, it allows for a cleaner separation of concerns.
Developers shouldn't have to care about what happens to telemetry after it has been generated.
With a collector, operators are able to control the telemetry configuration without having to modify the application code.
Additionally, consolidating these concerns in a central location streamlines maintenance.
In an SDK-based approach, the configuration of where telemetry is going, what format it needs to be in, and how it should be processed is spread across various codebases managed by separate teams.
However, telemetry pipelines are rarely specific to individual applications.
Without a collector, making adjustments to the configuration and keeping it consistent across applications can get fairly difficult.
Moving things out of the SDK has other benefits.
For instance, the overall configuration of the SDK becomes much leaner.
Moreover, we no longer have to re-deploy the application every time we make a change to the telemetry pipeline.
Troubleshooting becomes significantly easier, since there is only a single location to monitor when debugging problems related to telemetry processing.
Offloading processing and forwarding to another process means applications can spend their resources on performing actual work, rather than dealing with telemetry.
Before going into more detail, let's look at the components that make up a collector.

### architecture of a collector pipeline
### Architecture of a collector pipeline
{{< figure src="images/collector_arch.drawio.svg" width=600 caption="collector to process and forward telemetry" >}}

The pipeline for a telemetry signal consists of a combination of receivers, processors, and exporters.
@@ -94,7 +103,7 @@ It is also possible for receivers and exporters to be shared by pipelines.
If the same receiver is used in different pipelines, each pipeline receives a replica of the data stream.
If different pipelines target the same exporter, the data stream will be merged into one.
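Putting these pieces together, a minimal (hypothetical) Collector configuration with two pipelines sharing one receiver might look like the sketch below; each pipeline then receives its own replica of the incoming stream. The `debug` exporter, which prints telemetry to the Collector's console, replaced the older `logging` exporter in recent releases.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```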

### define a basic collector pipeline
### Define a basic collector pipeline

Let's put the knowledge into practice.
Open the `docker-compose.yml` file to review the lab environment.
29 changes: 18 additions & 11 deletions tutorial/content/labs/use_case_scenarios/_index.md
@@ -4,6 +4,13 @@ draft: false
weight: 10
---

## How to perform the exercise
* You need to either start the [repository](https://github.com/JenSeReal/otel-getting-started/) with Codespaces or Gitpod, or clone the repository with git and run it locally with dev containers or docker compose
* Directory: `labs/otel-in-action`
* How to run the application, either:
  * Run the task for the application: `Run otel-in-action docker` (runs docker compose)
  * Run the application with terminal commands: `docker compose up` (runs docker compose)

## Intro

This introductory lab exercise will demonstrate the capabilities of OpenTelemetry from a plain end-user perspective. There will be no changes in configuration necessary. It's simply about starting a set of pre-defined containers and walking through usage scenarios.
@@ -29,7 +36,7 @@ The following diagram explains the architecture:

- the OpenTelemetry Collector exports the information to various third-party applications
- the (distributed) traces are exported to a Jaeger instance
- the metrics are exported to a Prometheus instance

{{< figure src="images/application_instrumented.png" width=700 caption="Application Architecture Instrumented" >}}

@@ -55,14 +62,14 @@ The output should show the startup process of the containers and all standard ou
The beginning of the output should look similar to this:
```
[+] Running 8/0
✔ Container python-java-otel-todolist-todoui-thymeleaf-1 Created 0.0s
✔ Container python-java-otel-todolist-postgresdb-1 Created 0.0s
✔ Container python-java-otel-todolist-loadgenerator-1 Created 0.0s
✔ Container python-java-otel-todolist-jaeger-1 Created 0.0s
✔ Container python-java-otel-todolist-prometheus-1 Created 0.0s
✔ Container python-java-otel-todolist-todoui-flask-1 Created 0.0s
✔ Container python-java-otel-todolist-todobackend-springboot-1 Created 0.0s
✔ Container python-java-otel-todolist-otelcol-1 Created
```

As the ongoing output of all components can get very noisy, it is recommended to start a new terminal session and leave the 'docker compose up' terminal session running in the background.
@@ -283,7 +290,7 @@ You can access the web UI on the following [link](http://localhost:9090).

The main entry screen looks like this:

{{< figure src="images/prometheus_start_screen.png" width=700 caption="Prometheus Start Screen" >}}

There isn't much displayed right when you start. To get a list of all the metrics that are currently available click on the little icon called the metrics explorer:

@@ -317,7 +324,7 @@ And garbage collection duration:

{{< figure src="images/prometheus_graph_jvm_gc_duration.png" width=700 caption="Prometheus Graph JVM Garbage Collection Duration" >}}

We are not going to analyze individual metrics in this chapter. This is more meant to demonstrate the breadth of information that the standard OpenTelemetry agent for Java provides. This is similar to the analysis in the traces section.

If the collected metrics of the auto-configured agents are not enough, manual instrumentation can be used.
It also becomes obvious that no Python or Flask metrics are being collected. This is how the configuration is set up in this case.
