diff --git a/tutorial/content/intro/_index.md b/tutorial/content/intro/_index.md
index 6154c6e..69aa781 100644
--- a/tutorial/content/intro/_index.md
+++ b/tutorial/content/intro/_index.md
@@ -3,5 +3,3 @@ archetype = "chapter"
title = "Introduction"
weight = 1
+++
-
-Before diving head-first into the labs, this chapter provides an introduction to OpenTelemetry.
\ No newline at end of file
diff --git a/tutorial/content/intro/how_we_got_here/images/distributed_system.drawio.png b/tutorial/content/intro/how_we_got_here/images/distributed_system.drawio.png
new file mode 100644
index 0000000..cbde8bd
Binary files /dev/null and b/tutorial/content/intro/how_we_got_here/images/distributed_system.drawio.png differ
diff --git a/tutorial/content/intro/how_we_got_here/images/distributed_system.drawio.xml b/tutorial/content/intro/how_we_got_here/images/distributed_system.drawio.xml
new file mode 100644
index 0000000..e9f1106
--- /dev/null
+++ b/tutorial/content/intro/how_we_got_here/images/distributed_system.drawio.xml
@@ -0,0 +1,151 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tutorial/content/intro/status_quo/images/logs.png b/tutorial/content/intro/how_we_got_here/images/logs.png
similarity index 100%
rename from tutorial/content/intro/status_quo/images/logs.png
rename to tutorial/content/intro/how_we_got_here/images/logs.png
diff --git a/tutorial/content/intro/status_quo/images/metric_types.drawio b/tutorial/content/intro/how_we_got_here/images/metric_types.drawio
similarity index 100%
rename from tutorial/content/intro/status_quo/images/metric_types.drawio
rename to tutorial/content/intro/how_we_got_here/images/metric_types.drawio
diff --git a/tutorial/content/intro/how_we_got_here/images/metric_types.drawio.png b/tutorial/content/intro/how_we_got_here/images/metric_types.drawio.png
new file mode 100644
index 0000000..ad45ea2
Binary files /dev/null and b/tutorial/content/intro/how_we_got_here/images/metric_types.drawio.png differ
diff --git a/tutorial/content/intro/how_we_got_here/images/metric_types.drawio.xml b/tutorial/content/intro/how_we_got_here/images/metric_types.drawio.xml
new file mode 100644
index 0000000..88c3f73
--- /dev/null
+++ b/tutorial/content/intro/how_we_got_here/images/metric_types.drawio.xml
@@ -0,0 +1,38 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tutorial/content/intro/how_we_got_here/images/three_pillars_of_observability.drawio.png b/tutorial/content/intro/how_we_got_here/images/three_pillars_of_observability.drawio.png
new file mode 100644
index 0000000..069a75f
Binary files /dev/null and b/tutorial/content/intro/how_we_got_here/images/three_pillars_of_observability.drawio.png differ
diff --git a/tutorial/content/intro/how_we_got_here/images/three_pillars_of_observability.drawio.xml b/tutorial/content/intro/how_we_got_here/images/three_pillars_of_observability.drawio.xml
new file mode 100644
index 0000000..3473ab1
--- /dev/null
+++ b/tutorial/content/intro/how_we_got_here/images/three_pillars_of_observability.drawio.xml
@@ -0,0 +1,35 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tutorial/content/intro/how_we_got_here/images/workload_resource_analysis_gregg.png b/tutorial/content/intro/how_we_got_here/images/workload_resource_analysis_gregg.png
new file mode 100644
index 0000000..6f6897a
Binary files /dev/null and b/tutorial/content/intro/how_we_got_here/images/workload_resource_analysis_gregg.png differ
diff --git a/tutorial/content/intro/how_we_got_here/images/workload_resource_analysis_gregg.xcf b/tutorial/content/intro/how_we_got_here/images/workload_resource_analysis_gregg.xcf
new file mode 100644
index 0000000..86aec62
Binary files /dev/null and b/tutorial/content/intro/how_we_got_here/images/workload_resource_analysis_gregg.xcf differ
diff --git a/tutorial/content/intro/how_we_got_here/index.md b/tutorial/content/intro/how_we_got_here/index.md
new file mode 100644
index 0000000..7b88178
--- /dev/null
+++ b/tutorial/content/intro/how_we_got_here/index.md
@@ -0,0 +1,122 @@
+---
+title: "How we (traditionally) observe our systems"
+linktitle: "How we got here"
+draft: false
+weight: 10
+---
+
+
+
+> Observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. [[Wiki]](https://en.wikipedia.org/wiki/Observability)
+
+To make a distributed system observable, we must model its state in a way that lets us reason about its behavior.
+This is a composition of three factors:
+First, there is the *workload*.
+These are the operations a system performs to fulfill its objectives.
+For instance, when a user sends a request, a distributed system often breaks it down into smaller tasks handled by different services.
+Second, there are *software abstractions* that make up the structure of the distributed system.
+These include elements such as load balancers, services, pods, containers, and more.
+Lastly, there are physical machines that provide computational *resources* (e.g. RAM, CPU, disk space, network) to carry out work.
+
+{{< figure src="images/workload_resource_analysis_gregg.png" width=400 caption="workload and resource analysis [[Gregg16]](https://www.brendangregg.com/Slides/ACMApplicative2016_SystemMethodology/#18)" >}}
+
+Depending on our background, we often have a certain bias when investigating performance or troubleshooting problems in a distributed system.
+Application developers typically concentrate on workload-related aspects, whereas operations teams tend to look at physical resources.
+To truly understand a system, we must combine insights from multiple angles and figure out how they relate to one another.
+However, before we can analyze something, we must first capture aspects of system behavior.
+As you may know, we commonly do this through a combination of *logs*, *metrics*, and *traces*.
+Although it seems normal today, things weren't always this way.
+But why should you be concerned about the past?
+The reason is that OpenTelemetry tries to address problems that are the result of these historical developments.
+
+#### logs
+{{< figure src="images/logs.png" width=600 caption="Exemplary log files" >}}
+
+
+A *log* is an append-only data structure that records events occurring in a system.
+A log entry consists of a timestamp that denotes when something happened and a message to describe details about the event.
+However, coming up with a standardized log format is no easy task.
+One reason is that different types of software often convey different pieces of information. Logs of an HTTP web server are bound to look different from those of the kernel.
+But even for similar software, people often have different opinions on what good logs should look like.
+Apart from content, log formats also vary with their consumer. Initially, text-based formats catered to human readability.
+However, as software systems became more complex, the volume of logs soon became unmanageable.
+To combat this, we started encoding events as key/value pairs to make them machine-readable.
+This is commonly known as structured logging.
+Moreover, the distributed and ephemeral nature of containerized applications meant that it was no longer feasible to log onto individual machines and sift through logs.
+As a result, people started to build logging agents and protocols to forward logs to dedicated services.
+These logging systems allowed for efficient storage as well as the ability to search and filter logs in a central location.
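+
+For illustration, here is a minimal sketch of structured logging in plain Python (standard library only); the field names are arbitrary and only serve as an example.
+
+```python
+import json
+import logging
+import sys
+
+
+class JsonFormatter(logging.Formatter):
+    """Render every log record as a single JSON object (one event per line)."""
+
+    def format(self, record: logging.LogRecord) -> str:
+        event = {
+            "timestamp": record.created,  # Unix epoch seconds
+            "level": record.levelname,
+            "logger": record.name,
+            "message": record.getMessage(),
+        }
+        return json.dumps(event)
+
+
+handler = logging.StreamHandler(sys.stdout)
+handler.setFormatter(JsonFormatter())
+logging.basicConfig(level=logging.INFO, handlers=[handler])
+
+# Machine-readable output that a logging agent can parse, filter and forward.
+logging.getLogger("checkout").info("order placed")
+```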
+
+#### metrics
+
+
+{{< figure src="images/metric_types.drawio.png" width=400 caption="The four common types of metrics: counters, gauges, histograms and summaries" >}}
+
+Logs shine at providing detailed information about individual events.
+However, sometimes we need a high-level view of the current state of a system.
+This is where *metrics* come in.
+A metric is a single numerical value that was derived by applying a statistical measure to a group of events.
+In other words, metrics represent an aggregate.
+This is useful because their compact representation allows us to graph how a system changes over time.
+In response, the industry developed instruments to extract metrics, formats and protocols to represent and transmit data, specialized time-series databases to store them, and frontends to make this data accessible to end-users.
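+
+To make the idea of aggregation tangible, here is a small sketch in plain Python, with made-up request durations, that derives a counter, a gauge-like last value, and histogram buckets from a batch of events.
+
+```python
+from bisect import bisect_left
+
+# Hypothetical request durations in milliseconds, one entry per handled request.
+durations_ms = [12, 45, 7, 230, 89, 33, 410, 18]
+
+# Counter: monotonically increasing total of handled requests.
+request_count = len(durations_ms)
+
+# Gauge: a value sampled at a point in time, here simply the latest observation.
+last_duration = durations_ms[-1]
+
+# Histogram: count observations per bucket, defined by upper bounds in ms.
+boundaries = [10, 50, 100, 500]
+buckets = [0] * (len(boundaries) + 1)
+for value in durations_ms:
+    buckets[bisect_left(boundaries, value)] += 1
+
+print(request_count, last_duration, buckets)  # 8 18 [1, 4, 1, 2, 0]
+```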
+
+#### traces
+
+{{< figure src="images/distributed_system.drawio.png" width=400 caption="Exemplary architecture of a distributed system" >}}
+
+As distributed systems grew in scale, it became clear that traditional logging systems often fell short when trying to debug complex problems.
+The reason is that we often have to understand the chain of events in a system.
+On a single machine, stack traces allow us to track an exception back to a line of code.
+In a distributed environment, we don't have this luxury.
+Instead, we perform extensive filtering to locate log events of interest.
+To understand the larger context, we must identify other related events.
+This often results in lots of manual labour (e.g. comparing timestamps) or requires extensive domain knowledge about the applications.
+Recognizing this problem, Google developed [Dapper](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36356.pdf), which popularized the concept of distributed tracing.
+On a fundamental level, tracing is logging on steroids.
+The underlying idea is to add transactional context to logs.
+By indexing logs based on this information, it is possible to infer causality and reconstruct the journey of requests in the system.
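+
+As a toy illustration (plain Python, made-up events), attaching a shared trace ID to every log entry is enough to regroup scattered events into per-request journeys.
+
+```python
+from collections import defaultdict
+
+# Made-up log events; in a traced system every event carries its transaction's trace_id.
+events = [
+    {"ts": 3, "trace_id": "abc", "service": "payment",  "message": "charge card"},
+    {"ts": 1, "trace_id": "abc", "service": "frontend", "message": "POST /checkout"},
+    {"ts": 2, "trace_id": "xyz", "service": "frontend", "message": "GET /health"},
+    {"ts": 2, "trace_id": "abc", "service": "cart",     "message": "load cart"},
+]
+
+# Index events by trace_id, then order each transaction by timestamp.
+transactions = defaultdict(list)
+for event in events:
+    transactions[event["trace_id"]].append(event)
+
+for trace_id, entries in transactions.items():
+    journey = " -> ".join(e["service"] for e in sorted(entries, key=lambda e: e["ts"]))
+    print(trace_id, journey)
+# abc frontend -> cart -> payment
+# xyz frontend
+```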
+
+#### three pillars of observability
+On the surface, logs, metrics, and traces share many similarities in their lifecycle and components.
+Everything starts with instrumentation that captures and emits data.
+The data has to have a certain structure, which is defined by a format.
+Then, we need a mechanism to collect and forward a piece of telemetry.
+Often there is some kind of agent to further enrich, process and batch data before ingesting it into a backend.
+This typically involves a database to efficiently store, index and search large volumes of data.
+Finally, there is an analysis frontend that makes the data accessible to the end-user.
+However, in practice, we develop dedicated systems for each type of telemetry, and for good reason:
+Each telemetry signal poses its own unique technical challenges.
+This is mainly due to the different nature of the data.
+The design of data models, interchange formats, and transmission protocols highly depends on whether you are dealing with un- or semi-structured textual information, compact numerical values inside a time series, or graph-like structures depicting causality between events.
+Even for a single signal, there is no consensus on these kinds of topics.
+Furthermore, the way we work with and derive insights from telemetry varies dramatically.
+A system might need to perform full-text search, inspect single events, analyze historical trends, visualize request flow, diagnose performance bottlenecks, and more.
+These requirements manifest themselves in the design and optimizations of storage, access patterns, query capabilities and more.
+When addressing these technical challenges, [vertical integration](https://en.wikipedia.org/wiki/Vertical_integration) emerges as a pragmatic solution.
+In practice, observability vendors narrow the scope of the problem to a single signal and provide instrumentation to generate *and* tools to analyse telemetry as a single, fully integrated solution.
+
+{{< figure src="images/three_pillars_of_observability.drawio.png" width=400 caption="The three pillars of observability, including metrics, traces and logs" >}}
+
+Having dedicated systems for logs, metrics, and traces is why we commonly refer to them as *the three pillars of observability*.
+The notion of pillars provides a great mental framework because it emphasizes that:
+- there are different categories of telemetry
+- each pillar has its unique strengths and stands on its own
+- the pillars are complementary and must be combined to form a stable foundation for achieving observability
+
+
diff --git a/tutorial/content/intro/otel/_index.md b/tutorial/content/intro/otel/_index.md
deleted file mode 100644
index 66bafbe..0000000
--- a/tutorial/content/intro/otel/_index.md
+++ /dev/null
@@ -1,38 +0,0 @@
-+++
-title = "promise of OpenTelemetry"
-draft = false
-weight = 2
-+++
-
-### history
-
-OpenTelemetry is the result of the merger from OpenTracing and OpenCensus. Both of these products had the same goal - to standardize the instrumentation of code and how telemetry data is sent to observability backends. Neither of the products could solve the problem independently, so the CNCF merged the two projects into OpenTelemetry. This came with two major advantages. One both projects joined forces to create a better overall product and second it was only one product and not several products. With that standardization can be reached in a wider context of telemetry collection which in turn should increase the adoption rate of telemetry collection in applications since the entry barrier is much lower. The CNCF describes OpenTelemetry as the next major version of OpenTracing and OpenCensus and as such there are even migration guides for both projects to OpenTelemetry.
-
-### promises
-
-At the time of writing, OpenTelmetry is under active development and is one of the fastest-growing projects in the CNCF.
-OpenTelemetry is receiving so much attention because it promises to be a fundamental change in the way we produce telemetry and address many of the problems mentioned earlier.
-Previously, the rate of innovation and conflicts of interest prevented us from defining widely adopted standards for telemetry.
-At the time of writing, the timing and momentum of OpenTelemetry appear to have a realistic chance of pushing for standardization of common aspects of telemetry across vendors.
-A key promise of OpenTelemetry is that you "instrument code once and never again" and can "use your instrumentation everywhere".
-There are multiple factors behind this.
-First, OpenTelemetry recognizes that, should its efforts be successful, it will be a core dependency for countless software projects.
-Therefore, its telemetry signal specifications follow strict processes to provide [long-term stability guarantees](https://opentelemetry.io/docs/specs/otel/versioning-and-stability/).
-Once a signal is declared stable, clients will never experience a breaking API change.
-The second aspect is that OpenTelemetry separates the system that produces telemetry from the one that analyzes the data.
-Open and vendor-agnostic instrumentation to generate, collect, and transmit telemetry marks a fundamental shift in how observability vendors compete for your business.
-Instead of having to put significant investment in building instrumentation, observability vendors must differentiate themselves by building feature-rich analysis platforms with great usability.
-Moreover, users no longer have to commit to the observability solution they choose during development.
-Once you migrate over to OpenTelemetry, you can easily move between different vendors without having to re-instrument your entire system.
-Similarly, developers of open-source software can add native instrumentation to their projects without introducing vendor-specific code and creating burdens for downstream users.
-By avoiding all these struggles, OpenTelemetry pushes for observability to become a first-class citizen during development.
-The goal is to make software observable by default.
-Last (and definitely not least), OpenTelemetry pushes for a change in how we think about and use telemetry.
-Instead of having three separate silos for logs, metrics, and traces, OpenTelemetry follows a paradigm of linking telemetry signals together.
-With context creating touch points between signals, the overall value and usability of telemetry increase drastically.
-For instance, imagine the ability to jump from a conspicuous statistics in a dashboard straight to the related logs.
-Correlated telemetry data helps to reduce the cognitive load on humans operating complex systems.
-Being able to take advantage of linked data will mark a new generation of observability tools.
-While only time will tell if it can live up to its promises, let's dive into its architecture to learn how it tries to achieve these goals.
-
-In order to fulfill its objectives, OpenTelemetry is engineered to provide a uniform set of APIs and libraries that facilitate the instrumentation, generation, collection, and export of telemetry data. As a vendor-agnostic, independent, and heterogeneous layer, it serves as a foundational element for expressing telemetry data, capable of interfacing with a broad spectrum of downstream analysis, querying, alerting, and visualization tools. This design allows for the implementation of OpenTelemetry's capabilities within various libraries, frameworks, and programming languages, streamlining the adoption process. Furthermore, OpenTelemetry's principles ensure that it remains compatible with a myriad of monitoring and observability tools, guaranteeing long-term stability and consistency in telemetry data formats.
\ No newline at end of file
diff --git a/tutorial/content/intro/overview_of_otel/images/otel_implementation.drawio.png b/tutorial/content/intro/overview_of_otel/images/otel_implementation.drawio.png
new file mode 100644
index 0000000..a859643
Binary files /dev/null and b/tutorial/content/intro/overview_of_otel/images/otel_implementation.drawio.png differ
diff --git a/tutorial/content/intro/overview_of_otel/images/otel_implementation.drawio.xml b/tutorial/content/intro/overview_of_otel/images/otel_implementation.drawio.xml
new file mode 100644
index 0000000..fe1fe57
--- /dev/null
+++ b/tutorial/content/intro/overview_of_otel/images/otel_implementation.drawio.xml
@@ -0,0 +1,161 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tutorial/content/intro/overview_of_otel/images/otel_specification.drawio.png b/tutorial/content/intro/overview_of_otel/images/otel_specification.drawio.png
new file mode 100644
index 0000000..ac0786a
Binary files /dev/null and b/tutorial/content/intro/overview_of_otel/images/otel_specification.drawio.png differ
diff --git a/tutorial/content/intro/overview_of_otel/images/otel_specification.drawio.xml b/tutorial/content/intro/overview_of_otel/images/otel_specification.drawio.xml
new file mode 100644
index 0000000..c9e5461
--- /dev/null
+++ b/tutorial/content/intro/overview_of_otel/images/otel_specification.drawio.xml
@@ -0,0 +1,71 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tutorial/content/intro/overview_of_otel/index.md b/tutorial/content/intro/overview_of_otel/index.md
new file mode 100644
index 0000000..c71b08e
--- /dev/null
+++ b/tutorial/content/intro/overview_of_otel/index.md
@@ -0,0 +1,115 @@
+---
+title: "Overview of the OpenTelemetry framework"
+linktitle: "Overview of the framework"
+draft: false
+weight: 40
+---
+
+
+
+
+
+Only time will tell if OpenTelemetry can live up to its ambitious goals.
+Chances are, you're eager to explore the project and try it out yourself to see what the fuss is about.
+However, newcomers often feel overwhelmed when getting into OpenTelemetry.
+The reason is clear: OpenTelemetry is a vast endeavor that addresses a multitude of problems by creating a comprehensive observability framework.
+Before you dive into the labs, we want to give you a high-level overview of the structure and scope of the project.
+
+### signal specification (language-agnostic)
+On a high level, OpenTelemetry is organized into *signals*, which mainly include *tracing*, *metrics*, *logging* and *baggage*.
+Every signal is developed as a standalone component (though there are ways to connect the separate data streams to one another).
+Signals are defined inside OpenTelemetry's *language-agnostic* [specification](https://opentelemetry.io/docs/specs/), which lies at the very heart of the project.
+End-users probably won't come in direct contact with the specification, but it plays a crucial role in ensuring consistency and interoperability within the OpenTelemetry ecosystem.
+
+{{< figure src="images/otel_specification.drawio.png" width=600 caption="OpenTelemetry specification" >}}
+
+The specification consists of three parts.
+First, there are [*definitions of terms*](https://opentelemetry.io/docs/specs/otel/glossary/) that establish a common vocabulary and shared understanding to avoid confusion.
+Second, it specifies the technical details of how each signal is designed.
+This includes:
+- an *API specification* (see [traces](https://opentelemetry.io/docs/specs/otel/trace/api/), [metric](https://opentelemetry.io/docs/specs/otel/metrics/api/), and [logs](https://opentelemetry.io/docs/specs/otel/logs/))
+ - defines (conceptual) interfaces that implementations must adhere to
+ - ensures that implementations are compatible with each other
+ - includes the methods that can be used to generate, process, and export telemetry data
+- an *SDK specification* (see [trace](https://opentelemetry.io/docs/specs/otel/trace/sdk/), [metrics](https://opentelemetry.io/docs/specs/otel/metrics/sdk/), [logs](https://opentelemetry.io/docs/specs/otel/logs/sdk/))
+ - serves as a guide for developers
+ - defines requirements that a language-specific implementation of the API must meet to be compliant
+ - includes concepts around the configuration, processing, and exporting of telemetry data
+
+Besides signal architecture, the specification covers aspects related to the telemetry data itself.
+For example, OpenTelemetry defines [semantic conventions](https://opentelemetry.io/docs/specs/semconv/).
+By pushing for consistency in the naming and interpretation of common telemetry metadata, OpenTelemetry aims to reduce the need to normalize data coming from different sources.
+Finally, there is also the [OpenTelemetry Protocol (OTLP)](https://opentelemetry.io/docs/specs/otlp/), which we'll cover later.
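+
+Coming back to semantic conventions for a moment: an HTTP server span might carry attributes like the ones below. The keys come from the HTTP semantic conventions (the exact set depends on the semantic-conventions version you target), so any backend can interpret them without custom mapping.
+
+```python
+# Attributes describing one handled HTTP request, using well-known keys
+# instead of ad-hoc names like "method" or "code".
+span_attributes = {
+    "http.request.method": "GET",
+    "url.path": "/checkout",
+    "http.response.status_code": 200,
+    "server.address": "shop.example.com",  # illustrative value
+}
+```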
+
+### instrumentation to generate and emit telemetry (language-specific)
+{{< figure src="images/otel_implementation.drawio.png" width=600 caption="generate and emit telemetry via the OTel API and SDK packages" >}}
+
+To generate and emit telemetry from applications, we use **language-*specific* implementations**, which adhere to OpenTelemetry's specification.
+OpenTelemetry supports a wide range of popular [programming languages](https://opentelemetry.io/docs/instrumentation/#status-and-releases) at varying levels of maturity.
+The implementation of a signal consists of two parts:
+- *API*
+ - defines the interfaces and constants outlined in the specification
+ - used by application and library developers for vendor-agnostic instrumentation
+ - refers to a no-op implementation by default
+- *SDK*
+ - providers implement the OpenTelemetry API
+ - contains the actual logic to generate, process and emit telemetry
+ - OpenTelemetry ships with official providers that serve as the reference implementation (commonly referred to as the SDK)
+ - possible to write your own
+
+
+
+Generally speaking, we use the OpenTelemetry API to add instrumentation to our source code.
+In practice, instrumentation can be achieved in various ways, such as:
+- manual instrumentation
+ - requires modification of source code
+ - allows fine-grained control over what and how telemetry gets generated
+- auto-instrumentation (if available) and library instrumentation to avoid code changes
+ - can be used to get started quickly with observability
+ - collects predefined metrics, traces and logs within a library or framework
+ - added after the fact by injecting an agent (or already included inside the library or framework)
+ - requires close to zero code changes
+- native instrumentation (software that is already instrumented with OpenTelemetry)
+
+We'll look at them later in more detail.
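+
+As a first taste of manual instrumentation, here is a minimal sketch using the Python API (assuming the `opentelemetry-api` package; span and attribute names are made up):
+
+```python
+from opentelemetry import trace
+
+# Acquire a tracer from the globally registered tracer provider.
+tracer = trace.get_tracer("tutorial.shop")
+
+
+def place_order(order_id: str) -> None:
+    # Wrap the unit of work in a span to record its timing and context.
+    with tracer.start_as_current_span("place_order") as span:
+        span.set_attribute("shop.order_id", order_id)  # hypothetical attribute
+        # ... business logic ...
+
+
+place_order("order-42")
+```
+
+Without an SDK registered, these calls fall through to a no-op implementation, which leads us to the separation discussed next.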
+
+For now, let's focus on why OpenTelemetry decided to separate the API from the SDK.
+On startup, the application registers a provider for every type of signal.
+After that, calls to the API are forwarded to the respective provider.
+If we don't explicitly register one, OpenTelemetry will use a fallback provider that translates API calls into no-ops.
+
+The primary reason for this separation is that it makes it easier to embed native instrumentation into open-source library code.
+OpenTelemetry's API is designed to be lightweight and safe to depend on.
+The signal implementation provided by the SDK is significantly more complex and likely has dependencies on other software.
+Forcing these dependencies onto users could lead to conflicts with their particular software stack.
+Registering a provider during the initial setup allows users to resolve dependency conflicts by choosing a different implementation.
+Furthermore, it allows us to ship software with built-in observability, but without forcing the runtime cost of instrumentation onto users that don't need it.
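+
+To make this concrete, here is a hedged sketch of the startup wiring with the reference SDK (assuming the `opentelemetry-sdk` package); if this step is omitted, the API calls above remain no-ops.
+
+```python
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
+
+# Register a concrete tracer provider once at startup; API calls are forwarded to it.
+provider = TracerProvider()
+provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
+trace.set_tracer_provider(provider)
+
+# From here on, spans created through the API are actually recorded and printed.
+tracer = trace.get_tracer("tutorial.setup")
+with tracer.start_as_current_span("startup-check"):
+    pass
+```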
+
+
+
+### collect, process and export telemetry
+
+### data transmission
+To ensure that telemetry data can be exchanged across different frameworks, libraries, or programming languages, a vendor-neutral protocol was put in place.
+The OpenTelemetry Protocol (OTLP) is an open-source protocol for collecting and transmitting telemetry data to back-end systems for analysis and storage.
+It defines a standardized data model, encoding format, and transport mechanisms to enable interoperability between telemetry tools and services from different vendors.
+By standardizing the way telemetry data is collected and transported, OTLP simplifies the integration of telemetry tools and services, improves data consistency, and facilitates data analysis and visualization across multiple technologies and environments.
+
+OTLP offers three transport mechanisms for transmitting telemetry data: HTTP/1.1, HTTP/2, and gRPC.
+The choice of transport depends on application requirements, considering factors such as performance, reliability, and security.
+OTLP data is usually encoded using the Protocol Buffers (Protobuf) binary format, which is compact and efficient for network transmission and supports schema evolution, allowing for future changes to the data model without breaking compatibility.
+Data can also be encoded as JSON, which is human-readable at the cost of higher network traffic and larger payloads.
+The protocol is described in the [OpenTelemetry Protocol specification](https://opentelemetry.io/docs/specs/otlp/).
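+
+As a hedged sketch of what this looks like in practice (assuming the `opentelemetry-sdk` and `opentelemetry-exporter-otlp-proto-grpc` packages, and a collector listening on the default gRPC port 4317), an application can be configured to ship spans via OTLP/gRPC like this:
+
+```python
+from opentelemetry import trace
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+# Export spans via OTLP over gRPC (Protobuf-encoded) to a local collector.
+exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
+
+provider = TracerProvider()
+provider.add_span_processor(BatchSpanProcessor(exporter))
+trace.set_tracer_provider(provider)
+```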
\ No newline at end of file
diff --git a/tutorial/content/intro/status_quo/images/correlate_across_datasets.png b/tutorial/content/intro/problems_with_the_status_quo/images/correlate_across_datasets.png
similarity index 100%
rename from tutorial/content/intro/status_quo/images/correlate_across_datasets.png
rename to tutorial/content/intro/problems_with_the_status_quo/images/correlate_across_datasets.png
diff --git a/tutorial/content/intro/problems_with_the_status_quo/images/need_for_correlated_telemetry.drawio.png b/tutorial/content/intro/problems_with_the_status_quo/images/need_for_correlated_telemetry.drawio.png
new file mode 100644
index 0000000..0ec8eb6
Binary files /dev/null and b/tutorial/content/intro/problems_with_the_status_quo/images/need_for_correlated_telemetry.drawio.png differ
diff --git a/tutorial/content/intro/problems_with_the_status_quo/images/need_for_correlated_telemetry.drawio.xml b/tutorial/content/intro/problems_with_the_status_quo/images/need_for_correlated_telemetry.drawio.xml
new file mode 100644
index 0000000..866bd09
--- /dev/null
+++ b/tutorial/content/intro/problems_with_the_status_quo/images/need_for_correlated_telemetry.drawio.xml
@@ -0,0 +1,41 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/tutorial/content/intro/status_quo/images/xkcd_927_standards.png b/tutorial/content/intro/problems_with_the_status_quo/images/xkcd_927_standards.png
similarity index 100%
rename from tutorial/content/intro/status_quo/images/xkcd_927_standards.png
rename to tutorial/content/intro/problems_with_the_status_quo/images/xkcd_927_standards.png
diff --git a/tutorial/content/intro/problems_with_the_status_quo/images/young_metrics_compare_timestamps.png b/tutorial/content/intro/problems_with_the_status_quo/images/young_metrics_compare_timestamps.png
new file mode 100644
index 0000000..d8f7e89
Binary files /dev/null and b/tutorial/content/intro/problems_with_the_status_quo/images/young_metrics_compare_timestamps.png differ
diff --git a/tutorial/content/intro/problems_with_the_status_quo/images/young_metrics_jump_to_logs.png b/tutorial/content/intro/problems_with_the_status_quo/images/young_metrics_jump_to_logs.png
new file mode 100644
index 0000000..8172f67
Binary files /dev/null and b/tutorial/content/intro/problems_with_the_status_quo/images/young_metrics_jump_to_logs.png differ
diff --git a/tutorial/content/intro/problems_with_the_status_quo/images/young_recostruct_chain_of_events.png b/tutorial/content/intro/problems_with_the_status_quo/images/young_recostruct_chain_of_events.png
new file mode 100644
index 0000000..df24b0a
Binary files /dev/null and b/tutorial/content/intro/problems_with_the_status_quo/images/young_recostruct_chain_of_events.png differ
diff --git a/tutorial/content/intro/problems_with_the_status_quo/index.md b/tutorial/content/intro/problems_with_the_status_quo/index.md
new file mode 100644
index 0000000..d450e26
--- /dev/null
+++ b/tutorial/content/intro/problems_with_the_status_quo/index.md
@@ -0,0 +1,99 @@
+---
+title: "Problems with our approach to observability"
+linktitle: "Problems with status quo"
+draft: false
+weight: 20
+---
+
+With loads of open-source and commercial observability solutions on the market, you might (rightly) ask yourself:
+- Why is there so much hype around OpenTelemetry?
+- If there are plenty of mature solutions for generating, collecting, storing, and analysing logs, metrics, and traces, why should I care?
+- What's wrong with the current state of observability?
+- Oh, great ... is this yet another attempt at standardization?
+
+These are valid questions.
+To answer them, we must identify (some) downsides that result from building and working with *pillar-based* observability systems.
+
+#### siloed telemetry is difficult to work with
+
+{{< figure src="images/need_for_correlated_telemetry.drawio.png" width=700 caption="The need for correlated telemetry [[Young21]](https://www.oreilly.com/library/view/the-future-of/9781098118433/)" >}}
+
+First, there are deficits in the *quality* of telemetry data.
+To illustrate this, let's imagine that we want to investigate the root cause of a problem.
+The first indicator of a problem is usually an alert or an anomaly in a metrics dashboard.
+After the operator confirms the incident is worth investigating, we have to form an initial hypothesis.
+The only information we currently have is that something happened at a particular point in time.
+Therefore, the first step is to use the metrics system to look for other metrics showing temporally correlated, abnormal behavior.
+After making an educated guess about the problem, we want to drill down and investigate the root cause of the problem.
+To gain additional information, we typically switch to the logging system.
+Here, we write queries and perform extensive filtering to find log events related to suspicious metrics.
+After discovering log events of interest, we often want to know about the larger context the operation took place in.
+Unfortunately, traditional logging systems lack the mechanisms to reconstruct the chain of events in that particular transaction.
+They often fail to capture the full context of an operation, making it difficult to correlate events across different services or components.
+They frequently lack the ability to preserve critical metadata, such as trace IDs or span IDs, which are essential for linking related events together.
+This limitation results in fragmented views of the system's behavior, where the story of a single operation is spread across multiple logs without a clear narrative.
+Furthermore, the absence of standardized query languages or interfaces adds to the difficulty of searching and analyzing logs effectively, as operators must rely on custom scripts or manual filtering to uncover patterns and anomalies.
+Switching the perspective from someone building an observability solution to someone using it reveals an inherent disconnect.
+The real world isn't made up of logging, metrics, or tracing problems.
+Instead, we have to move back and forth between different types of telemetry to build up a mental model and reason about the behavior of a system.
+Since observability tools are silos of disconnected data, figuring out how pieces of information relate to one another causes a significant cognitive load for the operator.
+
+#### lack of instrumentation standard leads to low quality data
+
+
+Another factor that makes root cause analysis hard is that telemetry data often suffers from a lack of consistency.
+This leads to difficulties in correlating events across different services or components, as there is no standardized way to identify related events, such as through trace IDs or span IDs.
+Additionally, there is no straightforward method to integrate multiple solution-specific logging libraries into a coherent system, resulting in fragmented and disjointed views of the system's behavior.
+
+#### no built-in instrumentation in open-source software
+Let's look at this from the perspective of open-source software developers.
+Today, most applications are built on top of open-source libraries, frameworks, and standalone components.
+With a majority of work being performed outside the business logic of the application developer, it is crucial to collect telemetry from open-source components.
+The people with the most knowledge of what is important when operating a piece of software are the developers and maintainers themselves.
+However, there is currently no good way to communicate this knowledge through native instrumentation.
+One option would be to pick the instrumentation of an observability solution.
+However, this would add additional dependencies to the project and force users to integrate it into their system.
+While running multiple logging and metrics systems is impractical but technically possible, tracing is outright impossible as it requires everyone to agree on a standard for trace context propagation to work.
+A common strategy for solving problems in computer science is to add a layer of indirection.
+Instead of embedding vendor-specific instrumentation, open-source developers often provide observability hooks.
+This allows users to write adapters that connect the open-source component to their observability system.
+While this approach provides greater flexibility, it also has its fair share of problems.
+For example, whenever there is a new version of software, users have to notice and update their adapters.
+Moreover, the indirection also increases the overhead, as we have to convert between different telemetry formats.
+
+#### combining telemetry generation with analysis results in vendor lock-in
+
+
+Let's put on the hat of an end user.
+After committing to a solution, the application contains many solution-specific library calls throughout its codebase.
+To switch to another observability tool down the line, we would have to rip out and replace all existing instrumentation and migrate our analysis tooling.
+This up-front cost of re-instrumentation makes migration difficult, which is a form of vendor lock-in.
+
+#### struggling observability vendors / high barrier to entry
+
+The last part of the equation is the observability vendors themselves.
+At first glance, vendors appear to be the only ones profiting from the current situation.
+In the past, high-quality instrumentation was a great way to differentiate yourself from the competition.
+Moreover, since developing integrations for loads of pre-existing software is expensive, the observability market had a relatively high barrier to entry.
+With customers shying away from expensive re-instrumentation, established vendors faced less competition and pressure to innovate.
+However, they are also experiencing major pain points.
+The rate at which software is being developed has increased exponentially over the last decade.
+Today's heterogeneous software landscape has made it infeasible to maintain instrumentation for every library, framework, and component.
+As soon as you start struggling with supplying instrumentation, customers will start refusing to adopt your product.
+As a result, solutions compete on who can build the best n-to-n format converter instead of investing these resources into creating great analysis tools.
+Another downside is that converting data generated by foreign sources often leads to a degradation in the quality of telemetry.
+Once data is no longer well-defined, it becomes harder to analyze.
diff --git a/tutorial/content/intro/status_quo/images/distributed_system.drawio b/tutorial/content/intro/status_quo/images/distributed_system.drawio
deleted file mode 100644
index d632a07..0000000
--- a/tutorial/content/intro/status_quo/images/distributed_system.drawio
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/tutorial/content/intro/status_quo/images/distributed_system.drawio.png b/tutorial/content/intro/status_quo/images/distributed_system.drawio.png
deleted file mode 100644
index 4fe0a28..0000000
Binary files a/tutorial/content/intro/status_quo/images/distributed_system.drawio.png and /dev/null differ
diff --git a/tutorial/content/intro/status_quo/images/distributed_system.drawio.svg b/tutorial/content/intro/status_quo/images/distributed_system.drawio.svg
deleted file mode 100644
index 2f463af..0000000
--- a/tutorial/content/intro/status_quo/images/distributed_system.drawio.svg
+++ /dev/null
@@ -1,4 +0,0 @@
-
-
-
-
\ No newline at end of file
diff --git a/tutorial/content/intro/status_quo/images/metric_types.drawio.png b/tutorial/content/intro/status_quo/images/metric_types.drawio.png
deleted file mode 100644
index b78f94a..0000000
Binary files a/tutorial/content/intro/status_quo/images/metric_types.drawio.png and /dev/null differ
diff --git a/tutorial/content/intro/status_quo/images/metric_types.drawio.svg b/tutorial/content/intro/status_quo/images/metric_types.drawio.svg
deleted file mode 100644
index 45f4b4d..0000000
--- a/tutorial/content/intro/status_quo/images/metric_types.drawio.svg
+++ /dev/null
@@ -1,4 +0,0 @@
-
-
-
-
\ No newline at end of file
diff --git a/tutorial/content/intro/status_quo/images/three_pillars_observability.jpg b/tutorial/content/intro/status_quo/images/three_pillars_observability.jpg
deleted file mode 100644
index 127f42b..0000000
Binary files a/tutorial/content/intro/status_quo/images/three_pillars_observability.jpg and /dev/null differ
diff --git a/tutorial/content/intro/status_quo/index.md b/tutorial/content/intro/status_quo/index.md
deleted file mode 100644
index e2d7eb2..0000000
--- a/tutorial/content/intro/status_quo/index.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-title: "The current state of observability"
-linktitle: "state of observability"
-date: 2023-12-06T09:43:24+01:00
-draft: false
-weight: 1
----
-
-### How we (traditionally) observe our systems
-
-In every distributed system, there are two things we can observe: *transactions* and *resources*.
-A transaction represents a set of orchestrated operations to fulfill a (larger) cohesive task.
-For example, when a user sends a request to a distributed system, it is typically split up into several subrequests handled by different services.
-Resources refer to the physical and logical components that make up the distributed system.
-
-> Observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs.
-
-To make a system observable, we must model its state in a way that lets us reason about its behavior.
-The traditional approach is to observe transactions and resources through a combination of *logs*, *metrics* and (sometimes) *traces*.
-
-{{< figure src="images/metric_types.drawio.png" width=400 caption="The four common types of metrics: counters, gauges, histograms and summaries" >}}
-
-A log is an append-only data structure that records events occurring in a system.
-A log entry consists of a *timestamp* to denote when something happened and a *message* describing the details of a particular event.
-Today, there are countless logging frameworks in existence, resulting in equally many log formats.
-To a certain degree, this is understandable because different types of software often communicate different types of information.
-Log messages of an HTTP web server are bound to look different from those of the kernel.
-In general, we prefer structured logging.
-Representing events as key/value pairs help make logs machine-readable since we can use common data interchange formats to encode and parse the data.
-However, coming up with a log format is part of the problem.
-The increasing degree of distribution in applications and the ephemeral nature of container-based environments meant that it was no longer feasible to log onto individual machines and sift through logs.
-To address this, people developed logging systems and protocols to send logs to a central location.
-This provided persistent storage, making it possible to search and filter logs, and more.
-
-{{< figure src="images/logs.png" width=800 caption="Exemplary log files" >}}
-While logging provides detailed information about individual events, we often want a more high-level representation of the state of the system.
-This is where metrics come in.
-A metric is a single numerical value that was derived by applying a statistical measure to a group of events.
-In other words, metrics represent an aggregate.
-Metrics are useful because their compact representation allows us to graph how our system changes over time.
-Similar to logs, the industry developed systems to define the format of metrics, protocols to send data, time-series databases to store them, and frontends to make this data accessible to end-users.
-
-{{< figure src="images/distributed_system.drawio.png" width=800 caption="Exemplary architecture of a distributed system" >}}
-As distributed systems grew in complexity, it became clear that logging systems struggled with debugging problems efficiently at scale.
-During an investigation, one typically has to reconstruct the chain of events that led to a particular problem.
-On a single machine, stack traces are a great way to track an exception to a line of code.
-In a distributed environment, we don't have this luxury.
-Instead, we spend lots of manual labor filtering events before we find something of interest, followed by cumbersome analysis trying to identify and understand the larger context.
-Recognizing this problem, Google developed [Dapper](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36356.pdf), which popularized the concept of distributed tracing.
-Essentially, tracing is logging on steroids.
-Adding transactional context to logs and indexing based on this information, we can reconstruct the journey of requests within the system.
-
-{{< figure src="images/three_pillars_observability.jpg" width=400 caption="The three pillars of observability, including metrics, traces and logs" >}}
-
-Together, logs, metrics, and traces are often referred to as *the three pillars of observability*.
-These pillars provide a great mental framework for categorizing different types of telemetry.
-It emphasizes that every telemetry signal has its unique strength and that combining their insights is essential to yield a foundation for an observable system.
-However, the monumental connotation of the term "pillars" is deceptive.
-It suggests that current practices in observability are the result of deliberate design by great architects and are set in stone.
-In reality, it emerged organically through a series of responses to address the limitations of systems that existed at the time.
-
-### Why the current approach to telemetry systems is flawed
-
-{{< figure src="images/xkcd_927_standards.png" width=400 caption="[XKCD](https://xkcd.com/927/)" >}}
-
-With loads of open-source and commercial observability solutions available on the market, you might (rightly) be asking yourself why OpenTelemetry exists or why there is currently so much hype around it.
-
-In the past, telemetry systems were built as standalone end-to-end solutions for a specific purpose.
-The reason is simple.
-The best way to build a capable system is when the scope of the problem is as narrow as possible and everything is under your control.
-For example, a metrics solution typically provides its own instrumentation mechanisms, data models, protocols, and interchange formats to transmit telemetry to a backend, and tools to analyze the collected data.
-This style of thinking is called [vertical integration](https://en.wikipedia.org/wiki/Vertical_integration).
-To answer the question of OpenTelemetry's popularity, let's look at two major problems with this approach.
-
-{{< figure src="images/correlate_across_datasets.png" width=700 caption="Correlated telemetry data" >}}
-
-First, there are deficits in the *quality* of telemetry data.
-To illustrate this, let's look at a typical process of a root cause investigation.
-Often, the first indicator of a potential problem is an anomaly in a metric dashboard.
-After the operator confirms the incident is worth investigating, we have to form an initial hypothesis.
-The only information we currently have is that something happened at a particular point in time.
-Therefore, the first step is to use the metrics system to look for other metrics showing temporally correlated, abnormal behavior.
-After making an educated guess about the problem, we want to drill down and investigate the root cause of the problem.
-To gain additional information, we typically switch to the logging system.
-Here, we write queries and perform extensive filtering to find log events related to suspicious metrics.
-After discovering log events of interest, we often want to know about the larger context the operation took place in.
-Unfortunately, traditional logging systems lack the mechanisms to reconstruct the chain of events in that particular transaction.
-Traditional logging systems often fail to capture the full context of an operation, making it difficult to correlate events across different services or components. They frequently lack the ability to preserve critical metadata, such as trace IDs or span IDs, which are essential for linking related events together. This limitation results in fragmented views of the system's behavior, where the story of a single operation is spread across multiple logs without a clear narrative. Furthermore, the absence of standardized query languages or interfaces adds to the difficulty of searching and analyzing logs effectively, as operators must rely on custom scripts or manual filtering to uncover patterns and anomalies.
-
-Switching the perspective from someone building an observability solution to someone using it reveals an inherent disconnect.
-The real world isn't made up of logging, metrics, or tracing problems.
-Instead, we have to move back and forth between different types of telemetry to build up a mental model and reason about the behavior of a system.
-Since observability tools are silos of disconnected data, figuring out how pieces of information relate to one another causes a significant cognitive load for the operator.
-
-Another factor that makes root cause analysis hard is that telemetry data often suffers from a lack of consistency. This leads to difficulties in correlating events across different services or components, as there is no standardized way to identify related events, such as through trace IDs or span IDs. Additionally, there is no straightforward method to integrate multiple solution-specific logging libraries into a coherent system, resulting in fragmented and disjointed views of the system's behavior.
-
-
-The second major deficit is that vertical integration means that instrumentation, protocols, and interchange formats are tied to a specific solution.
-This creates a maintenance nightmare for everyone involved.
-
-Let's put on the hat of an end user.
-After committing to a solution, the application contains many solution-specific library calls throughout its codebase.
-To switch to another observability tool down the line, we would have to rip out and replace all existing instrumentation and migrate our analysis tooling.
-This up-front cost of re-instrumentation makes migration difficult, which is a form of vendor lock-in.
-
-Let's look at this from the perspective of open-source software developers.
-Today, most applications are built on top of open-source libraries, frameworks, and standalone components.
-With a majority of work being performed outside the business logic of the application developer, it is crucial to collect telemetry from open-source components.
-The people with the most knowledge of what is important when operating a piece of software are the developers and maintainers themselves.
-However, there is currently no good way to communicate through native instrumentation.
-One option would be to pick the instrumentation of an observability solution.
-However, this would add additional dependencies to the project and force users to integrate it into their system.
-While running multiple logging and metrics systems is impractical but technically possible, tracing is outright impossible as it requires everyone to agree on a standard for trace context propagation to work.
-A common strategy for solving problems in computer science is by adding a layer of indirection.
-Instead of embedding vendor-specific instrumentation, open-source developers often provide observability hooks.
-This allows users to write adapters that connect the open-source component to their observability system.
-While this approach provides greater flexibility, it also has its fair share of problems.
-For example, whenever there is a new version of software, users have to notice and update their adapters.
-Moreover, the indirection also increases the overhead, as we have to convert between different telemetry formats.
-
-The last part of the equation is the observability vendors themselves.
-At first glance, vendors appear to be the only ones profiting from the current situation.
-In the past, high-quality instrumentation was a great way to differentiate yourself from the competition.
-Moreover, since developing integrations for loads of pre-existing software is expensive, the observability market had a relatively high barrier to entry.
-With customers shying away from expensive re-instrumentation, established vendors faced less competition and pressure to innovate.
-However, they are also experiencing major pain points.
-The rate at which software is being developed has increased exponentially over the last decade.
-Today's heterogeneous software landscape made it infeasible to maintain instrumentation for every library, framework, and component.
-As soon as you start struggling with supplying instrumentation, customers will start refusing to adopt your product.
-As a result, solutions compete on who can build the best n-to-n format converter instead of investing these resources into creating great analysis tools.
-Another downside is converting data that was generated by foreign sources often leads to a degradation in the quality of telemetry.
-Once data is no longer well-defined, it becomes harder to analyze.
diff --git a/tutorial/content/intro/why_otel/_index.md b/tutorial/content/intro/why_otel/_index.md
new file mode 100644
index 0000000..60527e3
--- /dev/null
+++ b/tutorial/content/intro/why_otel/_index.md
@@ -0,0 +1,45 @@
+---
+title: "Why is OpenTelemetry promising?"
+linktitle: "Why OpenTelemetry"
+draft: false
+weight: 30
+---
+
+
+
+
+At the time of writing, OpenTelemetry is the [second fastest-growing project](https://www.cncf.io/reports/cncf-annual-report-2023/#projects) within the CNCF.
+OpenTelemetry receives so much attention because it promises to be a fundamental shift in the way we produce telemetry.
+It's important to remember that observability is a fairly young discipline.
+In the past, the rate of innovation and conflicts of interest prevented us from defining widely adopted standards for telemetry.
+However, the timing and momentum of OpenTelemetry appear to give it a realistic chance of pushing for standardization of common aspects of telemetry.
+
+#### instrument once, use everywhere
+A key promise of OpenTelemetry is that you *instrument code once and never again* and can *use that instrumentation everywhere*.
+OpenTelemetry recognizes that, should its efforts be successful, it will be a core dependency for many software projects.
+Therefore, its signal specifications follow strict processes to provide [*long-term stability guarantees*](https://opentelemetry.io/docs/specs/otel/versioning-and-stability/).
+Once a signal is declared stable, the promise is that clients will never experience a breaking API change.
+
+#### separate telemetry generation from analysis
+Another core idea of OpenTelemetry is to *separate the mechanisms that produce telemetry from the systems that analyze it*.
+Open and vendor-agnostic instrumentation marks a fundamental *change in the observability business*.
+Instead of pouring resources into building proprietary instrumentation and keeping it up to date, vendors must differentiate themselves through feature-rich analysis platforms with great usability.
+OpenTelemetry *fosters competition*, because users are no longer stuck with the observability solution they chose during development.
+After switching to OpenTelemetry, you can move between different vendors without having to re-instrument your entire system.
+
+#### make open-source software observable by default
+With OpenTelemetry, open-source developers are able to add *native instrumentation to their project without introducing vendor-specific code* that burdens their users.
+The idea is to *make observability a first-class citizen during development*.
+By having software ship with built-in instrumentation, we no longer need elaborate mechanisms to capture and integrate it after the fact.
+
+#### re-shape how we think and use telemetry
+Last (and definitely not least), OpenTelemetry tries to change how we think about and use telemetry.
+Instead of having three separate silos for logs, metrics, and traces, OpenTelemetry follows a paradigm of linking telemetry signals together.
+With context creating touch points between signals, the overall value and usability of telemetry increase drastically.
+For instance, imagine the ability to jump from a conspicuous statistic in a dashboard straight to the related logs.
+Correlated telemetry data helps to reduce the cognitive load on humans operating complex systems.
+Being able to take advantage of linked data will mark a new generation of observability tools.