Add more info to Introduction topic and clean up images (#119)
* Clean up content add diagram

* Update images in deploy topic
clayton-cornell authored Apr 4, 2024
1 parent 0f4ac8e commit 6343f5f
Showing 2 changed files with 25 additions and 43 deletions.
62 changes: 22 additions & 40 deletions docs/sources/introduction/_index.md
@@ -30,46 +30,28 @@ Some of the key features of {{< param "PRODUCT_NAME" >}} include:
* **Security:** {{< param "PRODUCT_NAME" >}} helps you manage authentication credentials and connect to HashiCorp Vaults or Kubernetes clusters to retrieve secrets.
* **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems.

<!--
### Compare {{% param "PRODUCT_NAME" %}} with OpenTelemetry and Prometheus
The following tables compare some of the features of {{< param "PRODUCT_NAME" >}} with OpenTelemetry and Prometheus.
#### Core telemetry
| | Grafana Alloy | OpenTelemetry Collector | Prometheus Agent |
|--------------|--------------------------|-------------------------|------------------|
| **Metrics** | [Prometheus][], [OTel][] | OTel | Prometheus |
| **Logs** | [Loki][], [OTel][] | OTel | No |
| **Traces** | [OTel][] | OTel | No |
| **Profiles** | [Pyroscope][] | Planned | No |
#### **OSS features**
| | Grafana Alloy | OpenTelemetry Collector | Prometheus Agent |
|--------------------------|-------------------|-------------------------|------------------|
| **Kubernetes native** | [Yes][helm chart] | Yes | No |
| **Clustering** | [Yes][clustering] | No | No |
| **Prometheus rules** | [Yes][rules] | No | No |
| **Native Vault support** | [Yes][vault] | No | No |
#### Grafana Cloud solutions
| | Grafana Alloy | OpenTelemetry Collector | Prometheus Agent |
|-------------------------------|----------------------|-------------------------|------------------|
| **Official vendor support** | [Yes][sla] | No | No |
| **Cloud integrations** | Some | No | No |
| **Kubernetes monitoring** | [Yes][helm chart] | No | Yes, custom |
| **Application observability** | [Yes][observability] | Yes | No |
-->
<!--
### BoringCrypto
[BoringCrypto][] is an **EXPERIMENTAL** feature for building {{< param "PRODUCT_NAME" >}}
binaries and images with BoringCrypto enabled. Builds and Docker images for Linux arm64/amd64 are made available.
[BoringCrypto]: https://pkg.go.dev/crypto/internal/boring
-->
## How does {{% param "PRODUCT_NAME" %}} work as an OpenTelemetry collector?

{{< figure src="/media/docs/alloy/flow-diagram-small-alloy.png" alt="Alloy flow diagram" >}}

### Collect

{{< param "PRODUCT_NAME" >}} uses more than 120 components to collect telemetry data from applications, databases, and OpenTelemetry collectors.
{{< param "PRODUCT_NAME" >}} supports collection using multiple ecosystems, including OpenTelemetry and Prometheus.

Telemetry data can be either pushed to {{< param "PRODUCT_NAME" >}}, or {{< param "PRODUCT_NAME" >}} can pull it from your data sources.
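As a sketch of the two collection modes, the hypothetical pipeline below pulls metrics with a `prometheus.scrape` component and accepts pushed OTLP data with an `otelcol.receiver.otlp` component. The scrape address and the downstream components referenced in `forward_to` and `output` are placeholders, assumed to be defined elsewhere in the configuration:

```alloy
// Pull: scrape a Prometheus endpoint on a schedule.
// The target address is a placeholder for your own application.
prometheus.scrape "example" {
  targets    = [{"__address__" = "app.example.com:9090"}]
  forward_to = [prometheus.remote_write.default.receiver]
}

// Push: accept OTLP data sent by instrumented applications.
otelcol.receiver.otlp "example" {
  grpc {}

  output {
    // The exporter named here is assumed to exist elsewhere in the config.
    metrics = [otelcol.exporter.otlp.default.input]
  }
}
```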

### Transform

{{< param "PRODUCT_NAME" >}} processes data and transforms it for sending.

You can use transformations to inject extra metadata into telemetry or filter out unwanted data.
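For example, a minimal sketch of a `prometheus.relabel` component that filters out unwanted series and injects extra metadata. The metric name pattern, label values, and the downstream receiver are illustrative placeholders:

```alloy
prometheus.relabel "example" {
  // Drop a noisy family of series before they are written.
  rule {
    source_labels = ["__name__"]
    regex         = "go_gc_duration_seconds.*"
    action        = "drop"
  }

  // Inject an extra label into everything that passes through.
  rule {
    target_label = "env"
    replacement  = "production"
    action       = "replace"
  }

  // The remote_write component named here is assumed to be defined elsewhere.
  forward_to = [prometheus.remote_write.default.receiver]
}
```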

### Write

{{< param "PRODUCT_NAME" >}} sends data to OpenTelemetry-compatible databases or collectors, the Grafana LGTM stack, or Grafana Cloud.

{{< param "PRODUCT_NAME" >}} can also write alerting rules to compatible databases.
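As an illustration of the write stage, a minimal pair of write components, one for metrics and one for logs. The URLs are placeholders for your own Prometheus-compatible and Loki-compatible endpoints:

```alloy
// Ship metrics to any Prometheus-compatible database.
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}

// Ship logs to a Loki-compatible endpoint.
loki.write "default" {
  endpoint {
    url = "https://loki.example.com/loki/api/v1/push"
  }
}
```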

## Next steps

6 changes: 3 additions & 3 deletions docs/sources/shared/deploy-alloy.md
@@ -17,7 +17,7 @@ This page lists common topologies used for {{% param "PRODUCT_NAME" %}} deployments
Deploying {{< param "PRODUCT_NAME" >}} as a centralized service is recommended for collecting application telemetry.
This topology allows you to use a smaller number of collectors to coordinate service discovery, collection, and remote writing.

![centralized-collection](/media/docs/agent/agent-topologies/centralized-collection.png)
{{< figure src="/media/docs/alloy/collection-diagram-alloy.png" alt="Centralized collection with Alloy">}}

Using this topology requires deploying {{< param "PRODUCT_NAME" >}} instances on separate infrastructure, and making sure that they can discover and reach your applications over the network.
The main predictor for the size of an {{< param "PRODUCT_NAME" >}} deployment is the number of active metrics series it's scraping. A rule of thumb is approximately 10 KB of memory for each series.
@@ -52,7 +52,7 @@ You can also use a Kubernetes Deployment in cases where persistent storage isn't

Deploying one {{< param "PRODUCT_NAME" >}} instance per machine is required for collecting machine-level metrics and logs, such as node_exporter hardware and network metrics or journald system logs.

![daemonset](/media/docs/agent/agent-topologies/daemonset.png)
{{< figure src="/media/docs/alloy/host-diagram-alloy.png" alt="Alloy as a host daemon">}}
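As a sketch of this topology, the hypothetical per-machine configuration below exposes node_exporter-style hardware metrics with `prometheus.exporter.unix` and tails journald logs with `loki.source.journal`. The write components referenced in `forward_to` are placeholders, assumed to be defined elsewhere:

```alloy
// Expose node_exporter-style hardware and OS metrics from this machine.
prometheus.exporter.unix "default" { }

// Scrape the exporter above and forward the samples onward.
prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.default.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// Tail journald system logs from this machine.
loki.source.journal "default" {
  forward_to = [loki.write.default.receiver]
}
```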

Each {{< param "PRODUCT_NAME" >}} instance opens an outgoing connection for each remote endpoint it ships data to.
This can lead to NAT port exhaustion on the egress infrastructure.
@@ -88,7 +88,7 @@ The simplest use case of the host daemon topology is a Kubernetes DaemonSet, and

Deploying {{< param "PRODUCT_NAME" >}} as a container sidecar is only recommended for short-lived applications or specialized {{< param "PRODUCT_NAME" >}} deployments.

![daemonset](/media/docs/agent/agent-topologies/sidecar.png)
{{< figure src="/media/docs/alloy/sidecar-diagram-alloy.png" alt="Alloy as a container sidecar">}}

### Using Kubernetes Pod sidecars

