
Commit

Merge branch 'master' into jack-edmonds-dd/web-frameworks-rename
jack-edmonds-dd authored Jan 30, 2025
2 parents 896b938 + 1e8b36b commit 46f9207
Showing 190 changed files with 20,056 additions and 4,476 deletions.
294 changes: 197 additions & 97 deletions config/_default/menus/main.en.yaml

Large diffs are not rendered by default.

11 changes: 4 additions & 7 deletions content/en/dashboards/widgets/image.md
@@ -10,19 +10,16 @@ further_reading:
text: "Building Dashboards using JSON"
---

The image widget allows you to embed an image on your dashboard. An image can be a PNG, JPG, or animated GIF, hosted where it can be accessed by URL.
The image widget allows you to embed an image on your dashboard. An image can be uploaded to Datadog or hosted where it can be accessed by URL. PNG, JPG, and GIF file formats are supported.

{{< img src="dashboards/widgets/image/image.mp4" alt="Image" video="true" style="width:80%;" >}}

## Setup

{{< img src="dashboards/widgets/image/image_setup.png" alt="Image setup" style="width:80%;">}}
{{< img src="dashboards/widgets/image/image_setup2.png" alt="Image setup" style="width:80%;">}}

1. Enter your image URL.
2. Choose an appearance:
* Zoom image to cover whole tile
* Fit image on tile
* Center image on tile
1. Upload your image or enter your image URL.
2. Select a preset template or customize the display options.

## API

2 changes: 1 addition & 1 deletion content/en/logs/guide/azure-logging-guide.md
@@ -162,7 +162,7 @@ If you already have a function app configured with an Event Hub connection string
2. In the **Instance Details** section, configure the following settings:
a. Select the **Code** radio button.
b. For **Runtime stack**, select `Node.js`.
c. For **Version**, select `20 LTS`.
c. For **Version**, select `18 LTS`.
3. Configure other settings as desired.
4. Click **Review + create** to validate the resource. If validation is successful, click **Create**.

2 changes: 2 additions & 0 deletions content/en/monitors/configuration/_index.md
@@ -270,6 +270,8 @@ A `Multi Alert` monitor triggers individual notifications for each entity in a m

For example, when setting up a monitor to notify you if the P99 latency, aggregated by service, exceeds a certain threshold, you would receive a **separate** alert for each individual service whose P99 latency exceeded the alert threshold. This can be useful for identifying and addressing specific instances of system or application issues. It allows you to track problems on a more granular level.

##### Notification grouping

When monitoring a large group of entities, multi alerts can lead to noisy monitors. To mitigate this, customize which dimensions trigger alerts. This reduces the noise and allows you to focus on the alerts that matter most. For instance, suppose you are monitoring the average CPU usage of all your hosts. If you group your query by `service` and `host` but only want one alert for each `service` that meets the threshold, remove the `host` attribute from your multi alert options to reduce the number of notifications that are sent (see the query sketch below the diagram).

{{< img src="/monitors/create/multi-alert-aggregated.png" alt="Diagram of how notifications are sent when set to specific dimensions in multi alerts" style="width:90%;">}}
18 changes: 9 additions & 9 deletions content/en/monitors/types/cloud_cost.md
@@ -37,7 +37,7 @@ Cloud Cost monitors are evaluated with a 48 hour delayed evaluation window, beca

To create a Cloud Cost monitor in Datadog, use the main navigation: [**Monitors** --> **New Monitor** --> **Cloud Cost**][4].

You can also create Cloud Cost monitors from the [Cloud Cost Explorer][2]. Click **More...** next to the Options button and select **Create monitor**.

{{< img src="/monitors/monitor_types/cloud_cost/explorer.png" alt="Option to create a monitor from the Cloud Cost Explorer page" style="width:100%;" >}}

@@ -73,14 +73,14 @@ You can select from the following monitor types.

| Cost Type | Description | Usage Examples |
| --- | ----------- | ----------- |
| Cost Anomalies | Detect anomalies by comparing current costs to historical data, using a defined lookback period. Incomplete days are excluded from analysis to ensure accuracy. Anomaly monitors require at least 4 months of cloud cost data to evaluate since historical data is required to train the algorithm. | Alert if 3 days from the past 30 days show significant cost anomalies compared to historical data. |
| Cost Anomalies | Detect anomalies by comparing current costs to historical data, using a defined lookback period. Incomplete days are excluded from analysis to ensure accuracy. Anomaly monitors require at least 1 month of cloud cost data to evaluate since historical data is required to train the algorithm. | Alert if 3 days from the past 30 days show significant cost anomalies compared to historical data. |

{{% /tab %}}
{{< /tabs >}}

## Specify which costs to track

Any cost type or metric reporting to Datadog is available for monitors. You can use custom metrics or observability metrics alongside a cost metric to monitor unit economics.

| Step | Required | Default | Example |
|-----------------------------------|----------|----------------------|---------------------|
@@ -89,35 +89,35 @@ Any cost type or metric reporting to Datadog is available for monitors. You can
| Group by | No | Everything | `aws_availability_zone` |
| Add observability metric | No | `system.cpu.user` | `aws.s3.all_requests` |
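
For example, as a unit-economics sketch, you could divide a cost metric by a request metric to track cost per S3 request. The `aws.cost.amortized` metric name, the `aws_product` tag, and support for a two-query formula are assumptions here for illustration:

```
sum:aws.cost.amortized{aws_product:s3} / sum:aws.s3.all_requests{*}
```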

Use the editor to define the cost types or exports.

{{< img src="monitors/monitor_types/cloud_cost/ccm_metrics_source.png" alt="Cloud Cost and Metrics data source options for specifying which costs to track" style="width:100%;" >}}

For more information, see the [Cloud Cost Management documentation][1].

## Set alert conditions

{{< tabs >}}
{{% tab "Changes" %}}

If you are using the **Cost Changes** monitor type, you can trigger an alert when the cost `increases` or `decreases` by more than the defined threshold. The threshold can be set to either a **Percentage Change** or a **Dollar Amount**.

If you are using **Percentage Change**, you can filter out changes that fall below a certain dollar threshold. For example, the monitor alerts on any cost change above 5% that is also above $500.
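
As an illustration with made-up numbers: if last week's cost was $9,000 and this week's is $9,600, the change is roughly 6.7% and $600, so both conditions are met and the monitor alerts. A 6% increase that amounts to only $300 stays below the $500 filter and does not trigger an alert.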

{{% /tab %}}
{{% tab "Threshold" %}}

If you are using the **Cost Threshold** monitor type, you can trigger an alert when the cloud cost is `above`, `below`, `above or equal`, or `below or equal to` a threshold.

{{% /tab %}}
{{% tab "Forecast" %}}

If you are using the **Cost Forecast** monitor type, you can trigger an alert when the cloud cost is `above`, `below`, `above or equal`, `below or equal to`, `equal to`, or `not equal to` a threshold.

{{% /tab %}}
{{% tab "Anomalies" %}}

If you are using the **Cost Anomalies** monitor type, you can trigger an alert if the observed cost deviates from historical data by being `above`, `below`, or `above or below` a threshold for any provider and service.

The `agile` [anomaly algorithm][101] is used with two bounds and monthly seasonality.

61 changes: 60 additions & 1 deletion content/en/observability_pipelines/destinations/_index.md
@@ -29,12 +29,70 @@ Select and set up your destinations when you [set up a pipeline][1]. This is ste
{{< nextlink href="observability_pipelines/destinations/google_chronicle" >}}Google Chronicle{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/google_cloud_storage" >}}Google Cloud Storage{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/new_relic" >}}New Relic{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/microsoft_sentinel" >}}Microsoft Sentinel{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/opensearch" >}}OpenSearch{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/syslog" >}}rsyslog or syslog-ng{{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/sentinelone" >}} SentinelOne {{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/splunk_hec" >}}Splunk HTTP Event Collector (HEC){{< /nextlink >}}
{{< nextlink href="observability_pipelines/destinations/sumo_logic_hosted_collector" >}}Sumo Logic Hosted Collector{{< /nextlink >}}
{{< /whatsnext >}}

## Template syntax

Logs are often stored in separate indexes based on log data, such as the service or environment the logs are coming from or another log attribute. In Observability Pipelines, you can use template syntax to route your logs to different indexes based on specific log fields.

When the Observability Pipelines Worker cannot resolve the field with the template syntax, the Worker defaults to a specified behavior for that destination. For example, if you are using the template `{{application_id}}` for the Amazon S3 destination's **Prefix** field, but there isn't an `application_id` field in the log, the Worker creates a folder called `OP_UNRESOLVED_TEMPLATE_LOGS/` and publishes the logs there.
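
For example, a minimal sketch of this behavior (the log fields shown are hypothetical):

```
# Prefix template configured on the Amazon S3 destination:
/application_id={{ application_id }}/

# Incoming log event with no application_id field:
{"message": "GET /health 200", "service": "web-store"}

# The Worker cannot resolve {{ application_id }}, so it writes this event
# under the OP_UNRESOLVED_TEMPLATE_LOGS/ folder instead.
```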

The following table lists the destinations and fields that support template syntax, and what happens when the Worker cannot resolve the field:

| Destination | Fields that support template syntax | Behavior when the field cannot be resolved |
| ----------------- | -------------------------------------| -----------------------------------------------------------------------------------------------|
| Amazon OpenSearch | Index | The Worker creates an index named `datadog-op` and sends the logs there. |
| Amazon S3 | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and sends the logs there. |
| Azure Blob | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and sends the logs there. |
| Elasticsearch | Source type | The Worker creates an index named `datadog-op` and sends the logs there. |
| Google Chronicle | Log type | The Worker defaults to the `vector_dev` log type. |
| Google Cloud Storage | Prefix | The Worker creates a folder named `OP_UNRESOLVED_TEMPLATE_LOGS/` and sends the logs there. |
| OpenSearch | Index | The Worker creates an index named `datadog-op` and sends the logs there. |
| Splunk HEC | Index<br>Source type | The Worker sends the logs to the default index configured in Splunk. |

#### Example

If you want to route logs based on the log's application ID field (for example, `application_id`) to the Amazon S3 destination, use the event fields syntax in the **Prefix to apply to all object keys** field.

{{< img src="observability_pipelines/amazon_s3_prefix.png" alt="The Amazon S3 destination showing the prefix field using the event fields syntax /application_id={{ application_id }}/" style="width:40%;" >}}

### Syntax

#### Event fields

Use `{{ <field_name> }}` to access individual log event fields. For example:

```
{{ application_id }}
```

#### Strftime specifiers

Use [strftime specifiers][3] for the date and time. For example:

```
year=%Y/month=%m/day=%d
```

#### Escape characters

Prefix a character with `\` to escape the character. This example escapes the event field syntax:

```
\{{ field_name }}
```

This example escapes the strftime specifiers:

```
year=\%Y/month=\%m/day=\%d/
```
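
Putting these together, assuming event fields and strftime specifiers can be combined in a single value and that `service` and `env` exist on your log events, a prefix could look like:

```
service={{ service }}/env={{ env }}/year=%Y/month=%m/day=%d/
```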

## Event batching

@@ -57,4 +115,5 @@ If the destination receives 3 events within 2 seconds, it flushes a batch with 2
{{% observability_pipelines/destination_batching %}}

[1]: /observability_pipelines/set_up_pipelines/
[2]: https://app.datadoghq.com/observability-pipelines
[3]: https://docs.rs/chrono/0.4.19/chrono/format/strftime/index.html#specifiers
@@ -0,0 +1,47 @@
---
title: Amazon Security Lake Destination
disable_toc: false
---

Use Observability Pipelines' Amazon Security Lake destination to send logs to Amazon Security Lake.

## Prerequisites

You need to do the following before setting up the Amazon Security Lake destination:

{{% observability_pipelines/prerequisites/amazon_security_lake %}}

## Setup

Set up the Amazon Security Lake destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

### Set up the destination

{{% observability_pipelines/destination_settings/amazon_security_lake %}}

### Set the environment variables

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/amazon_security_lake %}}

## AWS Authentication

{{% observability_pipelines/aws_authentication/amazon_security_lake/intro %}}

{{% observability_pipelines/aws_authentication/instructions %}}

### Permissions

{{% observability_pipelines/aws_authentication/amazon_security_lake/permissions %}}

## How the destination works

### Event batching

A batch of events is flushed when one of these parameters is met. See [event batching][2] for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|----------------|-----------------|---------------------|
| TKTK | TKTK | TKTK |

[1]: https://app.datadoghq.com/observability-pipelines
[2]: /observability_pipelines/destinations/#event-batching
@@ -0,0 +1,31 @@
---
title: Microsoft Sentinel Destination
disable_toc: false
---

Use Observability Pipelines' Microsoft Sentinel destination to send logs to Microsoft Sentinel.

## Setup

Set up the Microsoft Sentinel destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

### Set up the destination

{{% observability_pipelines/destination_settings/microsoft_sentinel %}}

### Set the environment variables

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/microsoft_sentinel %}}

## How the destination works

### Event batching

A batch of events is flushed when one of these parameters is met. See [event batching][2] for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|----------------|-----------------|---------------------|
| None | 10,000,000 | 1 |
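
Reading this table: there is no event-count limit, so the Worker flushes a batch once the buffered events reach 10,000,000 bytes, or after 1 second has elapsed, whichever comes first.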

[1]: https://app.datadoghq.com/observability-pipelines
[2]: /observability_pipelines/destinations/#event-batching
41 changes: 41 additions & 0 deletions content/en/observability_pipelines/destinations/sentinelone.md
@@ -0,0 +1,41 @@
---
title: SentinelOne Destination
disable_toc: false
---

Use Observability Pipelines' SentinelOne destination to send logs to SentinelOne.

## Setup

Set up the SentinelOne destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

### Set up the destination

{{% observability_pipelines/destination_settings/sentinelone %}}

### Set the environment variables

{{% observability_pipelines/configure_existing_pipelines/destination_env_vars/sentinelone %}}

## View logs in a SentinelOne cluster

After you've set up the pipeline to send logs to the SentinelOne destination, you can view the logs in a SentinelOne cluster:

1. Log into the [S1 console][2].
2. Navigate to the Singularity Data Lake (SDL) **Search** page. To access it from the console, click **Visibility** in the left menu to go to SDL, and make sure you are on the **Search** tab.
3. Make sure the filter next to the search bar is set to **All Data**.
4. The page displays the logs you sent from Observability Pipelines to SentinelOne.

## How the destination works

### Event batching

A batch of events is flushed when one of these parameters is met. See [event batching][3] for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|----------------|-----------------|---------------------|
| None | 1,000,000 | 1 |

[1]: https://app.datadoghq.com/observability-pipelines
[2]: https://usea1-partners.sentinelone.net/login
[3]: /observability_pipelines/destinations/#event-batching
1 change: 1 addition & 0 deletions content/en/observability_pipelines/processors/_index.md
@@ -38,6 +38,7 @@ Use Observability Pipelines' processors to parse, structure, and enrich your log
{{< nextlink href="observability_pipelines/processors/quota" >}}Quota{{< /nextlink >}}
{{< nextlink href="observability_pipelines/processors/reduce" >}}Reduce{{< /nextlink >}}
{{< nextlink href="observability_pipelines/processors/sample" >}}Sample{{< /nextlink >}}
{{< nextlink href="observability_pipelines/processors/remap_ocsf" >}}Remap to OCSF{{< /nextlink >}}
{{< nextlink href="observability_pipelines/processors/sensitive_data_scanner" >}}Sensitive Data Scanner{{< /nextlink >}}
{{< /whatsnext >}}

8 changes: 8 additions & 0 deletions content/en/observability_pipelines/processors/remap_ocsf.md
@@ -0,0 +1,8 @@
---
title: Remap to OCSF Processor
disable_toc: false
---

{{% observability_pipelines/processors/remap_ocsf %}}

{{% observability_pipelines/processors/filter_syntax %}}
@@ -5,4 +5,15 @@ disable_toc: false

{{% observability_pipelines/processors/sensitive_data_scanner %}}

<!-- {{% collapse-content title="Add rules from the library" level="h5" %}}
{{% observability_pipelines/processors/sds_library_rules %}}
{{% /collapse-content %}}
{{% collapse-content title="Add a custom rule" level="h5" %}}
{{% observability_pipelines/processors/sds_custom_rules %}}
{{% /collapse-content %}} -->

{{% observability_pipelines/processors/filter_syntax %}}
@@ -17,11 +17,14 @@ Use Observability Pipelines to route ingested logs to a cloud storage solution (

Select a source to get started:

<!-- - [Amazon Data Firehose][12] -->
- [Amazon S3][11]
- [Datadog Agent][1]
- [Fluentd or Fluent Bit][2]
- [Google Pub/Sub][3]
- [HTTP Client][4]
- [HTTP Server][5]
- [Kafka][13]
- [Logstash][6]
- [Splunk HTTP Event Collector (HEC)][7]
- [Splunk Heavy or Universal Forwarders (TCP)][8]
@@ -38,3 +41,6 @@ Select a source to get started:
[8]: /observability_pipelines/archive_logs/splunk_tcp
[9]: /observability_pipelines/archive_logs/sumo_logic_hosted_collector
[10]: /observability_pipelines/archive_logs/syslog
[11]: /observability_pipelines/set_up_pipelines/archive_logs/amazon_s3
[12]: /observability_pipelines/set_up_pipelines/archive_logs/amazon_data_firehose
[13]: /observability_pipelines/set_up_pipelines/archive_logs/kafka