Commit: copy from agent-modules

Signed-off-by: Weifeng Wang <[email protected]>
Showing 117 changed files with 14,447 additions and 0 deletions.
kubernetes/common/grafana-agent/configs/modules/kubernetes/README.md (73 additions, 0 deletions)
# Kubernetes Modules

## Logs

The following pod annotations are supported (an annotated pod manifest is sketched below the table):

| Annotation | Description |
| :--------------- | :-----------|
| `logs.agent.grafana.com/scrape` | Allow a pod to declare that its logs should be dropped. |
| `logs.agent.grafana.com/tenant` | Allow a pod to override the tenant for its logs. |
| `logs.agent.grafana.com/log-format` | If specified, additional processing is performed to extract details based on the specified format. The value can be a comma-delimited list for cases where a pod has multiple containers. The following formats are currently supported: <ul><li>common-log<li>dotnet<li>istio<li>json<li>klog<li>log4j-json<li>logfmt<li>otel<li>postgres<li>python<li>spring-boot<li>syslog<li>zerolog</ul> |
| `logs.agent.grafana.com/scrub-level` | Boolean for whether the level should be dropped from the log message (as it is already a label). |
| `logs.agent.grafana.com/scrub-timestamp` | Boolean for whether the timestamp should be dropped from the log message (as it is metadata). |
| `logs.agent.grafana.com/scrub-nulls` | Boolean for whether keys with null values should be dropped from JSON, reducing the size of the log message. |
| `logs.agent.grafana.com/scrub-empties` | Boolean for whether keys with empty values (`"", [], {}`) should be dropped from JSON, reducing the size of the log message. |
| `logs.agent.grafana.com/embed-pod` | Boolean for whether to inject the name of the pod at the end of the log message, i.e. `__pod=agent-logs-grafana-agent-jrqms`. |
| `logs.agent.grafana.com/drop-info` | Boolean for whether info messages should be dropped (default is `false`); a pod can override this temporarily or permanently. |
| `logs.agent.grafana.com/drop-debug` | Boolean for whether debug messages should be dropped (default is `true`); a pod can override this temporarily or permanently. |
| `logs.agent.grafana.com/drop-trace` | Boolean for whether trace messages should be dropped (default is `true`); a pod can override this temporarily or permanently. |
| `logs.agent.grafana.com/mask-ssn` | Boolean for whether to mask SSNs in the log line; if true the data will be masked as `*SSN*salt*`. |
| `logs.agent.grafana.com/mask-credit-card` | Boolean for whether to mask credit card numbers in the log line; if true the data will be masked as `*credit-card*salt*`. |
| `logs.agent.grafana.com/mask-email` | Boolean for whether to mask email addresses in the log line; if true the data will be masked as `*email*salt*`. |
| `logs.agent.grafana.com/mask-ipv4` | Boolean for whether to mask IPv4 addresses in the log line; if true the data will be masked as `*ipv4*salt*`. |
| `logs.agent.grafana.com/mask-ipv6` | Boolean for whether to mask IPv6 addresses in the log line; if true the data will be masked as `*ipv6*salt*`. |
| `logs.agent.grafana.com/mask-phone` | Boolean for whether to mask phone numbers in the log line; if true the data will be masked as `*phone*salt*`. |
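
For example, a pod can combine several of these annotations to control how its logs are processed. A minimal sketch; the pod name, image, and chosen values are illustrative, not part of the module:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout-service                        # illustrative name
  annotations:
    logs.agent.grafana.com/scrape: "true"       # collect logs for this pod
    logs.agent.grafana.com/log-format: "json"   # parse the container output as JSON
    logs.agent.grafana.com/scrub-nulls: "true"  # drop null-valued keys to shrink messages
    logs.agent.grafana.com/drop-debug: "false"  # keep debug lines for this pod
    logs.agent.grafana.com/mask-email: "true"   # mask email addresses as *email*salt*
spec:
  containers:
    - name: app
      image: registry.example.com/checkout:1.2.3   # illustrative image
```
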
---

## Metrics

The following pod annotations are supported for gathering metrics from pods and endpoints (an annotated pod manifest is sketched below the table):

| Annotation | Description |
| :--------------- | :-----------|
| `metrics.agent.grafana.com/scrape` <br> `prometheus.io/scrape` | Boolean for whether to scrape the endpoint / pod for metrics. *Note*: If a pod exposes multiple ports, all ports are scraped for metrics. To limit this behavior, specify the port annotation to restrict the scrape to a single port. If the label `prometheus.io/service-monitor` or `metrics.agent.grafana.com/service-monitor` is set to `"false"`, that is interpreted as `scrape: "false"`. |
| `metrics.agent.grafana.com/scheme` <br> `prometheus.io/scheme` | The default scraping scheme is `http`; this can be specified as a single value which overrides the scheme used for all ports attached to the endpoint / pod. |
| `metrics.agent.grafana.com/path` <br> `prometheus.io/path` | The default path to scrape is `/metrics`; this can be specified as a single value which overrides the scrape path used for all ports attached to the endpoint / pod. |
| `metrics.agent.grafana.com/port` <br> `prometheus.io/port` | The default port to scrape is the endpoint port; this can be specified as a single value which overrides the scrape port used for all ports attached to the endpoint. Note that even if an endpoint has multiple targets, the relabel_config targets are deduped before scraping. |
| `metrics.agent.grafana.com/tenant` | The tenant the metrics should be sent to. This does not necessarily have to be the actual tenantId; it can be a friendly name that is simply used to determine whether the metrics should be gathered for the current tenant. |
| `metrics.agent.grafana.com/job` | The job label value to use when collecting the metrics. This is useful because endpoints / pods are scraped automatically and separate jobs do not have to be defined, yet it is common to use an integration or community project where rules / dashboards are provided for you. Often these provided assets use hard-coded values for the job label, i.e. `...{job="integrations/kubernetes/cadvisor"...}` or `...{job="kube-state-metrics"...}`; setting this annotation to that value allows the provided assets to work out of the box. |
| `metrics.agent.grafana.com/interval` <br> `prometheus.io/interval` | The default scrape interval is `1m`; this can be specified as a single value which overrides the scrape interval used for all ports attached to the endpoint / pod. |
| `metrics.agent.grafana.com/timeout` <br> `prometheus.io/timeout` | The default scrape timeout is `10s`; this can be specified as a single value which overrides the scrape timeout used for all ports attached to the endpoint / pod. |
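
A minimal sketch of a pod that opts into scraping on a single port; the names, image, and job value are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api                                       # illustrative name
  annotations:
    metrics.agent.grafana.com/scrape: "true"               # scrape this pod
    metrics.agent.grafana.com/port: "8080"                 # limit scraping to a single port
    metrics.agent.grafana.com/path: "/metrics"
    metrics.agent.grafana.com/interval: "30s"
    metrics.agent.grafana.com/job: "payments/payments-api" # match the job label your dashboards expect
spec:
  containers:
    - name: app
      image: registry.example.com/payments:2.3.1           # illustrative image
      ports:
        - containerPort: 8080
          name: http-metrics
```
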
### Probes (Blackbox)

The following service / ingress annotations are supported for probes and the gathering of metrics from blackbox exporter (an annotated service manifest is sketched below the table):

| Annotation | Description |
| :--------------- | :-----------|
| `probes.agent.grafana.com/probe` <br> `prometheus.io/probe` | Boolean for whether to probe the service / ingress for metrics. *Note*: If a pod exposes multiple ports, all ports are probed. To limit this behavior, specify the port annotation to restrict the probe to a single port. |
| `probes.agent.grafana.com/port` <br> `prometheus.io/port` | The default port to probe is the service / ingress port; this can be specified as a single value which overrides the probe port used for all ports attached to the service / ingress. Note that even if a service / ingress has multiple targets, the relabel_config targets are deduped before scraping. |
| `probes.agent.grafana.com/path` <br> `prometheus.io/path` | The default path to probe is `/metrics`; this can be specified as a single value which overrides the probe path used for all ports attached to the service / ingress. |
| `probes.agent.grafana.com/module` <br> `prometheus.io/module` | The name of the blackbox module to use for probing the resource. The default value is "unknown", as these values should be determined from your blackbox-exporter configuration file. |
| `probes.agent.grafana.com/tenant` | The tenant the metrics should be sent to. This does not necessarily have to be the actual tenantId; it can be a friendly name that is simply used to determine whether the metrics should be gathered for the current tenant. |
| `probes.agent.grafana.com/job` | The job label value to use when collecting the metrics. This is useful because services / ingresses are probed automatically and separate jobs do not have to be defined, yet it is common to use an integration or community project where rules / dashboards are provided for you. Often these provided assets use hard-coded values for the job label, i.e. `...{job="blackbox-exporter"...}`; setting this annotation to that value allows the provided assets to work out of the box. |
| `probes.agent.grafana.com/interval` | The default probe interval is `1m`; this can be specified as a single value which overrides the probe interval used for all ports attached to the service / ingress. |
| `probes.agent.grafana.com/timeout` | The default probe timeout is `10s`; this can be specified as a single value which overrides the probe timeout used for all ports attached to the service / ingress. |
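
A minimal sketch of a service wired up for blackbox probing; the service name and module value are illustrative, and the module must exist in your blackbox-exporter configuration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: storefront                                # illustrative name
  annotations:
    probes.agent.grafana.com/probe: "true"        # probe this service
    probes.agent.grafana.com/module: "http_2xx"   # must exist in your blackbox-exporter config
    probes.agent.grafana.com/path: "/healthz"
    probes.agent.grafana.com/interval: "1m"
spec:
  selector:
    app: storefront
  ports:
    - name: http
      port: 80
      targetPort: 8080
```
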
### Probes (json-exporter)

The following service / ingress annotations are supported for probes and the gathering of metrics from json-exporter (an annotated ingress manifest is sketched below the table):

| Annotation | Description |
| :--------------- | :-----------|
| `json.agent.grafana.com/probe` | Boolean for whether to probe the service / ingress for metrics. *Note*: If a pod exposes multiple ports, all ports are probed. To limit this behavior, specify the port annotation to restrict the probe to a single port. |
| `json.agent.grafana.com/port` <br> `prometheus.io/port` | The default port to probe is the service / ingress port; this can be specified as a single value which overrides the probe port used for all ports attached to the service / ingress. Note that even if a service / ingress has multiple targets, the relabel_config targets are deduped before scraping. |
| `json.agent.grafana.com/path` | The default path to probe is `/metrics`; this can be specified as a single value which overrides the probe path used for all ports attached to the service / ingress. |
| `json.agent.grafana.com/module` | The name of the json-exporter module to use for probing the resource. The default value is "unknown", as these values should be determined from your json-exporter configuration file. |
| `json.agent.grafana.com/tenant` | The tenant the metrics should be sent to. This does not necessarily have to be the actual tenantId; it can be a friendly name that is simply used to determine whether the metrics should be gathered for the current tenant. |
| `json.agent.grafana.com/job` | The job label value to use when collecting the metrics. This is useful because services / ingresses are probed automatically and separate jobs do not have to be defined, yet it is common to use an integration or community project where rules / dashboards are provided for you. Often these provided assets use hard-coded values for the job label, i.e. `...{job="json-exporter"...}`; setting this annotation to that value allows the provided assets to work out of the box. |
| `json.agent.grafana.com/interval` | The default probe interval is `1m`; this can be specified as a single value which overrides the probe interval used for all ports attached to the service / ingress. |
| `json.agent.grafana.com/timeout` | The default probe timeout is `10s`; this can be specified as a single value which overrides the probe timeout used for all ports attached to the service / ingress. |
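
A minimal sketch of an ingress annotated for json-exporter probing; the host, names, and module value are illustrative, and the module must exist in your json-exporter configuration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders                                    # illustrative name
  annotations:
    json.agent.grafana.com/probe: "true"          # probe this ingress
    json.agent.grafana.com/module: "order-stats"  # must exist in your json-exporter config
    json.agent.grafana.com/path: "/api/stats"     # endpoint returning the JSON to scrape
    json.agent.grafana.com/interval: "5m"
spec:
  rules:
    - host: orders.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
```
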
See [/example/kubernetes/metrics](../../example/kubernetes/metrics/) for working example configurations.
kubernetes/common/grafana-agent/configs/modules/kubernetes/logs/all.river (198 additions, 0 deletions)
/*
Module: log-all
Description: Wrapper module to include all kubernetes logging modules and use cri parsing
*/
argument "forward_to" {
  // comment = "Must be a list(LogsReceiver) where collected logs should be forwarded to"
  optional = false
}

argument "tenant" {
  // comment = "The tenant to filter logs to. This does not have to be the tenantId, this is the value to look for in the logs.agent.grafana.com/tenant annotation, and this can be a regex."
  optional = true
  default = ".*"
}

argument "keep_labels" {
  // comment = "List of labels to keep before the log message is written to Loki"
  optional = true
  default = [
    "app",
    "cluster",
    "component",
    "container",
    "deployment",
    "env",
    "filename",
    "instance",
    "job",
    "level",
    "log_type",
    "namespace",
    "region",
    "service",
    "squad",
    "team",
  ]
}

argument "git_repo" {
  optional = true
  default = coalesce(env("GIT_REPO"), "https://github.com/grafana/agent-modules.git")
}

argument "git_rev" {
  optional = true
  default = coalesce(env("GIT_REV"), env("GIT_REVISION"), env("GIT_BRANCH"), "main")
}

argument "git_pull_freq" {
  // comment = "How often to pull the git repo, the default is 0s which means never pull"
  optional = true
  default = "0s"
}

module.git "log_targets" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/targets/logs-from-worker.river"

  arguments {
    forward_to = [module.git.log_formats_all.exports.process.receiver]
    tenant = argument.tenant.value
    git_repo = argument.git_repo.value
    git_rev = argument.git_rev.value
    git_pull_freq = argument.git_pull_freq.value
  }
}

module.git "log_formats_all" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/log-formats/all.river"

  arguments {
    forward_to = [module.git.log_level_default.exports.process.receiver]
    git_repo = argument.git_repo.value
    git_rev = argument.git_rev.value
    git_pull_freq = argument.git_pull_freq.value
  }
}

module.git "log_level_default" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/labels/log-level.river"

  arguments {
    forward_to = [module.git.label_normalize_filename.exports.process.receiver]
  }
}

module.git "label_normalize_filename" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/labels/normalize-filename.river"

  arguments {
    // here we fork, one branch goes to the log level module, the other goes to the metrics module
    // this is because we need to reduce the labels on the pre-metrics but they are still necessary in
    // downstream modules
    forward_to = [
      module.git.pre_process_metrics.exports.process.receiver,
      module.git.drop_levels.exports.process.receiver,
    ]
  }
}

module.git "pre_process_metrics" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/metrics/pre-process-bytes-lines.river"

  arguments {
    forward_to = [module.git.drop_levels.exports.process.receiver]
    keep_labels = argument.keep_labels.value
  }
}

module.git "drop_levels" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/drops/levels.river"

  arguments {
    forward_to = [module.git.scrub_all.exports.process.receiver]
    git_repo = argument.git_repo.value
    git_rev = argument.git_rev.value
    git_pull_freq = argument.git_pull_freq.value
  }
}

module.git "scrub_all" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/scrubs/all.river"

  arguments {
    forward_to = [module.git.embed_pod.exports.process.receiver]
    git_repo = argument.git_repo.value
    git_rev = argument.git_rev.value
    git_pull_freq = argument.git_pull_freq.value
  }
}

module.git "embed_pod" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/embed/pod.river"

  arguments {
    forward_to = [module.git.mask_all.exports.process.receiver]
  }
}

module.git "mask_all" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/masks/all.river"

  arguments {
    forward_to = [module.git.label_keep.exports.process.receiver]
    git_repo = argument.git_repo.value
    git_rev = argument.git_rev.value
    git_pull_freq = argument.git_pull_freq.value
  }
}

module.git "label_keep" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/labels/keep-labels.river"

  arguments {
    forward_to = [module.git.post_process_metrics.exports.process.receiver]
    keep_labels = argument.keep_labels.value
  }
}

module.git "post_process_metrics" {
  repository = argument.git_repo.value
  revision = argument.git_rev.value
  pull_frequency = argument.git_pull_freq.value
  path = "modules/kubernetes/logs/metrics/post-process-bytes-lines.river"

  arguments {
    forward_to = argument.forward_to.value
  }
}
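
The `git_repo` and `git_rev` arguments default from the `GIT_REPO` and `GIT_REV` (or `GIT_REVISION` / `GIT_BRANCH`) environment variables, so the module source can be pinned from the agent's deployment rather than by editing this file. A minimal sketch of a container env block, assuming a grafana-agent container; the container name and image tag are illustrative:

```yaml
# Container snippet from a grafana-agent workload; names and tag are illustrative.
containers:
  - name: grafana-agent
    image: grafana/agent:v0.39.2   # illustrative tag
    env:
      - name: GIT_REPO
        value: "https://github.com/grafana/agent-modules.git"
      - name: GIT_REV
        value: "main"              # or a tag / commit SHA to pin the modules
```
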
kubernetes/common/grafana-agent/configs/modules/kubernetes/logs/drops/level-debug.river (28 additions, 0 deletions)
/*
Module: drop-debug
Description: The default behavior is to drop debug level messaging automatically, however, debug level
             messages can still be logged by adding the annotation:

             logs.agent.grafana.com/drop-debug: false
*/
argument "forward_to" {
  // comment = "Must be a list(LogsReceiver) where collected logs should be forwarded to"
  optional = false
}

export "process" {
  value = loki.process.drop_debug
}

loki.process "drop_debug" {
  forward_to = argument.forward_to.value

  // check logs.agent.grafana.com/drop-debug annotation, if not set or set to true then drop
  // any log message with level=debug
  stage.match {
    pipeline_name = "pipeline for annotation || logs.agent.grafana.com/drop-debug: true"
    selector = "{level=~\"(?i)debug?\",logs_agent_grafana_com_drop_debug!=\"false\"}"
    action = "drop"
    drop_counter_reason = "debug"
  }
}
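
As the module comment notes, a pod can opt back into debug logging through the annotation. A minimal, illustrative metadata snippet; the pod name is hypothetical:

```yaml
# metadata snippet for a pod spec; the name is illustrative
metadata:
  name: verbose-worker
  annotations:
    # override the default: keep this pod's level=debug lines
    logs.agent.grafana.com/drop-debug: "false"
```
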
kubernetes/common/grafana-agent/configs/modules/kubernetes/logs/drops/level-info.river (28 additions, 0 deletions)
/*
Module: drop-info
Description: The default behavior is to keep info level messaging automatically, however, info level
             messages can be dropped by adding the annotation:

             logs.agent.grafana.com/drop-info: true
*/
argument "forward_to" {
  // comment = "Must be a list(LogsReceiver) where collected logs should be forwarded to"
  optional = false
}

export "process" {
  value = loki.process.drop_info
}

loki.process "drop_info" {
  forward_to = argument.forward_to.value

  // check logs.agent.grafana.com/drop-info annotation, if set to true then drop
  // any log message with level=info
  stage.match {
    pipeline_name = "pipeline for annotation || logs.agent.grafana.com/drop-info: true"
    selector = "{level=~\"(?i)info?\",logs_agent_grafana_com_drop_info=\"true\"}"
    action = "drop"
    drop_counter_reason = "info"
  }
}
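
Conversely to the debug module, a pod opts in to dropping its info lines with the annotation from the module comment. A minimal, illustrative metadata snippet; the pod name is hypothetical:

```yaml
# metadata snippet for a pod spec; the name is illustrative
metadata:
  name: chatty-batch-job
  annotations:
    # opt in: drop this pod's level=info lines
    logs.agent.grafana.com/drop-info: "true"
```
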
kubernetes/common/grafana-agent/configs/modules/kubernetes/logs/drops/level-trace.river (28 additions, 0 deletions)
/*
Module: drop-trace
Description: The default behavior is to drop trace level messaging automatically, however, trace level
             messages can still be logged by adding the annotation:

             logs.agent.grafana.com/drop-trace: false
*/
argument "forward_to" {
  // comment = "Must be a list(LogsReceiver) where collected logs should be forwarded to"
  optional = false
}

export "process" {
  value = loki.process.drop_trace
}

loki.process "drop_trace" {
  forward_to = argument.forward_to.value

  // check logs.agent.grafana.com/drop-trace annotation, if not set or set to true then drop
  // any log message with level=trace
  stage.match {
    pipeline_name = "pipeline for annotation || logs.agent.grafana.com/drop-trace: true"
    selector = "{level=~\"(?i)trace?\",logs_agent_grafana_com_drop_trace!=\"false\"}"
    action = "drop"
    drop_counter_reason = "trace"
  }
}