internal: rename agent references to Alloy #86

Merged: 2 commits, Mar 27, 2024
4 changes: 2 additions & 2 deletions CODEOWNERS
@@ -1,8 +1,8 @@
# The CODEOWNERS file is used to define ownership of individuals or teams
-# outside of the core set of Grafana Agent maintainers.
+# outside of the core set of Grafana Alloy maintainers.
#
# If a directory is not listed here, it is assumed to be owned by the
-# @grafana/grafana-agent-maintainers; they are not explicitly listed as
+# @grafana/grafana-alloy-maintainers; they are not explicitly listed as
# CODEOWNERS as a GitHub project board is used instead for PR tracking, which
# helps reduce notification noise of the members of that team.

2 changes: 1 addition & 1 deletion Dockerfile
@@ -55,5 +55,5 @@ RUN chown -R $USERNAME:$USERNAME /etc/alloy
RUN chown -R $USERNAME:$USERNAME /bin/alloy

ENTRYPOINT ["/bin/alloy"]
-ENV AGENT_DEPLOY_MODE=docker
+ENV ALLOY_DEPLOY_MODE=docker
CMD ["run", "/etc/alloy/config.river", "--storage.path=/etc/alloy/data"]
1 change: 1 addition & 0 deletions Dockerfile.windows
@@ -24,4 +24,5 @@ COPY --from=builder /src/alloy/build/alloy /bin/alloy
COPY --from=builder /src/alloy/example-config.river /etc/alloy/config.river

ENTRYPOINT ["/bin/alloy"]
+ENV ALLOY_DEPLOY_MODE=docker
CMD ["run", "/etc/alloy/config.river", "--storage.path=/etc/alloy/data"]
2 changes: 1 addition & 1 deletion docs/sources/reference/components/local.file.md
@@ -60,7 +60,7 @@ The read error will be exposed as a log message and in the debug information for

## Debug metrics

-* `agent_local_file_timestamp_last_accessed_unix_seconds` (gauge): The timestamp, in Unix seconds, that the file was last successfully accessed.
+* `local_file_timestamp_last_accessed_unix_seconds` (gauge): The timestamp, in Unix seconds, that the file was last successfully accessed.

## Example

4 changes: 2 additions & 2 deletions docs/sources/reference/components/loki.source.journal.md
@@ -66,8 +66,8 @@ The final internal label name would be `__journal__systemd_unit`, with _two_ und

## Debug Metrics

-* `agent_loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages.
-* `agent_loki_source_journal_target_lines_total` (counter): Total number of successful journal lines read.
+* `loki_source_journal_target_parsing_errors_total` (counter): Total number of parsing errors while reading journal messages.
+* `loki_source_journal_target_lines_total` (counter): Total number of successful journal lines read.

## Example

@@ -67,8 +67,8 @@ The following are some of the metrics that are exposed when this component is us
* `prometheus_receive_http_request_message_bytes` (histogram): Size (in bytes) of messages received in the request.
* `prometheus_receive_http_response_message_bytes` (histogram): Size (in bytes) of messages sent in response.
* `prometheus_receive_http_tcp_connections` (gauge): Current number of accepted TCP connections.
-* `agent_prometheus_fanout_latency` (histogram): Write latency for sending metrics to other components.
-* `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.
+* `prometheus_fanout_latency` (histogram): Write latency for sending metrics to other components.
+* `prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.

## Example

@@ -138,4 +138,4 @@ Connecting some components may not be sensible or components may require further
Refer to the linked documentation for more details.
{{< /admonition >}}

-<!-- END GENERATED COMPATIBLE COMPONENTS -->
+<!-- END GENERATED COMPATIBLE COMPONENTS -->
15 changes: 7 additions & 8 deletions docs/sources/reference/components/prometheus.relabel.md
@@ -88,14 +88,13 @@ values.

## Debug metrics


-* `agent_prometheus_relabel_metrics_processed` (counter): Total number of metrics processed.
-* `agent_prometheus_relabel_metrics_written` (counter): Total number of metrics written.
-* `agent_prometheus_relabel_cache_misses` (counter): Total number of cache misses.
-* `agent_prometheus_relabel_cache_hits` (counter): Total number of cache hits.
-* `agent_prometheus_relabel_cache_size` (gauge): Total size of relabel cache.
-* `agent_prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
-* `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.
+* `prometheus_relabel_metrics_processed` (counter): Total number of metrics processed.
+* `prometheus_relabel_metrics_written` (counter): Total number of metrics written.
+* `prometheus_relabel_cache_misses` (counter): Total number of cache misses.
+* `prometheus_relabel_cache_hits` (counter): Total number of cache hits.
+* `prometheus_relabel_cache_size` (gauge): Total size of relabel cache.
+* `prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
+* `prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.

## Example

14 changes: 7 additions & 7 deletions docs/sources/reference/components/prometheus.remote_write.md
@@ -262,19 +262,19 @@ information.

## Debug metrics

-* `agent_wal_storage_active_series` (gauge): Current number of active series
+* `prometheus_remote_write_wal_storage_active_series` (gauge): Current number of active series
  being tracked by the WAL.
-* `agent_wal_storage_deleted_series` (gauge): Current number of series marked
+* `prometheus_remote_write_wal_storage_deleted_series` (gauge): Current number of series marked
  for deletion from memory.
-* `agent_wal_out_of_order_samples_total` (counter): Total number of out of
+* `prometheus_remote_write_wal_out_of_order_samples_total` (counter): Total number of out of
  order samples ingestion failed attempts.
-* `agent_wal_storage_created_series_total` (counter): Total number of created
+* `prometheus_remote_write_wal_storage_created_series_total` (counter): Total number of created
  series appended to the WAL.
-* `agent_wal_storage_removed_series_total` (counter): Total number of series
+* `prometheus_remote_write_wal_storage_removed_series_total` (counter): Total number of series
  removed from the WAL.
-* `agent_wal_samples_appended_total` (counter): Total number of samples
+* `prometheus_remote_write_wal_samples_appended_total` (counter): Total number of samples
  appended to the WAL.
-* `agent_wal_exemplars_appended_total` (counter): Total number of exemplars
+* `prometheus_remote_write_wal_exemplars_appended_total` (counter): Total number of exemplars
  appended to the WAL.
* `prometheus_remote_storage_samples_total` (counter): Total number of samples
sent to remote storage.
6 changes: 3 additions & 3 deletions docs/sources/reference/components/prometheus.scrape.md
@@ -179,9 +179,9 @@ scrape job on the component's debug endpoint.

## Debug metrics

-* `agent_prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
-* `agent_prometheus_scrape_targets_gauge` (gauge): Number of targets this component is configured to scrape.
-* `agent_prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.
+* `prometheus_fanout_latency` (histogram): Write latency for sending to direct and indirect components.
+* `prometheus_scrape_targets_gauge` (gauge): Number of targets this component is configured to scrape.
+* `prometheus_forwarded_samples_total` (counter): Total number of samples sent to downstream components.

## Scraping behavior

2 changes: 1 addition & 1 deletion docs/sources/tutorials/assets/docker-compose.yaml
@@ -17,7 +17,7 @@ services:
volumes:
- ./flow_configs:/etc/agent-config
environment:
-AGENT_MODE: "flow"
+ALLOY_MODE: "flow"
rfratto marked this conversation as resolved.
entrypoint:
- /bin/grafana-agent
- run
12 changes: 6 additions & 6 deletions docs/sources/tutorials/assets/grafana/dashboards/agent.json
@@ -139,7 +139,7 @@
],
"targets": [
{
"expr": "count by (pod, container, version) (agent_build_info{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})",
"expr": "count by (pod, container, version) (alloy_build_info{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"})",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -578,7 +578,7 @@
"steppedLine": false,
"targets": [
{
"expr": "sum by (job, instance_group_name) (rate(agent_wal_samples_appended_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m]))",
"expr": "sum by (job, instance_group_name) (rate(prometheus_remote_write_wal_samples_appended_total{cluster=~\"$cluster\", namespace=~\"$namespace\", container=~\"$container\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{job}} {{instance_group_name}}",
@@ -666,7 +666,7 @@
"multi": true,
"name": "cluster",
"options": [ ],
"query": "label_values(agent_build_info, cluster)",
"query": "label_values(alloy_build_info, cluster)",
"refresh": 1,
"regex": "",
"sort": 2,
@@ -690,7 +690,7 @@
"multi": true,
"name": "namespace",
"options": [ ],
"query": "label_values(agent_build_info, namespace)",
"query": "label_values(alloy_build_info, namespace)",
"refresh": 1,
"regex": "",
"sort": 2,
@@ -714,7 +714,7 @@
"multi": true,
"name": "container",
"options": [ ],
"query": "label_values(agent_build_info, container)",
"query": "label_values(alloy_build_info, container)",
"refresh": 1,
"regex": "",
"sort": 2,
@@ -738,7 +738,7 @@
"multi": true,
"name": "pod",
"options": [ ],
"query": "label_values(agent_build_info{container=~\"$container\"}, pod)",
"query": "label_values(alloy_build_info{container=~\"$container\"}, pod)",
"refresh": 1,
"regex": "",
"sort": 2,
2 changes: 1 addition & 1 deletion docs/sources/tutorials/chaining.md
@@ -81,5 +81,5 @@ In `multiple-input.alloy` add a new `prometheus.relabel` component that adds a `

[multiple-inputs.alloy]: ../assets/flow_configs/multiple-inputs.alloy
[Filtering metrics]: ../filtering-metrics/
-[Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D
+[Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22alloy_build_info%7B%7D%22%7D%5D
[node_exporter]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22node_cpu_seconds_total%22%7D%5D
4 changes: 2 additions & 2 deletions docs/sources/tutorials/collecting-prometheus-metrics.md
@@ -31,7 +31,7 @@ The `runt.sh` script does:

Allow {{< param "PRODUCT_NAME" >}} to run for two minutes, then navigate to [Grafana][].

-![Dashboard showing agent_build_info metrics](/media/docs/agent/screenshot-grafana-agent-collect-metrics-build-info.png)
+![Dashboard showing alloy_build_info metrics](/media/docs/agent/screenshot-grafana-agent-collect-metrics-build-info.png)

This example scrapes the {{< param "PRODUCT_NAME" >}} `http://localhost:12345/metrics` endpoint and pushes those metrics to the Mimir instance.

@@ -92,7 +92,7 @@ To try out {{< param "PRODUCT_NAME" >}} without using Docker:


[Docker]: https://www.docker.com/products/docker-desktop
-[Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D
+[Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22alloy_build_info%7B%7D%22%7D%5D
[prometheus.scrape]: ../../reference/components/prometheus.scrape/
[attribute]: ../../concepts/config-language/#attributes
[argument]: ../../concepts/components/
2 changes: 1 addition & 1 deletion docs/sources/tutorials/filtering-metrics.md
@@ -49,7 +49,7 @@ Open the `relabel.alloy` file that was downloaded and change the name of the ser
![Updated dashboard showing api_server_v2](/media/docs/agent/screenshot-grafana-agent-filtering-metrics-transition.png)

[Docker]: https://www.docker.com/products/docker-desktop
-[Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22agent_build_info%7B%7D%22%7D%5D
+[Grafana]: http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Mimir%22,%7B%22refId%22:%22A%22,%22instant%22:true,%22range%22:true,%22exemplar%22:true,%22expr%22:%22alloy_build_info%7B%7D%22%7D%5D
[relabel.alloy]: ../assets/flow_configs/relabel.alloy/
[prometheus.relabel]: ../../reference/components/prometheus.relabel/
[Collect Prometheus metrics]: ../collecting-prometheus-metrics
2 changes: 1 addition & 1 deletion internal/alloy/internal/controller/loader.go
@@ -743,7 +743,7 @@ func (l *Loader) EvaluateDependants(ctx context.Context, updatedNodes []*QueuedN
})
if err != nil {
level.Error(l.log).Log(
"msg", "failed to submit node for evaluation - the agent is likely overloaded "+
"msg", "failed to submit node for evaluation - Alloy is likely overloaded "+
"and cannot keep up with evaluating components - will retry",
"err", err,
"node_id", n.NodeID(),
12 changes: 6 additions & 6 deletions internal/alloy/internal/controller/metrics.go
@@ -29,14 +29,14 @@ func newControllerMetrics(id string) *controllerMetrics {
evaluationTimesBuckets := []float64{.005, .025, .1, .5, 1, 5, 10, 30, 60, 120, 300, 600}

cm.controllerEvaluation = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "agent_component_controller_evaluating",
Name: "alloy_component_controller_evaluating",
Help: "Tracks if the controller is currently in the middle of a graph evaluation",
ConstLabels: map[string]string{"controller_id": id},
})

cm.componentEvaluationTime = prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "agent_component_evaluation_seconds",
Name: "alloy_component_evaluation_seconds",
Help: "Time spent performing component evaluation",
ConstLabels: map[string]string{"controller_id": id},
Buckets: evaluationTimesBuckets,
@@ -47,7 +47,7 @@ func newControllerMetrics(id string) *controllerMetrics {
)
cm.dependenciesWaitTime = prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "agent_component_dependencies_wait_seconds",
Name: "alloy_component_dependencies_wait_seconds",
Help: "Time spent by components waiting to be evaluated after their dependency is updated.",
ConstLabels: map[string]string{"controller_id": id},
Buckets: evaluationTimesBuckets,
@@ -58,13 +58,13 @@ func newControllerMetrics(id string) *controllerMetrics {
)

cm.evaluationQueueSize = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "agent_component_evaluation_queue_size",
Name: "alloy_component_evaluation_queue_size",
Help: "Tracks the number of components waiting to be evaluated in the worker pool",
ConstLabels: map[string]string{"controller_id": id},
})

cm.slowComponentEvaluationTime = prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "agent_component_evaluation_slow_seconds",
Name: "alloy_component_evaluation_slow_seconds",
Help: fmt.Sprintf("Number of seconds spent evaluating components that take longer than %v to evaluate", cm.slowComponentThreshold),
ConstLabels: map[string]string{"controller_id": id},
}, []string{"component_id"})
@@ -104,7 +104,7 @@ func newControllerCollector(l *Loader, id string) *controllerCollector {
return &controllerCollector{
l: l,
runningComponentsTotal: prometheus.NewDesc(
"agent_component_controller_running_components",
"alloy_component_controller_running_components",
"Total number of running components.",
[]string{"health_type"},
map[string]string{"controller_id": id},
@@ -69,7 +69,7 @@ type ComponentGlobals struct {
MinStability featuregate.Stability // Minimum allowed stability level for features
OnBlockNodeUpdate func(cn BlockNode) // Informs controller that we need to reevaluate
OnExportsChange func(exports map[string]any) // Invoked when the managed component updated its exports
-Registerer prometheus.Registerer // Registerer for serving agent and component metrics
+Registerer prometheus.Registerer // Registerer for serving Alloy and component metrics
ControllerID string // ID of controller.
NewModuleController func(id string) ModuleController // Func to generate a module controller.
GetServiceData func(name string) (interface{}, error) // Get data for a service.
2 changes: 1 addition & 1 deletion internal/alloy/internal/worker/worker_pool.go
@@ -168,7 +168,7 @@ func (w *workQueue) emitNextTask() {

// Remove the task from waiting and add it to running set.
// NOTE: Even though we remove an element from the middle of a collection, we use a slice instead of a linked list.
-// This code is NOT identified as a performance hot spot and given that in large agents we observe max number of
+// This code is NOT identified as a performance hot spot and given that in large Alloy instances we observe max number of
// tasks queued to be ~10, the slice is actually faster because it does not allocate memory. See BenchmarkQueue.
w.waitingOrder = append(w.waitingOrder[:index], w.waitingOrder[index+1:]...)
task = w.waiting[key]
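For readers unfamiliar with the pattern described in the comment above, here is a minimal, self-contained Go sketch of removing an element from the middle of a slice without allocating. It is illustrative only; the names below are not taken from the Alloy codebase.

```go
package main

import "fmt"

// removeAt drops the element at index i and reuses the slice's backing
// array, so no new memory is allocated. For the ~10 queued tasks observed
// in large Alloy instances, this is cheaper than maintaining a linked list.
func removeAt(keys []string, i int) []string {
	return append(keys[:i], keys[i+1:]...)
}

func main() {
	waitingOrder := []string{"a", "b", "c", "d"}
	waitingOrder = removeAt(waitingOrder, 1) // remove "b"
	fmt.Println(waitingOrder)                // prints [a c d]
}
```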
2 changes: 1 addition & 1 deletion internal/alloy/logging/logger.go
@@ -175,7 +175,7 @@ func (fw *lokiWriter) Write(p []byte) (int, error) {

select {
case receiver.Chan() <- loki.Entry{
-Labels: model.LabelSet{"component": "agent"},
+Labels: model.LabelSet{"component": "alloy"},
Entry: logproto.Entry{
Timestamp: time.Now(),
Line: string(p),
2 changes: 1 addition & 1 deletion internal/alloy/logging/logger_test.go
@@ -21,7 +21,7 @@ import (
$ go test -count=1 -benchmem ./internal/alloy/logging -run ^$ -bench BenchmarkLogging_
goos: darwin
goarch: arm64
-pkg: github.com/grafana/agent/internal/alloy/logging
+pkg: github.com/grafana/alloy/internal/alloy/logging
BenchmarkLogging_NoLevel_Prints-8 722358 1524 ns/op 368 B/op 11 allocs/op
BenchmarkLogging_NoLevel_Drops-8 47103154 25.59 ns/op 8 B/op 0 allocs/op
BenchmarkLogging_GoKitLevel_Drops_Sprintf-8 3585387 332.1 ns/op 320 B/op 8 allocs/op
2 changes: 1 addition & 1 deletion internal/alloy/tracing/tracing.go
@@ -18,7 +18,7 @@ import (
"go.opentelemetry.io/otel/trace"
)

const serviceName = "grafana-agent"
const serviceName = "alloy"

// Defaults for all Options structs.
var (
4 changes: 2 additions & 2 deletions internal/alloy/tracing/wrap_tracer.go
@@ -10,8 +10,8 @@ import (
)

var (
componentIDAttributeKey = "grafana_agent.component_id"
controllerIDAttributeKey = "grafana_agent.controller_id"
componentIDAttributeKey = "alloy.component_id"
controllerIDAttributeKey = "alloy.controller_id"
)

// WrapTracer returns a new trace.TracerProvider which will inject the provided
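As a rough sketch of how the renamed span attribute keys could be attached with the OpenTelemetry API (this is illustrative, not the actual WrapTracer implementation; the tracer name and span name below are assumptions):

```go
package tracingexample

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// annotateSpan starts a span and tags it with the renamed Alloy identifiers
// so traces can be filtered by component and controller.
func annotateSpan(ctx context.Context, componentID, controllerID string) {
	_, span := otel.Tracer("alloy").Start(ctx, "component.evaluate")
	defer span.End()

	span.SetAttributes(
		attribute.String("alloy.component_id", componentID),
		attribute.String("alloy.controller_id", controllerID),
	)
}
```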
2 changes: 1 addition & 1 deletion internal/alloycli/alloycli.go
@@ -15,7 +15,7 @@ func Run() {
var cmd = &cobra.Command{
Use: fmt.Sprintf("%s [global options] <subcommand>", os.Args[0]),
Short: "Grafana Alloy",
Version: build.Print("agent"),
Version: build.Print("alloy"),

RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Usage()
6 changes: 3 additions & 3 deletions internal/alloycli/cmd_run.go
@@ -49,9 +49,9 @@ import (

func runCommand() *cobra.Command {
r := &alloyRun{
inMemoryAddr: "agent.internal:12345",
inMemoryAddr: "alloy.internal:12345",
httpListenAddr: "127.0.0.1:12345",
storagePath: "data-agent/",
storagePath: "data-alloy/",
minStability: featuregate.StabilityStable,
uiPrefix: "/",
disableReporting: false,
@@ -205,7 +205,7 @@ func (fr *alloyRun) Run(configPath string) error {

// TODO(rfratto): many of the dependencies we import register global metrics,
// even when their code isn't being used. To reduce the number of series
-// generated by the agent, we should switch to a custom registry.
+// generated by Alloy, we should switch to a custom registry.
//
// Before doing this, we need to ensure that anything using the default
// registry that we want to keep can be given a custom registry so desired
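For context on the TODO above, a hedged sketch of what switching to a custom Prometheus registry could look like with client_golang. It is illustrative only; the helper names here are not part of this PR.

```go
package metricsexample

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// newRegistry returns a registry that only exposes the collectors Alloy
// opts into, instead of whatever dependencies register on the global default.
func newRegistry() *prometheus.Registry {
	reg := prometheus.NewRegistry()
	reg.MustRegister(
		collectors.NewGoCollector(),
		collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
	)
	return reg
}

// metricsHandler serves only the series gathered by the custom registry.
func metricsHandler(reg *prometheus.Registry) http.Handler {
	return promhttp.HandlerFor(reg, promhttp.HandlerOpts{})
}
```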