In the previous part, I indicated which components will make up the monitoring stack for your K3s cluster. Here you'll start with the Kube State Metrics service, declaring a configuration based on the standard Kubernetes deployment example found on the official GitHub page of the Kube State Metrics project, although with some modifications.
Your monitoring stack components need to live under a common Kustomize project, so let's create the usual folders, as you've seen in previous guides. As in those cases, I'll assume you're working in a dedicated folder for your Kustomize projects, set at `$HOME/k8sprjs` on your kubectl client system.
```sh
$ mkdir -p $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources
```
In the command above, the main Kustomize project folder for the monitoring stack is called `monitoring`, while the directory for the Kube State Metrics service is called `agent-kube-state-metrics`.
To deploy the Kube State Metrics service, you'll need some objects that you didn't have to declare for your Nextcloud or Gitea platforms. One of those objects is a service account, which provides an identity for processes running in a pod. In other words, this is a standard Kubernetes authentication resource, explained in this official documentation.
- Create an `agent-kube-state-metrics.serviceaccount.yaml` file in the `agent-kube-state-metrics/resources` folder.

  ```sh
  $ touch $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.serviceaccount.yaml
  ```
- Fill `agent-kube-state-metrics.serviceaccount.yaml` with the following yaml declaration.

  ```yaml
  apiVersion: v1
  automountServiceAccountToken: false
  kind: ServiceAccount
  metadata:
    name: agent-kube-state-metrics
  ```
As you can see above, it's a really simple resource to declare, although it has other parameters available. Check them out in its official API definition.
- Notice the `automountServiceAccountToken` parameter, which appears here for the first time in this guide series. It's a boolean value, explicitly set to `false` as a security measure. The reason is well explained in this article, but it has to do with how pods get their ability to interact with the Kubernetes API server, and with the API bearer token used to connect to it.
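For context on how that parameter behaves: the value set in a pod's spec takes precedence over the one in its ServiceAccount, so a pod can explicitly opt back in to the token mount. A minimal hypothetical sketch (the pod name and image are placeholders, not part of this guide's setup):

```yaml
# Hypothetical pod: even though its service account declares
# automountServiceAccountToken: false, the pod-level value wins,
# so this pod does get the API bearer token mounted.
apiVersion: v1
kind: Pod
metadata:
  name: token-demo            # placeholder name
spec:
  serviceAccountName: agent-kube-state-metrics
  automountServiceAccountToken: true   # pod spec overrides the account's false
  containers:
  - name: main
    image: busybox            # placeholder image
```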
For the previous service account to be able to do anything in your cluster, you need to associate it with a role that defines the concrete actions it can perform. In the case of the Kube State Metrics agent, which has to watch resources across your whole cluster, you'll need a reader role able to act cluster-wide. This means you'll need to declare a ClusterRole resource.
- Generate the file `agent-kube-state-metrics.clusterrole.yaml` within the `agent-kube-state-metrics/resources` directory.

  ```sh
  $ touch $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrole.yaml
  ```
- Put the following yaml declaration in the file `agent-kube-state-metrics.clusterrole.yaml`.

  ```yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: agent-kube-state-metrics
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - secrets
    - nodes
    - pods
    - services
    - resourcequotas
    - replicationcontrollers
    - limitranges
    - persistentvolumeclaims
    - persistentvolumes
    - namespaces
    - endpoints
    verbs:
    - list
    - watch
  - apiGroups:
    - apps
    resources:
    - statefulsets
    - daemonsets
    - deployments
    - replicasets
    verbs:
    - list
    - watch
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - list
    - watch
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - list
    - watch
  - apiGroups:
    - authentication.k8s.io
    resources:
    - tokenreviews
    verbs:
    - create
  - apiGroups:
    - authorization.k8s.io
    resources:
    - subjectaccessreviews
    verbs:
    - create
  - apiGroups:
    - policy
    resources:
    - poddisruptionbudgets
    verbs:
    - list
    - watch
  - apiGroups:
    - certificates.k8s.io
    resources:
    - certificatesigningrequests
    verbs:
    - list
    - watch
  - apiGroups:
    - storage.k8s.io
    resources:
    - storageclasses
    - volumeattachments
    verbs:
    - list
    - watch
  - apiGroups:
    - admissionregistration.k8s.io
    resources:
    - mutatingwebhookconfigurations
    - validatingwebhookconfigurations
    verbs:
    - list
    - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    - ingresses
    verbs:
    - list
    - watch
  - apiGroups:
    - coordination.k8s.io
    resources:
    - leases
    verbs:
    - list
    - watch
  ```
See how the `agent-kube-state-metrics` cluster role is a collection of `rules` that define what actions (`verbs`) can be performed on concrete `resources` available in concrete APIs (`apiGroups`). Also notice how the verbs are almost always `list` or `watch`, limiting this cluster role to read-only behavior.

BEWARE!
`ClusterRole` resources are not namespaced, so you won't see a `namespace` parameter in them.
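Since that read-only nature is worth preserving if you ever customize the role, here's a small illustrative sketch (not part of the deployment; the `rules` list below is just a representative subset of the full role) that checks no rule grants verbs beyond `list`/`watch`, except the two review APIs where the role legitimately uses `create`:

```python
# Representative subset of the ClusterRole's rules, as plain Python data.
rules = [
    {"apiGroups": [""], "resources": ["pods", "nodes", "services"], "verbs": ["list", "watch"]},
    {"apiGroups": ["apps"], "resources": ["deployments", "daemonsets"], "verbs": ["list", "watch"]},
    {"apiGroups": ["authentication.k8s.io"], "resources": ["tokenreviews"], "verbs": ["create"]},
    {"apiGroups": ["authorization.k8s.io"], "resources": ["subjectaccessreviews"], "verbs": ["create"]},
]

READ_ONLY = {"list", "watch"}
# The only API groups where this role goes beyond read-only verbs.
WRITE_EXCEPTIONS = {"authentication.k8s.io", "authorization.k8s.io"}

for rule in rules:
    write_verbs = set(rule["verbs"]) - READ_ONLY
    if write_verbs:
        # Anything beyond list/watch must belong to the review APIs.
        assert set(rule["apiGroups"]) <= WRITE_EXCEPTIONS, f"unexpected write verbs: {write_verbs}"

print("role is read-only, except for token/access reviews")
```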
To link the cluster role with your service account, you need a binding resource such as the ClusterRoleBinding.
- Create the `agent-kube-state-metrics.clusterrolebinding.yaml` file under the `agent-kube-state-metrics/resources` path.

  ```sh
  $ touch $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrolebinding.yaml
  ```
- Copy the following yaml in `agent-kube-state-metrics.clusterrolebinding.yaml`.

  ```yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: agent-kube-state-metrics
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: agent-kube-state-metrics
  subjects:
  - kind: ServiceAccount
    name: agent-kube-state-metrics
  ```
This cluster role binding specifies in `roleRef` which role to bind, while `subjects` holds the list of resources the role is bound to, limited here to the `agent-kube-state-metrics` service account. Notice how the `kind` of the resources being bound is also specified.

BEWARE!
`ClusterRoleBinding` resources are not namespaced, so you won't see a `namespace` parameter in them.
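One detail worth knowing: although the binding itself is not namespaced, a `ServiceAccount` subject is. You don't declare the namespace here, because the whole monitoring stack will get its namespace from the parent Kustomize project; Kustomize's namespace transformer is expected to fill in the subject's namespace at build time. Assuming the `monitoring` namespace this guide plans to use, the effective subject should end up looking like this sketch:

```yaml
subjects:
- kind: ServiceAccount
  name: agent-kube-state-metrics
  namespace: monitoring   # filled in when the parent project sets the namespace
```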
The Kube State Metrics service is just an agent that doesn't store anything, so you can use a Deployment resource to deploy it in your K3s cluster.
- Generate an `agent-kube-state-metrics.deployment.yaml` file in `agent-kube-state-metrics/resources`.

  ```sh
  $ touch $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.deployment.yaml
  ```
- Put the yaml below in `agent-kube-state-metrics.deployment.yaml`.

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: agent-kube-state-metrics
  spec:
    replicas: 1
    template:
      spec:
        automountServiceAccountToken: true
        containers:
        - name: server
          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.5.0
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
            runAsUser: 65534
          ports:
          - containerPort: 8080
            name: http-metrics
          - containerPort: 8081
            name: telemetry
          resources:
            requests:
              cpu: 250m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 128Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 5
            timeoutSeconds: 5
        nodeSelector:
          kubernetes.io/os: linux
        serviceAccountName: agent-kube-state-metrics
        tolerations:
        - effect: NoExecute
          operator: Exists
  ```
This `Deployment` resource describes just one pod, and comes with some particularities compared with the other deployment objects you've declared in previous guides.

- `automountServiceAccountToken`: this parameter appears again, but here it's set to `true`. Why true here rather than in the `ServiceAccount` resource? A pod spec's value takes precedence over the service account's, so disabling the token at the account level and enabling it only in this pod's spec ensures the API bearer token is mounted exclusively where it's actually needed, which addresses the security concerns around the token used for connecting to the Kubernetes API available in your cluster.
- `server` container: executes the Kube State Metrics service.
  - `securityContext`: section for adjusting the security conditions of the container.
    - `allowPrivilegeEscalation`: controls whether a process can gain more privileges than its parent process. Set to `false` here to constrain this container within its given privileges.
    - `capabilities`: with this section you can add or drop security-related capabilities of the container beyond its default set. In this case, `ALL` possible capabilities are `drop`ped to get a non-privileged container.
    - `readOnlyRootFilesystem`: whether the `root` filesystem within the container is mounted read-only.
    - `runAsUser`: specifies the UID of the user running the container. In this deployment it's `65534`, a user already prepared in the Kube State Metrics image.
  - `livenessProbe`: enables a periodic probe of the container's liveness. If the probe fails, the container is restarted.
  - `readinessProbe`: sets a periodic probe of the container's readiness. The container is removed from service endpoints if the probe fails.
- `nodeSelector`: a selector that makes the pod run only on nodes that have the specified label. In this case, the `kubernetes.io/os` label ensures that this pod is executed only on Linux nodes.
- `serviceAccountName`: the name of the `ServiceAccount` used to run this pod. Here it's set to the `agent-kube-state-metrics` one you declared earlier in this document.
- `tolerations`: to allow the Kube State Metrics agent to run on the master node of your K3s cluster, this pod must tolerate the `NoExecute` taint you set on that node.
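Note that `operator: Exists` with no `key` tolerates any `NoExecute` taint, on any node. If you'd rather tolerate only the specific taint used on the server node in this series (assumed here to be the `k3s-controlplane=true:NoExecute` taint referenced at the end of this guide), a narrower toleration could look like this sketch:

```yaml
# Narrower alternative: tolerate only the server node's specific taint,
# assumed to be k3s-controlplane=true:NoExecute.
tolerations:
- key: k3s-controlplane
  operator: Equal
  value: "true"
  effect: NoExecute
```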
The last resource you need to describe for your Kube State Metrics setup is a Service resource.
- Create the file `agent-kube-state-metrics.service.yaml` within `agent-kube-state-metrics/resources`.

  ```sh
  $ touch $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.service.yaml
  ```
- Fill `agent-kube-state-metrics.service.yaml` with the yaml declaration next.

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: agent-kube-state-metrics
  spec:
    clusterIP: None
    ports:
    - name: http-metrics
      port: 8080
      targetPort: http-metrics
    - name: telemetry
      port: 8081
      targetPort: telemetry
  ```
The main particularity of this service is that it's declared without a cluster IP associated to it (a headless service). This implies that you'll need its internal cluster FQDN to reach it.
The Prometheus server you'll set up in a later guide needs to know the DNS record assigned to this `Service` within your cluster. To deduce it, proceed as you did with the services of the Gitea platform, like the Redis one.

- The string format for any `Service` resource's FQDN is `<metadata.name>.<namespace>.svc.<internal.cluster.domain>`.
- The namespace for all resources of the monitoring stack will be `monitoring`.
- The internal cluster domain, set back in the G025 guide, is `deimos.cluster.io`.
- All the components of this monitoring stack will also have a `mntr-` prefix added to their `metadata.name` string.

Knowing all that, this `Service`'s FQDN will be the following one.

`mntr-agent-kube-state-metrics.monitoring.svc.deimos.cluster.io`
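The derivation above can be sketched as a tiny helper (illustrative only; the values are the ones from this guide):

```python
# Build a Service's internal cluster FQDN from its components:
# <metadata.name>.<namespace>.svc.<internal.cluster.domain>
def service_fqdn(name: str, namespace: str, cluster_domain: str) -> str:
    return f"{name}.{namespace}.svc.{cluster_domain}"

# "mntr-" is the prefix the parent Kustomize project will add to metadata.name.
fqdn = service_fqdn("mntr-agent-kube-state-metrics", "monitoring", "deimos.cluster.io")
print(fqdn)  # mntr-agent-kube-state-metrics.monitoring.svc.deimos.cluster.io
```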
Now you need to associate all the resources with a Kustomize project, declared in the corresponding `kustomization.yaml` file.
- Produce a `kustomization.yaml` file in the `agent-kube-state-metrics` folder.

  ```sh
  $ touch $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/kustomization.yaml
  ```
- In `kustomization.yaml`, copy the following yaml.

  ```yaml
  # Kube State Metrics setup
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization

  commonLabels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 2.5.0

  resources:
  - resources/agent-kube-state-metrics.clusterrolebinding.yaml
  - resources/agent-kube-state-metrics.clusterrole.yaml
  - resources/agent-kube-state-metrics.deployment.yaml
  - resources/agent-kube-state-metrics.serviceaccount.yaml
  - resources/agent-kube-state-metrics.service.yaml

  replicas:
  - name: agent-kube-state-metrics
    count: 1

  images:
  - name: registry.k8s.io/kube-state-metrics/kube-state-metrics
    newTag: v2.5.0
  ```
Under `commonLabels`, I've set three labels that also appear in the resources declared in the official standard example for deploying Kube State Metrics. Be aware of the one named `app.kubernetes.io/version`: whenever you update the Kube State Metrics image version, you'll have to update that label's value too.
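For instance, a future upgrade to a hypothetical v2.6.0 release would require touching both places in `kustomization.yaml` in lockstep:

```yaml
# Hypothetical future upgrade: the version label and the image tag
# must be changed together.
commonLabels:
  app.kubernetes.io/version: 2.6.0   # hypothetical newer version

images:
- name: registry.k8s.io/kube-state-metrics/kube-state-metrics
  newTag: v2.6.0                     # must match the label above
```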
Let's validate the Kustomize project for your Kube State Metrics service.
- Dump the output of this Kustomize project in a file named `agent-kube-state-metrics.k.output.yaml`.

  ```sh
  $ kubectl kustomize $HOME/k8sprjs/monitoring/components/agent-kube-state-metrics > agent-kube-state-metrics.k.output.yaml
  ```
- Open the `agent-kube-state-metrics.k.output.yaml` file and compare your resulting yaml output with the one below.

  ```yaml
  apiVersion: v1
  automountServiceAccountToken: false
  kind: ServiceAccount
  metadata:
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/version: 2.5.0
    name: agent-kube-state-metrics
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/version: 2.5.0
    name: agent-kube-state-metrics
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - secrets
    - nodes
    - pods
    - services
    - resourcequotas
    - replicationcontrollers
    - limitranges
    - persistentvolumeclaims
    - persistentvolumes
    - namespaces
    - endpoints
    verbs:
    - list
    - watch
  - apiGroups:
    - apps
    resources:
    - statefulsets
    - daemonsets
    - deployments
    - replicasets
    verbs:
    - list
    - watch
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - list
    - watch
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - list
    - watch
  - apiGroups:
    - authentication.k8s.io
    resources:
    - tokenreviews
    verbs:
    - create
  - apiGroups:
    - authorization.k8s.io
    resources:
    - subjectaccessreviews
    verbs:
    - create
  - apiGroups:
    - policy
    resources:
    - poddisruptionbudgets
    verbs:
    - list
    - watch
  - apiGroups:
    - certificates.k8s.io
    resources:
    - certificatesigningrequests
    verbs:
    - list
    - watch
  - apiGroups:
    - storage.k8s.io
    resources:
    - storageclasses
    - volumeattachments
    verbs:
    - list
    - watch
  - apiGroups:
    - admissionregistration.k8s.io
    resources:
    - mutatingwebhookconfigurations
    - validatingwebhookconfigurations
    verbs:
    - list
    - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    - ingresses
    verbs:
    - list
    - watch
  - apiGroups:
    - coordination.k8s.io
    resources:
    - leases
    verbs:
    - list
    - watch
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/version: 2.5.0
    name: agent-kube-state-metrics
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: agent-kube-state-metrics
  subjects:
  - kind: ServiceAccount
    name: agent-kube-state-metrics
  ---
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/version: 2.5.0
    name: agent-kube-state-metrics
  spec:
    clusterIP: None
    ports:
    - name: http-metrics
      port: 8080
      targetPort: http-metrics
    - name: telemetry
      port: 8081
      targetPort: telemetry
    selector:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/version: 2.5.0
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app.kubernetes.io/component: exporter
      app.kubernetes.io/name: kube-state-metrics
      app.kubernetes.io/version: 2.5.0
    name: agent-kube-state-metrics
  spec:
    replicas: 1
    selector:
      matchLabels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/version: 2.5.0
    template:
      metadata:
        labels:
          app.kubernetes.io/component: exporter
          app.kubernetes.io/name: kube-state-metrics
          app.kubernetes.io/version: 2.5.0
      spec:
        automountServiceAccountToken: true
        containers:
        - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.5.0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 5
          name: server
          ports:
          - containerPort: 8080
            name: http-metrics
          - containerPort: 8081
            name: telemetry
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 5
            timeoutSeconds: 5
          resources:
            limits:
              cpu: 500m
              memory: 128Mi
            requests:
              cpu: 250m
              memory: 64Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
            runAsUser: 65534
        nodeSelector:
          kubernetes.io/os: linux
        serviceAccountName: agent-kube-state-metrics
        tolerations:
        - effect: NoExecute
          operator: Exists
  ```
The main thing to notice in the output is how the labels and selectors have been automatically applied to the resources.
My usual reminder: this component is part of a bigger project yet to be completed, your monitoring stack. Wait till you have every component ready before deploying it in your cluster.
- `$HOME/k8sprjs/monitoring`
- `$HOME/k8sprjs/monitoring/components`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/kustomization.yaml`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrole.yaml`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.clusterrolebinding.yaml`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.deployment.yaml`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.service.yaml`
- `$HOME/k8sprjs/monitoring/components/agent-kube-state-metrics/resources/agent-kube-state-metrics.serviceaccount.yaml`
- ServiceAccount
- Configure Service Accounts for Pods
- Abuse Kubernetes with the AutomountServiceAccountToken
- ClusterRole
- ClusterRoleBinding
- Using RBAC Authorization
- Mixing Kubernetes Roles, RoleBindings, ClusterRoles, and ClusterBindings
- Kubernetes Pod. Security context
- Kubernetes SecurityContext Capabilities Explained [Examples]
- Taints and Tolerations
- Working with taints and tolerations in Kubernetes
- Node taint k3s-controlplane=true:NoExecute
- Kube State Metrics
- Kube State Metrics standard deployment example for v2.5.0
- The Guide To Kube-State-Metrics
- How To Setup Kube State Metrics on Kubernetes
- Kube state metrics kubernetes deployment configs
<< Previous (G035. Deploying services 04. Monitoring stack Part 1) | +Table Of Contents+ | Next (G035. Deploying services 04. Monitoring stack Part 3) >>