- Once we have installed GKE and ECK, we can deploy a simple example.
- We will now install an Elastic Stack defined in the following file: basic-complete-elastic-stack.yaml. You can check the basic samples here and the documentation here.
- It's a simple definition for an Elastic Stack version 7.12.1, with a one-node Elasticsearch cluster, an APM server, Enterprise Search and a single Kibana instance.
- The Elasticsearch nodes in the example are configured to limit container resources to 2G memory and 1 CPU.
- Pods can be customized by modifying the pod template to add parameters like the Elasticsearch heap.
- The deployment will also mount a 50Gb volume claim. Check the documentation for Volume Claim Templates. Starting in version 1.3.0, ECK supports Elasticsearch volume expansion. A rough sketch of such a manifest follows this list.
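As an illustration only (the actual basic-complete-elastic-stack.yaml may differ), the Elasticsearch resource in such a manifest could look roughly like the sketch below, combining the resource limits, the heap setting in the pod template and the 50Gb volume claim described above; the exact values are assumptions:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.12.1
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          # Heap set to roughly half the container memory limit (illustrative value)
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms1g -Xmx1g"
          resources:
            requests:
              memory: 2Gi
              cpu: 1
            limits:
              memory: 2Gi
              cpu: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi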
kubectl apply -f basic-complete-elastic-stack.yaml
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-sample created
kibana.kibana.k8s.elastic.co/kibana-sample created
apmserver.apm.k8s.elastic.co/apm-server-sample created
enterprisesearch.enterprisesearch.k8s.elastic.co/enterprise-search-sample created
- We can monitor the deployment via kubectl until the components are healthy:
kubectl get elasticsearch,kibana,apmserver,enterprisesearch
NAME                                                              HEALTH    NODES   VERSION   PHASE             AGE
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-sample   unknown           7.12.1    ApplyingChanges   32s

NAME                                         HEALTH   NODES   VERSION   AGE
kibana.kibana.k8s.elastic.co/kibana-sample   red              7.12.1    32s

NAME                                             HEALTH   NODES   VERSION   AGE
apmserver.apm.k8s.elastic.co/apm-server-sample                              31s

NAME                                                                         HEALTH   NODES   VERSION   AGE
enterprisesearch.enterprisesearch.k8s.elastic.co/enterprise-search-sample   red              7.12.1    31s

And after a couple of minutes, once the components become healthy:

NAME                                                              HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-sample   yellow   1       7.12.1    Ready   2m47s

NAME                                         HEALTH   NODES   VERSION   AGE
kibana.kibana.k8s.elastic.co/kibana-sample   green    1       7.12.1    2m47s

NAME                                             HEALTH   NODES   VERSION   AGE
apmserver.apm.k8s.elastic.co/apm-server-sample   green    1       7.12.1    2m46s

NAME                                                                         HEALTH   NODES   VERSION   AGE
enterprisesearch.enterprisesearch.k8s.elastic.co/enterprise-search-sample   green    1       7.12.1    2m46s
- And check on the associated pods:
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-sample'
kubectl get pods --selector='kibana.k8s.elastic.co/name=kibana-sample'
kubectl get pods --selector='apm.k8s.elastic.co/name=apm-server-sample'
kubectl get pods --selector='enterprisesearch.k8s.elastic.co/name=enterprise-search-sample'
- Or with kubernetic. Since we did not specify a namespace, the Elastic Stack was deployed in the default namespace (remember to change the selected namespace at the top). We can first visit our Services and make sure they are all started.
- Since we deployed Kibana with a service type LoadBalancer, we should be able to retrieve the external public IP GKE provisioned for us and access Kibana.
http:
  service:
    spec:
      type: LoadBalancer
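For context, this http section lives under the Kibana spec; a minimal sketch (assuming the sample's names, not the file's exact contents) could look like:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  version: 7.12.1
  count: 1
  # Connect Kibana to the Elasticsearch cluster managed by ECK
  elasticsearchRef:
    name: elasticsearch-sample
  http:
    service:
      spec:
        # Ask GKE to provision an external load balancer with a public IP
        type: LoadBalancer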
- In order to do that, either get the external IP by executing kubectl get svc --selector='kibana.k8s.elastic.co/name=kibana-sample'. In the example:
NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kibana-sample-kb-http   LoadBalancer   10.64.2.42   34.77.8.197   5601:32061/TCP   2m52s
- Or get it with kubernetic, viewing the kibana-sample-kb-http service. When the external load balancer is provisioned, we will see the external IP under status.loadBalancer.ingress.ip.
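The same field can also be read directly with kubectl; for example (assuming the sample's service name):

kubectl get service kibana-sample-kb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'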
- Once the external IP is available, we can visit our Kibana at https://<EXTERNAL_IP>:5601/. In the example: https://34.77.8.197:5601/.
- The certificate presented to us is a self-signed one and we'll have to bypass the browser warnings. We could have assigned a valid HTTP certificate. See the docs.
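If we had a valid certificate at hand, ECK can serve it instead of the self-signed one by referencing a Kubernetes TLS secret. A minimal sketch, assuming the certificate and key live in local files tls.crt and tls.key and using a hypothetical secret name kibana-tls-cert:

kubectl create secret tls kibana-tls-cert --cert=tls.crt --key=tls.key

And then reference it from the Kibana spec:

  http:
    tls:
      certificate:
        # the secret must contain tls.crt and tls.key entries
        secretName: kibana-tls-cert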
- The operator has created a secret for our superuser elastic. Let's go get it. There are two options. Via kubectl, retrieve the password:
echo $(kubectl get secret elasticsearch-sample-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
3KrrN58D83BfjdB4DX49m93z
- Or visit the Secrets section using kubernetic. Select elasticsearch-sample-es-elastic-user and get the password in plain text under the Specifications section.
Now we can log-in to our kibana with user
elastic
, and the retrieved password. We recommend loading some kibana sample data, and enabling self Stack Monitoring, so we can use the monitoring data in the following sections. Self-monitoring is not recommended in production, and we'll use it here for demonstration purposes. See the monitoring example. -
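As a side note, the monitoring collection setting can also be switched on from the manifest instead of the Kibana UI; a sketch under the assumption that it is added to the Elasticsearch config section of basic-complete-elastic-stack.yaml (the linked monitoring example may do it differently):

  nodeSets:
  - name: default
    count: 1
    config:
      # enable self-monitoring collection (demo only, not recommended in production)
      xpack.monitoring.collection.enabled: true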
- We should also be able to log into our Enterprise Search. We can use port forwarding, kubectl port-forward service/enterprise-search-sample-ent-http 3002, to access it from our local workstation, and then visit the URL https://localhost:3002/. The certificate will not be a valid one; accept it and you should see the following screen:
- You can log in using the same elastic user and password we used for Kibana. There you can choose between Elastic App Search and Elastic Workplace Search.
- Elasticsearch has not been published externally, though we can always use port forwarding to access our cluster directly. In production, consider traffic splitting to allow clients to access Elasticsearch.
kubectl port-forward service/elasticsearch-sample-es-http 9200
- And then we can access our cluster:
curl -k https://localhost:9200/_cluster/health\?pretty -u elastic
{ "cluster_name" : "elasticsearch-sample", "status" : "yellow", "timed_out" : false, "number_of_nodes" : 1, "number_of_data_nodes" : 1, "active_primary_shards" : 106, "active_shards" : 106, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 6, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 94.64285714285714 }
- Now that we have our stack up and running, we can scale Elasticsearch from 1 to 3 nodes. If we look at the pods section in kubernetic, or at the deployed pods, we will see that our Elasticsearch has only one node.
kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
apm-server-sample-apm-server-d5d9b44cf-r9cvr    1/1     Running   0          46m
elasticsearch-sample-es-default-0               1/1     Running   0          46m
enterprise-search-sample-ent-6ccc8798dc-ct628   1/1     Running   1          46m
kibana-sample-kb-757d7cd667-dshlm               1/1     Running   0          46m
- To scale the Elastic Stack, edit the file basic-complete-elastic-stack.yaml and set the Elasticsearch node count to 3.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.12.1
  nodeSets:
  - name: default
    count: 3
- Apply the changes:
kubectl apply -f basic-complete-elastic-stack.yaml
elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-sample configured
kibana.kibana.k8s.elastic.co/kibana-sample unchanged
apmserver.apm.k8s.elastic.co/apm-server-sample unchanged
enterprisesearch.enterprisesearch.k8s.elastic.co/enterprise-search-sample unchanged
- And monitor until the 3 pods are up and running. There are different options: the kubectl command line, the pods section of kubernetic, or Kibana monitoring.
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-sample'

NAME                                READY   STATUS     RESTARTS   AGE
elasticsearch-sample-es-default-0   1/1     Running    0          49m
elasticsearch-sample-es-default-1   0/1     Init:0/1   0          12s

And a few minutes later:

NAME                                READY   STATUS    RESTARTS   AGE
elasticsearch-sample-es-default-0   1/1     Running   0          54m
elasticsearch-sample-es-default-1   1/1     Running   0          5m26s
elasticsearch-sample-es-default-2   1/1     Running   0          4m27s
- When ready, we will have a 3-node Elasticsearch cluster, and the health should now be green: all shard replicas assigned.
- We can now proceed to upgrade the whole stack. It just requires editing the file basic-complete-elastic-stack.yaml and replacing all the version: 7.12.1 entries with, for example, version: 7.13.0, on all services (Elasticsearch, APM, Kibana, Enterprise Search).
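The version bump can be done with any editor, or scripted with a quick in-place substitution (GNU sed shown here purely as an illustration):

sed -i 's/version: 7.12.1/version: 7.13.0/g' basic-complete-elastic-stack.yaml

Then re-apply the manifest: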
kubectl apply -f basic-complete-elastic-stack.yaml

elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-sample configured
kibana.kibana.k8s.elastic.co/kibana-sample configured
apmserver.apm.k8s.elastic.co/apm-server-sample configured
enterprisesearch.enterprisesearch.k8s.elastic.co/enterprise-search-sample configured
- When we apply the changes, the operator will take care of the dependencies. For example, it will first update Elasticsearch and APM, and wait for Elasticsearch to finish before upgrading Kibana. We can follow the pod creation process using kubectl or kubernetic.
kubectl get pods
- By default, the operator will do a rolling upgrade, one Elasticsearch instance at a time. It will terminate an instance and restart it on the newer version.
- ECK uses StatefulSet-based orchestration since version 1.0. StatefulSets with ECK allow for even faster upgrades and configuration changes, since upgrades reuse the same persistent volume rather than replicating data to the new nodes.
- We could also have changed the default update strategy or the Pod disruption budget, as sketched below.
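As an illustration (the values are assumptions, not recommendations), both knobs live in the Elasticsearch spec:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.13.0
  updateStrategy:
    changeBudget:
      maxSurge: 1          # at most 1 extra Pod created during a change
      maxUnavailable: 1    # at most 1 Pod taken down at a time
  podDisruptionBudget:
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          elasticsearch.k8s.elastic.co/cluster-name: elasticsearch-sample
  nodeSets:
  - name: default
    count: 3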
- After upgrading Elasticsearch, ECK will take care of upgrading the rest of the stack. For Kibana, Enterprise Search and APM, it will create new pods on version 7.13.0 to replace the old ones on version 7.12.1.
- We can check the deployed instances using kubernetic. Viewing any of the pod specifications, we can see that they are now running version 7.13.0.
- We can also see in the Kibana monitoring UI the health of our cluster (green), with 3 nodes on version 7.13.0.
- When we are done with the testing, it is recommended to remove the deployments to release resources.
- Remove the Stack deployment:
kubectl delete -f basic-complete-elastic-stack.yaml
- Make sure we removed all the resources:
> kubectl get elastic
No resources found in default namespace.

> kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.23.240.1   <none>        443/TCP   7d11h

> kubectl get pvc
No resources found in default namespace.