
Managing Elasticsearch Resources in Kubernetes | by Marek Hornak | Jun, 2022


How to deploy Elasticsearch and Kibana in Kubernetes

Photo by James Harrison on Unsplash

The Elastic Cloud on Kubernetes (in short, ECK) brought the possibility to deploy Elasticsearch clusters in Kubernetes with just a few lines of code.

Unfortunately, related resources such as indices, index templates, and lifecycle policies are not handled by the ECK operator, so up until now, we had to rely on lifecycle hooks or init containers. To solve this issue, we will use an additional operator that reconciles these resources in a proper declarative way.

In this article, we will show how to deploy a new Elasticsearch cluster and how to easily provision it with users, indices, and other commonly used resources.

To show how to provision Elasticsearch with additional resources, we first need an Elasticsearch cluster. Assuming we have a Kubernetes cluster up and running, kubectl configured so it can connect to the cluster, and the ECK operator installed, it's just a matter of deploying an Elasticsearch specification:
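A minimal elasticsearch.yaml, following the ECK quickstart, could look like this (the node.store.allow_mmap setting disables memory mapping, which is convenient for single-node demo setups):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.1.0
  nodeSets:
    - name: default
      count: 1
      config:
        # avoids having to raise vm.max_map_count on the node; fine for demos
        node.store.allow_mmap: false
```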

Running kubectl apply -f elasticsearch.yaml will deploy a single-node Elasticsearch cluster and after a few moments, your cluster should be ready to accept connections.

To verify the cluster health, you can run kubectl get elasticsearch quickstart. The cluster health is reported in the output:

$ kubectl get elasticsearch quickstart
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    1       8.1.0     Ready   5m

Optionally, we can install Kibana by deploying a Kibana specification such as:
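A minimal kibana.yaml, again following the ECK quickstart, might look like the following; the elasticsearchRef ties the Kibana instance to our quickstart cluster:

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.1.0
  count: 1
  elasticsearchRef:
    # references the Elasticsearch resource deployed above
    name: quickstart
```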

with kubectl apply -f kibana.yaml.

The easiest way to access Elasticsearch running in a local Kubernetes cluster is through a port-forwarding session. First, we should get the name of the Elasticsearch service. Running kubectl get services will give a list of services:

$ kubectl get services
NAME                          TYPE        ...   PORT(S)    AGE
quickstart-es-default         ClusterIP   ...   9200/TCP   5m
quickstart-es-http            ClusterIP   ...   9200/TCP   5m
quickstart-es-internal-http   ClusterIP   ...   9200/TCP   5m
quickstart-es-transport       ClusterIP   ...   9300/TCP   5m
quickstart-kb-http            ClusterIP   ...   5601/TCP   5m

The quickstart-es-http service is the one used for HTTP communication with Elasticsearch, and we will use it in the port-forward command:

$ kubectl port-forward svc/quickstart-es-http 9200:9200
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200

Before we try to query https://localhost:9200, we should retrieve the password for the default elastic user:

$ kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
yEz6QpRW0w005H3q2B738ugx

Now we are ready to query Elasticsearch:

$ curl https://localhost:9200 -k -u elastic:yEz6QpRW0w005H3q2B738ugx
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "version" : {
    "number" : "8.1.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    ...
  },
  "tagline" : "You Know, for Search"
}

To access Kibana, we have to port-forward to the quickstart-kb-http service, listening by default on port 5601:

$ kubectl port-forward svc/quickstart-kb-http 5601:5601
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601

Afterward, we can access Kibana through https://localhost:5601 in the browser.

Now we have Elasticsearch and Kibana up and running in our Kubernetes cluster, and we are ready to provision them with additional resources.

The ECK-Custom-Resources (ECK-CR) operator can be installed with the provided Helm chart:

$ helm repo add eck-custom-resources https://xco-sk.github.io/eck-custom-resources/
$ helm install eck-cr eck-custom-resources/eck-custom-resources-operator

The default configuration of the operator works well with the examples above, so no additional configuration is needed. You can always change the Elasticsearch and Kibana URLs and authentication details by passing the proper values; see the default values.yaml and the chart's readme file.
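For example, pointing the operator at a differently named cluster could be done with a values override passed via helm install -f. The key names below are illustrative assumptions — consult the chart's default values.yaml for the actual structure:

```yaml
# my-values.yaml — illustrative; verify key names against the chart's values.yaml
elasticsearch:
  enabled: true
  url: https://quickstart-es-http.default.svc:9200
kibana:
  enabled: true
  url: https://quickstart-kb-http.default.svc:5601
```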

We can now check if the ECK-CR pod is ready by calling kubectl get pods.

The ECK-CR Helm chart installs various custom resource definitions representing Elasticsearch and Kibana objects. The list of supported resources, with their respective documentation, is available in the GitHub repository. As an example of the workflow, we will show the creation of an index template and an index.

We will start with a simple index template, defining the index template JSON in the body field:
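An indextemplate.yaml could look like the sketch below. The kind and the es.eck.github.com group match the resource name shown in the command output that follows; the apiVersion suffix (v1alpha1) and the exact body schema are assumptions, so check the operator's documentation:

```yaml
apiVersion: es.eck.github.com/v1alpha1
kind: IndexTemplate
metadata:
  name: indextemplate-sample
spec:
  # the template definition is plain Elasticsearch index-template JSON
  body: |
    {
      "index_patterns": ["index-*"],
      "template": {
        "settings": {
          "number_of_shards": 1
        }
      }
    }
```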

$ kubectl apply -f indextemplate.yaml
indextemplate.es.eck.github.com/indextemplate-sample created

The operator will create (or update) the index template with the name indextemplate-sample in our Elasticsearch cluster, which can be verified by calling Elasticsearch REST API:

$ curl "https://localhost:9200/_cat/templates/indextemplate*?v" -k -u elastic:yEz6QpRW0w005H3q2B738ugx
name                 index_patterns order version composed_of
indextemplate-sample [index-*]      1             []

Similarly, we can deploy the index. To ensure that indices are deployed after the index templates, the Index specification defines an optional dependencies field listing the required index templates. The operator then waits for all required templates to be present in the cluster before applying the given Index:
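An index.yaml might then look like this. As above, the exact shape of the dependencies field is an assumption based on the description; the operator's documentation is the authoritative reference:

```yaml
apiVersion: es.eck.github.com/v1alpha1
kind: Index
metadata:
  name: index-sample
spec:
  # wait for these index templates before creating the index
  dependencies:
    indexTemplates:
      - indextemplate-sample
  body: |
    {
      "settings": {
        "number_of_shards": 1
      }
    }
```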

$ kubectl apply -f index.yaml
index.es.eck.github.com/index-sample created

The ECK-CR operator will create the Index in our Elasticsearch cluster. To verify, we can again query Elasticsearch:
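Against the running cluster, a _cat query similar to the one we used for the template confirms the index exists (the exact column values will depend on your cluster):

```
$ curl "https://localhost:9200/_cat/indices/index-sample?v" -k -u elastic:yEz6QpRW0w005H3q2B738ugx
health status index        ...
green  open   index-sample ...
```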

The operator also reports the reconciliation status in the object's events, so if something goes wrong, we can easily get information about the root cause:

$ kubectl describe index index-sample-failure

Events:
  Type     Reason                From              Message
  ----     ------                ----              -------
  Warning  Missing dependencies  index_controller  Some of declared dependencies are not present yet: dependencies not fulfilled. Missing indices:[index-base-sample]. Missing index templates:[]. Errors:[]

To delete resources, we will use the kubectl delete command:

$ kubectl delete IndexTemplate indextemplate-sample
indextemplate.es.eck.github.com "indextemplate-sample" deleted

This will delete the index template from both Kubernetes and Elasticsearch. The same applies to Index resources; however, there is a check for index emptiness to prevent data loss. If the index is not empty, the operator won't delete it from Elasticsearch.

Other Elasticsearch resources are supported as well; see the documentation on how to work with each of them: Index lifecycle policies, Ingest pipelines, Snapshot repositories, Snapshot lifecycle policies, Users, and Roles.

Resources related to Kibana are handled in the same fashion as those for Elasticsearch; see the example of a visualization:
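The Visualization kind and the kibana.eck.github.com/v1alpha1 apiVersion match the event output shown below; the body here is an assumed minimal Kibana saved-object payload, so treat it as a sketch rather than a working visualization definition:

```yaml
apiVersion: kibana.eck.github.com/v1alpha1
kind: Visualization
metadata:
  name: visualization-sample
spec:
  # saved-object JSON as accepted by the Kibana REST API
  body: |
    {
      "attributes": {
        "title": "visualization-sample"
      }
    }
```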

The operator then creates the object in Kibana using the Kibana REST API. The reconciliation status can be monitored through events; in the case of the example above:

$ kubectl describe visualization visualization-sample
Events:
  Type    Reason   From                      Message
  ----    ------   ----                      -------
  Normal  Created  visualization_controller  Created/Updated kibana.eck.github.com/v1alpha1/Visualization visualization-sample

We can also define the required resources in the dependencies field — this is especially useful for matching the references field in the visualization JSON. See the documentation of Kibana resources for more examples and a detailed description of their specification.

The ECK-CR operator allows us to provision Elasticsearch and Kibana clusters in a declarative way, which brings clarity and transparency to our Infrastructure-as-Code. The operator was designed for easy installation and maximum compatibility with ECK, but it also supports standalone Elasticsearch and Kibana installations.

In its current state, the operator should cover the most commonly used Elasticsearch and Kibana resources, but if you find that an important resource type is missing, feel free to create an issue in the GitHub repo. We will be more than thankful for any kind of feedback.



