ELK on top of AWS EKS

Raji Satti
6 min read · Jun 10, 2021


ELK is an acronym for Elasticsearch, Logstash, and Kibana, a stack of tools used together to collect, store, and analyze logs.

Below is a brief description of each module required to build a pipeline that ships logs from your machines or projects directly to a centralized server.

1. Elasticsearch
Elasticsearch is the most vital part of the pipeline: it stores all the logs. It is essentially a search and analytics engine that provides functionality for managing and searching logs through Elasticsearch queries.

2. Logstash
Logstash is the server-side component of the pipeline that ingests incoming logs and handles data processing. It can ingest data from multiple sources simultaneously, which broadens where the ELK stack can be applied. It can also filter the logs before passing them on to Elasticsearch.

3. Kibana
Searching or analyzing logs is quite cumbersome if they are stored as raw JSON objects in files. Kibana comes to the rescue by providing a web interface for searching and visualizing logs. In short, Kibana is a flexible visualization tool that puts a graphical user interface on top of Elasticsearch's functionality, making it easier for end users.

4. Filebeat
Filebeat is installed on the client servers and is responsible for shipping their logs to Logstash, which passes them further down the pipeline.
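As an illustration only (the log path and Logstash host below are placeholders, not values from this setup), a minimal filebeat.yml on a client server that ships logs to a Logstash instance listening on the default Beats port 5044 could look like this:

$ cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log                     # placeholder: point at your application's log files
output.logstash:
  hosts: ["logstash.example.com:5044"]     # placeholder Logstash endpoint
EOF
$ filebeat -e                              # run in the foreground to verify the configuration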

Now let's create the ELK environment on top of an EKS cluster. We will use Helm to do this. Each Helm chart contains all the specifications needed for a deployment on Kubernetes, in the form of files describing a set of Kubernetes resources and configurations. Charts can be used to deploy very basic applications as well as more complex systems such as the ELK Stack.

Now we will walk through the steps to configure the ELK Stack on top of a Kubernetes cluster running on AWS EKS.

Step 1: Configure the Helm command-line tool.

We need to install and configure Helm on our system so that we can use it to set up the ELK Stack. You can download the Helm binary for Windows from the link given below.

https://get.helm.sh/helm-v3.3.0-rc.1-windows-amd64.zip
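If your workstation runs Linux or macOS instead of Windows, the Helm project's installer script is a common alternative (as with any piped installer script, review it before executing):

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh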

Now we need to run the commands below to initialize Helm and add the chart repositories. (helm init is only required with Helm 2, which uses Tiller; if you downloaded Helm 3 from the link above, you can skip it. The old stable chart repository has also moved to https://charts.helm.sh/stable.)

$ helm init
$ helm repo add stable https://charts.helm.sh/stable
$ helm repo list
$ helm repo update
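You can confirm that the Helm client is installed and on your PATH by checking its version:

$ helm version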

Step 2: Deploying an Elasticsearch Cluster with Helm.

It’s time to start deploying the different components of the ELK Stack.

Let’s start with Elasticsearch. We’ll be using Elastic’s official Helm repository.

$ helm repo add elastic https://helm.elastic.co
$ curl -O https://raw.githubusercontent.com/elastic/helm-charts/master/elasticsearch/examples/minikube/values.yaml
$ helm install --name elasticsearch elastic/elasticsearch -f ./values.yaml
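Note that the --name flag is Helm 2 syntax. If you are running Helm 3 (the binary linked in Step 1), pass the release name as a positional argument instead; the same applies to the Kibana and Metricbeat installs later on:

$ helm install elasticsearch elastic/elasticsearch -f ./values.yaml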

After running the install command, we will get the output below.

NAME:   elasticsearch
LAST DEPLOYED: Wed Jun 9 22:25:03 2021
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 0/1 Pending 0 1s
elasticsearch-master-1 0/1 Pending 0 1s
elasticsearch-master-2 0/1 Pending 0 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-master ClusterIP 10.100.12.233 <none> 9200/TCP,9300/TCP 1s
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 1s
==> v1/StatefulSet
NAME READY AGE
elasticsearch-master 0/3 1s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
elasticsearch-master-pdb N/A 1 0 1s
NOTES:
1. Watch all cluster members come up.
$ kubectl get pods --namespace=default -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
$ helm test elasticsearch --cleanup

As noted at the end of the output, you can watch the status of the Elasticsearch pods with the command below.

$ kubectl get pods --namespace=default -l app=elasticsearch-master -w

It might take some time, but eventually all three Elasticsearch pods will be shown as Running.

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 1m
elasticsearch-master-2 1/1 Running 0 1m
elasticsearch-master-1 1/1 Running 0 1m

Now we need to set up port forwarding so that we can connect to Elasticsearch from outside the cluster.

$ kubectl port-forward svc/elasticsearch-master 9200

If we now browse to http://127.0.0.1:9200, Elasticsearch responds with its cluster information as JSON.
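For example, querying the forwarded port with curl returns something along these lines (node names, IDs, and version numbers will differ in your cluster):

$ curl http://localhost:9200
{
  "name" : "elasticsearch-master-0",
  "cluster_name" : "elasticsearch",
  "version" : { ... },
  "tagline" : "You Know, for Search"
}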

Step 3: Deploying Kibana with Helm

After Elasticsearch has been configured successfully, we can deploy Kibana using the command below.

$ helm install --name kibana elastic/kibana

You can view the output as shown below:

NAME:   kibana
LAST DEPLOYED: Wed Jun 9 23:04:33 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
kibana-kibana 0/1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
kibana-kibana-6948cf498c-tdgdb 0/1 ContainerCreating 0 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kibana-kibana ClusterIP 10.109.229.135 <none> 5601/TCP 1s

It takes some time for the Kibana pod to reach the Running state.

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 18m
elasticsearch-master-1 1/1 Running 0 18m
elasticsearch-master-2 1/1 Running 0 18m
kibana-kibana-6948cf498c-tdgdb 1/1 Running 0 8m13s

Now set up port forwarding for Kibana using the command below.

$ kubectl port-forward deployment/kibana-kibana 5601

Forwarding from 127.0.0.1:5601 -> 5601

Forwarding from [::1]:5601 -> 5601

You can now access Kibana from your browser at: http://localhost:5601
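Before opening the UI, you can also run a quick sanity check against Kibana's status API on the forwarded port (the exact response fields depend on the Kibana version):

$ curl http://localhost:5601/api/status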

Step 4: Deploying Metricbeat with Helm.

We need to set up a data pipeline to monitor the metrics of the Kubernetes cluster, so here we're going to deploy the Metricbeat Helm chart.

$ helm install --name metricbeat elastic/metricbeat
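If you later want to tweak the chart's defaults (for example, which Metricbeat modules are enabled), you can export them to a file, edit it, and apply it with helm upgrade. The file name below is just an example, and helm show values is the Helm 3 spelling of Helm 2's helm inspect values:

$ helm show values elastic/metricbeat > metricbeat-values.yaml
$ helm upgrade metricbeat elastic/metricbeat -f ./metricbeat-values.yaml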

After running the install command, we will get the output below.

NAME:   metricbeat
LAST DEPLOYED: Wed Jun 9 23:24:49 2021
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
metricbeat-metricbeat-daemonset-config 1 0s
metricbeat-metricbeat-deployment-config 1 0s
==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
metricbeat-metricbeat 1 1 0 1 0 <none> 0s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
metricbeat-kube-state-metrics 0/1 1 0 1s
metricbeat-metricbeat-metrics 0/1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
metricbeat-kube-state-metrics-6f88454b98-mfzpk 0/1 ContainerCreating 0 1s
metricbeat-metricbeat-hxtbg 0/1 ContainerCreating 0 1s
metricbeat-metricbeat-metrics-fbbc87648-5wx87 0/1 ContainerCreating 0 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metricbeat-kube-state-metrics ClusterIP 10.96.1.249 <none> 8080/TCP 0s
==> v1/ServiceAccount
NAME SECRETS AGE
metricbeat-kube-state-metrics 1 0s
metricbeat-metricbeat 1 0s
==> v1beta1/ClusterRole
NAME CREATED AT
metricbeat-kube-state-metrics 2020-07-11T17:54:49Z
metricbeat-metricbeat-cluster-role 2020-07-11T17:54:49Z
==> v1beta1/ClusterRoleBinding
NAME ROLE AGE
metricbeat-kube-state-metrics ClusterRole/metricbeat-kube-state-metrics 0s
metricbeat-metricbeat-cluster-role-binding ClusterRole/metricbeat-metricbeat-cluster-role 0s
NOTES:
1. Watch all containers come up.
$ kubectl get pods --namespace=default -l app=metricbeat-metricbeat -w

At the end, we can see the new Metricbeat indices appear in Elasticsearch, as shown below.
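A quick way to list them is Elasticsearch's cat indices API on the port-forwarded service (the index names include the Metricbeat version, so yours will differ):

$ curl "http://localhost:9200/_cat/indices?v"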

Step 5: Set up an index pattern in Kibana.

Opening Kibana, we can see that a Metricbeat index has already been created automatically.

After we create an index pattern for it in Kibana, we can explore the metrics of the Kubernetes cluster.

The above walkthrough shows how to deploy the ELK Stack with Helm on top of a Kubernetes cluster running on AWS EKS and use it to monitor the cluster's logs and metrics.
