
The primary goal of this blog series is to provide an easy-to-follow walkthrough that, by the end of the series, will enable the reader to:
- spin up a minimal ECK cluster,
- configure networking, a load balancer, and third-party certificates,
- configure SSO/SAML authentication,
- and set up the cluster to perform snapshot and recovery.
Background
Recently, I had the opportunity to work with Elastic’s Elastic Cloud on Kubernetes (ECK), and once I had the process figured out, it made setting up an Elastic Cluster super easy. Getting to the point where I had a usable cluster for my given use-case, though, was no easy feat. My use-case involved having two working Elasticsearch Clusters with high availability, remote snapshot backups, single sign-on enabled, and third-party certificates, all while staying within the ERU (Enterprise Resource Unit) constraints of an ECK Enterprise subscription.
Most of what I’ll cover in this series is focused on using Microsoft Azure as the Kubernetes host, but these steps should be able to be easily translated over to other cloud platforms or on-premises solutions.
Subscriptions
The first thing you’ll need to decide is what Elastic Stack Subscription you will require; the main determining factors to keep in mind here will be the desired level of support from Elastic, the need for single sign-on or LDAP support, and the amount of hardware resources that will be available. Other available features may come into play depending upon your use-case.
For my use-case I was provided with an Enterprise Subscription. The problem was that the included ECK operator license covers only 3 ERUs, for a combined 192 GB of TAM (Total Addressable Memory; 1 ERU = 64 GB TAM). That made it a challenge to spin up a production environment and a staging environment sharing the 192 GB of memory while still meeting the base cluster requirements recommended for storing the vendor-generated data. The vendor requirements were initially geared toward virtual machine specifications rather than containerization; if I’d been building an Elasticsearch Cluster on virtual machines, this wouldn’t have been a problem. After much testing, I finally found a balance that worked, though it was below the vendor’s resource recommendations.
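To make the constraint concrete: 3 ERUs at 64 GB each gives 192 GB of TAM to split across both environments. One illustrative division (a hypothetical split shown only for the arithmetic, not the vendor's recommendation or necessarily the split I landed on) could be:

Production: 2 ERU = 128 GB TAM
Staging:    1 ERU =  64 GB TAM
Total:      3 ERU = 192 GB TAM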
Kubernetes
Before we can install our Elastic Cluster, we need a Kubernetes environment to run it on. If one isn’t already running in Microsoft Azure, we’ll need to log in and create a Kubernetes resource.
Under Kubernetes Services, click the Create dropdown and select Create a Kubernetes cluster. Fill out the required information; most of the non-required fields can be left at their default values. When creating node pools for a basic cluster, an auto-scaling 3-node default pool is enough. If your use-case requires a hot/warm strategy, consider separate node pools for each grouping of Elastic nodes that will be in use on the cluster. Once the Kubernetes cluster is deployed, connect to it through the OS terminal or Cloud Shell; a CLI alternative to the portal is sketched below.
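If you’d rather script the cluster creation than click through the portal, the Azure CLI can do the same thing. This is a minimal sketch; the resource group name, cluster name, region, and autoscaler bounds are placeholders to adjust for your environment.

# Create a resource group for the cluster (name and region are placeholders)
az group create --name eck-demo-rg --location eastus

# Create an AKS cluster with an auto-scaling 3-node default pool
az aks create \
  --resource-group eck-demo-rg \
  --name eck-demo-aks \
  --node-count 3 \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 6 \
  --generate-ssh-keys

# Merge the new cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group eck-demo-rg --name eck-demo-aks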
Tools
I use the following tools when working with Kubernetes or YAML to make things easier. Use whatever tools you are most comfortable with and that are available in your environment.
- macOS (or Linux)
- iTerm2
- Microsoft Visual Studio Code
- Microsoft Kubernetes plugin
- Microsoft Docker plugin (in the event you are building a Docker file for uploading)
- Microsoft Azure Tools plugin
- Red Hat YAML language plugin
- Prettier – Code formatter plugin
- GitHub Theme
- kubectx – Kubernetes context switcher
- k9s – Kubernetes CLI for easy management
Preparing
Before we can install and set up Elastic Cloud on Kubernetes, we first need to install the ECK operator and its custom resource definitions. Follow Elastic’s documentation, Deploy ECK in your Kubernetes cluster, to do this.
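At the time of writing, deploying the operator came down to two kubectl commands. The sketch below assumes ECK 1.8.0; check Elastic’s Deploy ECK in your Kubernetes cluster page for the current version number and URLs.

# Install the ECK custom resource definitions
kubectl create -f https://download.elastic.co/downloads/eck/1.8.0/crds.yaml

# Install the operator itself; it runs in the elastic-system namespace
kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml

# Optionally, tail the operator logs to confirm it started cleanly
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator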
The documentation provided by Elastic also includes a minimal YAML file that can be used to spin up a very basic Elastic Cluster. With the all-in-one example, you get the following:
- an Elastic Cluster running under the Basic license, with self-signed certificates and local authentication
- a monitoring instance of Elastic
- Beats (Filebeat, Metricbeat, and Heartbeat)
Having an all-in-one file makes it easy to get the cluster running, but it becomes a hassle later: to update something just for Elasticsearch or Kibana, you have to scroll through many lines of configuration that never change after the initial setup.
For my examples, I am not going to use the all-in-one approach. Instead, I will break building the cluster into a multi-phase approach and cover each of the following phases, providing examples for each step.
- Install a basic ECK cluster with Kibana
- Configure Networking, Load Balancing, and third-party certificates
- Configure the monitoring cluster and beats
- Configure SSO/SAML authentication
- Configure remote snapshot and restore
- Configure a hot/warm/cold/frozen strategy
Install ECK basic cluster
Create a Namespace
We are going to use the same quickstart namespace used in the Elastic examples, so we’ll go ahead and create it with the following YAML. I created a file called namespace.yaml and applied it using kubectl apply -f namespace.yaml.
---
apiVersion: v1
kind: Namespace
metadata:
  name: quickstart
...
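Once applied, the new namespace should appear immediately:

kubectl get namespace quickstart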
Create Elasticsearch Cluster
Next, I created a 6-node Elastic cluster consisting of 3 master nodes and 3 data nodes. I created a file named elasticsearch.yaml with the contents below and applied it using kubectl -n quickstart apply -f elasticsearch.yaml. If you see an error containing no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1", it means the steps in Deploy ECK in your Kubernetes cluster were not completed, so the Kind being referenced cannot be found; run those steps to deploy ECK, then re-run the command above. The example below relies on the default resource allocation limits provided by the ECK operator; see Default behavior under the Elastic documentation Manage compute resources.
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
  labels:
    app: testing
spec:
  version: 7.15.0
  volumeClaimDeletePolicy: DeleteOnScaledownOnly
  nodeSets:
    - name: masters
      count: 3
      config:
        node.roles: [ "master" ]
        node.store.allow_mmap: false
      podTemplate:
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: [ "sh", "-c", "sysctl -w vm.max_map_count=262144" ]
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
            storageClassName: azurefile
    - name: data
      count: 3
      config:
        node.roles: [ "data", "ingest", "ml", "transform" ]
        node.store.allow_mmap: false
      podTemplate:
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: [ "sh", "-c", "sysctl -w vm.max_map_count=262144" ]
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
            storageClassName: azurefile
...
You can monitor the setup process with k9s, or with the command kubectl -n quickstart get pods -o wide -w.
Once Elasticsearch is running, you can retrieve the default elastic user’s password with the following command: kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' -n quickstart | base64 -d.
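To verify the cluster itself, you can port-forward the quickstart-es-http service that the ECK operator creates and query the health API. A minimal sketch (run the port-forward in a second terminal, or background it as shown):

# Forward the Elasticsearch HTTP service to localhost
kubectl -n quickstart port-forward service/quickstart-es-http 9200:9200 &

# Fetch the elastic user's password, then ask for cluster health
PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' -n quickstart | base64 -d)
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cluster/health?pretty"

With all six nodes joined, the response should report a status of green.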
Install Kibana
For the final step, install Kibana using the following YAML. I created a file named kibana.yaml and installed it using the command kubectl -n quickstart apply -f kibana.yaml.
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana # Prefix assigned to K8s pods
spec:
  version: 7.15.0 # Version of Kibana to use (keep in sync with Elasticsearch)
  count: 1 # Number of replicas/pods
  elasticsearchRef:
    name: "quickstart" # Elasticsearch Cluster reference
  podTemplate:
    spec:
      nodeSelector:
        agentpool: default # assign to the default node pool
...
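The operator reports Kibana health the same way it does Elasticsearch health; once Kibana has associated with the cluster, the HEALTH column here should read green:

kubectl -n quickstart get kibana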
Verifying Cluster
You should now be able to log in to Kibana and query your Elasticsearch Cluster. For now, run the following command from a terminal to forward the Kibana service to your local machine: kubectl port-forward services/kibana-kb-http -n quickstart 5601:5601. You should then be able to log in at https://localhost:5601 in a web browser using the elastic user and the password retrieved earlier.

In Part 2, we’ll cover Networking, Load Balancing, and updating our cluster to use third-party certificates.
***
This blog was written by Christopher Hayes, Senior Security Consultant at Set Solutions.