Continuing from where I left off, we’ll cover configuring networking and load balancing, applying third-party certificates, and configuring the cluster for SAML authentication using Azure.
When setting up our basic cluster, we used the default network settings. The internal cluster IP addresses are assigned automatically from a subnet range made available by the Kubernetes host environment. You’ll need four (4) reserved IP addresses for static assignment to the load balancers; these will be used for DNS mapping and access to the Elasticsearch and Kibana HTTP services. Two (2) will be used now, and two (2) will be used later when setting up the monitoring cluster.
We will assign the two static IP addresses as part of the LoadBalancer setup next.
To apply load balancing to the Elasticsearch and Kibana instances, the following YAML needs to be added under the base spec: section. By default, the load balancer setup below routes traffic across the entire cluster, since all nodes are coordinating nodes. If you want to balance only across the master nodes, or across dedicated coordinating nodes, traffic shaping can be applied to the load balancer by adding a selector section to the example below and specifying which nodes the selector applies to.
http:
  service:
    metadata:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      finalizers:
        - service.kubernetes.io/load-balancer-cleanup
    spec:
      type: LoadBalancer
      loadBalancerIP: <reserved static IP>
      # (Optional) Add a selector under the spec: section for traffic shaping.
      selector:
        elasticsearch.k8s.elastic.co/cluster-name: quickstart
        # For master nodes only, use this line:
        elasticsearch.k8s.elastic.co/node-master: "true"
        # For dedicated coordinating nodes only, use the following instead:
        elasticsearch.k8s.elastic.co/node-master: "false"
        elasticsearch.k8s.elastic.co/node-data: "false"
        elasticsearch.k8s.elastic.co/node-ingest: "false"
        elasticsearch.k8s.elastic.co/node-ml: "false"
        elasticsearch.k8s.elastic.co/node-transform: "false"
Once the load balancer has been configured and applied, it can easily be tested by running the curl command below several times. Each time, the response should come back from one of the load-balanced nodes only.
curl -k -u elastic -XGET "https://<internal/intranet IP>:9200/"
Enabling the load balancer for Kubernetes running on Microsoft Azure requires the annotations section shown above. If your instance is running under another provider, you may need to adjust the annotation to match that provider's settings. For self-hosted Kubernetes, you should be able to remove it completely.
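For example, on a self-hosted cluster the same section might reduce to something like the sketch below. This assumes your environment can provision LoadBalancer services without provider annotations (for instance via MetalLB or a similar bare-metal load balancer; the specific implementation is not part of the original setup):

```yaml
# Self-hosted sketch: no Azure annotation needed. Assumes the cluster has a
# LoadBalancer implementation available (e.g. MetalLB) -- an assumption, not
# part of the original Azure-based walkthrough.
http:
  service:
    spec:
      type: LoadBalancer
      loadBalancerIP: <reserved static IP>
```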
Applying third-party certificates, or bring-your-own-certificate, can be either fairly easy or fairly complex depending on how your certificates were generated. If your certificate provider supplies CRT and KEY files, then great, you can proceed to the section for adding these as a Kubernetes secret. If your certificate provider supplies PFX files, then you may need a few extra steps to get the files you will need.
Extract Certificate Files
There are plenty of guides available on how to extract/convert certificates using openssl, so I’m going to provide only the steps and not go in depth on each one. You will need three (3) files when setting up the Kubernetes secret for the certificate: tls.key, tls.crt, and ca.crt.
Extract the key file from the PFX file using:
openssl pkcs12 -in your.PFX -nocerts -nodes -out tls.key
Next, extract the CRT certificate from the PFX file using:
openssl pkcs12 -in your.PFX -clcerts -nokeys -out tls.crt
Finally, extract the CA certificate from the PFX file using:
openssl pkcs12 -in your.PFX -cacerts -nokeys -chain -out ca.crt
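Before creating the secret, it is worth sanity-checking that the extracted key actually matches the extracted certificate. One way is to compare the public-key modulus of each. The snippet below builds a throwaway self-signed PFX purely to demonstrate the check; the file names demo.key, demo.crt, demo.pfx and the password "demo" are illustrative, not part of the original steps (with a real PFX, skip straight to the extraction and comparison):

```shell
# Build a throwaway self-signed certificate and PFX just for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=demo.example.com"
openssl pkcs12 -export -inkey demo.key -in demo.crt -passout pass:demo -out demo.pfx

# Same extraction commands as above (a self-signed demo PFX has no CA chain,
# so the ca.crt step is omitted here).
openssl pkcs12 -in demo.pfx -nocerts -nodes -passin pass:demo -out tls.key
openssl pkcs12 -in demo.pfx -clcerts -nokeys -passin pass:demo -out tls.crt

# If the key matches the certificate, these two digests will be identical.
openssl rsa -noout -modulus -in tls.key | openssl md5
openssl x509 -noout -modulus -in tls.crt | openssl md5
```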
Create Kubernetes Secret
Now that we have our three (3) files, we are going to store them in a Kubernetes secret, which will be used by our YAML files when setting up the cluster. For our example, we are going to name the Kubernetes secret super-secret-es-cert; this is the name we will reference in the YAML file.
kubectl -n quickstart create secret generic super-secret-es-cert --from-file=ca.crt --from-file=tls.crt --from-file=tls.key
The secret is then referenced by name under the http: section of the Elasticsearch spec:

http:
  tls:
    certificate:
      secretName: super-secret-es-cert
  service:
    metadata:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      finalizers:
        - service.kubernetes.io/load-balancer-cleanup
    spec:
      type: LoadBalancer
      loadBalancerIP: <reserved static IP>
As a reminder, these instructions use Azure AD for SSO, since that was the environment I was working in at the time. We’ll first need to create two (2) Enterprise Applications in Azure, if there are no existing ones that can be used: one for the cluster we just built, and one for the monitoring cluster to be added later. You’ll need at least two (2) groups under each Enterprise Application; these will be used for access mapping in Elasticsearch.
Under the Single sign-on section, configure the Basic SAML Configuration panel with the information for the cluster. This should be the DNS name mapped to the static IP for the Kibana HTTP service.
From here we will need the Federation Metadata XML (or its URL) and the Azure AD Identifier. If you use the Federation Metadata XML file, you’ll need to store it in a Kubernetes secret and mount it via the YAML volumes and volumeMounts sections. The path to that file would then be used rather than the metadata URL shown below.
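If you go the XML-file route, the secret and mount might look like the following sketch. The secret name saml-metadata, the file name metadata.xml, and the mount path are all assumptions for illustration, not values from the original setup (first create the secret with kubectl -n quickstart create secret generic saml-metadata --from-file=metadata.xml):

```yaml
# Hypothetical example: secret name, file name, and mount path are
# placeholders -- adjust to your environment.
spec:
  nodeSets:
    - name: default
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              volumeMounts:
                - name: saml-metadata
                  mountPath: /usr/share/elasticsearch/config/saml
                  readOnly: true
          volumes:
            - name: saml-metadata
              secret:
                secretName: saml-metadata
```

With this in place, idp.metadata.path would point at /usr/share/elasticsearch/config/saml/metadata.xml instead of the metadata URL.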
Under each nodeset in the Elasticsearch cluster, add the following:
xpack.security.authc.token.enabled: true
xpack.security.authc.realms:
  native:
    native1:
      order: 0
  saml:
    quickstart_saml:
      order: 2
      idp.entity_id: <Azure AD Identifier>
      idp.metadata.path: <App Federation Metadata URL>
      sp.acs: https://<kibana-url>:5601/api/security/v1/saml
      sp.entity_id: https://<kibana-url>:5601
      sp.logout: https://<kibana-url>:5601/logout
      attributes.principal: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
      attributes.groups: http://schemas.microsoft.com/ws/2008/06/identity/claims/role
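With the realm in place, the Azure group claims arriving via attributes.groups can be tied to Elasticsearch roles using the role-mapping API. Below is a sketch you could run from Kibana Dev Tools; the mapping name, the superuser role choice, and the group object ID are placeholders, not values from the original setup:

```
PUT _security/role_mapping/quickstart-saml-admins
{
  "roles": [ "superuser" ],
  "enabled": true,
  "rules": {
    "all": [
      { "field": { "realm.name": "quickstart_saml" } },
      { "field": { "groups": "<Azure AD group object ID>" } }
    ]
  }
}
```

You would typically create one mapping per Azure group, pointing each at an appropriately scoped role rather than superuser.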
Now for the fun part, enabling SAML/SSO in Kibana and customizing how the login screen looks. With the following snippet added to our Kibana YAML, we display two logon options to the user. The first option is our SAML/SSO logon with custom logo and description. The second is the local logon but with a custom hint.
xpack.security.authc.providers:
  saml.quickstart_saml:
    order: 0
    realm: "quickstart_saml"
    description: "Log in with Quickstart SSO"
    icon: "<URL to logo to show next to description>"
  basic.basic1:
    order: 1
    hint: "Typically for local administrators"
You should now be able to log in to Kibana and query your Elasticsearch cluster. For now, run the following command from a terminal to reach Kibana with the default elastic credentials: kubectl port-forward services/kibana-kb-http -n quickstart 5601:5601. You should then be able to log in at https://localhost:5601 in a web browser. You should also be able to log in using either the static IP or the DNS name.
In Part 3, we’ll cover setting up a monitoring cluster and beats to send data to it, configuring the cluster for snapshot and restore using an Azure repository, and configuring the cluster nodes for a hot/warm/cold architecture.
This blog was written by Christopher Hayes, Senior Security Consultant at Set Solutions.