Posted: October 1, 2020

I have been playing around with Elasticsearch and Kibana for a little while now. I have installed them and run through a couple of demos. Like cooking with a recipe, things usually work out when you are following instructions. The problem is that I do not have a good use case to help me explore how Elastic and Kibana work in the real world. In the previous posts in this series I explained how I stumbled upon my reusable use case, and how I built this dashboard using InfluxDB and Grafana. Now I am going to create this dashboard using Elasticsearch and Kibana. I know what I want to build, I do not have any instructions for how to do it, and I am mostly unsupervised. This is where all learning really happens.

Elasticsearch, Kibana, and Logstash make up the Elastic (ELK) stack. Logstash won’t be making an appearance in this story, but it is pretty handy for moving and parsing data before it goes to Elasticsearch. Elasticsearch handles the storage and retrieval of data, while Kibana provides the analytics, visualization, and management tools for the stack.

Installing Elasticsearch and Kibana on a single VM is easy.  Download and untar the files, start the processes, and start playing.  For my setup, I needed to mess with the configuration files a little to get the IP/ports to accept connections, but these are one-line changes in the Elasticsearch and Kibana configuration files.  I’m not going to spend a great deal of time on the installation and configuration because Elastic does a good job of that already in their installation documentation.
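
For reference, the tweaks were along these lines; treat this as a rough sketch, since the exact values depend on your environment:

```yaml
# elasticsearch.yml - listen on the VM's address instead of only localhost
network.host: 0.0.0.0      # or the VM's specific IP
http.port: 9200

# kibana.yml - accept outside connections and point Kibana at Elasticsearch
server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]
```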

Elasticsearch exposes everything you need through a REST API. This is one of the nicer parts of Elastic. As a first-timer, once I got Elasticsearch running, I didn’t really have to worry about it all that much. In a larger distributed environment this is probably more complex than my little VM, but that is a discussion for another series.
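
To give a feel for it, here is a minimal sketch of talking to that API from Python with the requests library; the index name and fields are placeholders from my use case, not anything Elasticsearch requires:

```python
import requests

ES = "http://localhost:9200"  # wherever Elasticsearch is listening

# Ask the cluster how it is feeling
health = requests.get(f"{ES}/_cluster/health")
print(health.json()["status"])  # green / yellow / red

# Index a single JSON document; Elasticsearch creates the index if it does not exist
doc = {
    "timestamp": "2020-10-01T12:00:00Z",
    "channel": 3,
    "power_dbmv": 1.4,
    "snr_db": 38.2,
}
resp = requests.post(f"{ES}/modem-metrics/_doc", json=doc)
print(resp.status_code, resp.json()["result"])  # 201 created on success
```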

Kibana includes a Dev Tools console that wraps your API calls for you and provides command completion hints in a simple but powerful interface.  I played around in this interface for a long time, trying out queries, learning some of the basics about how Elastic does search.
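
A basic search in the console looks something like this (the index name is just the one from my use case; the console helps you fill in the rest):

```
GET modem-metrics/_search
{
  "query": {
    "range": { "timestamp": { "gte": "now-1h" } }
  },
  "size": 5
}
```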

It is always nice to work with raw data, isn't it? No? Ok, maybe it isn't all that great, but it does make you feel smarter to work with raw text input and output, doesn't it? If you look at the image above, in the upper right corner there is a little green "200 – Ok" indicator. I smiled when I saw that the first time, even though I had nothing to do with that connection success. Yes, I am easily amused.

As you may be able to guess from the image, Elasticsearch stores data as JSON documents. Complex JSON documents are not a problem during ingestion, but as I found when trying to build my first test dashboard, giving some thought to how the data is going to be used is a good idea.

Getting data into Elasticsearch

In my first iteration, my script was sending Elasticsearch a complex JSON document that contained all of the data properties, such as IP addresses, gateway, DNS and DHCP servers, as well as the various upstream and channel metrics for a single point in time. Each document had three sets of child documents, and each set held 5-40 entries.
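
Roughly, each document looked something like this; the field names are trimmed and made up for illustration, and the real document carried quite a bit more detail:

```json
{
  "timestamp": "2020-10-01T12:00:00Z",
  "wan_ip": "203.0.113.10",
  "gateway": "203.0.113.1",
  "dns_servers": ["203.0.113.2", "203.0.113.3"],
  "downstream_channels": [
    { "channel": 1, "frequency_hz": 507000000, "power_dbmv": 1.4, "snr_db": 38.2 },
    { "channel": 2, "frequency_hz": 513000000, "power_dbmv": 1.1, "snr_db": 38.0 }
  ],
  "upstream_channels": [
    { "channel": 1, "frequency_hz": 23700000, "power_dbmv": 44.5 }
  ],
  "error_counters": [
    { "channel": 1, "correctable": 120, "uncorrectable": 3 }
  ]
}
```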

I tried for a few hours to make Elastic retrieve and shape the data the way that I wanted it to. The search results were right, but the aggregation of values just would not work the way I wanted. I was basically asking Elasticsearch to take one part of the document, slice that part into smaller parts, and group them with the smaller parts from other documents, based on the timestamp of the containing document.

If that sounds complicated, it is.  Imagine taking 5 bags of mixed vegetables out of the freezer, slicing them open, tossing the bags in the air, and then attempting to sort the vegetables by type, and sequence them by the best-by date on the bags, before they hit the ground.  I thought this was a reasonable request of Elasticsearch at the time.

After realizing that I was going about this the hard way, I changed the structure of the data being sent. After the change, the channel metrics are sent as individual documents, one document per channel, per point in time. The searches are easier to construct, the data is much easier to read, and it works. Three hours of trying to be clever, versus about ten minutes of code changes. Looking back, I probably should have tried changing the data structure earlier.
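
After the restructuring, each channel reading goes in as its own small document, along these lines (again, illustrative field names):

```json
{
  "timestamp": "2020-10-01T12:00:00Z",
  "direction": "downstream",
  "channel": 3,
  "frequency_hz": 519000000,
  "power_dbmv": 1.3,
  "snr_db": 37.9,
  "uncorrectable_errors": 0
}
```

With one document per channel per reading, a date histogram split by the channel field is enough for the kinds of charts I needed, with no unpacking of nested structures along the way.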

The moral of the story is that getting things done is better than being clever. If I had managed to get the searches working with the original data structure, they would have been much more complex and harder to maintain going forward. This tends to be true on every data platform, whether it is a text log file on a client machine being pushed to a graphing tool or a data lake holding petabytes of rocket-launch telemetry. Formatting your data for how it is going to be used is significantly easier than constantly transforming the data when you are using it. When I am struggling with something that seems impossible, it is usually not a failing in the tool; the fault is in my assumptions about how that something should be done.

And then there were the heat maps

Almost all the panels for the dashboard came together without much fuss. The heat maps were the exception; I struggled to get them to look the way that I wanted. When I stopped fighting Kibana and let it build the heat map the way that it wanted to, things started going a little easier. Wait, didn't I just say something like that? Like lessons about reading documentation, some things are easier said than done.

Here is the heat map that I almost settled on. Getting to this point took a couple of hours, most of that time spent on that greenish area.

It looks great, doesn't it? It is ok, I know it is ugly. This abomination of grid lines is the result of trying to work around a quirk in the way that Kibana renders a heat map. The numeric y-axis values would not sort correctly. As far as I was able to tell, it bubbles up the common values and sorts those correctly, then values that are seen later are tacked on to the end. Instead of a numerical sort of the values from 0 to 7, it was 1, 4, 5, 6, 2, 7, etc. It could be a lexicographical sort as well. Either way, it is not the numeric sort that I am looking for, as you can see in the image below.

After poking around at this for a while, I found that the solution was only a Google search and a few minutes away. A post in the Elastic community explained how to resolve the issue: in the settings for the chart, enabling “Show empty buckets” and defining the boundaries for the axis gives the sort order I am looking for. So now the chart basically works, and I feel accomplished. I had considered just moving along and ignoring the weird sort order, but eventually someone would point it out, or I would run into the problem again somewhere, so I am glad I worked through it.

Here is where we ask the question about what the chart is trying to show. Do I want to see a trend, or the anomalous hotspots, or both? Since I am the one in control of the requirements, I am going to make the decision to see both the trend and the outliers. However, I do not need to see this at 10-minute intervals. Seeing how this data trends in 30-minute blocks should be fine. After a few quick changes to the settings, the mass of grid lines is under control. Now I have a dashboard that checks the boxes for the requirements.
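
Under the hood this is just a wider date histogram bucket. The roughly equivalent aggregation, if you were to send it to Elasticsearch yourself against the same illustrative index, looks like this; Kibana builds this kind of request for you from the panel settings:

```
GET modem-metrics/_search
{
  "size": 0,
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "timestamp", "fixed_interval": "30m" },
      "aggs": {
        "per_channel": { "terms": { "field": "channel" } }
      }
    }
  }
}
```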

If a dark mode exists, I haven't found it yet. Brightness aside, I am happy with the results. I learned a few things about getting data into Elasticsearch, and I got to kick the tires within Kibana's Visualization and Dashboards areas. There are a lot of visualization options to work with, and they all seemed to work well during my testing.

Building dashboards in Kibana does not require a lot of technical know-how. It is a point-and-click process to add the panels and define the characteristics of the charts. Even adding filtering criteria within a chart is an assisted point-and-click process. The only thing that I was not able to figure out was how to add a trendline to a metric visualization (the top row of my dashboard). It may be something I just have not figured out yet, or it may be functionality that does not exist within Kibana at this time. Also, there is that dark mode thing.

I think that building dashboards in Kibana is easy because of the work being done under the hood by Kibana and Elasticsearch, to say nothing of the work the Elastic team puts into the products themselves. Index patterns, field mappings, and the KQL query language all make things just a little easier once you understand them.
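
Field mappings are a good example. If a numeric field such as the channel number ends up indexed as text, sorts and ranges behave lexicographically rather than numerically, so defining the mapping up front avoids a whole class of surprises. A minimal sketch, using the same hypothetical index and Python requests style as before:

```python
import requests

ES = "http://localhost:9200"

# Create the index with explicit field mappings instead of relying on dynamic mapping
mapping = {
    "mappings": {
        "properties": {
            "timestamp": {"type": "date"},
            "direction": {"type": "keyword"},
            "channel": {"type": "integer"},
            "power_dbmv": {"type": "float"},
            "snr_db": {"type": "float"},
        }
    }
}
resp = requests.put(f"{ES}/modem-metrics", json=mapping)
print(resp.status_code, resp.json())
```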

That’s a wrap on the Elasticsearch and Kibana version of this dashboard. Be on the lookout for the next post in a couple of weeks. Next time I will be building this dashboard using Humio.

***

This blog was written by Greg Porterfield, Senior Security Consultant at Set Solutions.