
Ferenc Hernadi

Fri, Mar 30, 2018

Centralized logging under Kubernetes

For our Pipeline PaaS, monitoring is an essential part of operating distributed applications in production. We put considerable effort into monitoring large and federated clusters, and automate all of this with Pipeline, so all our users get out-of-the-box monitoring for free. You can read the posts in our monitoring series below:

Monitoring series:
Monitoring Apache Spark with Prometheus
Monitoring multiple federated clusters with Prometheus - the secure way
Application monitoring with Prometheus and Pipeline
Building a cloud cost management system on top of Prometheus
Monitoring Spark with Prometheus, reloaded

Logging series:
Centralized logging under Kubernetes
Secure logging on Kubernetes with Fluentd and Fluent Bit

However, monitoring's best friend is logging, and they go hand in hand. If something doesn't look right on our Grafana dashboards, or we get an alert from Prometheus, we need to investigate the application, and to do that we usually check the logs first. In a monolithic environment it's relatively easy to check the logs: there are only a few machines, and file logs or a syslog endpoint are enough for the investigation. For smaller or simpler deployments, Kubernetes also provides a simple command to check the output of an application.

$ kubectl logs pipeline-traefik-7c47dc7bd7-5mght
time="2018-03-29T21:44:59Z" level=info msg="Using TOML configuration file /config/traefik.toml"
time="2018-03-29T21:44:59Z" level=info msg="Traefik version v1.4.3 built on 2017-11-14_11:14:24AM"

You can read more about logging in the official Kubernetes documentation.

But things get rather complicated when we have quite a few (or, in our case, quite a lot) more containers. Moreover, due to the ephemeral nature of containers, they may already have been terminated by the time you search for their logs. So we needed a solution that gets the logs out of all the cloud virtual machines, out of Kubernetes itself and, of course, out of the deployed applications.

Logging the Kubernetes way

Unfortunately, Kubernetes doesn't provide many configuration options for logging. Docker offers multiple logging drivers, but we can't configure them via Kubernetes. No worries, though: there are already several solutions in the open source world, and when it comes to logging our favorite tools are Fluentd and Fluent-bit.


Fluentd to do the job

Fluentd is an open source data collector for a unified logging layer. It's written in Ruby with a plugin-oriented architecture, and helps to collect, route and store logs from different sources.

Under the hood

Running Fluentd is pretty simple: it has a straightforward configuration file that describes the pipeline of logs.

There are three main types of plugins: source, filter and output. As you might guess, the source plugin is where the logs come from. There are several options, such as tailing files or accepting HTTP or syslog input.

<source>
  @type tail
  path /var/log/httpd-access.log
  pos_file /var/log/td-agent/httpd-access.log.pos
  tag apache.access
  format apache2
</source>

After ingestion, logs are treated as records with attached metadata. To transform the log data or its metadata, you can use any filter plugin.

<filter apache.*>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
  </record>
</filter>

And finally, output plugins that can archive the logs to files, S3, Elasticsearch and much more. For more information, check the Fluentd plugin catalog.

<match **>
  @type file
  path /var/log/fluent/myapp
  time_slice_format %Y%m%d
  time_slice_wait 10m
  time_format %Y%m%dT%H%M%S%z
  compress gzip
</match>

Why is this cool? Because we can create structured logs from any kind of application.
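As an illustration, a single Apache access-log line ingested by the tail source above becomes a tagged, structured record roughly like the following (the field values here are hypothetical):

```
tag: apache.access
record: {
  "host":    "10.0.0.1",
  "method":  "GET",
  "path":    "/index.html",
  "code":    "200",
  "hostname": "node-1"
}
```

The `hostname` field is what the record_transformer filter above adds; the rest comes from the apache2 format parser.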

Routing basics

You may have noticed the patterns (like apache.*) next to the plugin declarations. These are called tags, and they help route logs coming from the same or different sources. Patterns are matched by a few simple rules.

Pattern   Action                                        Example
*         matches a single tag part                     a.* matches a.b, but does not match a or a.b.c
**        matches zero or more tag parts                a.** matches a, a.b and a.b.c
{X,Y,Z}   matches X, Y, or Z (each a match pattern)     {a,b} matches a and b, but does not match c

These can be used in combination with the * or ** patterns; examples include a.{b,c}.* and a.{b,c.**}. A pattern list such as <match a b> matches a and b. Patterns like <match a.** b.*> match a, a.b and a.b.c (from the first pattern) and b.d (from the second pattern).


Why do we need another tool? While Fluentd is optimized to be extended with ease through its plugin architecture, Fluent-bit is designed for performance. It's compact and written in C, so it can run on minimalistic IoT devices, yet it is fast enough to transfer huge amounts of logs. Moreover, it has built-in Kubernetes support. As a compact tool, it is designed to transport logs from all nodes.

Fluent Bit is an open source and multi-platform Log Processor and Forwarder


How Fluent-bit handles Kubernetes logs

As Kubernetes does not provide logging configuration options, we can't transfer logs directly over the fluent protocol. However, all container logs are available in the host's /var/log/containers/ directory. An example Kubernetes-enabled Fluent-bit configuration looks like this:

[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    Name          tail
    Path          /var/log/containers/*.log
    Parser        docker
    Tag           kube.*
    Mem_Buf_Limit 5MB

[FILTER]
    Name            kubernetes
    Match           kube.*
    Merge_JSON_Log  On
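Note that the snippet above has no output section. To forward the enriched records to a Fluentd aggregator, as in the architecture we describe later, an [OUTPUT] block along these lines could be added (the Fluentd service hostname and port here are assumptions; adjust them to where your Fluentd deployment listens):

```
[OUTPUT]
    Name   forward
    Match  *
    Host   fluentd.logging.svc.cluster.local
    Port   24224
```

On the Fluentd side, a matching `<source> @type forward` listening on port 24224 receives these records.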

There is an official Kubernetes filter bundled that enriches the logs with metadata. For more details, check the installation and configuration manuals.

To use the Kubernetes filter plugin you need to ensure that fluent-bit has sufficient permissions to get, watch and list Pods.
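A minimal RBAC sketch granting those permissions might look like the following (the ClusterRole, ServiceAccount and namespace names are assumptions; adjust them to match your deployment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: logging
```

The fluent-bit Pods then run under the `fluent-bit` ServiceAccount, so the Kubernetes filter can query the API server for Pod metadata.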

Fluent-bit will enrich each log record with the following metadata:

  • POD Name
  • POD ID
  • Container Name
  • Container ID
  • Labels
  • Annotations

If you need help configuring Fluent-bit, there are excellent official examples available.

Putting it together

So we have all the tools; now we need to set them up. To collect all logs, we deploy fluent-bit as a DaemonSet. These Pods mount the Docker container logs from the host machine and transfer them to the Fluentd service for further transformation.


A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
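A minimal DaemonSet sketch along those lines (the image tag, namespace, ServiceAccount and volume names are assumptions, not the exact manifest we use):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:0.12
        volumeMounts:
        # /var/log/containers/*.log are symlinks into the Docker data dir,
        # so both paths must be mounted for the tail input to work
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
```

Because it is a DaemonSet, one fluent-bit Pod is scheduled per node, so every node's container logs are picked up.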

Monitoring the log deployment

Admittedly, this looks like a complex architecture just to manage logs, so we didn't forget to monitor it as well. As we work with cloud native tools, we use Prometheus everywhere for monitoring, and luckily that's possible here too: Fluentd supports Prometheus via the fluent-plugin-prometheus plugin, and Fluent-bit will support it from version 0.13.

See below an example Fluentd plugin configuration that enables Prometheus scraping. For more details, please read the plugin's GitHub page. This provides an HTTP Prometheus scraping endpoint at http://<service-host>:24231/metrics.

<source>
  @type prometheus
  port 24231
</source>

<source>
  @type prometheus_monitor
</source>

<source>
  @type prometheus_output_monitor
</source>

The following example enables Fluent-bit's built-in HTTP server for scraping metrics. The endpoint will be http://<service-host>:2020/api/v1/metrics/prometheus.

[SERVICE]
    HTTP_Server  On
    HTTP_Port    2020

To show off this feature, we have built a Fluent-bit container, available in the Banzai Cloud GitHub repository. This image is not yet recommended for production use (our next post will be about security)!

We just need to set up the Prometheus annotations on our Pods and collect the metrics. A little teaser from the upcoming post, a simple Grafana dashboard with the metrics:
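The exact annotation keys depend on your Prometheus scrape configuration; with the common prometheus.io annotation convention, the Pod metadata could look like this (the port matches the Fluentd scraping endpoint configured above):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "24231"
```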


To be continued …

In the following blog post we will show how to enable TLS security and authentication, with a complete example that collects, transforms and stores logs. As usual, all of the above is used and automated by Pipeline.

If you are interested in our technology and open source projects, follow us on GitHub, LinkedIn or Twitter:


