To send logs from applications running in a Kubernetes cluster, either get started quickly with one of the options below or customize a logging setup to match your deployment preferences. For Google Kubernetes Engine (GKE), see GKE logging.
If only output from the standard docker logs streams is needed, choose a logspout DaemonSet. This adapts typical Docker logging for Kubernetes.
Run the following commands, replacing logsN and XXXXX with the Papertrail host and port from Log Destinations.
$ kubectl create secret generic papertrail-destination --from-literal=papertrail-destination=syslog+tls://logsN.papertrailapp.com:XXXXX
$ kubectl create -f https://papertrailapp.com/tools/papertrail-logspout-daemonset.yml
Then, deploy apps that log to the standard Docker streams into the cluster, and they’ll appear in the Papertrail account.
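As a quick sanity check, any Pod that writes to stdout will be picked up. A minimal, hypothetical example (the name and image are purely illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: echo-test
spec:
  containers:
  - name: echo-test
    image: busybox
    # Writes a line to stdout every five seconds; logspout forwards it to Papertrail.
    command: ["/bin/sh", "-c", "while true; do echo hello from kubernetes; sleep 5; done"]
Create it with kubectl create -f and the messages should show up in Papertrail within a few seconds.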
Logspout does not run on the Kubernetes master node by default. To gather logs from kube-apiserver, kube-controller-manager, and kube-scheduler, add the following toleration to the spec.template.spec section of the papertrail-logspout-daemonset.yml file:
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
SolarWinds (Papertrail’s parent company) has created a Docker image that uses Fluentd to catch all Kubernetes and Docker logs and forward them to Papertrail, with minimal configuration required beyond the use of the image. The image was created from a fork of Fluentd’s official Kubernetes DaemonSet configuration, which can autogenerate Docker images and configurations for Fluentd with many log aggregators.
In fluentd-daemonset-papertrail.yaml, set FLUENT_PAPERTRAIL_HOST and FLUENT_PAPERTRAIL_PORT to the Papertrail host and port, and give FLUENT_HOSTNAME a meaningful value, if desired. Edit or remove the namespace value according to the cluster's needs, then create the DaemonSet:
$ kubectl create -f fluentd-daemonset-papertrail.yaml
See Configuring loggers: Fluentd for other possibilities.
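A rough sketch of the relevant env entries (the exact layout of fluentd-daemonset-papertrail.yaml may differ, and the FLUENT_HOSTNAME value here is purely illustrative):
env:
- name: FLUENT_PAPERTRAIL_HOST
  value: logsN.papertrailapp.com
- name: FLUENT_PAPERTRAIL_PORT
  value: "XXXXX"
- name: FLUENT_HOSTNAME
  value: my-cluster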
Use one of Papertrail’s suggested configurations for the language or framework (such as Rails, Java, PHP, Node, or .NET) to log directly from the application. This method doesn’t capture logs at the level of the cluster or node, but is adequate for many setups.
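For instance, a Python service could send logs straight to the destination using only the standard library. This is a minimal sketch, assuming the log destination accepts plain UDP syslog (TLS delivery needs additional setup or a dedicated library); the logger name and message are illustrative:
import logging
import logging.handlers

# Replace with the logsN host and XXXXX port from the Papertrail log destination.
PAPERTRAIL_HOST = "logsN.papertrailapp.com"
PAPERTRAIL_PORT = 12345  # placeholder for the XXXXX port

handler = logging.handlers.SysLogHandler(address=(PAPERTRAIL_HOST, PAPERTRAIL_PORT))
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("hello from the application")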
A fairly straightforward option when capturing logs for an application container and cluster is to use a logger “sidecar” container within the Pod, so that the primary container(s) run the app, and the logging container is a somewhat separated concern. Most commonly, the sidecar connects to a socket or mounted volume on the application containers and runs a logger such as fluentd, logspout, or remote_syslog2.
To use this method, add a second entry to the containers section for the Pod. Since Kubernetes writes the kubelet and container logs (usually Docker logs) to journald if available, and otherwise to /var/log/*.log (see Logging and Monitoring Cluster Activity), that directory is used as an example throughout.
The approach for a Deployment (the preferred way to manage Pod state) also uses a sidecar container, but requires adjustments for some loggers to prevent log duplication.
Ensure that the logger sidecar container mounts the Docker container logs directory in the volumeMounts section:
- name: ...
  image: ...
  volumeMounts:
  - name: dockercontainers
    mountPath: /var/lib/docker/containers
  - name: varlog
    mountPath: /var/log
volumes:
- name: dockercontainers
  hostPath:
    path: /var/lib/docker/containers
- name: varlog
  hostPath:
    path: /var/log
Fill in the ... entries with the desired name and relevant logger image. In some setups, the logger container may need to run in privileged mode. If permission errors occur when accessing the Docker container logs, add
securityContext:
  privileged: true
above volumeMounts.
Running a DaemonSet is a more complex but potentially more resilient option, and is typically more suitable for Deployments. As with the sidecar container, the DaemonSet may need to be run in privileged mode. This example configuration uses host networking to simplify communication between the DaemonSet and the application container, and mounts a shared Docker socket:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ...
spec:
  selector:
    matchLabels:
      name: ...
  template:
    metadata:
      labels:
        name: ...
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - resources: {}
        securityContext:
          privileged: true
        image: ...
        name: ...
        volumeMounts:
        - name: log
          mountPath: /var/run/docker.sock
      volumes:
      - name: log
        hostPath:
          path: /var/run/docker.sock
Fill in the ... entries with the desired name and relevant logger image.
Logspout is the easiest to set up, but the least flexible: it reads only from the Docker socket. It only requires specifying the image to use and the Papertrail host and port details:
env:
- name: ROUTE_URIS
  value: syslog+tls://logsN.papertrailapp.com:XXXXX
image: gliderlabs/logspout
Substitute your own Papertrail log destination details for logsN and XXXXX and you’re good to go.
The Papertrail host and port details can also be supplied as args: rather than through the env: variable ROUTE_URIS (a sketch appears after the examples below). Note that the container already has a command:, so only args: should be supplied. Optionally, custom sender and program names can be set with the env variables SYSLOG_TAG and SYSLOG_HOSTNAME. One example using custom Kubernetes values is:
env:
- name: SYSLOG_TAG
  value: '{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}[{{ index .Container.Config.Labels "io.kubernetes.pod.name" }}]'
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
Another option is to set named environment variables and use those, like this:
- name: SYSLOG_TAG
  value: '{{ index .Container.Config.Env "PROGRAM_NAME" }}'
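For the args: alternative mentioned above, a minimal sketch of the relevant container fields follows; the destination URI simply moves from ROUTE_URIS into the container arguments:
image: gliderlabs/logspout
args:
- syslog+tls://logsN.papertrailapp.com:XXXXX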
When using logspout as a sidecar, the Pod or Deployment should not run multiple replicas; instead, use a DaemonSet for that configuration.
Credit to filipegiusti and mashayev for contributions to these configurations.
To run Papertrail’s small remote_syslog2 log collection daemon as a sidecar or DaemonSet, use or create a Docker image that contains the most recent remote_syslog2 release for the container’s base OS. To configure logging, map the directory (or directories) where the application’s log files are found, and/or map the Docker logs directory.
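As one possible starting point, a hypothetical Dockerfile along these lines packages the daemon; the base image, release URL, and archive layout are assumptions, so check the papertrail/remote_syslog2 releases page for the current artifact name:
# Sketch only: adjust the base image and release artifact as needed.
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates curl \
 && curl -fsSL -o /tmp/rs2.tar.gz \
      https://github.com/papertrail/remote_syslog2/releases/latest/download/remote_syslog_linux_amd64.tar.gz \
 && tar -xzf /tmp/rs2.tar.gz -C /tmp \
 && mv /tmp/remote_syslog/remote_syslog /usr/local/bin/remote_syslog \
 && rm -rf /tmp/rs2.tar.gz /tmp/remote_syslog /var/lib/apt/lists/*
# The config path matches the ConfigMap mount used later in this section.
CMD ["/usr/local/bin/remote_syslog", "-D", "-c", "/etc/rs2/log_files.yml"]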
There are a number of ways to implement the configuration, including copying a log_files.yml into the container’s /etc directory. A more flexible option is a ConfigMap, as shown in this sample configuration:
- name: ...
  image: ...
  volumeMounts:
  - name: varlog
    mountPath: /var/log
  - name: dockercontainers
    mountPath: /var/lib/docker/containers
  - name: config-volume
    mountPath: /etc/rs2
volumes:
- name: varlog
  emptyDir: {}
- name: dockercontainers
  hostPath:
    path: /var/lib/docker/containers
- name: config-volume
  configMap:
    name: rs2-config
Fill in the ... entries with the desired name and relevant remote_syslog2 image. A bare-bones log_files.yml that can be used to create the ConfigMap is:
files:
  - /var/log/*.log
  - /var/lib/docker/containers/*/*.log
destination:
  host: logsN.papertrailapp.com
  port: XXXXX
  protocol: tls
Replace logsN and XXXXX with the details from the Papertrail log destination.
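With those details filled in, the rs2-config ConfigMap referenced in the volumes section above can be created from the file, for example:
$ kubectl create configmap rs2-config --from-file=log_files.yml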
Any other log files the application uses can be added as separate items under files:.
When using a ConfigMap, the Docker container CMD or ENTRYPOINT should point to the mount location for the resulting config file. In the example, the command to run would be:
CMD /usr/local/bin/remote_syslog -D -c /etc/rs2/log_files.yml
Fluentd is the most flexible, but also the most complex, logger to set up, and is commonly used in other Kubernetes logging configurations. To use it easily but effectively with Papertrail, start with a base fluentd image and install the kubernetes_remote_syslog plugin:
$ sudo -u fluent gem install fluent-plugin-kubernetes_remote_syslog
This gem allows output of logs to a remote syslog destination with some additional setup for Kubernetes.
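One hedged way to bake the plugin into an image (the base tag and user handling are assumptions and vary between fluentd image versions; some variants also need build dependencies such as build-base and ruby-dev before the gem installs):
FROM fluent/fluentd:v1.16-1
USER root
RUN gem install fluent-plugin-kubernetes_remote_syslog
USER fluent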
To configure Fluentd, either create a fluentd.conf file in the container, or use a ConfigMap:
- name: ...
  image: ...
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
  env:
  - name: FLUENTD_CONF
    value: fluentd.conf
  volumeMounts:
  - name: dockercontainers
    mountPath: /var/lib/docker/containers
  - name: varlog
    mountPath: /var/log
  - name: config-volume
    mountPath: /fluentd/etc
volumes:
- name: dockercontainers
  hostPath:
    path: /var/lib/docker/containers
- name: varlog
  hostPath:
    path: /var/log
- name: config-volume
  configMap:
    name: fluentd-config
There are two key configurations in the fluentd.conf file, one to read the Docker log files and other existing log files:
<source>
  @type tail
  path /var/lib/docker/containers/*/*.log
  pos_file /var/log/fluentd-docker.pos
  tag docker.*
  format json
  read_from_head true
</source>
and one to output them to Papertrail:
<match docker.**>
  @type kubernetes_remote_syslog
  host logsN.papertrailapp.com
  port XXXXX
  tag kubernetes-docker
  output_include_tag false
  output_include_time false
</match>
In the output section, replace logsN and XXXXX with the details from the Papertrail log destination.
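If the ConfigMap route is used, the fluentd-config ConfigMap referenced above can be created from the finished file, for example:
$ kubectl create configmap fluentd-config --from-file=fluentd.conf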
A full treatment of Fluentd configuration is beyond the scope of this article, but essentially, the source stanza tails existing log files live, starting from the top because of read_from_head and tracking its place with the pos_file. The records it emits carry the docker.* tag, which is matched by the match directive and sent on by the kubernetes_remote_syslog plugin. That plugin is configured with the Papertrail host and port details, a custom tag, and two output settings that suppress redundant copies of the time and tag (program) information. Similar stanzas can be used to read other local files such as /var/log/*.log.
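For example, a sketch of such a stanza (the pos_file path is an arbitrary choice, and the tag is picked so the existing match docker.** directive also forwards these lines):
<source>
  @type tail
  path /var/log/*.log
  pos_file /var/log/fluentd-varlog.pos
  tag docker.varlog
  format none
  read_from_head true
</source>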
This configuration doesn’t attempt to augment or format the JSON of the Docker logs, but Fluentd is highly configurable if further processing or formatting is of interest. Check out the potentially useful gem fluent-plugin-kubernetes_metadata_filter, which allows annotating the containers’ Docker logs with useful metadata, or feel free to contact us to discuss other options.
Fluentd can effectively run as a sidecar even when the Pod or Deployment runs multiple replicas.
If you have full control over the configuration of the system’s logging, configuring a logger at the system level is also an option, although not a suggested Kubernetes configuration because it runs outside the view of the Kubernetes components. This type of setup would most commonly use the system syslog or remote_syslog2 on the node. For CoreOS, check the systemd instructions.
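For example, on a node running rsyslog, a single forwarding rule is typically enough; this sketch assumes plain TCP (the @@ form) and that the destination permits it, since TLS delivery needs additional rsyslog modules and settings:
# /etc/rsyslog.d/95-papertrail.conf (hypothetical file name)
*.* @@logsN.papertrailapp.com:XXXXX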
Another low-level logging configuration option is to modify the log driver of the Docker daemon:
$ dockerd --log-driver=syslog --log-opt syslog-address=tcp+tls://logsN.papertrailapp.com:XXXXX
As usual, replace logsN and XXXXX with your log destination details. With these options, any container spun up will use the syslog driver rather than Docker’s default log driver. This has a similar effect to using --log-opt at the container level, which is not currently possible in Kubernetes.
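The same daemon-wide settings can usually be made persistent in /etc/docker/daemon.json instead of on the dockerd command line, for example:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp+tls://logsN.papertrailapp.com:XXXXX"
  }
}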
To configure system-level logging and keep using journalctl, but still get useful host and program labels:
$ dockerd --log-driver=journald --log-opt labels=SYSLOG_IDENTIFIER
If none of these options seems suitable, contact us. We love to help.
The scripts are not supported under any SolarWinds support program or service. The scripts are provided AS IS without warranty of any kind. SolarWinds further disclaims all warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The risk arising out of the use or performance of the scripts and documentation stays with you. In no event shall SolarWinds or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the scripts or documentation.