Logging and monitoring with Grafana, Prometheus and Loki on Docker Swarm
Deploy Grafana, Prometheus, and Loki to monitor your Docker Swarm cluster
This one is going to be a bit beefy. I put together a stack for monitoring that uses Loki to ship logs out of my containers so I can view them in Grafana. There are already plenty of articles and docs explaining how these services work, so I won’t repeat that here. What I will do is give you a known-good working configuration that took me quite a while to piece together.
This setup exports container and node metrics to Prometheus and ships container logs to Loki, and both can be queried from Grafana. Once you have it running, I recommend browsing Grafana’s dashboards site and picking a few that you’d like to use.
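For reference, the stack below bind-mounts a prometheus.yml into the Prometheus container. A minimal sketch of that file, scraping the cAdvisor and node-exporter services through Swarm's built-in tasks.<service> DNS records so that every replica is discovered, could look something like this (the job names and scrape interval are placeholders, not necessarily what I ran):

global:
  scrape_interval: 30s

scrape_configs:
  # cAdvisor serves container metrics on port 8080
  - job_name: 'cadvisor'
    dns_sd_configs:
      - names:
          - 'tasks.cadvisor'
        type: 'A'
        port: 8080

  # node-exporter serves host metrics on port 9100
  - job_name: 'node-exporter'
    dns_sd_configs:
      - names:
          - 'tasks.node-exporter'
        type: 'A'
        port: 9100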
version: "3.9"
services:
  grafana:
    image: grafana/grafana:10.2.0
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        reservations:
          memory: 256M
        limits:
          memory: 256M
    volumes:
      - /shares/docker/monitoring/grafana:/var/lib/grafana
      - /shares/docker/monitoring/grafana/grafana.ini:/etc/grafana/grafana.ini:ro
    networks:
      - logging
  prometheus:
    image: prom/prometheus:v2.47.2
    deploy:
      resources:
        reservations:
          memory: 512M
        limits:
          memory: 512M
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--log.level=error'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=30d'
    volumes:
      - /shares/docker/monitoring/prometheus:/prometheus
      - /shares/docker/monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
    networks:
      - logging
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.47.0
    command: -logtostderr -docker_only
    deploy:
      mode: global
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 128M
    volumes:
      - type: bind
        source: /
        target: /rootfs
        read_only: true
      - type: bind
        source: /var/run
        target: /var/run
        read_only: true
      - type: bind
        source: /sys
        target: /sys
        read_only: true
      - type: bind
        source: /var/lib/docker
        target: /var/lib/docker
        read_only: true
    networks:
      - logging
  node-exporter:
    image: prom/node-exporter:v1.5.0
    command:
      - '--path.sysfs=/host/sys'
      - '--path.procfs=/host/proc'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
      - '--no-collector.ipvs'
    deploy:
      mode: global
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 128M
    volumes:
      - type: bind
        source: /
        target: /rootfs
        read_only: true
      - type: bind
        source: /proc
        target: /host/proc
        read_only: true
      - type: bind
        source: /sys
        target: /host/sys
        read_only: true
    networks:
      - logging
  loki:
    image: grafana/loki
    command:
      - -config.file=/etc/loki/loki-config.yaml
    ports:
      - 3100:3100
    volumes:
      - /shares/docker/monitoring/loki:/loki
    networks:
      - logging
    configs:
      - source: loki_config
        target: /etc/loki/loki-config.yaml
    logging:
      driver: json-file
    deploy:
      resources:
        reservations:
          memory: 512M
        limits:
          memory: 512M
networks:
  logging:
    driver: overlay
configs:
  loki_config:
    external: true
  promtail_config:
    external: true
Extra setup
You’ll notice that the stack expects two external config files: one for Loki and one for Promtail. The Loki config controls how Loki stores and serves the logs it receives, and the Promtail config tells Promtail where to find container logs on each node and where to push them.
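As a rough starting point (these are sketches, not my exact files), a minimal single-binary Loki config with local filesystem storage, along the lines of the local-config example in the Loki docs, could look something like this; the schema date and directories are placeholders to adapt to your version:

auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2023-01-01
      store: boltdb-shipper
      object_store: filesystem
      schema: v12
      index:
        prefix: index_
        period: 24h

A Promtail config that tails the json-file logs Docker writes on each node and pushes them to the loki service over the overlay network might look something like this (the job label and positions path are arbitrary):

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  # "loki" resolves through the shared overlay network
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: containers
    static_configs:
      - targets:
          - localhost
        labels:
          job: containerlogs
          # matches the output of the json-file logging driver
          __path__: /var/lib/docker/containers/*/*-json.log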
Unfortunately this blog post isn’t complete: I no longer use Docker Swarm and I lost my original config files, so I can’t share the exact versions I ran. Sorry about that. Once you have configs that work for you, plug them in as the external configs, add the Promtail service, and then access Grafana on its port.
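For completeness, the promtail_config declared at the bottom of the stack implies a Promtail service that isn’t shown above. A sketch of what that service could look like, added under services: in the same stack file (the image tag is just a recent Promtail release and the memory limit mirrors the other global services; neither is necessarily what I ran):

  promtail:
    image: grafana/promtail:2.9.2
    command:
      - -config.file=/etc/promtail/promtail-config.yaml
    deploy:
      # one Promtail per node, like cadvisor and node-exporter
      mode: global
      resources:
        limits:
          memory: 128M
    volumes:
      # read-only access to the json-file logs Docker writes on each node
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    configs:
      - source: promtail_config
        target: /etc/promtail/promtail-config.yaml
    networks:
      - logging

With the two config files saved locally, register them under the names the stack expects, for example with docker config create loki_config loki-config.yaml and docker config create promtail_config promtail-config.yaml (the local file names are placeholders), then redeploy with docker stack deploy. Grafana itself listens on port 3000, but the stack above doesn’t publish it, so either add a ports entry to the grafana service or put it behind a reverse proxy. Inside Grafana, the data sources should be reachable over the overlay network at http://prometheus:9090 and http://loki:3100.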