In this lab we will install Keepalived and HAProxy in front of our Docker containers.
Start at least 2 containers with the Agama app on each VM. Reuse the role from lab12.
Keepalived will assign an additional virtual IP to your pair of VMs. That IP is held by one VM at a time; that VM is the MASTER. The second VM becomes BACKUP and, in case the MASTER dies, will promote itself to MASTER and assign the virtual IP to its own interface.
Some configuration is needed first. When you install Keepalived
with the APT module, no configuration template is provided, but in order to start, Keepalived needs a non-empty /etc/keepalived/keepalived.conf
.
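Installation and templating can be sketched as Ansible tasks like these (the task names, template file name, and handler name are illustrative, not prescribed):

```yaml
# roles/keepalived/tasks/main.yaml (sketch)
- name: Install Keepalived
  ansible.builtin.apt:
    name: keepalived
    state: present

# keepalived.conf.j2 is an assumed template name in the role's templates/ dir
- name: Deploy keepalived.conf
  ansible.builtin.template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
  notify: Restart keepalived
```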
Here is an example of keepalived.conf
with some comments:
vrrp_script check_haproxy {
    script "path-to-check-script"
    weight 20
    interval 1
}
vrrp_instance XXX {
    interface ens3
    virtual_router_id XXX
    priority XXX
    advert_int 1
    virtual_ipaddress {
        192.168.100.XX/24
    }
    unicast_peer {
        192.168.42.XX
    }
    track_script {
        check_haproxy
    }
}
Some comments on the config example:
vrrp_script
adds some weight to the node priority if the script executes successfully. Put the check script into the keepalived_script
user's home folder. A script that succeeds when port 88 is open and returns 1 when nothing listens on that port:
#!/bin/bash
ss -ntl | grep -q ':88 '
virtual_router_id
should be the same on different VMs.
priority
should be different on different VMs. Use if-else-endif statements in your Jinja2 template.
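One way to sketch the priority logic in the Jinja2 template, assuming your first VM's hostname ends in -1 (the condition and the priority values are illustrative; pick any two distinct values):

```jinja
{# Higher priority wins the MASTER role #}
{% if inventory_hostname.endswith('-1') %}
priority 150
{% else %}
priority 100
{% endif %}
```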
virtual_ipaddress
should be the same on different VMs.
If your-name-1
VM has IP 192.168.42.35, virtual IP will be 192.168.100.35 (3rd octet changed from 42 to 100).
If your-name-1
VM has IP 192.168.43.35, virtual IP will be 192.168.101.35 (3rd octet changed from 43 to 101).
unicast_peer
should contain the IP of the other VM. Multicast is the default message format for VRRP, but it doesn't work in most public clouds, so specify the IPs of your other VMs here so that VRRP uses unicast messages. Use Ansible facts to get the IPs.
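A sketch of filling unicast_peer from Ansible facts, assuming an inventory group named ha_nodes (the group name is an assumption; use whatever group holds your two VMs):

```jinja
unicast_peer {
{# list every other host's primary IP, skipping the current host #}
{% for host in groups['ha_nodes'] if host != inventory_hostname %}
    {{ hostvars[host]['ansible_default_ipv4']['address'] }}
{% endfor %}
}
```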
If all is done correctly, the command ip a
on the VM with the higher priority will show 2 IPs on the ens3 interface. There will be no changes on the VM with the lower priority.
Hints:
After service keepalived stop
on the MASTER, the BACKUP should become MASTER and ip a
will show that 192.168.10X.Y
was assigned to the other VM.
If everything is done correctly, you should NOT see these logs on Keepalived start:
WARNING - default user 'keepalived_script' for script execution does not exist - please create.
SECURITY VIOLATION - scripts are being executed but script_security not enabled.
That's one of the 2 roles where IPs are allowed in the configuration. The other is bind
.
HAProxy can be installed with the APT module just as easily as Keepalived.
A clean installation will provide a config template in /etc/haproxy/haproxy.cfg
.
Copy blocks global
and defaults
to your template.
Add a listen
section to the template. Example of the section:
listen my_ha_frontend
    bind :88
    server docker1 web-server1:8081 check
    server docker2 web-server2:7785 check
The port should be 88
because our NAT is configured to forward all requests to 192.168.100.X:88
and 192.168.101.X:88
.
88 is not the default HTTP port, but in our labs ports 80 and 8080 already have some services running, so we decided to use 88 to avoid any binding conflicts.
Usage of IPs is not allowed here.
If all is done correctly, the Public HA URLs
of your-name-1
should show you the Agama app. Stopping the HAProxy service on the Keepalived MASTER should not affect Agama service reachability.
Install prometheus-haproxy-exporter
using the APT module. Add the correct haproxy.scrape-uri
to ARGS in /etc/default/prometheus-haproxy-exporter
. Don't forget to expose HAProxy stats on that URI; find examples here.
Use port 9188 to expose HAProxy stats.
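A sketch of the two pieces involved: a stats section in the HAProxy template and the matching exporter ARGS. The stats URI path /stats is an assumption; the ;csv suffix asks HAProxy for the CSV export that the exporter parses:

```
# In haproxy.cfg: expose the stats page on port 9188
listen stats
    bind :9188
    mode http
    stats enable
    stats uri /stats

# In /etc/default/prometheus-haproxy-exporter:
ARGS="--haproxy.scrape-uri=http://localhost:9188/stats;csv"
```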
There are a few Keepalived exporters available; we propose to use this one: https://github.com/cafebazaar/keepalived-exporter. Sometimes we get banned by GitHub, so you can download the file from our local backup server: http://backup/keepalived-exporter-1.2.0.linux-amd64.tar.gz
Download the binary and create a systemd unit, same as for pinger
a few labs back. The exporter user should be root
because keepalived runs as root
.
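A minimal systemd unit sketch, assuming the binary was unpacked to /usr/local/bin/keepalived-exporter (the path and unit name are assumptions):

```ini
# /etc/systemd/system/keepalived-exporter.service
[Unit]
Description=Keepalived Exporter
After=network.target

[Service]
# root is required because keepalived itself runs as root
User=root
ExecStart=/usr/local/bin/keepalived-exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```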
Add new metrics to your main Grafana dashboard. There should be panels for each node with these metrics:
- haproxy_up (last value)
- haproxy_server_up (last value for each backend)
- keepalived_vrrp_state (last value)
Hint:
If you don't see these metrics in Grafana drop-down, make sure you have added HAProxy and Keepalived exporters to Prometheus configuration.
Don't forget to update your Grafana provisioning files after dashboard changes.
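A hedged sketch of the Prometheus scrape config for the two exporters. The job names are arbitrary, and the ports are the exporters' usual defaults (9101 for prometheus-haproxy-exporter, 9165 for keepalived-exporter); verify them on your VMs, and note that hostnames are used because IPs are not allowed here:

```yaml
# prometheus.yml, scrape_configs section (sketch)
scrape_configs:
  - job_name: haproxy
    static_configs:
      - targets: ['your-name-1:9101', 'your-name-2:9101']
  - job_name: keepalived
    static_configs:
      - targets: ['your-name-1:9165', 'your-name-2:9165']
```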
Your repository contains these files:
infra.yaml
roles/haproxy/tasks/main.yaml
roles/keepalived/tasks/main.yaml
Your Agama application is accessible on its public HA URL.
Your Agama application is accessible on both public non-HA URLs.
Your Grafana and Prometheus are accessible on their public non-HA URLs.