MetalLB and HAProxy-ingress
HAProxy-ingress is a very powerful ingress controller, especially for those coming from environments that already leverage HAProxy for load balancing. Unfortunately, some of its more advanced features do not work properly when using MetalLB as the Kubernetes load balancer layer.
MetalLB primarily works by announcing the configured addresses via ARP and then relaying packets into the cluster. On the surface this works perfectly fine and everything “just works”, but the client’s source IP gets lost in translation, which breaks IP-based ACLs.
Some research on the subject suggests that the default value of externalTrafficPolicy (Cluster) is to blame.
Under most circumstances Cluster is exactly what you want: kube-proxy is free to forward incoming traffic to a pod on any node in the cluster, at the cost of SNATing the connection and hiding the client’s IP.
Switching the policy to Local preserves the client’s IP, but traffic is only handled by pods on the node that received it, so nodes without a local pod will drop the connection.
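For context, this is where the setting lives on a bare Service manifest. The sketch below uses placeholder names and is only for illustration; it is not part of the HAProxy-ingress deployment:
apiVersion: v1
kind: Service
metadata:
  name: example-lb               # hypothetical name, for illustration only
spec:
  type: LoadBalancer             # MetalLB assigns and announces the external IP
  externalTrafficPolicy: Local   # default is Cluster; Local preserves the client IP
  selector:
    app: example                 # placeholder selector
  ports:
    - port: 80
      targetPort: 8080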
Fixing HAProxy-ingress
I needed to make two changes to my values.yaml:
- switch the controller to a DaemonSet
- change the service’s external traffic policy
The first is very straightforward:
  keda:
    maxReplicas: 20
    minReplicas: 2
    pollingInterval: 30
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
    triggers: null
- kind: Deployment
+ kind: DaemonSet
  kubernetesGateway:
    enabled: false
    gatewayControllerName: haproxy.org/gateway-controller
  lifecycle: {}
  livenessProbe:
This changes the deployment style of the HAProxy instances so that a copy of HAProxy runs on every worker node in the cluster. That way, no matter which node is announcing the load balancer IP, there is a local copy of the ingress controller available to handle the traffic.
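After redeploying the chart, it is worth confirming that one controller pod landed on every worker node; the -o wide output includes a NODE column. This assumes the chart’s default labels and the kube-system namespace used later in this post:
kubectl get pods -n kube-system -l app.kubernetes.io/name=kubernetes-ingress -o wide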
In order to change the external traffic policy, we need to use a hidden variable not included in the standard values.yaml file for HAProxy-ingress:
controller:
  ...
  service:
    ...
    tcpPorts: null
    type: LoadBalancer
+   externalTrafficPolicy: Local
  serviceMonitor:
    enabled: false
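To verify the change took effect, the policy can be read back from the live Service (again assuming the chart’s default labels and namespace; adjust for your release):
kubectl get svc -n kube-system -l app.kubernetes.io/name=kubernetes-ingress \
  -o jsonpath='{.items[*].spec.externalTrafficPolicy}'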
While I was editing my values.yaml file, I enabled traffic logging to further aid troubleshooting:
    timeoutSeconds: 1
  logging:
    level: info
-   traffic: {}
+   traffic:
+     address: stdout
+     facility: daemon
+     format: raw
  minReadySeconds: 0
  name: controller
  nodeSelector: {}
This configures the instances to print their traffic logs to stdout, which can be monitored in a few ways:
- in Rancher, by clicking “view logs” on the DaemonSet pod running on the node announcing the load balancer IP
- running kubectl logs -n kube-system -l app.kubernetes.io/name=kubernetes-ingress to retrieve the logs for every ingress controller
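If you are unsure which node is currently announcing the load balancer IP, recent MetalLB versions record it as an event on the Service; assuming the ingress Service lives in kube-system as above, something like this should surface it:
kubectl get events -n kube-system --field-selector reason=nodeAssigned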
Enabling HAProxy access control
Now that the ingress controllers receive the client’s real IP instead of a cluster node’s IP, we can properly deploy ACLs.
HAProxy-ingress supports both allow- and deny-listing addresses through the haproxy.org/allow-list and haproxy.org/deny-list annotations.
Both accept a comma-separated list of IPs or subnets in CIDR notation.
ingress:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    haproxy.org/allow-list: "10.1.0.0/16,172.16.253.0/23"
  hosts:
    - host: stage.hugoblog.zerosla.com
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: blog-stage-tls
      hosts:
        - stage.hugoblog.zerosla.com
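Deny-listing works the same way. As a quick sketch, this would block a single subnet (the range here is from the RFC 5737 documentation block, purely for illustration) while leaving everything else open:
ingress:
  annotations:
    haproxy.org/deny-list: "192.0.2.0/24"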
For a complete list of HAProxy-ingress annotation options, please read the configuration reference they provide. If you have prior HAProxy experience, the naming of the annotations and their formatting should be quite intuitive.