Overview
Now that I have containers being published to my Gitea package registry, I can move on to deploying the application to my homelab. The objective is simple:
- Changes to master rebuild the :master and :latest tags
- Changes to any other branch rebuild the :$branch tag
- Changes to master should upgrade the production copy of the blog
- Changes to any other branch should upgrade the staging copy of the blog
- The production copy of the blog should be secured via SSL
Helm Chart
Building the helm chart was far easier than I expected. When I created my PoC tautulli chart, I built the templates from scratch rather than using helm create tautulli.
Chart Skeleton
I created the new helm chart skeleton using helm create hugoblog. This generates almost everything you need to get a basic app up and running. In fact, if you are publishing a simple web app, almost no modifications are necessary.
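For reference, the generated layout looks roughly like this (the exact file list varies a little between helm versions):

helm create hugoblog

hugoblog/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── hpa.yaml
    ├── ingress.yaml
    ├── NOTES.txt
    ├── service.yaml
    ├── serviceaccount.yaml
    └── tests/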
The only notable change was that I modified templates/service.yaml to target port 80 instead of the named port http, for simplicity's sake.
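For context, here is roughly what the ports block in templates/service.yaml looks like after that change (the skeleton ships with targetPort: http):

ports:
  - port: {{ .Values.service.port }}
    targetPort: 80   # the skeleton default is the named port "http"
    protocol: TCP
    name: http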
Pull from a secured registry
In order to pull from a private registry, the cluster needs to know how to log into the target. This only requires two steps: create a docker-registry credentials secret, and instruct the chart to use that secret.
Creating the credentials secret is straightforward: Kubernetes natively supports the docker-registry secret type, which contains all the necessary login information for a registry. I made mine on the command line instead of from a YAML file.
kubectl create secret docker-registry gitea-creds \
  --docker-server=repo.zerosla.com \
  --docker-username=ameyer \
  --docker-password='your-token' \
  --docker-email=you@example.com
Pro tip: put a space in front of kubectl to prevent the command from being saved to your history file (most shells support this when configured to ignore space-prefixed commands, e.g. HISTCONTROL=ignorespace in bash). This way your token isn't saved in a conspicuous location.
Generating a docker-registry secret from a YAML file is somewhat less efficient.
You can generate the manifest automatically by running the kubectl command above with --dry-run=client -o yaml and saving the output to a file.
Keep in mind that Kubernetes secret data must be base64-encoded, so writing the manifest by hand means building a JSON blob with all of the information you provided via kubectl and then running it through base64.
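As a sketch, the dry-run route looks like this, reusing the values from the command above (gitea-creds.yaml is just an arbitrary file name, and note the leading space again if you want to keep the token out of your history):

 kubectl create secret docker-registry gitea-creds \
   --docker-server=repo.zerosla.com \
   --docker-username=ameyer \
   --docker-password='your-token' \
   --docker-email=you@example.com \
   --dry-run=client -o yaml > gitea-creds.yaml

kubectl apply -f gitea-creds.yaml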
Now that the docker-registry secret is in place, simply instruct the deployment to use it by updating the values.yaml file:
image:
  repository: repo.zerosla.com/ameyer/hugoblog
  # This sets the pull policy for images.
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: "master"

# Secrets for pulling an image from a private repository. More information here:
# https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets:
  - name: gitea-creds
Because I do not increment the application version (yet), pullPolicy is set to Always to ensure that the newest image for the tag is always pulled.
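One caveat: pullPolicy: Always only takes effect when a pod is created, so picking up a freshly pushed :master image still means recycling the pods. A minimal sketch, assuming the release creates a deployment named hugoblog:

# Recreate the pods so the kubelet re-pulls the :master tag
kubectl rollout restart deployment/hugoblog
kubectl rollout status deployment/hugoblog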
Using the HAProxy Ingress Controller
I prefer to use the HAProxy Ingress Controller over Nginx, but that is personal preference. Configuring the ingress for the blog was simple thanks to the chart skeleton:
ingress:
  enabled: true
  className: "haproxy"
  annotations:
    kubernetes.io/ingress.class: haproxy
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: hugoblog.zerosla.com
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
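Deploying (and later updating) the chart is then a standard install/upgrade, something like the following, assuming the chart directory is ./hugoblog:

helm upgrade --install hugoblog ./hugoblog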
SSL Termination
Now that the helm chart was working, it was time to address SSL termination.
First up was deploying the Jetstack cert-manager app via helm. Next I created two certificate issuer configs: one for staging and the other for production. This is a fairly common practice in order to avoid rate limiting on Let's Encrypt's production API.
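For reference, installing cert-manager is itself a stock helm deployment, along these lines:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true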
stage.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-stage
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: ameyer@zerosla.com
    privateKeySecretRef:
      name: letsencrypt-stage
    solvers:
      - http01:
          ingress:
            class: haproxy
prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ameyer@zerosla.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: haproxy
The only difference is that stage uses the staging API, which issues untrusted certificates.
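ClusterIssuers are cluster-scoped, so both are applied once and can then be referenced from any namespace:

kubectl apply -f stage.yaml
kubectl apply -f prod.yaml
kubectl get clusterissuers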
Enabling automatic SSL certificate generation and SSL termination in the helm chart was easy thanks to the chart skeleton.
ingress:
  enabled: true
  className: "haproxy"
  annotations:
    kubernetes.io/ingress.class: haproxy
    # kubernetes.io/tls-acme: "true"
+   cert-manager.io/cluster-issuer: letsencrypt-stage
  hosts:
    - host: hugoblog.zerosla.com
      paths:
        - path: /
          pathType: ImplementationSpecific
- tls: []
- # - secretName: chart-example-tls
- #   hosts:
- #     - chart-example.local
+ tls:
+   - secretName: blog-prod-tls
+     hosts:
+       - hugoblog.zerosla.com
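Once the annotated ingress is applied, cert-manager creates a Certificate resource and populates the blog-prod-tls secret. Progress (and failures) can be checked with something like:

kubectl get certificate
kubectl describe certificate blog-prod-tls
# If issuance hangs, the pending ACME challenge usually explains why
kubectl get challenges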
I hit several snags finishing the SSL implementation, nearly all of them due to user error. cert-manager uses the ACME HTTP-01 challenge method of certificate generation, which requires that the FQDN be reachable from the public internet.
The final challenge was that redirects were not working as one would expect. At the time of writing, the HAProxy Ingress Controller redirects non-SSL requests to port 8443 instead of port 443. This is explained in the bug report as expected behavior, though many disagree that the default redirect port should be 8443. As ivanmatmati explained, the solution is to add an ssl-redirect-port annotation:
ingress:
  enabled: true
  className: "haproxy"
  annotations:
    kubernetes.io/ingress.class: haproxy
+   haproxy.org/ssl-redirect: "true"
+   haproxy.org/ssl-redirect-code: "302"
+   haproxy.org/ssl-redirect-port: "443"
    # kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-stage
  hosts:
    - host: hugoblog.zerosla.com
      paths:
        - path: /
          pathType: ImplementationSpecific
With this additional configuration, the ingress controller properly redirects non-HTTPS connections to port 443 via a 302.
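A quick sanity check from the command line; the interesting bits in the response are the 302 status line and the https Location header:

curl -sI http://hugoblog.zerosla.com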
Conclusion
The final steps for the blog are to set up continuous deployment of the staging and production environments, which will be covered in part 4.