Overview
We’re finally at the end! The blog is built, it compiles into a container, I can deploy that container via helm, SSL is terminated at the ingress, and all that remains is configuring CI/CD. The objective is simple (a skeleton of the finished workflow follows this list):
- Download kubectl
- Download helm
- Configure kubectl
- Update stage
- Update production
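For orientation, here is a rough sketch of the workflow layout we are building toward. The file name and the placeholder steps are illustrative only; the real jobs appear in full further down.
# .gitea/workflows/deploy.yaml -- sketch only; job bodies are filled in below
name: Build and Deploy
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build and push the container image here"
  deploy-stage:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "helm upgrade the stage release here"
  deploy-prod:
    needs: build
    if: gitea.ref_name == 'master'
    runs-on: ubuntu-latest
    steps:
      - run: echo "helm upgrade the production release here"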
Gitea Actions
At the time of writing, I could not find a good helm action image, so I will probably build my own later. Installing kubectl and helm is pretty straightforward:
- name: Set up Helm
  run: curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
- name: Install kubectl
  run: |
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/kubectl
    kubectl version --client
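The step above always installs the latest stable kubectl. If you would rather pin a version and verify the download, the official release artifacts include checksums; a sketch of that variant (the version number is only an example):
- name: Install kubectl (pinned)
  run: |
    # example pin; pick the version matching your cluster
    KUBECTL_VERSION="v1.29.0"
    curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
    curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl.sha256"
    # fail the job if the binary does not match the published checksum
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/kubectl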
Injecting a kubectl config file into the worker requires the creation of another actions secret that stores the config file to use. Advice I found online strongly recommends that multi-line secrets be base64 encoded to ensure their formatting is preserved.
In my previous helm automation attempts, I overlooked that one cannot simply use the “Download Kubeconfig” button in Rancher. While this will provide you with a working config file, the tokens it uses are short-lived. That leaves two primary options:
- Copy the rke2.yaml kubectl config file from one of the cluster members
- Set up a user and generate a token for them (a rough sketch of this follows the list).
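I went with the first option, but the second would roughly look like this on a modern cluster. The names here are placeholders, and binding cluster-admin is far broader than needed; scope the role down for real use:
# create an identity for CI and bind it to a role (cluster-admin is overly broad)
kubectl create serviceaccount ci-deployer --namespace hugoblog
kubectl create clusterrolebinding ci-deployer \
  --clusterrole=cluster-admin \
  --serviceaccount=hugoblog:ci-deployer
# mint a long-lived token for that service account (requires Kubernetes v1.24+;
# the API server may cap the maximum duration)
kubectl create token ci-deployer --namespace hugoblog --duration=8760h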
Generating the Config File
The easiest option by far is to simply copy the rke2.yaml file.
sudo cat /etc/rancher/rke2/rke2.yaml will output something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: (some data)
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: (some more data)
    client-key-data: (even more data)
Redirect this output to a file and edit the server: line to point at the IP or DNS name of one of your Kubernetes nodes. You may need to open firewall ports (6443 here) in order to connect properly.
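Concretely, something like this works; the node address is a placeholder for your own:
sudo cat /etc/rancher/rke2/rke2.yaml > kube.config
# replace the loopback address with a reachable node; adjust to your environment
sed -i 's|https://127.0.0.1:6443|https://k8s-node-1.example.com:6443|' kube.config
# sanity check from a machine that can reach that node
kubectl --kubeconfig kube.config get nodes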
Now that we have a config file, we need to encode it:
cat kube.config | base64 -w0
The -w0 flag disables line wrapping.
Copy the encoded text and paste it into a new actions secret in Gitea that you will reference later.
I used KUBECONFIG_CONTENT.
We can now inject the config file into our runner:
- name: Configure kubectl
  run: |
    mkdir -p ~/.kube
    echo -n "${{ secrets.KUBECONFIG_CONTENT }}" | base64 -d > ~/.kube/config
    chmod 600 ~/.kube/config
We should now have a functional kube environment that can execute commands against our cluster.
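To verify that before deploying anything, an optional sanity-check step can be added; it simply fails fast if the injected config cannot reach the cluster:
- name: Verify cluster access
  run: |
    kubectl cluster-info
    kubectl get nodes
With that in place, deploying stage is a single helm command: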
- name: Deploy stage
  run: helm upgrade --install --namespace hugoblog blog-stage helm/ --history-max=5 --timeout=10m0s -f helm/values.yaml -f helm/values-stage.yaml
Deploying Stage and Production
The separate jobs for deploying stage and production can be gated to only execute if the build job exits successfully. Furthermore, we can configure the deploy-prod job to only run on a push to master instead of on every commit.
deploy-stage:
  needs: build
  name: Deploy stage environment
  runs-on: ubuntu-latest
  steps:
    - name: Checkout Repository
      uses: actions/checkout@v4
    - name: Set up Helm
      run: curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    - name: Install kubectl
      run: |
        curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
        chmod +x kubectl
        sudo mv kubectl /usr/local/bin/kubectl
        kubectl version --client
    - name: Configure kubectl
      run: |
        mkdir -p ~/.kube
        echo -n "${{ secrets.KUBECONFIG_CONTENT }}" | base64 -d > ~/.kube/config
        chmod 600 ~/.kube/config
    - name: Deploy stage
      run: helm upgrade --install --namespace hugoblog blog-stage helm/ --history-max=5 --timeout=10m0s -f helm/values.yaml -f helm/values-stage.yaml --set image.tag="${{ gitea.sha }}"
deploy-prod:
  needs: build
  if: gitea.ref_name == 'master'
  name: Deploy production environment
  runs-on: ubuntu-latest
  steps:
    - name: Checkout Repository
      uses: actions/checkout@v4
    - name: Set up Helm
      run: curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    - name: Install kubectl
      run: |
        curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
        chmod +x kubectl
        sudo mv kubectl /usr/local/bin/kubectl
        kubectl version --client
    - name: Configure kubectl
      run: |
        mkdir -p ~/.kube
        echo -n "${{ secrets.KUBECONFIG_CONTENT }}" | base64 -d > ~/.kube/config
        chmod 600 ~/.kube/config
    - name: Deploy production
      run: helm upgrade --install --namespace hugoblog blog-prod helm/ --history-max=5 --timeout=10m0s -f helm/values.yaml --set image.tag="${{ gitea.sha }}"
Pay particular attention to the needs key in both jobs and the if condition under deploy-prod, as these are what control the build order and gating logic.
I later realized that, because I do not increment the app version in my chart, I need to set image.tag at deployment time to something unique; otherwise the rendered manifests never change and helm has nothing new to roll out.
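This works because of how the standard helm create scaffold templates the image reference, with image.tag falling back to the chart's appVersion. If your chart follows that convention, the relevant line in the deployment template looks like this, so --set image.tag wins over the default:
# templates/deployment.yaml (from the standard `helm create` scaffold)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"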
Helm
Instead of packaging my helm chart and publishing it to Gitea, I elected to simply store it in the blog’s git repo, which makes deployments easier to execute. I leveraged helm’s values override feature for stage, redefining only the few values that differ:
image:
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: "bleedingedge"
ingress:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
  hosts:
    - host: stage.hugoblog.zerosla.com
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: blog-stage-tls
      hosts:
        - stage.hugoblog.zerosla.com
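A nice side effect of keeping the chart in the repo is that the stage overrides can be rendered locally, with the same flags the deploy job passes, before any pipeline ever runs:
# render the stage release locally without touching the cluster
helm template blog-stage helm/ \
  -f helm/values.yaml \
  -f helm/values-stage.yaml \
  --set image.tag=test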