In this part of our Kubernetes setup series, we explore the integration of ArgoCD to manage our Kubernetes cluster through GitOps. This approach not only automates deployment processes but also significantly improves our ability to manage configurations and maintain consistency across environments.
What is GitOps?
GitOps is a paradigm or set of practices that empowers developers to perform tasks traditionally done by IT operations. GitOps uses Git as a single source of truth for declarative infrastructure and applications. With GitOps, the Git repository holds the entire state of the system, and changes to the system are made through commits. This approach automates the application of configurations directly from the repository, enhancing both productivity and transparency in operations.
Overview
Initially, this part was going to be just installing ArgoCD, but to fully integrate and utilize its capabilities, we also need to install additional support systems like Sealed Secrets and the Cloudflare Tunnel Ingress Controller.
- Sealed Secrets: Encrypts your secrets into “sealed” versions that are safe to store, even in a public repository (Sealed Secrets on GitHub).
- Cloudflare Tunnel Ingress Controller: Manages inbound connections to your cluster, so you don’t need to expose your Kubernetes services directly to the internet (Cloudflare Tunnel Ingress Controller on GitHub).
Initial Steps with Git Repository Setup
While you can use any Git hosting service that suits your needs, for this setup, we’ve chosen GitLab due to its offer of free private groups and repositories. This feature is particularly beneficial for managing Kubernetes configurations in a secure and organized manner.
- Selecting a Git Service: Although options like GitHub and Bitbucket provide similar functionality, we decided to go with GitLab since we’re familiar with it and we’re on a budget.
- Create a Repository: Within your GitLab group, create a repository named `k8s-cluster-state`. This repository will store all ArgoCD configurations and other Kubernetes manifests, serving as the single source of truth for the state of your cluster.
- Organize the Repository: After setting up your repository, clone it to your local environment and establish a structured directory layout. We went with the following layout:
```text
k8s-cluster-state/
├── app-manifests/
├── apps/
│   └── kube-system/
└── argocd/
```
This structured approach not only aids in maintaining a clean repository but also simplifies the process of locating and managing specific configurations.
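If you’re starting from an empty clone, the layout above can be scaffolded with a few commands. This is just one way to do it; the `.gitkeep` files are placeholders so Git will track the otherwise-empty directories:

```shell
# Create the directory layout for the cluster-state repository
mkdir -p k8s-cluster-state/app-manifests \
         k8s-cluster-state/apps/kube-system \
         k8s-cluster-state/argocd

# .gitkeep placeholders let Git track the empty directories
touch k8s-cluster-state/app-manifests/.gitkeep \
      k8s-cluster-state/apps/kube-system/.gitkeep \
      k8s-cluster-state/argocd/.gitkeep
```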
Installing ArgoCD
Many of these instructions are within Getting Started with ArgoCD, but here’s a simplified version:
```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
Install the ArgoCD CLI
Depending on your operating system, you can install it using `brew` or `curl`. For simplicity, we use `curl`:
```shell
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64
```
Accessing ArgoCD UI
Initially, we’ll access the ArgoCD UI via port forwarding. Later, we’ll switch to a more robust method using Cloudflare tunnels.
```shell
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
Initial Login
Use the following command to retrieve the initial admin password:
```shell
argocd admin initial-password -n argocd
```
After logging in, change the admin password through the UI under User Info > Update Password. Remember to delete the `argocd-initial-admin-secret` secret afterwards, since it’s no longer needed:

```shell
kubectl delete -n argocd secret argocd-initial-admin-secret
```
Integrating Git Repository with ArgoCD
The simplest method, with the fewest initial complications, is to set up the repository connection directly in the ArgoCD UI under Settings > Repositories. I used SSH as the connection method and set up a dedicated SSH key for ArgoCD.
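Alternatively, if you’d rather keep the repository connection itself in Git, ArgoCD also supports declarative repository credentials as a Kubernetes Secret labeled `argocd.argoproj.io/secret-type: repository`. A minimal sketch — the URL matches this series’ example repo, and the private key shown is a placeholder you would seal (e.g. with Sealed Secrets) rather than commit in plain text:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: k8s-cluster-state-repo
  namespace: argocd
  labels:
    # Tells ArgoCD this Secret describes a repository connection
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: [email protected]:my-org/k8s-cluster-state.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```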
Enabling ArgoCD to Self-Manage Through GitOps
To ensure that ArgoCD can manage its own configuration effectively, we set up ArgoCD with the ability to autonomously update itself based on changes committed to specific configuration files in our Git repository. This process allows for straightforward updates to ArgoCD’s configuration, including its operational parameters and version, simply by modifying and committing changes to the repository.
Setting Up the ArgoCD Application Resource
Begin by defining an ArgoCD application that specifically targets ArgoCD’s own deployment and configuration settings:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-app
  namespace: argocd
spec:
  project: kube-system
  source:
    repoURL: '[email protected]:my-org/k8s-cluster-state.git'
    path: 'argocd'
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
```
This configuration ensures that any updates to the `argocd` directory in the Git repository trigger ArgoCD to automatically apply the changes, allowing for seamless upgrades and configuration adjustments without manual intervention.
Implementing Kustomization for ArgoCD Resources
To streamline ArgoCD’s resource management, we’ll use Kustomize. This tool simplifies configuration updates by enabling easy application of patches and resource definitions through declarative files.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - github.com/argoproj/argo-cd//manifests/cluster-install?ref=v2.10.9
  - kube-system-project.yaml
```
Setting Up an ArgoCD Project Configuration
In ArgoCD, a “project” serves as a management and security context for a group of applications. It defines boundaries for what repository sources are permissible, which clusters can be targeted, and which namespaces applications can reside in. This setup ensures controlled deployments and maintains security standards.
Creating the ArgoCD Project Configuration
We’re going to create an ArgoCD project to contain ArgoCD itself along with other Kubernetes support applications. This project helps organize and secure all associated applications under a unified management umbrella, making it easier to oversee their deployment and operation.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: kube-system
  namespace: argocd
spec:
  description: A project designed to manage Kubernetes support applications including ArgoCD itself
  sourceRepos:
    - '*'
  destinations:
    - namespace: "*"
      server: "*"
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
```
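The wildcards above are permissive, which is convenient while setting things up. If you later want to tighten the project’s boundaries, the same spec fields can be restricted; a sketch, assuming the example repo URL from this series and an in-cluster destination only:

```yaml
# Fragment of the AppProject spec with tighter boundaries
spec:
  sourceRepos:
    - '[email protected]:my-org/k8s-cluster-state.git'   # only this repo may deploy
  destinations:
    - namespace: '*'
      server: 'https://kubernetes.default.svc'          # only the local cluster
```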
Automating Application Deployment with ArgoCD’s App of Apps Strategy
The “App of Apps” strategy in ArgoCD is an efficient way to manage multiple Kubernetes applications through a hierarchical structure of ArgoCD Application resources. This method allows one “root” application to manage several “child” applications, automating deployment and updates in a scalable manner.
Benefits of the Strategy
- Centralized Management: Monitor and manage all applications from a single dashboard.
- Scalability: Easily add new components by defining new applications under the root.
- Continuous Deployment: Automatically sync and apply updates from the Git repository.
We’ll be using that same approach here to set up the discovery of our ArgoCD Applications. Below you’ll find the root application manifest that we’ll be using. This application will automatically pick up any apps we place within the `apps/` directory.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: '[email protected]:my-org/k8s-cluster-state.git'
    path: 'apps'
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
```
- Manage Child Applications: Each child application under the `apps` directory can independently define its deployment settings, allowing for modular updates and configurations.
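To make that concrete, a hypothetical child application dropped into `apps/` might look like the manifest below — the name, path, and namespace are made up for the example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app          # hypothetical child application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: '[email protected]:my-org/k8s-cluster-state.git'
    path: 'app-manifests/example-app'   # where this app's manifests live
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: example-app
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
```

Once committed under `apps/`, the root application discovers and syncs it automatically.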
Applying the Configuration
To implement the configurations using ArgoCD:
- Commit and Push Changes: Save and commit the following configuration files to your Git repository:
  - `apps/argocd-app.yaml`
  - `argocd/kustomization.yaml`
  - `argocd/kube-system-project.yaml`
  - `apps/root-app.yaml`
- Apply the Configuration: Deploy the root application by running the following command:
```shell
kubectl apply -f apps/root-app.yaml
```
Use the ArgoCD dashboard to monitor the deployment process. Check that all applications defined are being correctly synchronized and managed by ArgoCD.
Adding a System For Managing Secrets
Sealed Secrets provides a secure way to store and manage Kubernetes secrets in a Git repository by encrypting them into a “sealed” format. This setup allows us to use our existing GitOps workflows for secret management, ensuring that sensitive data remains protected even if the repository is public.
Deploying Sealed Secrets
To deploy universal Kubernetes services that may eventually need to run on multiple clusters, such as Sealed Secrets, in a scalable and future-proof manner, we’ll use an ArgoCD ApplicationSet. This allows us to easily replicate the Sealed Secrets setup across multiple clusters if needed in the future. The configuration below shows how an ApplicationSet generates an Application per cluster, enhancing scalability and flexibility.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: kube-system-apps
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: ord
            url: https://kubernetes.default.svc
  template:
    metadata:
      name: '{{cluster}}-kube-system'
    spec:
      project: kube-system
      source:
        repoURL: '[email protected]:my-org/k8s-cluster-state.git'
        path: 'apps/kube-system'
        targetRevision: HEAD
      destination:
        server: '{{url}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
  syncPolicy:
    preserveResourcesOnDeletion: false
```
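Adding a second cluster later would simply mean appending another element to the list generator. For example — the `fra` cluster name and API server URL here are hypothetical:

```yaml
# Fragment of the list generator with a second, hypothetical cluster
elements:
  - cluster: ord
    url: https://kubernetes.default.svc
  - cluster: fra                           # hypothetical second cluster
    url: https://fra.example.internal:6443  # its API server endpoint
```

The ApplicationSet controller would then render a `fra-kube-system` Application targeting that cluster.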
Sealed Secrets Application Manifest: This manifest defines the deployment specifics for the Sealed Secrets controller, specifying the Helm chart and its version:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sealed-secrets
  namespace: argocd
spec:
  project: kube-system
  source:
    chart: sealed-secrets
    repoURL: https://bitnami-labs.github.io/sealed-secrets
    targetRevision: 2.15.3
    helm:
      releaseName: sealed-secrets-controller
      valuesObject:
        fullnameOverride: sealed-secrets-controller
  destination:
    server: "https://kubernetes.default.svc"
    namespace: kube-system
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
```
Commit and push these configuration files to your Git repository to enable ArgoCD to manage the deployment automatically.
Installing Kubeseal
To create sealed secrets locally that can be decrypted only by the controller running in your cluster, install the `kubeseal` CLI:
```shell
KUBESEAL_VERSION='0.26.2'
wget "https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION:?}/kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz"
tar -xvzf kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
```
This tool is essential for creating sealed secrets that can be checked into your Git repository securely, ensuring that sensitive data such as credentials and API keys can be safely stored and managed via GitOps workflows.
Continuing from setting up Sealed Secrets, let’s proceed with deploying the Cloudflare Tunnel Ingress Controller. This setup involves securing API tokens and establishing a secure connection method for services within our Kubernetes cluster, leveraging Cloudflare’s capabilities.
Deploying Cloudflare Tunnel Ingress Controller
The Cloudflare Tunnel Ingress Controller provides a secure method to expose our services without requiring a publicly accessible endpoint.
Preparing Cloudflare Setup
- Generate API Token on Cloudflare: In the Cloudflare dashboard, create an API token (we named ours `ord-k8s-tunnel-ingress`) with the following permissions:
  - `Zone:Zone:Read`
  - `Zone:DNS:Edit`
  - `Account:Cloudflare Tunnel:Edit`

  Restrict the token to the zones where you will be creating ingresses.
- Gather Required Information: Obtain your Cloudflare Account ID and decide on a name for your tunnel, for instance, `ord-k8s-tunnel-ingress`.
Creating and Sealing Cloudflare Secrets
Use `kubectl` to create a Kubernetes secret, then seal it using `kubeseal` (note the `--dry-run=client` flag, which writes the secret to a file instead of creating it in the cluster):
```shell
kubectl create secret generic cloudflare-external-secret \
  --from-literal=account_id='your_account_id' \
  --from-literal=tunnel_name='ord-k8s-tunnel-ingress' \
  --from-literal=api_token='your_api_token' \
  --dry-run=client \
  --namespace='cloudflare-tunnel-ingress-controller' \
  -o yaml > secret.yaml

kubeseal -f secret.yaml -w sealed-secret.yaml
```
Place the `sealed-secret.yaml` file within the `app-manifests/cloudflare-tunnel-ingress-controller` directory.
ArgoCD Application for Cloudflare Tunnel
Define an ArgoCD Application to manage the deployment of the Cloudflare Tunnel Ingress Controller:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloudflare-tunnel-ingress-controller
  namespace: argocd
spec:
  project: kube-system
  sources:
    - repoURL: '[email protected]:my-org/k8s-cluster-state.git'
      path: app-manifests/cloudflare-tunnel-ingress-controller
      targetRevision: HEAD
    - chart: cloudflare-tunnel-ingress-controller
      repoURL: https://helm.strrl.dev
      targetRevision: 0.0.9
      helm:
        valuesObject:
          cloudflare:
            secretRef:
              name: cloudflare-external-secret
              accountIDKey: account_id
              tunnelNameKey: tunnel_name
              apiTokenKey: api_token
  destination:
    namespace: cloudflare-tunnel-ingress-controller
    server: 'https://kubernetes.default.svc'
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true
```
Commit and push these files to your Git repository.
Configuring ArgoCD and Creating Ingress
Finally, configure ArgoCD to handle non-TLS traffic and set up the ingress for ArgoCD:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    cloudflare-tunnel-ingress-controller.strrl.dev/backend-protocol: "http"
    cloudflare-tunnel-ingress-controller.strrl.dev/proxy-ssl-verify: "off"
spec:
  ingressClassName: cloudflare-tunnel
  rules:
    - host: argocd.example.com
      http:
        paths:
          - backend:
              service:
                name: argocd-server
                port:
                  name: http
            path: /
            pathType: Prefix
```

And the ConfigMap overlay that tells the ArgoCD server to serve plain HTTP, since TLS terminates at Cloudflare:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  # Run server without TLS
  server.insecure: "true"
```
I would also recommend adding this patch, since the ArgoCD health checks don’t seem to work with the new `cloudflare-tunnel` ingress class:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.customizations: |
    networking.k8s.io/Ingress:
      health.lua: |
        hs = {}
        hs.status = "Healthy"
        return hs
```
Add these configurations to your ArgoCD setup and update your `kustomization.yaml` to include these overlays. It should look something like this:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - github.com/argoproj/argo-cd//manifests/cluster-install?ref=v2.10.9
  - kube-system-project.yaml
  - ingress.yaml
patches:
  - path: overlays/argocd-cmd-params-cm.yaml
  - path: overlays/argocd-cm.yaml
```
Once everything is in place, you can securely expose ArgoCD through a Cloudflare tunnel, enhancing your deployment’s security and accessibility. For the `server.insecure: "true"` setting to take effect, I did have to delete the ArgoCD server pod to effectively restart ArgoCD.
If needed, you can use the ArgoCD CLI with the `--grpc-web` flag, since the Cloudflare Tunnel Ingress Controller does not currently support gRPC:

```shell
argocd login argocd.example.com --grpc-web
```
This setup completes the integration of the Cloudflare Tunnel Ingress Controller with ArgoCD, providing a secure and scalable way to manage Kubernetes ingress using Cloudflare’s tunnel technology.
Setting Up a GitLab Webhook for ArgoCD
Webhooks in ArgoCD can significantly enhance your CI/CD workflow by automatically triggering synchronization whenever changes are pushed to your Git repository. This setup reduces the need for manual sync operations and ensures that your cluster configuration is always up to date with your repository.
Retrieve the ArgoCD Webhook URL
First, you need to obtain the webhook URL from ArgoCD that GitLab will call to trigger the sync. According to the ArgoCD documentation, the webhook URL is typically in the following format:
```text
https://<argocd-server-domain>/api/webhook
```
Configure the Webhook in GitLab
- Go to Your Repository Settings in GitLab: Navigate to the settings of the repository you want to integrate with ArgoCD.
- Webhooks: Under the “Webhooks” section, add a new webhook.
- Webhook Details:
- URL: Enter the ArgoCD webhook URL you retrieved earlier.
- Secret Token: Set a preshared secret token that ArgoCD will use to validate requests from GitLab. You will need to configure this secret in ArgoCD as well.
- Trigger: Select “Push events” as the trigger for the webhook.
- Add Webhook: Save the webhook.
Configure the Preshared Secret in ArgoCD
To enhance security, configure ArgoCD to use a preshared secret token that matches the one you set in GitLab:
- ArgoCD stores its webhook secrets in a Kubernetes Secret. While it’s possible to manage this Secret with Sealed Secrets, the process can get complex since it also holds other sensitive data. For the time being, I’ll manually update the Secret using `kubectl`. If you manage to update this Secret using Sealed Secrets without disrupting other keys, feel free to share your approach in the comments below.
- To modify the existing secret, run the following command and then add the following YAML with your preshared key:

```shell
kubectl edit secret argocd-secret -n argocd
```

```yaml
stringData:
  webhook.gitlab.secret: MyPresharedKey
```
You may need to restart ArgoCD again for this to take effect, but your webhook should work fine now.
What’s Next?
In our upcoming blog posts, we will delve into several exciting topics designed to further enhance our Kubernetes environment:
- Longhorn for Persistent Storage: We’ll explore how to integrate Longhorn to manage persistent storage more effectively, ensuring robust data handling capabilities.
- Kubernetes Database Operator: Look forward to a post on deploying a Kubernetes database operator, which can simplify the management of database instances within our cluster.
- WordPress Demo Project: We will demonstrate setting up WordPress on Kubernetes, providing a practical example of managing web applications in our configured environment.
- Security Hardening: Future posts will cover additional security measures to harden our Kubernetes setup against potential vulnerabilities.
If there are specific topics you are interested in or questions you’d like us to address in future posts, please leave a comment below. We value your input and look forward to making our content as helpful and relevant as possible.