My GitOps Kubernetes setup using FluxCD.
The Git repository contains the following top directories:
- apps dir contains Helm releases with a custom configuration per cluster
- infrastructure dir contains common infra tools such as ingress-nginx and cert-manager
- clusters dir contains the Flux configuration per cluster
├── apps
│   ├── base
│   ├── production
│   └── staging
├── infrastructure
│   ├── configs
│   └── controllers
└── clusters
    ├── production
    └── staging
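Each cluster directory holds Flux Kustomization objects that tell the controllers what to reconcile from the repository. A minimal sketch of what a sync manifest such as clusters/production/infrastructure.yaml might contain (the name, intervals, and path are illustrative, not copied from this repo):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-controllers
  namespace: flux-system
spec:
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/controllers
  prune: true
```

With prune enabled, manifests removed from the Git path are also removed from the cluster.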
The apps configuration is structured into:
- apps/base/ dir contains namespaces and Helm release definitions
- apps/production/ dir contains the production Helm release values
- apps/staging/ dir contains the staging values
./apps/
├── base
│   └── podinfo
│       ├── kustomization.yaml
│       ├── namespace.yaml
│       ├── release.yaml
│       └── repository.yaml
├── production
│   ├── kustomization.yaml
│   └── podinfo-patch.yaml
└── staging
    ├── kustomization.yaml
    └── podinfo-patch.yaml
This allows production and staging environments to apply custom patches.
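For example, the staging overlay can reference the base definition and patch the HelmRelease values for that environment. A sketch of the two staging files (the patched values are illustrative, not taken from this repo):

```yaml
# apps/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/podinfo
patches:
  - path: podinfo-patch.yaml

# apps/staging/podinfo-patch.yaml (separate file)
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  values:
    replicaCount: 1
```

Kustomize merges the patch onto the base HelmRelease by matching kind, name, and namespace, so each environment only declares what differs from the base.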
Export a GitHub personal access token that can create repositories:
export GITHUB_TOKEN=<your-token>
Verify that your cluster satisfies the prerequisites with:
flux check --pre
Bootstrap Flux:
flux bootstrap github \
--context=production \
--owner=Guibi1 \
--repository=homelab \
--branch=main \
--path=clusters/production
Create the Cloudflare API token secret used by cert-manager:
kubectl create secret generic cloudflare-api-token-secret --namespace cert-manager --from-literal=api-token='YOUR_API_TOKEN'
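This secret is typically consumed by cert-manager for DNS-01 challenges against Cloudflare. A hypothetical ClusterIssuer referencing it could look like this (the issuer name and email are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
```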
Set up the MinIO tenant:
kubectl get secrets -n cnpg cnpg-minio-tenant-ca-tls -o=jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl create secret generic -n minio operator-ca-tls-cnpg --from-file=ca.crt
kubectl port-forward -n cnpg svc/minio 9000:443 &
mc --insecure admin user add local cnpg cnpg1234
mc --insecure admin policy attach local readwrite -u cnpg
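The cnpg user created above can then serve as the backup target for CloudNativePG. A sketch of the relevant Cluster spec fragment, assuming a bucket named backups and a Secret holding the cnpg credentials (all names except the CA secret are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db
  namespace: cnpg
spec:
  instances: 3
  backup:
    barmanObjectStore:
      destinationPath: s3://backups/
      endpointURL: https://minio.cnpg.svc:443
      endpointCA:
        name: cnpg-minio-tenant-ca-tls
        key: ca.crt
      s3Credentials:
        accessKeyId:
          name: cnpg-minio-creds    # hypothetical secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: cnpg-minio-creds
          key: ACCESS_SECRET_KEY
```

The endpointCA reference reuses the tenant CA extracted with the kubectl command above, so the operator trusts the MinIO TLS certificate.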
You can see all the HelmReleases' status using this command:
$ watch flux get helmreleases --all-namespaces
NAMESPACE      NAME           REVISION  SUSPENDED  READY  MESSAGE
cert-manager   cert-manager   v1.11.0   False      True   Release reconciliation succeeded
ingress-nginx  ingress-nginx  4.4.2     False      True   Release reconciliation succeeded
podinfo        podinfo        6.3.0     False      True   Release reconciliation succeeded
Watch Flux reconcile the Kustomizations:
$ flux get kustomizations --watch
NAME               REVISION      SUSPENDED  READY  MESSAGE
apps               main/696182e  False      True   Applied revision: main/696182e
flux-system        main/696182e  False      True   Applied revision: main/696182e
infra-configs      main/696182e  False      True   Applied revision: main/696182e
infra-controllers  main/696182e  False      True   Applied revision: main/696182e
If you want to add a cluster to your fleet, first clone your repo locally:
git clone https://github.com/${GITHUB_USER}/${GITHUB_REPO}.git
cd ${GITHUB_REPO}
Create a directory inside clusters with your cluster name:
mkdir -p clusters/dev
Copy the sync manifests from staging:
cp clusters/staging/infrastructure.yaml clusters/dev
cp clusters/staging/apps.yaml clusters/dev
You could create a dev overlay inside apps; make sure to change the spec.path inside clusters/dev/apps.yaml to path: ./apps/dev.
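A minimal apps/dev overlay could simply reuse the base definition until the environment diverges. A sketch (this file is not part of the repo as described):

```yaml
# apps/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/podinfo
patches:
  - path: podinfo-patch.yaml   # copy from staging, then adjust values
```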
Push the changes to the main branch:
git add -A && git commit -m "add dev cluster" && git push
Set the kubectl context and path to your dev cluster and bootstrap Flux:
flux bootstrap github \
--context=dev \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--branch=main \
--personal \
--path=clusters/dev
Any change to the Kubernetes manifests or to the repository structure should be validated in CI before a pull request is merged into the main branch and synced to the clusters.
This repository contains the following GitHub CI workflows:
- the test workflow validates the Kubernetes manifests and Kustomize overlays with kubeconform
- the e2e workflow starts a Kubernetes Kind cluster in CI and tests the staging setup by running Flux inside it (currently disabled because of hostPath usage)
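The validation performed by the test workflow could be sketched roughly as follows; the action versions and tool-install step are placeholders, and the real workflow in this repo may differ:

```yaml
# .github/workflows/test.yaml (sketch)
name: test
on:
  pull_request:
  push:
    branches: [main]
jobs:
  kubeconform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install kustomize and kubeconform
        run: |
          # hypothetical install step; pin versions in a real workflow
          go install sigs.k8s.io/kustomize/kustomize/v5@latest
          go install github.com/yannh/kubeconform/cmd/kubeconform@latest
      - name: Validate overlays
        run: |
          find . -type f -name kustomization.yaml | while read -r file; do
            echo "Validating $(dirname "$file")"
            kustomize build "$(dirname "$file")" |
              kubeconform -strict -ignore-missing-schemas -
          done
```

Running kubeconform over every built overlay catches schema errors and broken patches before Flux ever sees the commit.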