Kubernetes Deployment Example

You can run Faktory in your Kubernetes cluster quite easily; a Helm chart is the easiest way to get up and running. If you'd prefer to write the Kubernetes definitions yourself, here are some tips and samples.
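If you save each sample below using the filename in its heading, you can apply them all with kubectl, for example:

kubectl apply -f volume.yml
kubectl apply -f configmap.yml
kubectl apply -f deployment.yml
kubectl apply -f service.yml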

deployment.yml

Here we tell Kubernetes to deploy a single replica of the Faktory Server. A few things to note:

  • A volume is mounted to store Faktory's Redis data so it persists across restarts.
  • A ConfigMap is used to store Faktory's configuration files. That ConfigMap is mounted as a volume inside Faktory's container.
  • A sidecar container watches the configuration files and, if they change, sends a SIGHUP to the Faktory server process to hot-reload configuration (thanks @jbielick). This is why the pod sets shareProcessNamespace: true: the sidecar needs to see the Faktory server's PID.
  • The deployment strategy is Recreate so we only ever have one instance of Faktory running at a time.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: faktory-server
  labels:
    app: faktory-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: faktory-server
  template:
    metadata:
      labels:
        app: faktory-server
    spec:
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 10
      containers:
      - name: faktory-server-config-watcher
        image: busybox
        command:
        - sh
        - "-c"
        - |
          # Hash every file under /conf into $current; in POSIX sh the
          # assignment inside the function is visible to the caller.
          sum() {
            current=$(find /conf -type f -exec md5sum {} \; | sort -k 2 | md5sum)
          }
          sum
          last="$current"
          # Poll once a second; on any change, SIGHUP the Faktory process,
          # which is visible here because of shareProcessNamespace: true.
          while true; do
            sum
            if [ "$current" != "$last" ]; then
              pid=$(pidof faktory)
              echo "$(date -Iseconds) [conf.d] changes detected - signaling Faktory with pid=$pid"
              kill -HUP "$pid"
              last="$current"
            fi
            sleep 1
          done
        volumeMounts:
        - name: faktory-server-configs-volume
          mountPath: "/conf"
      - image: docker.contribsys.com/contribsys/faktory:1.2.0
        name: faktory-server
        command:
        - "/faktory"
        - "-b"
        - ":7419"
        - "-w"
        - ":7420"
        - "-e"
        - "production"
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: production-config
        volumeMounts:
        - name: faktory-server-configs-volume
          mountPath: "/etc/faktory/conf.d"
        - name: faktory-server-storage-volume
          mountPath: "/var/lib/faktory/db"
      volumes:
      - name: faktory-server-configs-volume
        configMap:
          name: faktory-server-configmap
      - name: faktory-server-storage-volume
        persistentVolumeClaim:
          claimName: faktory-server-storage-pv-claim
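
To see the hot reload in action, edit the ConfigMap and follow the sidecar's logs. Note that the kubelet propagates ConfigMap volume updates on a delay, so it can take up to a minute or so for the change to be detected:

kubectl edit configmap faktory-server-configmap
kubectl logs deployment/faktory-server -c faktory-server-config-watcher -f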

configmap.yml

An example ConfigMap that will be mounted into the deployment above. Each key under data becomes a file in /etc/faktory/conf.d.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: faktory-server-configmap
data:
  cron.toml: |2

    [[cron]]
    schedule = "*/1 * * * *"

    [cron.job]
    queue = "default"
    reserve_for = 60
    retry = -1
    type = "Cron::SomeRandomCron"

  throttles.toml: |2

    [throttles.default]
    concurrency = 1
    timeout = 60

  statsd.toml: |2

    [statsd]
    location = "datadog-agent-svc.default.svc.cluster.local:8125"
    namespace = "faktory"
    tags = ["env:production"]
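
Once the pod is running, you can confirm that each key in the ConfigMap shows up as a file where Faktory expects it:

kubectl exec deployment/faktory-server -c faktory-server -- ls /etc/faktory/conf.d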

volume.yml

The PersistentVolumeClaim for the volume that stores Faktory's data. Replace [storage_class_here] with a storage class available in your cluster.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: faktory-server-storage-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: [storage_class_here]
  resources:
    requests:
      storage: 5Gi
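
To pick a value for storageClassName, list the storage classes available in your cluster:

kubectl get storageclass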

service.yml

This exposes the Faktory server to the rest of your cluster. You can then use, for example, tcp://faktory-server-svc.default.svc.cluster.local:7419 as the host for your Faktory clients.

---
apiVersion: v1
kind: Service
metadata:
  name: faktory-server-svc
spec:
  selector:
    app: faktory-server
  ports:
  - name: faktory
    protocol: TCP
    port: 7419
  - name: dashboard
    protocol: TCP
    port: 7420
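
Faktory client libraries read the server location from the FAKTORY_URL environment variable, so a worker Deployment can point at this Service. Here is a minimal sketch; the my-worker name and image are placeholders, not part of the samples above:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-worker
  template:
    metadata:
      labels:
        app: my-worker
    spec:
      containers:
      - name: worker
        # Placeholder image; substitute your own worker image.
        image: registry.example.com/my-worker:latest
        env:
        - name: FAKTORY_URL
          value: "tcp://faktory-server-svc.default.svc.cluster.local:7419"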