Sample of running multiple runners (scale) easily? #72

Closed
woutersamaey opened this issue Nov 4, 2020 · 11 comments

Comments

@woutersamaey

Can someone help me out with an example of how to set up multiple runners easily?

I know docker-compose has a scale parameter that can be used to create multiple identical runners.

My main concern is that RUNNER_WORKDIR must be unique for each runner, so it either needs to take the scale parameter into account, or the runner should create subdirectories inside RUNNER_WORKDIR based on the container name.
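
For reference, the kind of setup I mean is roughly this (org name and token are placeholders), where every scaled copy would end up sharing the same work dir:

version: '3'

services:
  runner:
    image: myoung34/github-runner:latest
    environment:
      - ORG_RUNNER=true
      - ORG_NAME=your-org
      - ACCESS_TOKEN=***
      - RUNNER_WORKDIR=/tmp/runner/work   # identical for every replica
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - '/tmp/runner:/tmp/runner'

Running docker-compose up -d --scale runner=4 against that would start four identical runners, all pointing at the same /tmp/runner/work.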

Or, is there an easier solution?

Thanks in advance

@myoung34
Owner

myoung34 commented Nov 4, 2020

At scale you ideally wouldn't be using docker-compose, so you'd probably automate temp dirs in your orchestrator using configuration management, cloud-init, netboot, etc. I'm going to close this, as it's a generic Docker question and not really an issue related to this project.

@myoung34 myoung34 closed this as completed Nov 4, 2020
@woutersamaey
Author

My idea was to have a (simple) way of having multiple runners, so actions are not run one by one, but four could run simultaneously, since GHA does not do parallel processing on its own. Or do you see another solution for this?

I could also clone the VM I'm using if there's no other way.

@myoung34
Owner

myoung34 commented Nov 4, 2020

A single VM can do it; you'll just need to automate how to create and use directories.
I don't know a solution for your use case because it's too ephemeral to guess.
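
For example, something along these lines (org name, token and paths are just placeholders):

#!/bin/bash
# Rough sketch: start four runners on one host, each with its own work dir.
for i in 1 2 3 4; do
  mkdir -p "/tmp/runner/work_${i}"
  docker run -d \
    --name "github-runner-${i}" \
    -e ORG_RUNNER=true \
    -e ORG_NAME=your-org \
    -e ACCESS_TOKEN='***' \
    -e RUNNER_NAME="runner-${i}" \
    -e RUNNER_WORKDIR="/tmp/runner/work_${i}" \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /tmp/runner:/tmp/runner \
    myoung34/github-runner:latest
done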

@zhigang1992

@woutersamaey this is an alternative that you can look into

https://testdriven.io/blog/github-actions-docker/

@woutersamaey
Author

Very nice article @zhigang1992, thanks!

@zhigang1992

The problem that this repo solves is the use of Docker inside the action steps.

version: '3'

services:
  runner:
    image: myoung34/github-runner:latest
    environment:
      - ORG_RUNNER=true
      - ORG_NAME=***
      - ACCESS_TOKEN=***
      - RUNNER_WORKDIR=/tmp/runner/work_{{.Task.Slot}}
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - '/tmp/runner:/tmp/runner'
    deploy:
      replicas: 3

Then when you scale it with docker stack deploy -c docker-compose.yml, each instance gets its own folder to work with.
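
For completeness, deploying it would look something like this (the stack name is arbitrary, and the host has to be a swarm node):

docker swarm init                                  # only if not already part of a swarm
docker stack deploy -c docker-compose.yml gha-runners

# With replicas: 3, {{.Task.Slot}} expands to 1, 2 and 3, so the runners end up
# using /tmp/runner/work_1, /tmp/runner/work_2 and /tmp/runner/work_3.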

@mrmachine

Could the entrypoint script just be configured to do:

export _RUNNER_WORKDIR="${RUNNER_WORKDIR:-/_work}/${HOSTNAME}"

In containers spun up by docker-compose (without swarm, etc.) the hostname is already a unique ID.

@zhigang1992 @myoung34

@mrmachine

mrmachine commented Mar 23, 2021

I've worked around this issue (and the missing ARM docker-compose binary) with:

github-actions.entrypoint.sh:

#!/usr/bin/dumb-init /bin/bash

# Append a persistent random unique ID to the work dir and runner name so we can scale
# with Docker Compose.
if [[ ! -f /tmp/github-actions/runner_id.txt ]]; then
    head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 > /tmp/github-actions/runner_id.txt
fi
_RUNNER_ID="$(cat /tmp/github-actions/runner_id.txt)"
RUNNER_NAME="${RUNNER_NAME}-${_RUNNER_ID}"
RUNNER_WORKDIR="${RUNNER_WORKDIR}/${_RUNNER_ID}"

exec /entrypoint.sh

docker-compose.yml:

version: '2.4'

services:
  github-actions:
    entrypoint: /github-actions.entrypoint.sh
    environment:
      ACCESS_TOKEN: ${GITHUB_ACTIONS_ACCESS_TOKEN}
      ORG_NAME: ${GITHUB_ACTIONS_ORG_NAME}
      ORG_RUNNER: 'true'
      RUNNER_NAME: ${GITHUB_ACTIONS_RUNNER_NAME}
      RUNNER_WORKDIR: ${PWD}/github-actions
    image: myoung34/github-runner:latest
    restart: always
    scale: ${GITHUB_ACTIONS_SCALE:-1}
    volumes:
      - /tmp/github-actions
      # This is the `run.sh` shim from https://hub.docker.com/r/linuxserver/docker-compose
      # installed on the host where I am deploying the GitHub Actions runner.
      - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
      - /var/run/docker.sock:/var/run/docker.sock
      - ${PWD}/github-actions:${PWD}/github-actions
      - ${PWD}/github-actions.entrypoint.sh:/github-actions.entrypoint.sh
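
Bringing it up then looks roughly like this (the values are just examples):

chmod +x github-actions.entrypoint.sh   # the bind-mounted entrypoint must be executable
export GITHUB_ACTIONS_ACCESS_TOKEN='***'
export GITHUB_ACTIONS_ORG_NAME=your-org
export GITHUB_ACTIONS_RUNNER_NAME=runner
export GITHUB_ACTIONS_SCALE=4

# The 2.4 compose file format honours the per-service `scale:` key,
# so this starts four runners, each with its own persistent random work dir.
docker-compose up -d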

@mrmachine

@myoung34 what do you think about rolling this behaviour into the official entrypoint? The anonymous volume could be defined in the Dockerfile to store the random runner ID persistently, plus using a subdir inside the bind-mounted workdir. This should make the runner scalable by default.

@myoung34
Owner

myoung34 commented Mar 23, 2021

I won't tie this so heavily to docker-compose. The ability to set RUNNER_WORKDIR is there for whatever orchestrator-specific needs there are; having it default to something random is not ideal.

If you'd like to open a PR adding an env var that causes it to generate a random directory path, I'd be happy to merge that.
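
Something along these lines in the entrypoint, perhaps (the variable name RANDOM_RUNNER_DIR here is only a suggestion, not an existing option):

# Hypothetical opt-in: append a random suffix to the work dir
if [[ "${RANDOM_RUNNER_DIR:-false}" == "true" ]]; then
    _suffix="$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13)"
    RUNNER_WORKDIR="${RUNNER_WORKDIR}/${_suffix}"
    mkdir -p "${RUNNER_WORKDIR}"
fi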

@syrinscape-admin

Random but persistent storage of a unique runner ID, as implemented above, should work equally well with any orchestrator, as long as the anonymous volume is defined in the runner image and is not required configuration in the compose or Helm file. There is nothing compose-specific about the entrypoint wrapper. I'll make a PR.
