GitHub Actions is a popular tool that developers and release engineers use to run automated pipelines that test, build, and deploy application code. The extensive catalog of third-party integrations that engineers can compose into pipelines directly within their software configuration management (SCM) platform creates a seamless user experience.
There are potential deal breakers for some organizations, such as the shared infrastructure on which hosted jobs run. However, GitHub's Actions Runner Controller (ARC) is available for self-hosted deployments, allowing you to run CI/CD workloads in your own environment. Out of the box, ARC supports deployment to Docker and Kubernetes, but there is no official support for deploying to Red Hat OpenShift. With a few modifications to the security context constraints (SCCs), ARC will deploy and run without issues on OpenShift. This article demonstrates the necessary steps to securely deploy and run ARC on OpenShift.
ARC architecture
Before we deploy ARC to OpenShift, let's briefly discuss the overall architecture. ARC consists of three components deployed in two (or more) namespaces.
The controller: The controller orchestrates the lifecycle of the various custom resources: AutoscalingListener, AutoscalingRunnerSet, EphemeralRunner, and EphemeralRunnerSet.
The listener: The listener is responsible for queuing and assigning jobs to the EphemeralRunners. These jobs are assigned when actions are triggered from GitHub that reference a specific, named runner.
The runners: The Helm chart refers to them as scale sets. These are the containers that spin up on demand to execute work as part of an action workflow. There can be multiple runner namespaces deployed and controlled via a single system namespace.
The controller and listener are called the ARC system. The system components are deployed together into a single namespace. There is a single primary controller and N listeners depending upon the number of runners deployed (more on this later).
How to run ARC images on OpenShift
Out-of-the-box ARC runner images will not run successfully on OpenShift because the container images explicitly specify a user ID of 1001 and a group ID of 123. While these values may be permissible in a typical Docker or non-hardened Kubernetes environment, OpenShift by default does not allow containers to execute with these IDs.
The simplest, though incorrect, way to solve this problem is to relax the SecurityContextConstraint for the image and allow the use of any UID. While this is a quick fix, it opens the door to various security vulnerabilities, which is why OpenShift does not run containers this way by default. The correct way to solve this issue is to create our own SecurityContextConstraint that provides just enough privilege for the runner image.
We'll make a few modifications, using the default restricted SecurityContextConstraint as a guide. Specifically, we will change the fsGroup to run in the range of 123 to 123, forcing a single group ID. We will also set the runAsUser value to 1001. We will end up with the following:
---
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: Based on restricted SCC, but forces uid/gid 1001/123
  name: github-arc
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  ranges:
  - max: 123
    min: 123
  type: MustRunAs
groups: []
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAs
  uid: 1001
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  ranges:
  - max: 123
    min: 123
  type: MustRunAs
users: []
volumes:
- configMap
- csi
- downwardAPI
- emptyDir
- ephemeral
- persistentVolumeClaim
- projected
- secret
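Assuming the SCC manifest above is saved as github-arc-scc.yaml (a filename chosen here for illustration), it can be applied and verified with standard oc commands:

```shell
# Apply the custom SCC to the cluster (requires cluster-admin privileges).
oc apply -f github-arc-scc.yaml

# Confirm the SCC exists and inspect the forced UID/GID settings.
oc get scc github-arc -o yaml
```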
Define a cluster role
Now that we have a custom SecurityContextConstraint, we'll define a ClusterRole that references this SCC, as follows:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:github-arc
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - github-arc
  resources:
  - securitycontextconstraints
  verbs:
  - use
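The ClusterRole can be applied the same way; here we assume the manifest above is saved as github-arc-clusterrole.yaml (again, a filename chosen for illustration):

```shell
# Create the cluster role that grants "use" on the github-arc SCC.
oc apply -f github-arc-clusterrole.yaml

# Verify it was created.
oc get clusterrole system:openshift:scc:github-arc
```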
In a later step, we will create a RoleBinding to bind the ServiceAccount used by the runners to this cluster role, which allows the runner image to spin up and execute jobs.
Deploy the ARC system
Now we will deploy the ARC system.
First let's export the variables to reference in the Helm charts, as follows:
export GITHUB_ARC_SYSTEM_NAMESPACE="github-arc-system"
export GITHUB_ARC_SYSTEM_INSTALLATION_NAME="github-arc-system"
export GITHUB_ARC_RUNNER_NAMESPACE="github-arc-runners"
export GITHUB_ARC_RUNNER_INSTALLATION_NAME="github-arc-runners"
As you can see, we are defining values for both the system and runner installations and namespaces. Feel free to change these values, but once you've defined the system values, you'll need to keep them the same and reuse them when deploying your runners.
Note: The value for GITHUB_ARC_RUNNER_INSTALLATION_NAME is the value that will be surfaced in the GitHub Actions user interface (UI) and referenced within action YAML definitions to target these runners for job execution.
GitHub's documentation uses the namespace and installation name arc-system when deploying the system components. While this appears innocent enough, if you deviate from that suggestion by changing the values of these variables, multiple issues will occur unless you make several overrides. To avoid these gotchas and allow renaming arc-system to any value of our choosing, we are going to explicitly name the ServiceAccount and reference it later when deploying the runners, as follows:
helm install ${GITHUB_ARC_SYSTEM_INSTALLATION_NAME} \
--namespace "${GITHUB_ARC_SYSTEM_NAMESPACE}" \
--create-namespace \
--set serviceAccount.name="${GITHUB_ARC_SYSTEM_INSTALLATION_NAME}-gha-rs-controller" \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
Once this is deployed, we should have a single controller pod running in our GITHUB_ARC_SYSTEM_NAMESPACE. There will not be a listener pod yet.
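To confirm the controller came up cleanly, a quick check along these lines should show one controller pod in the Running state (a sketch; the deployment name assumes the chart's <installation-name>-gha-rs-controller naming convention used elsewhere in this article):

```shell
# List the system pods; expect a single controller pod and no listener yet.
oc get pods -n "${GITHUB_ARC_SYSTEM_NAMESPACE}"

# Tail the controller logs to check for startup errors.
oc logs -n "${GITHUB_ARC_SYSTEM_NAMESPACE}" \
  deployment/"${GITHUB_ARC_SYSTEM_INSTALLATION_NAME}-gha-rs-controller" --tail=20
```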
Deploy the ARC runners
Now we will deploy the ARC runners, but with a few modifications. The first modification is to override the spec for the runner image. We are dropping all capabilities from the container image and explicitly setting the runAsNonRoot, runAsUser, and runAsGroup values. To make it easier to override these values, we will create a values.yaml file to be passed on the command-line interface (CLI) as follows:
# values.yaml
---
template:
  spec:
    containers:
    - name: runner
      image: ghcr.io/actions/actions-runner:latest
      command: ["/home/runner/run.sh"]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 123
To install the runners, we'll need to override the Helm chart's default values for controllerServiceAccount.name and controllerServiceAccount.namespace. Notice that we are referencing the location of the ARC system that we deployed in the prior step:
GITHUB_CONFIG_URL="https://github.com/<your_enterprise/org/repo>"
GITHUB_PAT="<PAT>"
helm install "${GITHUB_ARC_RUNNER_INSTALLATION_NAME}" \
--namespace "${GITHUB_ARC_RUNNER_NAMESPACE}" \
--create-namespace \
--set githubConfigUrl="${GITHUB_CONFIG_URL}" \
--set githubConfigSecret.github_token="${GITHUB_PAT}" \
--set controllerServiceAccount.name="${GITHUB_ARC_SYSTEM_INSTALLATION_NAME}-gha-rs-controller" \
--set controllerServiceAccount.namespace="${GITHUB_ARC_SYSTEM_NAMESPACE}" \
-f values.yaml \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
Next we'll add a policy binding so that the ServiceAccount for the runners can use our custom ClusterRole:
oc policy add-role-to-user system:openshift:scc:github-arc -z ${GITHUB_ARC_RUNNER_INSTALLATION_NAME}-gha-rs-no-permission -n ${GITHUB_ARC_RUNNER_NAMESPACE}
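One way to verify the binding took effect is to confirm that runner pods are admitted under the custom SCC. OpenShift records the admitting SCC in the openshift.io/scc annotation on each pod, so once a runner pod exists you can inspect it (a sketch; pod names will vary):

```shell
# Confirm the role binding exists in the runner namespace.
oc get rolebindings -n "${GITHUB_ARC_RUNNER_NAMESPACE}"

# Once a runner pod is scheduled, check which SCC admitted it; expect "github-arc".
oc get pods -n "${GITHUB_ARC_RUNNER_NAMESPACE}" \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
```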
Now that the runners are deployed, you should see both a controller and a listener pod in the ARC system namespace, as follows:
oc get pods -n "${GITHUB_ARC_SYSTEM_NAMESPACE}"
Test the ARC deployment
Let's go ahead and make sure that ARC is working properly. In the GitHub repository we configured earlier, we'll create a simple test action and execute it as follows:
name: ARC Demo
on:
  workflow_dispatch:
jobs:
  Explore-GitHub-Actions:
    # You need to use the INSTALLATION_NAME from the previous step
    runs-on: github-arc-runners
    steps:
    - run: echo "🎉 This job uses runner scale set runners on OpenShift!"
Once we execute the action within the GitHub UI, we should see a pod spin up in the ARC runners namespace on our OpenShift cluster where the job is executed.
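To observe this from the cluster side, watch the runner namespace while triggering the workflow; an ephemeral runner pod should appear, execute the job, and then terminate:

```shell
# Watch ephemeral runner pods come and go as jobs are dispatched.
oc get pods -n "${GITHUB_ARC_RUNNER_NAMESPACE}" --watch
```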
Deploy additional runners
When deploying additional runners, we need to reference the ARC system namespace that we used in previous steps. We do not need to deploy additional system instances because a single system installation can handle multiple runner instances. We also do not need to create new security context constraints or cluster roles. Instead, we will reuse the ones we previously created.
Ensure that the required environment variables are set for the ARC system deployment, as follows:
export GITHUB_ARC_SYSTEM_NAMESPACE="github-arc-system"
export GITHUB_ARC_SYSTEM_INSTALLATION_NAME="github-arc-system"
We'll set new environment variables for the additional runners we'd like to deploy:
export GITHUB_ARC_RUNNER_NAMESPACE_ADDITIONAL="github-arc-runners-additional"
export GITHUB_ARC_RUNNER_INSTALLATION_NAME_ADDITIONAL="github-arc-runners-additional"
Our Helm command will now be updated to use these new environment variables:
GITHUB_CONFIG_URL="https://github.com/<your_enterprise/org/repo>"
GITHUB_PAT="<PAT>"
helm install "${GITHUB_ARC_RUNNER_INSTALLATION_NAME_ADDITIONAL}" \
--namespace "${GITHUB_ARC_RUNNER_NAMESPACE_ADDITIONAL}" \
--create-namespace \
--set githubConfigUrl="${GITHUB_CONFIG_URL}" \
--set githubConfigSecret.github_token="${GITHUB_PAT}" \
--set controllerServiceAccount.name="${GITHUB_ARC_SYSTEM_INSTALLATION_NAME}-gha-rs-controller" \
--set controllerServiceAccount.namespace="${GITHUB_ARC_SYSTEM_NAMESPACE}" \
-f values.yaml \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
We will bind the ClusterRole to the ServiceAccount in this newly deployed namespace:
oc policy add-role-to-user system:openshift:scc:github-arc -z ${GITHUB_ARC_RUNNER_INSTALLATION_NAME_ADDITIONAL}-gha-rs-no-permission -n ${GITHUB_ARC_RUNNER_NAMESPACE_ADDITIONAL}
Once this is deployed, we should now see an additional listener pod in our ARC system namespace:
oc get pods -n "${GITHUB_ARC_SYSTEM_NAMESPACE}"
Following this example, you can test these additional runners by updating runs-on: github-arc-runners to runs-on: github-arc-runners-additional, or whatever you've named the newly deployed runners.
Wrap up
Once Actions Runner Controller is securely deployed to OpenShift with a system controller and N runner groups, organizations can run their action pipelines on their own infrastructure instead of the shared instances hosted by GitHub. In addition, you can improve security by deploying independent runner groups with isolation between projects and teams.
For more information, refer to our GitHub repository.