
Running Cypress tests in parallel in GitHub Actions without Cypress Dashboard

Cully Larson
September 16, 2022

End-to-end tests can take forever to run. And what are they really doing with their time? Clicking some buttons, typing some text, and engaging in a lot of waiting around. E2e tests have got to spend—I don’t know for sure, so I’ll make up a number—half their time just waiting for things to happen. If they’re essentially idle a lot of the time, it makes sense to run e2e tests in parallel.

Cypress supports parallelization. However, and I’ll leave it up to the reader to decide why this is the case, Cypress doesn’t make it very clear that in order to run tests in parallel you need a Cypress Dashboard account. Cypress Dashboard is what coordinates processes and load balances tests.

Cypress Dashboard has a free tier. That might work for some projects. It could be worth paying for a plan for enterprise projects. They have some features that are appealing beyond parallelization. But what if you don’t want to pay for a plan?

There’s a project called Sorry Cypress that does something very similar to Cypress Dashboard. They have paid plans if you want them to host the dashboard. Or you can host your own for free. But, stick with me here, what if you don’t want to pay anything and you don’t want to host your own dashboard service? Read on, dear reader.

Split up our jobs

This is an article about running tests in GitHub Actions. So let's talk about jobs. By the end of the article, we’ll create a job that runs the e2e tests in parallel. Each of those jobs will likely need a production build of the project to run against. But rather than have each individual e2e job build the project, we’ll perform the project build in its own job and share the build with the e2e jobs.

```yaml
name: CI

on: [push]

env:
  NODE_ENV: test
  DATABASE_URL: postgresql://postgres:postgres@localhost/myapp_test?schema=ci

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Setup Node
        uses: actions/setup-node@v2.1.2
        with:
          node-version: 16.13.2
          cache: yarn

      - name: Install packages
        run: yarn install --frozen-lockfile

      - name: ESLint
        run: yarn lint

      - name: Check TypeScript
        run: yarn typecheck

      # Build app
      - name: Build App
        run: yarn build

      # Save build for other jobs
      - name: Save build folder
        uses: actions/upload-artifact@v2
        with:
          name: the-build
          if-no-files-found: error
          path: build
          retention-days: 1
```

Now we can use this build in our e2e jobs. If you have other jobs besides e2e tests (e.g. unit tests, API tests, stress tests, etc) you can use the build in them as well.

An end-to-end job

As a baseline, let’s look at our e2e job without parallelization.

```yaml
e2e:
  name: e2e Tests
  runs-on: ubuntu-latest
  needs: build
  services:
    postgres:
      image: postgres:13.2
      ports: ['5432:5432']
      # Make sure the database is ready before we use it
      options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      # These need to be set on the service or it won't start for some reason
      env:
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: postgres
        POSTGRES_DB: myapp_test
  steps:
    - name: Checkout code
      uses: actions/checkout@v2

    - name: Download the build
      uses: actions/download-artifact@v2
      with:
        name: the-build
        path: build

    - name: Setup Node
      uses: actions/setup-node@v2.1.2
      with:
        node-version: 16.13.2
        cache: yarn

    - name: Install packages
      run: yarn install --frozen-lockfile

    - name: Run Cypress e2e tests
      uses: cypress-io/github-action@v4
      with:
        # we have already installed all dependencies above
        install: false
        start: yarn test:server
        wait-on: 'http://localhost:3001'
        command: yarn cypress run
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

    - name: Upload Cypress Screenshots
      uses: actions/upload-artifact@v1
      # Only capture images on failure
      if: failure()
      with:
        name: cypress-screenshots
        path: cypress/screenshots

    - name: Upload Cypress Logs
      uses: actions/upload-artifact@v1
      # Only capture logs on failure
      if: failure()
      with:
        name: cypress-logs
        path: cypress/logs

    - name: Upload Cypress Videos
      uses: actions/upload-artifact@v1
      # Only capture videos on failure
      if: failure()
      with:
        name: cypress-videos
        path: cypress/videos
```

This will run all of our tests in one job. To begin our parallelization journey, we need to run multiple versions of this job. We can do that with a matrix strategy.

```yaml
strategy:
  fail-fast: false
  matrix:
    containers: [0, 1, 2, 3, 4]
```

This will run our e2e tests in five different jobs/containers. However, as it is, each job will do the same thing; they will each run all of our tests.

Split up our tests

We need a way to split our tests up. We could use an orchestration service like Cypress Dashboard or Sorry Cypress, as discussed. But without something like that, we need a way to split our tests up deterministically (the same way every time), so that each of our five e2e jobs knows which tests it should run, each of the tests is only run once, and none of the tests are left out (all of them are run).

cypress run has a --spec argument that allows us to define which test files should be run. If we can split our test files up, we can pass them to --spec, and only those test files will be run. But how to split up our tests? The simplest approach is to get a list of all the test files, sort them in a consistent way, and then divide them evenly among each of our e2e jobs. We need a script for that.

```typescript
// cypress-spec-split.ts
import fs from 'fs/promises';
import globby from 'globby';
import minimatch from 'minimatch';

// These are the same properties that are set in cypress.config.
// In practice, it's better to export these from another file, and
// import them here and in cypress.config, so that both files use
// the same values.
const specPatterns = {
  specPattern: 'tests/e2e/**/*.cy.{ts,tsx,js,jsx}',
  excludeSpecPattern: ['tsconfig.json'],
};

// used to roughly determine how many tests are in a file
const testPattern = /(^|\s)(it|test)\(/g;

const isCli = require.main?.filename === __filename;

function getArgs() {
  const [totalRunnersStr, thisRunnerStr] = process.argv.splice(2);
  if (!totalRunnersStr || !thisRunnerStr) {
    throw new Error('Missing arguments');
  }
  const totalRunners = totalRunnersStr ? Number(totalRunnersStr) : 0;
  const thisRunner = thisRunnerStr ? Number(thisRunnerStr) : 0;
  if (isNaN(totalRunners)) {
    throw new Error('Invalid total runners.');
  }
  if (isNaN(thisRunner)) {
    throw new Error('Invalid runner.');
  }
  return { totalRunners, thisRunner };
}

async function getTestCount(filePath: string): Promise<number> {
  const content = await fs.readFile(filePath, 'utf8');
  return content.match(testPattern)?.length || 0;
}

// adapted from:
// https://github.com/bahmutov/find-cypress-specs/blob/main/src/index.js
async function getSpecFilePaths(): Promise<string[]> {
  const options = specPatterns;
  const files = await globby(options.specPattern, {
    ignore: options.excludeSpecPattern,
  });
  // go through the files again and eliminate files that match
  // the ignore patterns
  const ignorePatterns = [...(options.excludeSpecPattern || [])];
  // a function which returns true if the file does NOT match
  // all of our ignored patterns
  const doesNotMatchAllIgnoredPatterns = (file: string) => {
    // using {dot: true} here so that folders with a '.' in them are matched
    // as regular characters without needing an '.' in the pattern
    // using {matchBase: true} here so that patterns without a globstar **
    // match against the basename of the file
    const MINIMATCH_OPTIONS = { dot: true, matchBase: true };
    return ignorePatterns.every((pattern) => {
      return !minimatch(file, pattern, MINIMATCH_OPTIONS);
    });
  };
  const filtered = files.filter(doesNotMatchAllIgnoredPatterns);
  return filtered;
}

async function sortSpecFilesByTestCount(specPathsOriginal: string[]): Promise<string[]> {
  const specPaths = [...specPathsOriginal];
  const testPerSpec: Record<string, number> = {};
  for (const specPath of specPaths) {
    testPerSpec[specPath] = await getTestCount(specPath);
  }
  return (
    Object.entries(testPerSpec)
      // Sort by the number of tests per spec file, so that we get a bit closer to
      // splitting up the files evenly between the runners. It won't be perfect,
      // but better than just splitting them randomly. And this will create a
      // consistent file list/ordering so that file division is deterministic.
      .sort((a, b) => b[1] - a[1])
      .map((x) => x[0])
  );
}

export function splitSpecs(specs: string[], totalRunners: number, thisRunner: number): string[] {
  return specs.filter((_, index) => index % totalRunners === thisRunner);
}

(async () => {
  // only run this if called via the CLI
  if (!isCli) {
    return;
  }
  try {
    const specFilePaths = await sortSpecFilesByTestCount(await getSpecFilePaths());
    if (!specFilePaths.length) {
      throw Error('No spec files found.');
    }
    const { totalRunners, thisRunner } = getArgs();
    const specsToRun = splitSpecs(specFilePaths, totalRunners, thisRunner);
    console.log(specsToRun.join(','));
  } catch (err) {
    console.error(err);
    process.exit(1);
  }
})();
```

Note that this script was roughly adapted from this one.

Run the script like this: yarn --silent ts-node --quiet cypress-spec-split.ts 5 2 where 5 is the total number of jobs, and 2 is the number of the current job (starting with 0). The second parameter is needed so that we know which job to assign test files to (each job gets its own, unique set of tests).

Essentially, this script:

  1. Fetches a list of all the e2e test files.
  2. Naively tries to figure out how many tests are in each file.
  3. Sorts the list of tests by the number of tests in each file.
  4. Assigns each file to a specific job (i.e. the number value of thisRunner). It attempts to evenly divide the tests between jobs. That’s why they’re sorted by the number of tests in each file, so that one job doesn’t end up with all the files with the most tests. Without this, some jobs could take much longer than others. Even so, it’s unlikely that the script will exactly evenly divide the test files. That’s one upside of an orchestration service (it will divide tests more evenly). But in practice, this works well enough.
  5. Outputs the list of tests for a job so that it can be directly passed to --spec.
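The assignment step is just modulo arithmetic, and it can be illustrated in isolation. Here's a minimal sketch of the `splitSpecs` round-robin division, using made-up spec file names (the real script feeds it files sorted by test count):

```typescript
// Round-robin assignment: runner k gets every file whose index in the
// sorted list is congruent to k modulo the total number of runners.
function splitSpecs(specs: string[], totalRunners: number, thisRunner: number): string[] {
  return specs.filter((_, index) => index % totalRunners === thisRunner);
}

// Five hypothetical spec files, already sorted by descending test count,
// divided between two runners:
const sorted = ['big.cy.ts', 'medium.cy.ts', 'small-a.cy.ts', 'small-b.cy.ts', 'tiny.cy.ts'];

console.log(splitSpecs(sorted, 2, 0)); // runner 0 gets indices 0, 2, 4
console.log(splitSpecs(sorted, 2, 1)); // runner 1 gets indices 1, 3
```

Because the list is sorted the same way every time, each runner computes its own share independently, every file lands in exactly one runner's share, and no coordination service is needed.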

To run our tests, we will update the e2e test step in the workflow and use our script:

```yaml
- name: Run Cypress e2e tests
  uses: cypress-io/github-action@v4
  with:
    # we have already installed all dependencies above
    install: false
    start: yarn test:server
    wait-on: 'http://localhost:3001'
    # NOTE: This doesn't work. Keep reading to find out why.
    command: yarn cypress run --spec $(yarn --silent ts-node --quiet cypress-spec-split.ts 5 ${{ matrix.containers }})
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

The 5 is the total number of jobs and ${{ matrix.containers }} is the number of the job from our matrix (i.e. one of [0, 1, 2, 3, 4]; which is why we start them from zero).

However, we have a problem. Unfortunately, cypress-io/github-action doesn’t allow us to use command substitution, environment variables, or context values. We could just run the test command manually instead of using cypress-io/github-action:

```yaml
- name: Run Cypress e2e tests
  run: >
    doppler run --preserve-env --
    yarn start-server-and-test
    'yarn test:server'
    http://localhost:3001
    "yarn cypress run --spec $(yarn --silent ts-node --quiet scripts/cypress-spec-split.ts 3 ${{ matrix.containers }})"
```

But that can end up being a bit complicated with waiting for test servers to start, etc. cypress-io/github-action does that for us, along with a lot of other things. And cypress-io/github-action will continue to improve. Using it is a better option in the long term.

If we want to keep using cypress-io/github-action, we need to create another script that basically just runs cypress run and fills in the --spec argument.

```typescript
// cypress-ci-run.ts
/**
 * This script runs Cypress tests in CI. It exists because we need to split
 * up the tests between multiple runners, but we can't run that script in a way
 * that will pass a value to --spec directly, using cypress-io/github-action's
 * `command` property. It just won't let us include an env variable or do
 * command substitution.
 *
 * So, we can either just not use cypress-io/github-action or use a script like
 * this one to run the tests.
 */
import { exec } from 'child_process';

type GetEnvOptions = {
  required?: boolean;
};

function getEnvNumber(varName: string, { required = false }: GetEnvOptions = {}): number {
  if (required && process.env[varName] === undefined) {
    throw Error(`${varName} is not set.`);
  }
  const value = Number(process.env[varName]);
  if (isNaN(value)) {
    throw Error(`${varName} is not a number.`);
  }
  return value;
}

function getArgs() {
  return {
    totalRunners: getEnvNumber('TOTAL_RUNNERS', { required: true }),
    thisRunner: getEnvNumber('THIS_RUNNER', { required: true }),
  };
}

(async () => {
  try {
    const { totalRunners, thisRunner } = getArgs();
    const command = `yarn cypress run --spec "$(yarn --silent ts-node --quiet scripts/cypress-spec-split.ts ${totalRunners} ${thisRunner})"`;
    console.log(`Running: ${command}`);
    const commandProcess = exec(command);
    // pipe output because we want to see the results of the run
    if (commandProcess.stdout) {
      commandProcess.stdout.pipe(process.stdout);
    }
    if (commandProcess.stderr) {
      commandProcess.stderr.pipe(process.stderr);
    }
    commandProcess.on('exit', (code) => {
      process.exit(code || 0);
    });
  } catch (err) {
    console.error(err);
    process.exit(1);
  }
})();
```

This script:

  1. Pulls the total number of jobs (totalRunners) and the number of this job (thisRunner) from env variables.
  2. Executes the cypress run command with --spec.
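The env-to-command flow can be sketched in a few lines. This is a condensed version of the script's first half (the env values are hypothetical, standing in for what the workflow's job matrix would set):

```typescript
// Stand-in values, as the workflow would set them for matrix container 2 of 5:
process.env.TOTAL_RUNNERS = '5';
process.env.THIS_RUNNER = '2';

// Read a numeric env variable, failing loudly if it is missing or not a
// number (the same validation cypress-ci-run.ts performs).
function getEnvNumber(varName: string): number {
  const raw = process.env[varName];
  if (raw === undefined) throw Error(`${varName} is not set.`);
  const value = Number(raw);
  if (isNaN(value)) throw Error(`${varName} is not a number.`);
  return value;
}

const totalRunners = getEnvNumber('TOTAL_RUNNERS');
const thisRunner = getEnvNumber('THIS_RUNNER');

// The command substitution happens here, in Node, rather than in the
// workflow file -- which is exactly what cypress-io/github-action forbids.
const command = `yarn cypress run --spec "$(yarn --silent ts-node --quiet scripts/cypress-spec-split.ts ${totalRunners} ${thisRunner})"`;
console.log(command);
```

Because the string interpolation and the `$( … )` substitution both happen inside the script, the `command` passed to cypress-io/github-action stays static.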

Put it all together

Taking all we’ve built up so far and with our scripts located in scripts/cypress-spec-split.ts and scripts/cypress-ci-run.ts (relative to the root of the project), we get this workflow:

```yaml
name: CI

on: [push]

env:
  NODE_ENV: test
  DATABASE_URL: postgresql://postgres:postgres@localhost/myapp_test?schema=ci

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Setup Node
        uses: actions/setup-node@v2.1.2
        with:
          node-version: 16.13.2
          cache: yarn

      - name: Install packages
        run: yarn install --frozen-lockfile

      - name: ESLint
        run: yarn lint

      - name: Check TypeScript
        run: yarn typecheck

      # Build app
      - name: Build App
        run: yarn build

      # Save build for other jobs
      - name: Save build folder
        uses: actions/upload-artifact@v2
        with:
          name: the-build
          if-no-files-found: error
          path: build
          retention-days: 1

  e2e:
    name: e2e Tests
    runs-on: ubuntu-latest
    needs: build
    strategy:
      fail-fast: false
      matrix:
        # Run copies of the current job in parallel. These need to be a
        # continuous series of numbers, starting with `0`. If you change the
        # number of containers, change TOTAL_RUNNERS below.
        containers: [0, 1, 2, 3, 4]
    services:
      postgres:
        image: postgres:13.2
        ports: ['5432:5432']
        # Make sure the database is ready before we use it
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
        # These need to be set on the service or it won't start for some reason
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: myapp_test
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Download the build
        uses: actions/download-artifact@v2
        with:
          name: the-build
          path: build

      - name: Setup Node
        uses: actions/setup-node@v2.1.2
        with:
          node-version: 16.13.2
          cache: yarn

      - name: Install packages
        run: yarn install --frozen-lockfile

      - name: Run Cypress e2e tests
        uses: cypress-io/github-action@v4
        with:
          # we have already installed all dependencies above
          install: false
          # build: yarn build
          start: yarn test:server
          wait-on: 'http://localhost:3001'
          command: yarn ts-node scripts/cypress-ci-run.ts
        env:
          # the number of containers in the job matrix
          TOTAL_RUNNERS: 5
          THIS_RUNNER: ${{ matrix.containers }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Upload Cypress Screenshots
        uses: actions/upload-artifact@v1
        # Only capture images on failure
        if: failure()
        with:
          name: cypress-screenshots
          path: cypress/screenshots

      - name: Upload Cypress Logs
        uses: actions/upload-artifact@v1
        # Only capture logs on failure
        if: failure()
        with:
          name: cypress-logs
          path: cypress/logs

      - name: Upload Cypress Videos
        uses: actions/upload-artifact@v1
        # Only capture videos on failure
        if: failure()
        with:
          name: cypress-videos
          path: cypress/videos
```

I tried to make this as general as possible, but it’s not going to exactly fit everyone’s needs. I’m hoping it gives you enough to work with if you want to adapt it to your specific use case.

Results

At Echobind we’ve been running this solution in a production app for a while. When we rolled it out, it dropped CI runtimes from 30-40 minutes down to about 20 minutes. The "billable time" was roughly the same as it was before; maybe a few minutes more due to the overhead of setting up each parallel job.

We played around with the number of jobs. The benefit of adding more quickly diminishes.

  • Two jobs save about 5-15 minutes.
  • Three jobs save about 7-17 minutes.
  • Four jobs save about 9-19 minutes.
  • Five jobs save about the same as four.

Run times for GitHub Actions are wildly inconsistent, so these values are general and not exact.

We decided to go with three concurrent jobs. Going up to 4-5 jobs didn’t add enough benefit to make it worthwhile. But you can experiment and see what works for you.

Rant

Cypress should do this out-of-the-box. There’s no reason it can’t implement naive test splitting internally, just as we’ve done above. And it would do a better job of it. I mean, Playwright does it. Hopefully, Cypress will get on board someday and this article will become a useless artifact of progress.

Conclusion

Who reads conclusions? I’m just going to tell you what we did. You either read the article and did all the things and don’t need me to remind you, or you skimmed the article and got what you needed and skipped this part (that’s what I would have done). So no one is reading this except whoever is reviewing this article before publication—thanks for the review, btw!

Anyway, what did we do? We split up our GitHub Actions workflow into multiple jobs, we split our tests up so that we can run them in parallel, and we set up a job matrix for our e2e tests so that multiple jobs run at the same time.

What’s next?

Well, you could adapt this to run your local e2e tests in parallel. That’s a bit tricky if they all share the same database. But keep your eyes on this space, because exactly that is coming soon!

