GitLab CI Pipeline Execution#

The repository contains Dockerfiles and GitLab Runner configuration files to support automated testing and documentation builds. General information on configuring GitLab CI pipelines can be found in the official GitLab documentation.

The GitLab CI configuration entry point is the .gitlab-ci.yml file at the root of the source tree. Configuration templates are found in the files in the admin/ci-templates/ directory.

Docker images used by GitLab Runner are available in our GitLab Container Registry. (See Containers.) Images are (re)built manually using details in admin/containers. (See Tools.)


Full automated testing is only available for merge requests originating from branches of the main repository. GitLab CI pipelines created for forked repositories will include fewer jobs in the testing pipeline. Non-trivial merge requests may need to be issued from a branch in the gromacs project namespace in order to receive sufficient testing before acceptance.

Configuration files#

At the root of the repository, .gitlab-ci.yml defines the stages and some default parameters, then includes files from admin/gitlab-ci/ to define jobs to be executed in the pipelines.

Note that job names beginning with a period (.) are “hidden”. Such jobs are not directly eligible to run, but may be used as templates via the *extends* job property.
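For illustration, a minimal sketch of the pattern (job names here are hypothetical, not taken from the GROMACS configuration):

```yaml
# Hidden job: not eligible to run directly, but reusable as a template.
.build-template:
  stage: build
  script:
    - echo "shared build steps"

# Concrete job inheriting the template via `extends`.
example:build:
  extends: .build-template
```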

Job parameters#

Refer to the official GitLab documentation for complete coverage of GitLab CI job parameters, but note the following GROMACS-specific conventions.


job before_script#

Used by several of our templates to prepend shell commands to a job script parameter. Avoid using before_script directly, and be cautious that nested extends can override multiple before_script definitions.
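The caution above stems from how extends merges keys: mappings are merged, but arrays such as before_script are replaced wholesale. A sketch of the pitfall (hypothetical job names):

```yaml
.mixin-a:
  before_script:
    - echo "prepare A"

example:job:
  extends: .mixin-a
  before_script:          # replaces, rather than appends to, the inherited list
    - echo "prepare B"    # "prepare A" never runs
```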

job cache#

There is no global default, but jobs that build software will likely set cache. To explicitly unset cache directives, specify a job parameter of cache: {}. Refer to the GitLab docs for details. In particular, note the details of cache identity according to cache:key.
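As a sketch (hypothetical job names; CI_JOB_NAME and CI_COMMIT_REF_SLUG are standard GitLab CI variables):

```yaml
example:build:
  cache:
    key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"   # cache identity (cache:key)
    paths:
      - ccache/

example:lint:
  cache: {}    # explicitly unset any cache directive inherited from a template
```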


job image#

See Containers for more about the Docker images used for the CI pipelines. If a job depends on artifacts from previous jobs, be sure to use the same (or a compatible) image as the dependency!


job rules, when, only, except#

Job parameters for controlling the circumstances under which jobs run. (Some keywords may have different meanings when they occur as elements of other parameters, such as artifacts:when; this note is not intended to apply to those.) Instead of setting any of these directly in a job definition, try to use one of the pre-defined behaviors (defined as .rules:<something> in admin/gitlab-ci/rules.gitlab-ci.yml). Errors or unexpected behavior will occur if you specify more than one .rules:… template, or if you use these parameters in combination with a .rules:… template. To reduce errors and unexpected behavior, restrict usage of these controls to regular job definitions (do not use them in “hidden” or parent jobs). Note that rules is not compatible with the older only and except parameters; we have standardized on the (newer) rules mechanism.
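For example, a job might opt into a pre-defined behavior like this (the template name here is hypothetical; see admin/gitlab-ci/rules.gitlab-ci.yml for the real ones):

```yaml
example:test:
  extends:
    - .rules:merge-requests   # hypothetical .rules:<something> template
  script:
    - ./run-tests.sh
```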


job tags#

Jobs that can only run in the GROMACS GitLab CI Runner infrastructure should require the k8s-scilifelab tag. These include jobs that specify Kubernetes configuration variables or require special facilities, such as GPUs or MPI. Note that the tag controls which Runners are eligible to take a job. It does not affect whether the job is eligible for addition to a particular pipeline. Additional rules logic should be used to make sure that jobs with the k8s-scilifelab tag do not become eligible for pipelines launched outside of the GROMACS project environment. See, for instance, CI_PROJECT_NAMESPACE.
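A sketch of such gating (hypothetical job name; CI_PROJECT_NAMESPACE is a standard GitLab CI variable):

```yaml
example:gpu-test:
  tags:
    - k8s-scilifelab
  rules:
    # Only add the job when the pipeline runs in the gromacs project namespace,
    # where the required Runner infrastructure exists.
    - if: '$CI_PROJECT_NAMESPACE == "gromacs"'
      when: on_success
```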


job variables#

Many job definitions will add or override keys in variables. Refer to the GitLab documentation for details of the merging behavior. Refer to Updating regression tests for local usage.

Schedules and triggers#

Pipeline schedules are configured through the GitLab web interface. Scheduled pipelines may provide different variable definitions through the environment to jobs that run under the schedules condition.

Nightly scheduled pipelines run against main and release branches in the GROMACS repository.

Some of the rules defined in rules.gitlab-ci.yml restrict jobs to run only for scheduled pipelines, or only for specific schedules according to the variables defined for that schedule in the web interface. For example, the rule element if-weekly-then-on-success causes a job to run only if the schedule sets GMX_PIPELINE_SCHEDULE=weekly.
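Such a rule element might look roughly like the following (a sketch, not the exact definition from rules.gitlab-ci.yml):

```yaml
.rules:if-weekly-then-on-success:
  rules:
    - if: '$GMX_PIPELINE_SCHEDULE == "weekly"'
      when: on_success
```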

Running post-merge-acceptance pipelines#

By default, the GitLab CI for GROMACS runs a set of jobs only after a merge request has been accepted and the resulting commit is included in the target branch, if that branch is main or one of the release branches. Those jobs can be triggered manually by setting the POST_MERGE_ACCEPTANCE input variable (documented below) when executing a new pipeline through the GitLab web interface.


Global templates#

In addition to the templates in the main job definition files, common “mix-in” functionality and behavioral templates are defined in admin/gitlab-ci/global.gitlab-ci.yml. For readability, some parameters may be separated into their own files, named according to the parameter (e.g. rules.gitlab-ci.yml).

Jobs beginning with .use- provide mix-in behavior, such as boilerplate for jobs using a particular tool chain.

Jobs beginning with a parameter name allow parameters to be set in a single place for common job characteristics. If providing more than a default parameter value, the job name should be suffixed by a meaningful descriptor and documented within admin/gitlab-ci/global.gitlab-ci.yml.
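For illustration (hypothetical template names following the conventions above):

```yaml
# Mix-in behavior for a tool chain:
.use-clang:
  variables:
    CC: clang
    CXX: clang++

# Parameter template named for the parameter it sets, with a descriptive suffix:
.variables:docs-build:
  variables:
    EXTRA_PACKAGES: doxygen
```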

Job names#

Job names should

  1. Indicate the purpose of the job.

  2. Indicate relationships between multi-stage tasks.

  3. Distinguish jobs in the same stage.

  4. Distinguish job definitions throughout the configuration.

Jobs may be reassigned to different stages over time, so including the stage name in the job name is generally not helpful. If tags like “pre” and “post”, or “build” and “test”, are necessary to distinguish phases of, say, “webpage”, then such tags can be placed at the end of the job name.

Stylistically, it is helpful to use delimiters like : to distinguish the basic job name from qualifiers or details. Also consider grouping related jobs.

Updating regression tests#

Changes in GROMACS that require changes in the regression tests are notoriously hard to manage: a merge request that tests against the non-updated regression tests will necessarily fail, while updating the regression tests before the change is integrated into main might cause other merge request pipelines to fail.

The solution is to push a new regression-test branch or commit to GitLab. Then select that regression test branch with REGRESSIONTESTBRANCH, or the specific commit with REGRESSIONTESTCOMMIT, when running the pipeline that requires the regression test update. See below on how to set variables for specific pipelines.


Variables#

The GitLab CI framework, GitLab Runner, plugins, and our own scripts set and use several variables.

Default values are available from the .variables:default definition in admin/gitlab-ci/global.gitlab-ci.yml. Many of the mix-in / template jobs provide additional or overriding definitions. Other variables may be set when making final job definitions.

Variables may control the behavior of GitLab CI (those beginning with CI_), GitLab Runner and supporting infrastructure, or may be used by job definitions, or passed along to the environment of executed commands.

variables keys beginning with KUBERNETES_ relate to the GitLab Runner Kubernetes executor.

Other important variable keys are as follows.


GROMACS-specific directory in which to perform configuration, building, and testing. Usually job-dependent; needs to be the same for all tasks of dependent jobs.


CI_PROJECT_NAMESPACE#

Distinguishes pipelines created for repositories in the gromacs GitLab project space. May be used to pre-screen jobs to determine whether GROMACS GitLab infrastructure is available to the pipeline before the job is created.


Integer version number provided by toolchain mix-in for convenience and internal use.


CMAKE#

gromacs/ci-... Docker images built after October 2020 have several versions of CMake installed. The most recent version of CMake in the container will appear first in PATH. To allow individual jobs to use specific versions of CMake, please write job script sections using $CMAKE instead of cmake, and begin the script section with a line such as - CMAKE=${CMAKE:-$(which cmake)}. Specify a CMake version by setting the CMAKE variable to the full executable path for the CMake version you would like to use. See also Containers.
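Putting that convention together, a job might look like the following sketch (image name and CMake installation path are hypothetical):

```yaml
example:configure:
  image: ${CI_REGISTRY_IMAGE}/ci-ubuntu-20.04-gcc-9
  variables:
    CMAKE: /usr/local/cmake-3.18.4/bin/cmake   # pin a specific CMake version
  script:
    - CMAKE=${CMAKE:-$(which cmake)}           # fall back to first cmake in PATH
    - $CMAKE --version
```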


CMake command line options for a tool chain. A definition is provided by the mix-in toolchain definitions (e.g. .use-gcc8) to be appended to cmake calls in a job’s script.


Provide CMake command line arguments to define GROMACS MPI build options.


Read-only environment variable used to control the behaviour of the script that uploads artifact files to the ftp and web servers. Set to false to actually upload files. This is usually done through the pipeline submission script, but can be done manually as well through the web interface.


Read-only environment variable for CI scripts to check the library API version to expect from the build job artifacts. Initially, this variable is only defined in admin/gitlab-ci/api-client.matrix/gromacs-main.gitlab-ci.yml but could be moved to admin/gitlab-ci/global.gitlab-ci.yml if found to be of general utility.


Read-only environment variable that can be checked to see if a job is executing in a pipeline for preparing a tagged release. Can be set when launching pipelines via the GitLab web interface. For example, see rules mix-ins in admin/gitlab-ci/global.gitlab-ci.yml.


REGRESSIONTESTBRANCH#

Use this branch of the regressiontests rather than main, to allow merge requests that require updated regression tests to run with valid CI tests.


REGRESSIONTESTCOMMIT#

Use this commit of the regressiontests rather than the head of main, to allow merge requests that require updated regression tests to run with valid CI tests.


POST_MERGE_ACCEPTANCE#

Read-only environment variable that indicates that only jobs scheduled to run after a commit has been merged into its target branch should be executed. Can be set to run pipelines through the web interface or as schedules. For usage, see the rules mix-ins in admin/gitlab-ci/global.gitlab-ci.yml.


GMX_PIPELINE_SCHEDULE#

Read-only environment variable used exclusively by job rules. Rule elements of the form if-<value>-then-on-success check whether GMX_PIPELINE_SCHEDULE==value. Allowed values are determined by the rule elements available in admin/gitlab-ci/rules.gitlab-ci.yml, and include nightly and weekly to restrict jobs to only run in the corresponding schedules.

Setting variables#

Variables for individual pipelines are set in the GitLab interface under CI/CD > Pipelines. Then choose Run pipeline in the top right corner. Under Run for, the desired branch may be selected, and variables may be set in the fields below.


Containers#

GROMACS project infrastructure uses Docker containerization to isolate automated tasks. A number of images are maintained to provide a breadth of testing coverage.

Scripts and configuration files for building images are stored in the repository under admin/containers/. Images are (re)built manually by GROMACS project staff and pushed to Docker Hub and GitLab.

GitLab Container Registry#

CI Pipelines use a GitLab container registry instead of pulling from Docker Hub.

Project members with role Developer or higher privilege can push images to the container registry.


  1. Create a personal access token (docs) with write_registry and read_registry scopes. Save the hash!

  2. Authenticate from the command line with docker login -u <user name> -p <hash>

  3. docker push <imagename>

Refer to the scripts in admin/containers/ in the main branch for the set of images currently built.

Within pipelines, jobs specify a Docker image with the image property. For the image naming convention, see utility.image_name(). Images from the GitLab registry are easily accessible with the same identifier as above. For portability, CI environment variables may be preferable for parts of the image identifier. Example:

  image: ${CI_REGISTRY_IMAGE}/ci-<configuration>

For more granularity, consider the equivalent expressions ${CI_REGISTRY}/${CI_PROJECT_PATH} or ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}.


(in admin/)

Automatic release options.

usage: [-h] (--local | --server) [--token TOKEN]
                             [--ssh-key SSH_KEY] [--release | --no-release]
                             [--dry-run | --no-dry-run] [--branch BRANCH]
-h, --help#

show this help message and exit


--local#

Set when running in local (submit pipeline) mode.


--server#

Set when running in server (upload artefacts) mode.

--token <token>#

GitLab access token needed to launch pipelines

--ssh-key <ssh_key>#

Path to the SSH key needed to upload files to the server. Pass it in local mode so that it is available during the job.

--branch <branch>#

Branch to run pipeline for (default “main”)

Options for manually submitting pipelines.

usage: [-h] [--type TYPE] --token TOKEN
                             [--branch BRANCH]
                             [--regtest-branch REGTEST_BRANCH]
                             [--regtest-commit REGTEST_COMMIT]
-h, --help#

show this help message and exit

--type <type>#

What kind of pipeline to run (default is “POST_MERGE_ACCEPTANCE”)

--token <token>#

GitLab access token needed to launch pipelines

--branch <branch>#

Branch to run pipeline for (default “main”)

--regtest-branch <regtest_branch>#

Regressiontest branch to use for running regression tests (default none, which means fall back to main)

--regtest-commit <regtest_commit>#

Commit to use instead of the regtest-branch tip for running tests (default empty)


Uses NVIDIA’s HPC Container Maker to generate Dockerfiles using our scripted_gmx_docker_builds module. Refer to the contents of admin/ for the flags currently in use. Run the script to see the tagged images currently being produced.

(in admin/containers/)

GROMACS CI image creation script

usage: [-h] [--cmake [CMAKE ...]]
                                     [--gcc GCC | --llvm [LLVM] | --oneapi
                                     [ONEAPI] | --intel-llvm [INTEL_LLVM]]
                                     [--ubuntu [UBUNTU] | --centos [CENTOS]]
                                     [--cuda [CUDA]] [--mpi [MPI]]
                                     [--tsan [TSAN]] [--hipsycl [HIPSYCL]]
                                     [--rocm [ROCM]] [--intel-compute-runtime]
                                     [--oneapi-plugin-amd] [--clfft [CLFFT]]
                                     [--heffte [HEFFTE]]
                                     [--nvhpcsdk [NVHPCSDK]]
                                     [--doxygen [DOXYGEN]] [--cp2k [CP2K]]
                                     [--venvs [VENVS ...]]
                                     [--format {docker,singularity}]

Named Arguments#


--cmake#

Selection of CMake versions to provide to the base image. (default: ['3.18.4', '3.21.2', '3.24.0'])


--gcc#

Select GNU compiler tool chain. (default: 9) Some checking is implemented to avoid incompatible combinations.


--llvm#

Select LLVM compiler tool chain. Some checking is implemented to avoid incompatible combinations.


--oneapi#

Select Intel oneAPI package version.


--intel-llvm#

Select Intel LLVM release (GitHub tag).


--ubuntu#

Select Ubuntu Linux base image. (default: "20.04")


--centos#

Select CentOS Linux base image.


--cuda#

Select a CUDA version for a base Linux image from NVIDIA.


--mpi#

Enable MPI (default disabled) and optionally select the distribution (default: openmpi).


--tsan#

Build special compiler versions with TSAN OpenMP support.


--hipsycl#

Select hipSYCL repository tag/commit/branch.


--rocm#

Select AMD compute engine version.


--intel-compute-runtime#

Include Intel Compute Runtime.


Install Codeplay oneAPI NVIDIA plugin.


--oneapi-plugin-amd#

Install Codeplay oneAPI AMD plugin.


--clfft#

Add external clFFT libraries to the build image.


--heffte#

Select heffte repository tag/commit/branch.


--nvhpcsdk#

Select NVIDIA HPC SDK version.


--doxygen#

Add doxygen environment for documentation builds. Also adds other requirements needed for final docs images.


--cp2k#

Add build environment for CP2K QM/MM support.


--venvs#

List of Python versions ("major.minor.patch") for which to install venvs. (default: ['3.7.13', '3.10.5'])


--format#

Possible choices: docker, singularity

Container specification format (default: “docker”)

Supporting modules in admin/containers#

Building block based Dockerfile generation for CI testing images.

Generates the set of Docker images used for running GROMACS CI on GitLab. The images are prepared according to a selection of build configuration targets intended to cover a broad enough scope of different possible systems, allowing us to check compiler types and versions, as well as libraries used for accelerators and parallel communication systems. Each combination is described as an entry in the build_configs dictionary, with the script analysing the logic and adding build stages as needed.

Based on the example script provided by the NVIDIA HPCCM repository.


NVIDIA HPC Container Maker



$ python3 --help
$ python3 --format docker > Dockerfile && docker build .
$ python3 | docker build -


scripted_gmx_docker_builds.add_base_stage(name: str, input_args, output_stages: MutableMapping[str, hpccm.Stage])#

Establish dependencies that are shared by multiple parallel stages.

scripted_gmx_docker_builds.add_documentation_dependencies(input_args, output_stages: MutableMapping[str, hpccm.Stage])#

Add appropriate layers according to doxygen input arguments.

scripted_gmx_docker_builds.add_intel_llvm_compiler_build_stage(input_args, output_stages: Mapping[str, hpccm.Stage])#

Isolate the Intel LLVM (open-source oneAPI) preparation stage.

This stage is isolated so that its installed components are minimized in the final image (chiefly /opt/intel) and its environment setup script can be sourced. This also helps with rebuild time and final image size.

scripted_gmx_docker_builds.add_oneapi_compiler_build_stage(input_args, output_stages: Mapping[str, hpccm.Stage])#

Isolate the oneAPI preparation stage.

This stage is isolated so that its installed components are minimized in the final image (chiefly /opt/intel) and its environment setup script can be sourced. This also helps with rebuild time and final image size.

scripted_gmx_docker_builds.add_python_stages(input_args: Namespace, *, base: str, output_stages: MutableMapping[str, hpccm.Stage])#

Add the stage(s) necessary for the requested venvs.

One intermediate build stage is created for each venv (see --venvs option).

Each stage partially populates Python installations and venvs in the home directory. The home directory is collected by the ‘pyenv’ stage for use by the main build stage.

scripted_gmx_docker_builds.add_tsan_compiler_build_stage(input_args, output_stages: Mapping[str, hpccm.Stage])#

Isolate the expensive TSAN preparation stage.

This is a very expensive stage, but has few and disjoint dependencies, and its output is easily compartmentalized (/usr/local) so we can isolate this build stage to maximize build cache hits and reduce rebuild time, bookkeeping, and final image size.

scripted_gmx_docker_builds.base_image_tag(args) str#

Generate image for hpccm.baseimage().

scripted_gmx_docker_builds.build_stages(args) Iterable[hpccm.Stage]#

Define and sequence the stages for the recipe corresponding to args.

scripted_gmx_docker_builds.get_cmake_stages(*, input_args: Namespace, base: str)#

Get the stage(s) necessary for the requested CMake versions.

One (intermediate) build stage is created for each CMake version, based on the base stage. See --cmake option.

Each stage uses the version number to determine an installation location:


The resulting path is easily copied into the main stage.


Returns a dict of isolated CMake installation stages, with keys of the form cmake-{version}.

scripted_gmx_docker_builds.hpccm_distro_name(args) str#

Generate _distro for hpccm.baseimage().

Convert the linux distribution variables into something that hpccm understands.

The same format is used by the lower level hpccm.config.set_linux_distro().

scripted_gmx_docker_builds.prepare_venv(version: Version) Sequence[str]#

Get shell commands to set up the venv for the requested Python version.


Return a shell-escaped string from split_command.

Copied from Python 3.8. Can be replaced with shlex.join once we no longer need to support Python 3.7.
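A minimal backport of this kind looks like the following sketch (the containing function name in the module is not shown above, so `join` here is illustrative):

```python
import shlex


def join(split_command):
    """Return a shell-escaped string from split_command.

    Minimal backport of shlex.join (added in Python 3.8).
    """
    return ' '.join(shlex.quote(arg) for arg in split_command)


print(join(['rm', '-rf', 'my dir']))  # -> rm -rf 'my dir'
```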

A utility module to help manage the matrix of configurations for CI testing and build containers.

When called as a standalone script, prints a Docker image name based on the command line arguments. The Docker image name is of the form used in the GROMACS CI pipeline jobs.


$ python3 -m utility --llvm --doxygen


As a module, provides importable argument parser and docker image name generator.

Note that the parser is created with add_help=False to make it friendly as a parent parser, but this means that you must derive a new parser from it if you want to see the full generated command line help.


import argparse
import utility

# utility.parser does not support `-h` or `--help`.
parser = argparse.ArgumentParser(
    description='GROMACS CI image creation script',
    parents=[utility.parser])
# ArgumentParser(add_help=True) is the default, so this parser supports `-h` and `--help`.
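The parent-parser pattern described above can be demonstrated with a self-contained sketch, where `parent` stands in for utility.parser and the `--llvm` argument is hypothetical:

```python
import argparse

# Parent parser: add_help=False so it can be composed into other parsers.
parent = argparse.ArgumentParser(add_help=False)
parent.add_argument('--llvm', nargs='?', const='7', help='Select LLVM tool chain.')

# Derived parser: keeps the default add_help=True, so it answers -h/--help.
parser = argparse.ArgumentParser(description='demo', parents=[parent])
args = parser.parse_args(['--llvm', '13'])
print(args.llvm)  # -> 13
```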


utility.image_name(configuration: Namespace) str#

Generate docker image name.

Image names have the form ci-<slug>, where the configuration slug has the form:

<distro>-<version>-<compiler>-<major version>[-<gpusdk>-<version>][-<use case>]

This function also applies an appropriate Docker image repository prefix.


configuration – Docker image configuration as described by the parsed arguments.
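Purely as an illustration of the slug convention (not the actual utility.image_name() implementation; the function name and signature here are hypothetical):

```python
def image_slug(distro, distro_version, compiler, compiler_version,
               gpusdk=None, use_case=None):
    """Assemble an image name of the form ci-<slug>.

    Illustrative sketch only. gpusdk is a pre-joined "<gpusdk>-<version>"
    string here; the real logic lives in utility.image_name().
    """
    parts = [distro, distro_version, compiler, str(compiler_version)]
    if gpusdk:
        parts.append(gpusdk)
    if use_case:
        parts.append(use_case)
    return 'ci-' + '-'.join(parts)


print(image_slug('ubuntu', '20.04', 'gcc', 9, gpusdk='cuda-11.0', use_case='docs'))
# -> ci-ubuntu-20.04-gcc-9-cuda-11.0-docs
```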

utility.parser = ArgumentParser(prog='sphinx-build', usage=None, description='GROMACS CI image slug options.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)#

A parent parser for tools referencing image parameters.

This argparse parser is defined for convenience and may be used to partially initialize parsers for tools.


Do not modify this parser.

Instead, inherit from it with the parents argument to argparse.ArgumentParser.