GitLab CI Pipeline Execution#
The repository contains Dockerfiles and GitLab Runner configuration files to support automated testing and documentation builds. General information on configuring GitLab CI pipelines can be found in the official GitLab documentation.
The GitLab CI configuration entry point is the .gitlab-ci.yml file at the root of the source tree. Configuration templates are found in the files in the admin/ci-templates/ directory.
Docker images used by GitLab Runner are available in our GitLab Container Registry. (See Containers.) Images are (re)built manually using details in admin/containers. (See Tools.)
Note
Full automated testing is only available for merge requests originating from branches of the main https://gitlab.com/gromacs/gromacs repository. GitLab CI pipelines created for forked repositories include fewer jobs in the testing pipeline. Non-trivial merge requests may need to be issued from a branch in the gromacs project namespace in order to receive sufficient testing before acceptance.
Configuration files#
At the root of the repository, .gitlab-ci.yml defines the stages and some default parameters, then includes files from admin/gitlab-ci/ to define jobs to be executed in the pipelines.
Note that job names beginning with a period (.) are "hidden". Such jobs are not directly eligible to run, but may be used as templates via the extends job property.
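A minimal sketch of the pattern (the job, template, and image names here are hypothetical, not taken from the GROMACS configuration):

```yaml
# ".base-build" begins with a period, so it is hidden and never runs directly.
.base-build:
  stage: build
  script:
    - mkdir -p build
    - cd build
    - cmake ..
    - make

# A concrete job inherits the hidden job's parameters via "extends".
gcc-build:
  extends: .base-build
  image: ${CI_REGISTRY_IMAGE}/ci-ubuntu-20.04-gcc-9
```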
Job parameters#
Refer to https://docs.gitlab.com/ee/ci/yaml for complete documentation on GitLab CI job parameters, but note the following GROMACS-specific conventions.
- before_script#
Used by several of our templates to prepend shell commands to a job's script parameter. Avoid using before_script directly, and be cautious about nested extends overriding multiple before_script definitions.
- cache#
There is no global default, but jobs that build software will likely set cache. To explicitly unset cache directives, specify a job parameter of cache: {}. Refer to GitLab docs for details. In particular, note the details of cache identity according to cache:key.
- image#
See Containers for more about the Docker images used for the CI pipelines. If a job depends on artifacts from previous jobs, be sure to use the same (or a compatible) image as the dependency!
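As a sketch (job names hypothetical), a test job that consumes artifacts might unset the cache and pin the same image as the build job it depends on:

```yaml
regressiontest:
  # Must match (or be compatible with) the image used by the "build" dependency.
  image: ${CI_REGISTRY_IMAGE}/ci-<configuration>
  # Explicitly unset any cache directives inherited from templates.
  cache: {}
  needs:
    - job: build
      artifacts: true
```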
- rules#
- only#
- except#
- when#
Job parameters for controlling the circumstances under which jobs run. (Some keywords may have different meanings when occurring as elements of other parameters, such as artifacts:when, to which this note is not intended to apply.) Rules in GitLab are special: the first matching rule causes a job to trigger, and all remaining rules are ignored. To create rules that skip jobs, write rules whose when clause is never. Errors or unexpected behavior will occur if you specify more than one .rules:... template, or if you use these parameters in combination with a .rules:... template; it is thus NOT possible to combine rules through inheritance with the extends tag. Instead, to combine sequences of rules, we recommend using a plain rules tag whose entries reference rule lists with the !reference tag, e.g. !reference [.rules:<something>, rules]. Each such reference can be used as an individual rule in the list. To reduce errors and unexpected behavior, restrict usage of these controls to regular job definitions (don't use them in "hidden" or parent jobs). Note that rules is not compatible with the older only and except parameters; we have standardized on the (newer) rules mechanism.
We no longer use any special tags for general (meaning CPU-only) GROMACS CI jobs, to make sure at least the CPU jobs can still run even if somebody clones the repository. For testing you can still add a default tag at the start of the top-level .gitlab-ci.yml, but this should only be used to check that a specific runner works; for production we handle it in GitLab instead by selecting which runners accept untagged jobs. By default we currently run those jobs on the infrastructure in Stockholm for the GROMACS project, but please design all CPU jobs so they will work on the shared runners too.
- variables#
Many job definitions will add or override keys in variables. Refer to GitLab documentation for details of the merging behavior. Refer to Updating regression tests for local usage.
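The recommended !reference pattern for combining rule sequences might look like this sketch (job and template names hypothetical):

```yaml
my-job:
  rules:
    # Each !reference expands into the rule entries of the named hidden template,
    # producing a single flat rules list without using "extends".
    - !reference [.rules:merge-requests, rules]
    - !reference [.rules:not-for-release, rules]
```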
Schedules and triggers#
Pipeline schedules are configured through the GitLab web interface. Scheduled pipelines may provide different variable definitions through the environment to jobs that run under the schedules condition.
Nightly scheduled pipelines run against main and release branches in the GROMACS repository.
Some of the rules defined in rules.gitlab-ci.yml restrict jobs to run only for scheduled pipelines, or only for specific schedules according to the variables defined for that schedule in the web interface. For example, the rule element if-weekly-then-on-success causes a job to run only if the schedule sets GMX_PIPELINE_SCHEDULE=weekly.
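Such a rule element presumably follows this pattern (a sketch; the authoritative definition is in admin/gitlab-ci/rules.gitlab-ci.yml):

```yaml
.rules:if-weekly-then-on-success:
  rules:
    # Run the job only when the schedule defines GMX_PIPELINE_SCHEDULE=weekly.
    - if: '$GMX_PIPELINE_SCHEDULE == "weekly"'
      when: on_success
```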
Running post-merge-acceptance pipelines
By default, the GitLab CI for GROMACS runs a set of jobs only after an MR has been accepted and the resulting commit is included in the target branch, if that branch is main or one of the release branches. Those jobs can be triggered manually, using the POST_MERGE_ACCEPTANCE input variable documented below, when executing a new pipeline through the GitLab web interface.
See also trigger-post-merge.py.
Global templates#
In addition to the templates in the main job definition files, common "mix-in" functionality and behavioral templates are defined in admin/gitlab-ci/global.gitlab-ci.yml. For readability, some parameters may be separated into their own files, named according to the parameter (e.g. rules.gitlab-ci.yml).
Jobs beginning with .use- provide mix-in behavior, such as boilerplate for jobs using a particular tool chain. Jobs beginning with a parameter name allow parameters to be set in a single place for common job characteristics. If providing more than a default parameter value, the job name should be suffixed by a meaningful descriptor and documented within admin/gitlab-ci/global.gitlab-ci.yml.
Job names#
Job names should
- Indicate the purpose of the job.
- Indicate relationships between multi-stage tasks.
- Distinguish jobs in the same stage.
- Distinguish job definitions throughout the configuration.
Jobs may be reassigned to different stages over time, so including the stage name in the job name is generally not helpful. If tags like "pre" and "post", or "build" and "test", are necessary to distinguish phases of, say, "webpage", then such tags can be appended at the end of the job name.
Stylistically, it is helpful to use delimiters like : to distinguish the basic job name from qualifiers or details. Also consider grouping jobs.
Updating regression tests#
Changes in GROMACS that require changes in the regression tests are notoriously hard to manage: a merge request that tests against the non-updated version of the regression tests will necessarily fail, while updating the regression tests before the current change is integrated into main might cause other merge request pipelines to fail.
The solution is to upload a new regression-test branch or commit to GitLab. Then select that regression-test branch with REGRESSIONTESTBRANCH, or the specific commit with REGRESSIONTESTCOMMIT, when running the pipeline that requires the regression-test update. See below for how to set variables for specific pipelines.
Variables#
The GitLab CI framework, GitLab Runner, plugins, and our own scripts set and use several variables.
Default values are available from the top-level variables definition in global.gitlab-ci.yml. Many of the mix-in / template jobs provide additional or overriding definitions. Other variables may be set when making final job definitions.
Variables may control the behavior of GitLab CI (those beginning with CI_), GitLab Runner and supporting infrastructure, or may be used by job definitions, or passed along to the environment of executed commands.
variables keys beginning with KUBERNETES_ relate to the GitLab Runner Kubernetes executor.
Other important variable keys are as follows.
- BUILD_DIR#
GROMACS-specific directory in which to perform configuration, building, and testing. Usually job dependent; needs to be the same for all tasks of dependent jobs.
- CI_PROJECT_NAMESPACE#
Distinguishes pipelines created for repositories in the gromacs GitLab project space. May be used to pre-screen jobs to determine whether GROMACS GitLab infrastructure is available to the pipeline before the job is created.
- COMPILER_MAJOR_VERSION#
Integer version number provided by toolchain mix-in for convenience and internal use.
- CMAKE#
gromacs/ci-... Docker images built after October 2020 have several versions of CMake installed. The most recent version of CMake in the container will appear first in PATH. To allow individual jobs to use specific versions of CMake, please write job script sections using $CMAKE instead of cmake, and begin the script section with a line such as - CMAKE=${CMAKE:-$(which cmake)}. Specify a CMake version by setting the CMAKE variable to the full executable path for the CMake version you would like to use. See also Containers.
- CMAKE_COMPILER_SCRIPT#
CMake command line options for a tool chain. A definition is provided by the mix-in toolchain definitions (e.g. .use-gcc8) to be appended to cmake calls in a job's script.
- CMAKE_MPI_OPTIONS#
Provide CMake command line arguments to define GROMACS MPI build options.
- DRY_RUN#
Read-only environment variable used to control the behavior of the script that uploads artifact files to the FTP and web servers. Set to false to actually upload files. This is usually done through the pipeline submission script, but can be done manually as well through the web interface.
- GROMACS_MAJOR_VERSION#
Read-only environment variable for CI scripts to check the library API version to expect from the build job artifacts. Initially, this variable is only defined in admin/gitlab-ci/api-client.matrix/gromacs-main.gitlab-ci.yml, but could be moved to admin/gitlab-ci/global.gitlab-ci.yml if found to be of general utility.
- GROMACS_RELEASE#
Read-only environment variable that can be checked to see if a job is executing in a pipeline for preparing a tagged release. Can be set when launching pipelines via the GitLab web interface. For example, see rules mix-ins in admin/gitlab-ci/global.gitlab-ci.yml.
- REGRESSIONTESTBRANCH#
Use this branch of the regression tests rather than main, to allow merge requests that require updated regression tests to pass CI.
- REGRESSIONTESTCOMMIT#
Use this commit of the regression tests rather than the head of main, to allow merge requests that require updated regression tests to pass CI.
- POST_MERGE_ACCEPTANCE#
Read-only environment variable that indicates that only jobs scheduled to run after a commit has been merged into its target branch should be executed. Can be set to run pipelines through the web interface or as schedules. See the rules mix-ins in admin/gitlab-ci/global.gitlab-ci.yml.
- GMX_PIPELINE_SCHEDULE#
Read-only environment variable used exclusively by job rules. Rule elements of the form if-<value>-then-on-success check whether GMX_PIPELINE_SCHEDULE==value. Allowed values are determined by the rule elements available in admin/gitlab-ci/rules.gitlab-ci.yml, and include nightly and weekly to restrict jobs to run only in the corresponding schedules.
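The CMAKE convention described above can be sketched in a job definition as follows (job name hypothetical):

```yaml
configure-build:
  script:
    # Fall back to the first cmake found on PATH unless CMAKE was already set.
    - CMAKE=${CMAKE:-$(which cmake)}
    - $CMAKE --version
    # Toolchain and MPI options are supplied by mix-in definitions.
    - $CMAKE .. $CMAKE_COMPILER_SCRIPT $CMAKE_MPI_OPTIONS
```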
Setting variables#
Variables for individual pipelines are set in the GitLab interface under CI/CD > Pipelines, then choosing Run pipeline in the top right corner. Under Run for, the desired branch may be selected, and variables may be set in the fields below.
Using GPUs in GitLab Runner#
Previously, GROMACS used a patched local version of GitLab Runner to which we had added support for Kubernetes extended resources. However, GitLab has unfortunately not shown interest in merging these changes, and as the runner has evolved it has become difficult to keep up. In the future it might be possible to select GPUs directly in the job configuration, but for now we use the ability to specify them in each GitLab Runner configuration, and thus have separate runners for CPU-only jobs as well as for single or dual GPU devices from NVIDIA, AMD, and Intel.
To enable both us and other users to also use the shared GitLab runners, the top-level configuration .gitlab-ci.yml now contains a few variables where you can select what tags to use for GitLab runners to get single or dual devices from each vendor. There are also variables that allow you to set the largest number of devices you have (on single nodes) in these runners; if any tests cannot be run because you do not have the right hardware, we will simply skip those tests.
Containers#
GROMACS project infrastructure uses Docker containerization to isolate automated tasks. A number of images are maintained to provide a breadth of testing coverage.
Scripts and configuration files for building images are stored in the repository
under admin/containers/
.
Images are (re)built manually by GROMACS project staff and pushed to DockerHub and GitLab. See https://hub.docker.com/u/gromacs and https://gitlab.com/gromacs/gromacs/container_registry.
GitLab Container Registry#
CI Pipelines use a GitLab container registry instead of pulling from Docker Hub.
Project members with the Developer role or higher privilege can push images to the container registry.
Steps:
1. Create a personal access token (docs) with write_registry and read_registry scopes. Save the hash!
2. Authenticate from the command line with docker login registry.gitlab.com -u <user name> -p <hash>
3. docker push registry.gitlab.com/gromacs/gromacs/<imagename>
Refer to buildall.sh
in the main
branch for the set of images
currently built.
Within pipelines, jobs specify a Docker image with the image property.
For image naming convention, see utility.image_name()
.
Images from the GitLab registry
are easily accessible with the same identifier as above.
For portability, CI environment variables may be preferable for parts of the image identifier.
Example:
some_job:
image: ${CI_REGISTRY_IMAGE}/ci-<configuration>
...
For more granularity,
consider equivalent expressions ${CI_REGISTRY}/${CI_PROJECT_PATH}
or ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
Ref: https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
Tools#
(in admin/)
make-release-build.py#
Automatic release options.
usage: make-release-build.py [-h] (--local | --server) [--token TOKEN]
[--ssh-key SSH_KEY] [--release | --no-release]
[--dry-run | --no-dry-run] [--branch BRANCH]
- -h, --help#
show this help message and exit
- --local#
Set when running in local (submit pipeline) mode.
- --server#
Set when running in server (upload artefacts) mode.
- --token <token>#
GitLab access token needed to launch pipelines
- --ssh-key <ssh_key>#
Path to SSH key needed to upload things to server. Pass in local mode to have it during the job
- --release#
- --no-release#
- --dry-run#
- --no-dry-run#
- --branch <branch>#
Branch to run pipeline for (default “main”)
trigger-post-merge.py#
Options for manually submitting pipelines.
usage: trigger-post-merge.py [-h] [--type TYPE] --token TOKEN
[--branch BRANCH]
[--regtest-branch REGTEST_BRANCH]
[--regtest-commit REGTEST_COMMIT]
- -h, --help#
show this help message and exit
- --type <type>#
What kind of pipeline to run (default is “POST_MERGE_ACCEPTANCE”)
- --token <token>#
GitLab access token needed to launch pipelines
- --branch <branch>#
Branch to run pipeline for (default “main”)
- --regtest-branch <regtest_branch>#
Regressiontest branch to use to for running regression tests (default none, which means fall back to main)
- --regtest-commit <regtest_commit>#
Commit to use instead of the regtest-branch tip for running tests (default empty)
admin/containers/buildall.sh#
Uses NVIDIA's HPC Container Maker to generate Dockerfiles using our scripted_gmx_docker_builds module.
Refer to the contents of admin/containers/buildall.sh for the flags currently in use.
Run the script to see the tagged images currently being produced.
scripted_gmx_docker_builds.py#
(in admin/containers/
)
GROMACS CI image creation script
usage: scripted_gmx_docker_builds.py [-h] [--cmake [CMAKE ...]]
[--gcc GCC | --llvm [LLVM] | --oneapi
[ONEAPI] | --intel-llvm [INTEL_LLVM]]
[--ubuntu [UBUNTU] | --centos [CENTOS]]
[--cuda [CUDA]] [--mpi [MPI]]
[--tsan [TSAN]] [--hipsycl [HIPSYCL]]
[--rocm [ROCM]] [--intel-compute-runtime]
[--oneapi-plugin-nvidia]
[--oneapi-plugin-amd] [--clfft [CLFFT]]
[--heffte [HEFFTE]]
[--nvhpcsdk [NVHPCSDK]]
[--doxygen [DOXYGEN]] [--cp2k [CP2K]]
[--venvs [VENVS ...]]
[--format {docker,singularity}]
Named Arguments#
- --cmake
Selection of CMake version to provide to base image. (default: [‘3.18.4’, ‘3.21.2’, ‘3.24.0’])
- --gcc
Select GNU compiler tool chain. (default: 9) Some checking is implemented to avoid incompatible combinations
- --llvm
Select LLVM compiler tool chain. Some checking is implemented to avoid incompatible combinations
- --oneapi
Select Intel oneAPI package version.
- --intel-llvm
Select Intel LLVM release (GitHub tag).
- --ubuntu
Select Ubuntu Linux base image. (default: “20.04”)
- --centos
Select Centos Linux base image.
- --cuda
Select a CUDA version for a base Linux image from NVIDIA.
- --mpi
Enable MPI (default disabled) and optionally select distribution (default: openmpi)
- --tsan
Build special compiler versions with TSAN OpenMP support
- --hipsycl
Select hipSYCL repository tag/commit/branch.
- --rocm
Select AMD compute engine version.
- --intel-compute-runtime
Include Intel Compute Runtime.
- --oneapi-plugin-nvidia
Install Codeplay oneAPI NVIDIA plugin.
- --oneapi-plugin-amd
Install Codeplay oneAPI AMD plugin.
- --clfft
Add external clFFT libraries to the build image
- --heffte
Select heffte repository tag/commit/branch.
- --nvhpcsdk
Select NVIDIA HPC SDK version.
- --doxygen
Add doxygen environment for documentation builds. Also adds other requirements needed for final docs images.
- --cp2k
Add build environment for CP2K QM/MM support
- --venvs
List of Python versions (“major.minor.patch”) for which to install venvs. (default: [‘3.7.13’, ‘3.10.5’])
- --format
Possible choices: docker, singularity
Container specification format (default: “docker”)
Supporting modules in admin/containers#
scripted_gmx_docker_builds.py#
Building block based Dockerfile generation for CI testing images.
Generates the set of Docker images used for running GROMACS CI on GitLab. The images are prepared according to a selection of build configuration targets intended to cover a broad enough scope of different possible systems, allowing us to check compiler types and versions, as well as libraries used for accelerators and parallel communication systems. Each combination is described as an entry in the build_configs dictionary, with the script analysing the logic and adding build stages as needed.
Based on the example script provided by the NVIDIA HPCCM repository.
- Reference:
- Authors:
Paul Bauer <paul.bauer.q@gmail.com>
Eric Irrgang <ericirrgang@gmail.com>
Joe Jordan <e.jjordan12@gmail.com>
Mark Abraham <mark.j.abraham@gmail.com>
Gaurav Garg <gaugarg@nvidia.com>
Usage:
$ python3 scripted_gmx_docker_builds.py --help
$ python3 scripted_gmx_docker_builds.py --format docker > Dockerfile && docker build .
$ python3 scripted_gmx_docker_builds.py | docker build -
See also
buildall.sh
- scripted_gmx_docker_builds.add_base_stage(name: str, input_args, output_stages: MutableMapping[str, hpccm.Stage])#
Establish dependencies that are shared by multiple parallel stages.
- scripted_gmx_docker_builds.add_documentation_dependencies(input_args, output_stages: MutableMapping[str, hpccm.Stage])#
Add appropriate layers according to doxygen input arguments.
- scripted_gmx_docker_builds.add_intel_llvm_compiler_build_stage(input_args, output_stages: Mapping[str, hpccm.Stage])#
Isolate the Intel LLVM (open-source oneAPI) preparation stage.
This stage is isolated so that its installed components are minimized in the final image (chiefly /opt/intel) and its environment setup script can be sourced. This also helps with rebuild time and final image size.
- scripted_gmx_docker_builds.add_oneapi_compiler_build_stage(input_args, output_stages: Mapping[str, hpccm.Stage])#
Isolate the oneAPI preparation stage.
This stage is isolated so that its installed components are minimized in the final image (chiefly /opt/intel) and its environment setup script can be sourced. This also helps with rebuild time and final image size.
- scripted_gmx_docker_builds.add_python_stages(input_args: Namespace, *, base: str, output_stages: MutableMapping[str, hpccm.Stage])#
Add the stage(s) necessary for the requested venvs.
One intermediate build stage is created for each venv (see –venv option).
Each stage partially populates Python installations and venvs in the home directory. The home directory is collected by the ‘pyenv’ stage for use by the main build stage.
- scripted_gmx_docker_builds.add_tsan_compiler_build_stage(input_args, output_stages: Mapping[str, hpccm.Stage])#
Isolate the expensive TSAN preparation stage.
This is a very expensive stage, but has few and disjoint dependencies, and its output is easily compartmentalized (/usr/local) so we can isolate this build stage to maximize build cache hits and reduce rebuild time, bookkeeping, and final image size.
- scripted_gmx_docker_builds.build_stages(args) -> Iterable[hpccm.Stage]#
Define and sequence the stages for the recipe corresponding to args.
- scripted_gmx_docker_builds.get_cmake_stages(*, input_args: Namespace, base: str)#
Get the stage(s) necessary for the requested CMake versions.
One (intermediate) build stage is created for each CMake version, based on the base stage. See the --cmake option.
Each stage uses the version number to determine an installation location: /usr/local/cmake-{version}. The resulting path is easily copied into the main stage.
- Returns:
dict of isolated CMake installation stages, with keys of the form cmake-{version}
- scripted_gmx_docker_builds.hpccm_distro_name(args) -> str#
Generate _distro for hpccm.baseimage().
Convert the linux distribution variables into something that hpccm understands.
The same format is used by the lower level hpccm.config.set_linux_distro().
- scripted_gmx_docker_builds.prepare_venv(version: Version) -> Sequence[str]#
Get shell commands to set up the venv for the requested Python version.
- scripted_gmx_docker_builds.shlex_join(split_command)#
Return a shell-escaped string from split_command.
Copied from Python 3.8. Can be replaced with shlex.join once we don’t need to support Python 3.7.
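The compatibility helper presumably mirrors the Python 3.8 implementation, which can be sketched as:

```python
import shlex


def shlex_join(split_command):
    """Return a shell-escaped string from split_command (stand-in for shlex.join on Python < 3.8)."""
    # Quote each argument individually, then join with spaces.
    return ' '.join(shlex.quote(arg) for arg in split_command)


print(shlex_join(['echo', 'hello world']))  # echo 'hello world'
```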
utility.py#
A utility module to help manage the matrix of configurations for CI testing and build containers.
When called as a stand alone script, prints a Docker image name based on the command line arguments. The Docker image name is of the form used in the GROMACS CI pipeline jobs.
Example:
$ python3 -m utility --llvm --doxygen
gromacs/ci-ubuntu-20.04-llvm-9-docs
See also
buildall.sh
As a module, provides importable argument parser and docker image name generator.
Note that the parser is created with add_help=False
to make it friendly as a
parent parser, but this means that you must derive a new parser from it if you
want to see the full generated command line help.
Example:
import argparse

import utility

# utility.parser does not support `-h` or `--help`
parser = argparse.ArgumentParser(
    description='GROMACS CI image creation script',
    parents=[utility.parser])
# ArgumentParser(add_help=True) is the default, so parser supports `-h` and `--help`
See also
scripted_gmx_docker_builds.py
- Authors:
Paul Bauer <paul.bauer.q@gmail.com>
Eric Irrgang <ericirrgang@gmail.com>
Joe Jordan <e.jjordan12@gmail.com>
Mark Abraham <mark.j.abraham@gmail.com>
Gaurav Garg <gaugarg@nvidia.com>
- utility.image_name(configuration: Namespace) -> str#
Generate docker image name.
Image names have the form ci-<slug>, where the configuration slug has the form:
<distro>-<version>-<compiler>-<major version>[-<gpusdk>-<version>][-<use case>]
This function also applies an appropriate Docker image repository prefix.
- Parameters:
configuration – Docker image configuration as described by the parsed arguments.
- utility.parser = ArgumentParser(prog='sphinx-build', usage=None, description='GROMACS CI image slug options.', formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False)#
A parent parser for tools referencing image parameters.
This argparse parser is defined for convenience and may be used to partially initialize parsers for tools.
Warning
Do not modify this parser.
Instead, inherit from it with the parents argument to argparse.ArgumentParser.