GROMACS programs may be influenced by the use of
environment variables. First of all, the variables set in the GMXRC file are essential for running and
compiling GROMACS. Some other useful environment variables are
listed in the following sections. Most environment variables function
by being set in your shell to any non-NULL value. Specific
requirements are described below if other values need to be set. You
should consult the documentation for your shell for instructions on
how to set environment variables in the current shell, or in configuration
files for future shells. Note that requirements for exporting
environment variables to jobs run under batch control systems vary and
you should consult your local documentation for details.
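For example, in a Bourne-compatible shell such as bash, a variable can be set and exported so that child processes (such as gmx mdrun) inherit it; GMX_NO_QUOTES, described below, is used here only as an illustration:

```shell
# Export for the current shell session and all programs launched from it;
# most GROMACS variables only need to be set to any non-NULL value.
export GMX_NO_QUOTES=1

# One-shot form: set the variable for a single command only, e.g.
#   GMX_NO_QUOTES=1 gmx mdrun -deffnm topol
```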
Applies to computational electrophysiology setups only (see reference manual). The initial structure is dumped to a pdb file, which allows checking whether multimeric channels have the correct PBC representation.
Disables GPU timings in the log file for OpenCL.
Enables GPU timings in the log file for CUDA and SYCL. Note that CUDA timings are incorrect with multiple streams, as happens with domain decomposition or with both non-bondeds and PME on the GPU (this is also the main reason why they are not turned on by default).
the size of the buffer for file I/O. When set to 0, all file I/O will be unbuffered and therefore very slow. This can be handy for debugging purposes, because it ensures that all files are always totally up-to-date.
GROMACS automatically backs up old copies of files when trying to write a new file of the same name, and this variable controls the maximum number of backups that will be made (default 99). If set to 0, GROMACS fails to run if any output file already exists; if set to -1, it overwrites any output file without making a backup.
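Assuming the variable in question is GMX_MAXBACKUP (its name in current GROMACS releases), the three behaviors can be selected like this:

```shell
export GMX_MAXBACKUP=150   # keep up to 150 numbered backups per file
export GMX_MAXBACKUP=0     # refuse to run if an output file already exists
export GMX_MAXBACKUP=-1    # overwrite existing output files, no backups
```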
if this is explicitly set, no cool quotes will be printed at the end of a program.
use long float format when printing decimal values.
prevent dumping of step files when the system blows up, for example during failure of constraint algorithms.
dump to a pdb file all configurations that have an interaction energy less than the value set in this environment variable.
Defaults to 1, which prints frame count e.g. when reading trajectory files. Set to 0 for quiet operation.
GMX_VIEW_XVG, GMX_VIEW_EPS and GMX_VIEW_PDB, commands used to automatically view xvg, eps and pdb file types, respectively; they default to xmgrace, ghostview and rasmol. Set to empty to disable automatic viewing of a particular file type. The command will be forked off and run in the background at the same priority as the GROMACS tool (which might not be what you want). Be careful not to use a command which blocks the terminal (e.g. vi), since multiple instances might be run.
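A sketch of customizing these viewers; pymol is an arbitrary example of a non-blocking viewer command:

```shell
# Use PyMOL instead of the default pdb viewer
export GMX_VIEW_PDB=pymol

# Disable automatic viewing of xvg files entirely
export GMX_VIEW_XVG=""
```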
override the number of DD pulses used (default 0, meaning no override). Normally 1 or 2.
general debugging trigger for every domain decomposition (default 0, meaning off). Currently only checks global-local atom index mapping for consistency.
number of steps that elapse between dumping the current DD to a PDB file (default 0). This only takes effect during domain decomposition, so it should typically be 0 (never), 1 (every DD phase) or a multiple of
number of steps that elapse between dumping the current DD grid to a PDB file (default 0). This only takes effect during domain decomposition, so it should typically be 0 (never), 1 (every DD phase) or a multiple of
disables the specialized polling wait path used while waiting for PME and non-bonded GPU task completion, which allows overlapping the wait with the reduction of whichever resulting forces arrive first. Setting this variable switches to the generic path with a fixed waiting order.
sets the number of GPUs required by the test suite. By default, the test suite falls back to using the CPU if GPUs cannot be detected. Set it to a positive integer value to ensure that at least this number of usable GPUs is detected. Default: 0 (not testing GPU availability).
There are a number of extra environment variables like these that are used in debugging - check the code!
Performance and Run Control
Removes the upper limit on the number of points in an AWH bias grid. By default, an error is raised if the grid is unreasonably large and can cause sampling problems. Setting this variable will only remove this safety check. It is recommended instead to reduce the grid size, e.g., by using lower force constants.
the number of threads per rank at which to switch from uniform to localized bonded interaction distribution; the optimal value depends on system and hardware. Default: 4.
Use CUDA Graphs to schedule a graph on each step rather than multiple activities scheduled to multiple CUDA streams, if the run conditions allow. Experimental.
times all code during runs. Incompatible with threads.
calls MPI_Barrier before each cycle start/stop call.
build domain decomposition cells in the order (z, y, x) rather than the default (x, y, z).
record DD load statistics for reporting at end of the run (default 1, meaning on)
Controls the use of the domain decomposition machinery when using a single MPI rank. Value 0 turns DD off, 1 turns DD on. Default is automated choice based on heuristics.
during constraint and vsite communication, use a pair of MPI_Sendrecv calls instead of two simultaneous non-blocking calls (default 0, meaning off). Might be faster on some MPI implementations.
when set, print slightly more detailed performance information to the log file. The resulting output matches the way the performance summary was reported in the 4.5.x versions, and thus may be useful for anyone using scripts to parse log files or standard output.
disables dynamic pair-list pruning. Note that gmx mdrun will still tune nstlist to the optimal value picked assuming dynamic pruning. Thus for good performance the -nstlist option should be used.
when set, disables GPU detection even if gmx mdrun was compiled with GPU support.
timing of asynchronously executed GPU operations can have a non-negligible overhead with short step times. Disabling timing can improve performance in these cases. Timings are disabled by default with CUDA and SYCL.
disables architecture-specific SIMD-optimized (SSE2, SSE4.1, AVX, etc.) non-bonded kernels thus forcing the use of plain C kernels.
Use direct rather than staged GPU communications for PME force transfers from the PME GPU to the CPU memory of a PP rank. This may have advantages in PCIe-only servers, or for runs with low atom counts (which are more sensitive to latency than bandwidth).
the number of systems for distance restraint ensemble averaging. Takes an integer value.
do domain-decomposition dynamic load balancing based on flop count rather than measured time elapsed (default 0, meaning off). This makes the load balancing reproducible, which can be useful for debugging purposes. A value of 1 uses the flops; a value > 1 adds (value - 1)*5% of noise to the flops to increase the imbalance and the scaling.
maximum percentage box scaling permitted per domain-decomposition load-balancing step (default 10)
planetary simulations are made possible (just for fun) by setting this environment variable, which allows setting epsilon-r to -1 in the mdp file. Normally, epsilon-r must be greater than zero to prevent a fatal error. See webpage for example input files for a planetary simulation.
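With the variable set, the relevant mdp fragment might look as follows (a sketch; only the epsilon-r value is dictated by the text above):

```
; normally fatal, allowed when the planetary-simulation variable is set
epsilon-r  = -1
```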
emulate GPU runs by using algorithmically equivalent CPU reference code instead of GPU-accelerated functions. As the CPU code is slow, it is intended to be used only for debugging purposes.
Enable direct GPU communication in multi-rank parallel runs. Note that domain decomposition with GPU-aware MPI does not support multiple pulses along the second and third decomposition dimension, so for very small systems the feature will be disabled internally.
Use a staged implementation of GPU communications for PME force transfers from the PME GPU to the CPU memory of a PP rank for thread-MPI. The staging is done via a GPU buffer on the PP GPU. This is expected to be beneficial for servers with direct communication links between GPUs.
disable exiting upon encountering a corrupted frame in an edr file, allowing the use of all frames up until the corruption.
update forces when invoking mdrun -rerun.
Override the result of build- and runtime GPU-aware MPI detection and force the use of direct GPU MPI communication. Aimed at cases where the user knows that the MPI library is GPU-aware, but GROMACS is not able to detect this. Note that only CUDA and SYCL builds support such functionality.
Force update to run on the CPU by default; makes mdrun -update auto behave as -update cpu.
Removed, use GMX_ENABLE_DIRECT_GPU_COMM instead.
Disables the hardware compatibility check in OpenCL and SYCL. Useful for developers and allows testing the OpenCL/SYCL kernels on non-supported platforms without source code modification.
set in the same way as mdrun -gpu_id, GMX_GPU_ID allows the user to specify different GPU IDs for different ranks, which can be useful for selecting different devices on different compute nodes in a cluster. Cannot be used in conjunction with mdrun -gpu_id.
force the use of the twin-range cutoff kernel even if rvdw equals rcoulomb after PP-PME load balancing. The switch to twin-range kernels is automated, so this variable should be used only for benchmarking.
force the use of analytical Ewald kernels. Should be used only for benchmarking.
force the use of tabulated Ewald kernels. Should be used only for benchmarking.
Enable the support for PME decomposition on GPU. This feature is supported with CUDA and SYCL backends, and allows using multiple PME ranks with GPU offload, which is expected to improve performance when scaling over many GPUs. Note: this feature still lacks substantial testing.
Removed, use GMX_ENABLE_DIRECT_GPU_COMM instead.
set in the same way as mdrun -gputasks, GMX_GPUTASKS allows the mapping of GPU tasks to GPU device IDs to be different on different ranks, if e.g. the MPI runtime permits this variable to be different for different ranks. Cannot be used in conjunction with mdrun -gputasks. Has all the same requirements as mdrun -gputasks.
allow gmx mdrun to continue even if a file is missing.
when set to a floating-point value, overrides the default tolerance of 1e-5 for force-field floating-point parameters.
if set to -1, gmx mdrun will not exit if it produces too many LINCS warnings.
neighbor list balancing parameter used when running on GPU. Sets the target minimum number of pair-lists in order to improve multi-processor load balance for better performance with small simulation systems. Must be set to a non-negative integer; the value 0 disables list splitting. The default value is optimized for supported GPUs, so changing it is not necessary for normal usage, but it can be useful on future architectures.
when set, print detailed neighbor search cycle counting.
force the use of analytical Ewald non-bonded kernels, mutually exclusive of
force the use of tabulated Ewald non-bonded kernels, mutually exclusive of
force the use of 2x(N+N) SIMD CPU non-bonded kernels, mutually exclusive of
force the use of 4xN SIMD CPU non-bonded kernels, mutually exclusive of
used in initializing domain decomposition communicators. Rank reordering is the default, but can be switched off with this environment variable.
disable signal handlers for SIGINT, SIGTERM, and SIGUSR1, respectively.
force the use of LJ parameter lookup instead of using combination rules in the non-bonded kernels.
do not use separate inter- and intra-node communicators.
skip non-bonded calculations; can be used to estimate the possible performance gain from adding a GPU accelerator to the current hardware setup, assuming that this is fast enough to complete the non-bonded calculations while the CPU does bonded force and PME computation. Freezing the particles will be required to stop the system from blowing up.
turns off update groups. May allow for a decomposition of more domains for small systems at the cost of communication during update.
shell positions are not predicted.
overrides the dynamic pair-list pruning interval chosen heuristically by mdrun. Values should be between the pruning frequency value (1 for CPU and 2 for GPU) and
set the number of OpenMP or PME threads; overrides the default set by gmx mdrun; can be used instead of the -npme command line option, also useful to set heterogeneous per-process/-node thread count.
use P3M-optimized influence function instead of smooth PME B-spline interpolation.
PME thread division in the format “x y z” for all three dimensions. The sum of the threads in each dimension must equal the total number of PME threads (set in GMX_PME_NUM_THREADS).
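Assuming the variable described here is GMX_PME_THREAD_DIVISION (its name in the GROMACS sources), a division of 8 PME threads could look like:

```shell
# 4 x 2 x 1 = 8, which must equal the total number of PME threads
export GMX_PME_THREAD_DIVISION="4 2 1"
```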
if the number of domain decomposition cells is set to 1 for both x and y, decompose PME in one dimension.
disable the default heuristic for when to use a separate pull MPI communicator (at >=32 ranks).
require that shell positions are initiated.
should contain multiple masses used for test particle insertion into a cavity. The center of mass of the last atoms is used for insertion into the cavity.
resolution of buffer size in Verlet cutoff scheme. The default value is 0.001, but can be overridden with this environment variable.
Not strictly a GROMACS environment variable, but on large machines the hwloc detection can take a few seconds if you have lots of MPI processes. If you run the hwloc command lstopo out.xml and set this environment variable to point to the location of this file, the hwloc library will use the cached information instead, which can be faster.
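The variable hwloc consults for this is HWLOC_XMLFILE (part of hwloc, not GROMACS); a sketch with a hypothetical file path:

```shell
# Generate the topology description once per machine type (requires hwloc):
#   lstopo out.xml
# Then point hwloc at the cached file:
export HWLOC_XMLFILE=/path/to/out.xml
```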
the gmx mdrun command used by gmx tune_pme.
mpirun command used by gmx tune_pme.
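These two correspond to the MDRUN and MPIRUN variables; a sketch with hypothetical commands and host file:

```shell
# Commands gmx tune_pme will invoke instead of its defaults
export MDRUN="gmx_mpi mdrun"
export MPIRUN="mpirun -machinefile hosts.txt"
# Hypothetical invocation:
#   gmx tune_pme -np 64 -s topol.tpr
```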
Currently, several environment variables exist that help customize some aspects of the OpenCL version of GROMACS. They are mostly related to the runtime compilation of OpenCL kernels, but they are also used in device selection.
Use in conjunction with OCL_FORCE_CPU or with an AMD device. It adds the debug flag to the compiler options (-g).
Prevents the use of the -cl-fast-relaxed-math compiler option. Note: fast math is always disabled on Intel devices due to instability.
Disables i-atom data (type or LJ parameter) prefetch allowing testing.
Enables i-atom data (type or LJ parameter) prefetch allowing testing on platforms where this behavior is not default.
If defined, intermediate language code corresponding to the OpenCL build process is saved to file. Caching has to be turned off in order for this option to take effect.
NVIDIA GPUs: PTX code is saved in the current directory with the name
.IL/.ISA files will be created for each OpenCL kernel built. For details about where these files are created check AMD documentation for
If defined, the OpenCL build log is always written to the mdrun log file. Otherwise, the build log is written to the log file only when an error occurs.
Use this parameter to force GROMACS to load the OpenCL kernels from a custom location. Use it only if you want to override GROMACS default behavior, or if you want to test your own kernels.
Force the selection of a CPU device instead of a GPU. This exists only for debugging purposes. Do not expect GROMACS to function properly with this option on; it is solely for the convenience of stepping into a kernel and seeing what is happening.
Enable OpenCL binary caching. Only intended to be used for development and (expert) testing as neither concurrency nor cache invalidation is implemented safely!
If set, generate and compile all algorithm flavors, otherwise only the flavor required for the simulation is generated and compiled.
Disable optimisations. Adds the option -cl-opt-disable to the compiler options.
Use Intel OpenCL extension to show additional runtime performance diagnostics.
If defined, it enables verbose mode for the OpenCL kernel build. Currently available only for NVIDIA GPUs. See GMX_OCL_DUMP_LOG for details about how to obtain the OpenCL build log.
Analysis and Core Functions
spacing used by gmx dipoles.
make gmx energy and gmx eneconv loud and noisy.
sets the maximum number of residues to be renumbered by gmx grompp. A value of -1 indicates all residues should be renumbered.
Some force fields (like AMBER) use specific names for N- and C- terminal residues (NXXX and CXXX) as rtp entries that are normally renamed. Setting this environment variable disables this renaming.
sets viewer to xmgr (deprecated) instead of xmgrace.
the time unit used in output files; can be any of fs, ps, ns, us, ms, s, m or h.
where to find VMD plug-ins. Needed to be able to read file formats recognized only by a VMD plug-in.
base path of VMD installation.
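These correspond to VMD_PLUGIN_PATH and VMDDIR; a sketch assuming a typical Linux installation path (the actual location depends on your VMD install):

```shell
export VMDDIR=/usr/local/lib/vmd
export VMD_PLUGIN_PATH=$VMDDIR/plugins/LINUXAMD64/molfile
```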