GROMACS 2026.2 release notes¶
This version was released on May 5th, 2026. These release notes document the changes that have taken place in GROMACS since the previous 2026.1 version, to fix known issues. They also incorporate all fixes made in version 2025.4 and earlier, which you can find described in the Release notes.
Fixes where mdrun could behave incorrectly¶
Fixed PME mixed mode with spread/gather on GPU and FFT and solve on CPU¶
In PME mixed mode (spread and gather on the GPU, FFT and solve on the CPU), mdrun could exit with an error about infinite energies or with a CUDA error.
Fixed a RuntimeError within the NNPot interface¶
Fixed a “RuntimeError: Global alloc not supported yet” when running GPU inference on certain NNP models (e.g. Nutmeg) within the NNPot interface.
Correctly restore deltaH history from a checkpoint¶
The deltaH history was reset to zero when restarting from a checkpoint. This only affected results when continuing from a checkpoint written at a step that is not a multiple of nstdhdl.
Fixed performance issue with GPU FEP kernel¶
Previously, using -nbfe gpu with Gapsys soft-core activated parts
of GPU offloading support code while still doing the computation on
the CPU, resulting in reduced performance and misleading log messages.
Now mdrun correctly identifies that the Gapsys soft-core flavor
is not supported by the GPU non-bonded force kernels, so users
either fall back to the supported CPU path or switch to Beutler soft-core.
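For users who want the GPU path, a minimal .mdp sketch (option names as documented in the GROMACS reference manual) selecting the Beutler soft-core flavor, which the GPU non-bonded FEP kernels support:

```
; .mdp fragment: select the Beutler soft-core flavor for free-energy runs;
; sc-function = gapsys is not supported by the GPU non-bonded kernels
free-energy  = yes
sc-function  = beutler
```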
Fixed GMX_NO_CART_REORDER environment variable¶
The GMX_NO_CART_REORDER environment variable did not disable Cartesian
communicator rank reordering when set to 1 because the logic was inverted.
This was broken since GROMACS 2020.
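The corrected behaviour can be illustrated with a minimal C sketch (not the actual GROMACS code, which also handles the MPI Cartesian communicator setup): reordering is allowed by default and disabled when the environment variable is set. The pre-fix code inverted this test, so setting the variable had no effect.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Sketch of the corrected logic: Cartesian rank reordering is allowed
 * unless GMX_NO_CART_REORDER is set (e.g. to 1). The old code returned
 * the opposite of this, so the variable was silently ignored. */
static bool cartReorderAllowed(void)
{
    return getenv("GMX_NO_CART_REORDER") == NULL;
}
```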
Fixed mdrun -ddorder cartesian with PME-only ranks¶
With PME-only ranks, mdrun -ddorder cartesian could fail during domain
decomposition setup.
Correct wallcycle table in the md.log file for single-rank simulations¶
Fixed MPI communicator passing to CP2K¶
Corrected an issue with the GROMACS-CP2K interface when GROMACS is built with support
for an external MPI library but mdrun runs as a single rank. The interface
erroneously passed MPI_COMM_NULL to CP2K, which caused an error.
Fixes for gmx tools¶
Better error reporting in gmx x2top¶
The filename is now properly printed when a force-field parsing error is encountered.
gmx polystat computed internal distances between all particles¶
Previously, distances between all particles in the system were computed; now only particles in the selected index group are used.
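The distinction can be sketched in a few lines of Python (a hypothetical illustration, not the actual gmx polystat implementation): pairwise distances are formed only over the selected indices, rather than over every particle.

```python
import itertools
import math

def internal_distances(coords, group=None):
    """All pairwise distances, restricted to the given index group.

    Sketch of the corrected behaviour: before the fix, the tool
    effectively ignored the group and paired all particles.
    """
    idx = range(len(coords)) if group is None else group
    return [math.dist(coords[i], coords[j])
            for i, j in itertools.combinations(idx, 2)]
```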
Fixed a selection evaluation issue for numeric expressions¶
Fixed a selection issue that could lead to a crash when a numeric expression referenced a variable derived from a position expression.
Fixes that affect portability¶
Facilitated configuring a oneAPI SYCL build with MKL from oneAPI 2026.0 and newer¶
A configure-time check that MKL can link in a SYCL build referred to a header that was deprecated in oneAPI 2025.0 and removed in oneAPI 2026.0. The check now refers to the new header whenever it exists.
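Such a check typically follows a standard CMake probe-and-fall-back pattern; a sketch using placeholder header names (not the actual MKL headers, which the release note does not name):

```
# Illustrative CMake sketch: probe for the new header first and fall back
# to the deprecated one when it is absent. Header names are placeholders.
include(CheckIncludeFileCXX)
check_include_file_cxx("new_mkl_header.hpp" HAVE_NEW_MKL_HEADER)
if(HAVE_NEW_MKL_HEADER)
    set(MKL_CHECK_HEADER "new_mkl_header.hpp")
else()
    set(MKL_CHECK_HEADER "deprecated_mkl_header.hpp")
endif()
```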
Improved detection of GPU-awareness in MPI¶
Previously, GROMACS could fail to detect that the MPI library has
extensions to query its GPU support, necessitating the use of the
GMX_FORCE_GPU_AWARE_MPI=1 environment variable.
Cray MPICH 9 with ROCm was affected.
Miscellaneous¶
Adjusted fudge coefficients for AMBER14SB and AMBER19SB¶
AMBER itself uses the value \(\frac{1}{1.2} = 0.8\overline{3}\). The new value
of 0.83333333333333333 for the Lennard-Jones fudge coefficient matches the
original value better than the previously used 0.8333.
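The size of the improvement is easy to check directly (a quick Python sketch, not GROMACS code):

```python
# The AMBER 1-4 Lennard-Jones scaling factor is 1/1.2 = 0.8333..., repeating.
exact = 1.0 / 1.2
old = 0.8333                 # four-digit truncation used previously
new = 0.83333333333333333    # value now written in the force-field files

old_error = abs(old - exact)   # on the order of 3e-5
new_error = abs(new - exact)   # below one part in 10**15 at double precision
```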
Fix SPC and SPC/E atom types in AMBER14SB and AMBER19SB force fields¶
The atom types for OW and HW were OW_spc and HW_spc, which are
undefined. These are now corrected to the OW_spce and HW_spce atom types
(respectively), which were converted from AMBER ff14SB and ff19SB.
Fix test failure with OMP_NUM_THREADS set¶
Fixed a test failure in NbnxmTests when OMP_NUM_THREADS was set
to a value larger than 1.
Hardened configuration with cuFFTMp and HeFFTe¶
Previously the CMake options for these distributed GPU FFT libraries
were ignored if GMX_MPI was not also set. This led to
incorrect builds and confusing warnings from CMake. GMX_MPI is
required, and an error is now given if support for such an FFT library
is requested but GMX_MPI is not set.
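The hardened behaviour follows the usual CMake fail-fast pattern; a sketch with assumed option names (GMX_USE_CUFFTMP and GMX_USE_HEFFTE; the real GROMACS CMake code may differ):

```
# Sketch of the hardened check: fail at configure time instead of silently
# ignoring the FFT-library option when GMX_MPI is off.
if((GMX_USE_CUFFTMP OR GMX_USE_HEFFTE) AND NOT GMX_MPI)
    message(FATAL_ERROR
            "Distributed GPU FFT support requires an MPI build; "
            "reconfigure with -DGMX_MPI=ON")
endif()
```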
Hardened configuration with NVSHMEM¶
Previously the CMake options for NVSHMEM were ignored if
GMX_MPI was not also set. This led to incorrect builds and
confusing warnings from CMake. GMX_MPI is required, and an
error is now given if NVSHMEM support is requested but GMX_MPI is not
set.