Gromacs  2025-dev-20241003-bd59e46
PmeShared Struct Reference

#include <gromacs/ewald/pme_gpu_types_host.h>


Description

The PME GPU structure for all the data copied directly from the CPU PME structure. The copying is done when the CPU PME structure is already (re-)initialized (pme_gpu_reinit is called at the end of gmx_pme_init). The variables here are named almost the same way as in gmx_pme_t, but the types differ: raw pointers are replaced by vectors. TODO: share this data with the PME CPU code. This structure is included in the main PME GPU structure by value.

Public Attributes

int ngrids
 Grid count.
 
int nk [DIM]
 Grid dimensions - nkx, nky, nkz.
 
int pme_order
 PME interpolation order.
 
real ewaldcoeff_q
 Ewald splitting coefficient for Coulomb.
 
real epsilon_r
 Electrostatics parameter (relative dielectric constant).
 
std::vector< int > nn
 Gridline indices - nnx, nny, nnz.
 
std::vector< real > fsh
 Fractional shifts - fshx, fshy, fshz.
 
std::vector< real > bsp_mod [DIM]
 Precomputed B-spline values.
 
PmeRunMode runMode
 The PME codepath being taken.
 
bool isRankPmeOnly
 Whether PME execution is happening on a PME-only rank (from gmx_pme_t.bPPnode).
 
class EwaldBoxZScaler * boxScaler
 The box scaler based on inputrec - created in pme_init and managed by CPU structure.
 
matrix previousBox
 The previous computation box, used to check whether the current box parameters need updating.
 
int ndecompdim
 The number of decomposition dimensions.
 
int nodeid
 MPI rank within communicator.
 
int nnodes
 Number of MPI ranks doing PME.
 
int nodeidX
 MPI rank within communicator for PME X-decomposition.
 
int nodeidY
 MPI rank within communicator for PME Y-decomposition.
 
int nnodesX
 Number of MPI ranks in X-decomposition.
 
int nnodesY
 Number of MPI ranks in Y-decomposition.
 
MPI_Comm mpiComm
 MPI communicator for PME ranks.
 
MPI_Comm mpiCommX
 MPI communicator for ranks in X-decomposition.
 
MPI_Comm mpiCommY
 MPI communicator for ranks in Y-decomposition.
 
std::vector< int > s2g0X
 local interpolation grid start values in x-dimension
 
std::vector< int > s2g1X
 local interpolation grid end values in x-dimension
 
std::vector< int > s2g0Y
 local interpolation grid start values in y-dimension
 
std::vector< int > s2g1Y
 local interpolation grid end values in y-dimension
 
std::array< int, DIM > pmegridNk
 local grid size
 
int gridHalo
 Size of the grid halo region.
 

Member Data Documentation

matrix PmeShared::previousBox

The previous computation box, used to check whether the current box parameters need updating.

Todo:

Manage this on higher level.

Alternatively, when this structure is used by CPU PME code, make use of this field there as well.


The documentation for this struct was generated from the following file: gromacs/ewald/pme_gpu_types_host.h