GROMACS 2024.4
gmx_pme_pp Struct Reference

Description

Main PP-PME communication data structure.

Public Member Functions

 gmx_pme_pp (MPI_Comm simulationCommunicator, std::vector< PpRanks > &&ppRanks)
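
A minimal construction sketch, assuming PpRanks carries a rank id and an atom count; the field names, the ppPartnerRankIds container, and the surrounding variables are illustrative assumptions, not taken from the GROMACS source:

    #include <utility>
    #include <vector>
    // gmx_pme_pp, PpRanks, and MPI_Comm come from the GROMACS and MPI headers.

    // Build the list of PP partner ranks for this PME rank (illustrative).
    std::vector<PpRanks> ppRanks;
    for (int rankId : ppPartnerRankIds) // hypothetical container of PP rank ids
    {
        ppRanks.push_back({rankId, 0}); // atom counts are filled in later
    }

    // The constructor takes ownership of the rank list.
    gmx_pme_pp pmePp(simulationCommunicator, std::move(ppRanks));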
 

Public Attributes

MPI_Comm mpi_comm_mysim
 MPI communicator for this simulation.
 
std::vector< PpRanks > ppRanks
 The PP partner ranks.
 
int peerRankId
 The peer PP rank id (the last of the PP partner ranks).
 
gmx::HostVector< gmx::RVec > x
 Vector of atom coordinates to transfer to PME ranks.
 
std::vector< gmx::RVec > f
 Vector of atom forces received from PME ranks.
 
std::unique_ptr< gmx::PmeCoordinateReceiverGpu > pmeCoordinateReceiverGpu
 Object for receiving coordinates using communications operating on GPU memory space.
 
std::unique_ptr< gmx::PmeForceSenderGpu > pmeForceSenderGpu
 Object for sending PME forces using communications operating on GPU memory space.
 
bool useGpuDirectComm = false
 Whether GPU direct communications are active for PME-PP transfers.
 
bool sendForcesDirectToPpGpu = false
 Whether GPU direct communications should send forces directly to remote GPU memory.
 
bool useMdGpuGraph = false
 Whether a GPU graph should be used to execute steps in the MD loop if run conditions allow.
 
bool useNvshmem = false
 Whether NVSHMEM should be used for GPU communication if run conditions allow.
 
gmx::PaddedHostVector< real > chargeA
 Vectors of A- and B-state parameters transferred to PME ranks.
 
gmx::PaddedHostVector< real > chargeB
 
std::vector< real > sqrt_c6A
 
std::vector< real > sqrt_c6B
 
std::vector< real > sigmaA
 
std::vector< real > sigmaB
 
std::vector< MPI_Request > req
 Vectors of MPI objects used in non-blocking communication between multiple PP ranks per PME rank (see the sketch after this member list).
 
std::vector< MPI_Status > stat
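
A rough sketch of the non-blocking receive pattern that req and stat support. This is illustrative only, not the actual GROMACS code path: the message tag, the PpRanks field names rankId and numAtoms, and the assumption that x was pre-sized to the total atom count are all assumptions here.

    #include <mpi.h>
    // pmePp is a gmx_pme_pp instance owned by a PME rank.

    // Illustrative only: post one non-blocking coordinate receive per PP
    // partner rank, then wait for all transfers to complete.
    pmePp.req.resize(pmePp.ppRanks.size());
    pmePp.stat.resize(pmePp.ppRanks.size());
    const int messageTag = 0; // assumed tag; the real code uses dedicated comm-type tags
    int atomOffset = 0;
    for (size_t i = 0; i < pmePp.ppRanks.size(); ++i)
    {
        const PpRanks& sender = pmePp.ppRanks[i];
        MPI_Irecv(pmePp.x.data() + atomOffset,
                  static_cast<int>(sender.numAtoms * sizeof(gmx::RVec)),
                  MPI_BYTE,
                  sender.rankId,
                  messageTag,
                  pmePp.mpi_comm_mysim,
                  &pmePp.req[i]);
        atomOffset += sender.numAtoms;
    }
    MPI_Waitall(static_cast<int>(pmePp.req.size()), pmePp.req.data(), pmePp.stat.data());

Keeping one request per partner rank lets the receives from all PP ranks overlap; MPI_Waitall then fills stat with one status per transfer.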
 
