gmx::PmeForceSenderGpu::Impl::Impl(GpuEventSynchronizer*  pmeForcesReady,
                                   MPI_Comm               comm,
                                   const DeviceContext&   deviceContext,
                                   gmx::ArrayRef<PpRanks> ppRanks)

Creates the PME GPU force sender object, i.e. the PME-PP GPU communication object.

Parameters
    [in]  pmeForcesReady  Event synchronizer marked when PME forces are ready on the GPU
    [in]  comm            Communicator used for simulation
    [in]  deviceContext   GPU context
    [in]  ppRanks         List of PP ranks
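As a hedged usage sketch (not taken from the GROMACS sources), the calls documented on this page could be driven roughly as follows; construction of pmeForcesReady, deviceContext, ppRanks and the device force buffer d_forces is assumed to happen elsewhere, and the PpRanks field names used here are assumptions for illustration only.

    // Sketch only: ties the documented calls together. GROMACS objects are
    // assumed to be set up elsewhere; PpRanks field names are assumptions.
    gmx::PmeForceSenderGpu::Impl sender(pmeForcesReady, simulationComm, deviceContext, ppRanks);

    // Per MD step, with GPU-aware library MPI: one non-blocking send per PP rank.
    int offset = 0; // starting element in the force buffer for this PP rank
    for (const auto& pp : ppRanks)
    {
        MPI_Request request;
        sender.sendFToPpGpuAwareMpi(d_forces, offset, pp.numAtoms * sizeof(gmx::RVec),
                                    pp.rankId, &request);
        offset += pp.numAtoms;
    }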
void gmx::PmeForceSenderGpu::Impl::sendFToPpGpuAwareMpi(DeviceBuffer<RVec> sendbuf,
                                                        int                offset,
                                                        int                numBytes,
                                                        int                ppRank,
                                                        MPI_Request*       request)

Sends forces to a PP rank (used with library MPI). The PME data is sent directly from GPU memory using GPU-aware MPI.

Parameters
    [in]  sendbuf   Force buffer in GPU memory
    [in]  offset    Starting element in buffer
    [in]  numBytes  Number of bytes to transfer
    [in]  ppRank    PP rank to receive data
    [in]  request   MPI request to track asynchronous MPI call status
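For illustration, a minimal standalone sketch of the GPU-aware MPI pattern this routine relies on: the device pointer (plus element offset) is handed straight to MPI_Isend, so no host staging copy is needed. The function and variable names below are hypothetical and this is not the GROMACS implementation.

    #include <mpi.h>
    #include <cuda_runtime.h>

    // Sketch of a GPU-aware MPI force send (hypothetical names, not GROMACS code).
    void sendDeviceForcesGpuAwareMpi(const float3* d_forces, // force buffer in GPU memory
                                     int           offset,   // starting element in buffer
                                     int           numBytes, // number of bytes to transfer
                                     int           ppRank,   // destination PP rank
                                     MPI_Comm      comm,
                                     MPI_Request*  request)
    {
        // A GPU-aware MPI implementation accepts the device pointer directly
        // and performs the transfer without an explicit device-to-host copy.
        MPI_Isend(d_forces + offset, numBytes, MPI_BYTE, ppRank, /*tag=*/0, comm, request);
    }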
void gmx::PmeForceSenderGpu::Impl::sendFToPpPeerToPeer(int  ppRank,
                                                       int  numAtoms,
                                                       bool sendForcesDirectToPpGpu)

Sends forces to a PP rank (used with thread-MPI): the PME synchronizer is sent directly to the peer devices. Not implemented with SYCL.

Parameters
    [in]  ppRank                   PP rank to receive data
    [in]  numAtoms                 Number of atoms to send
    [in]  sendForcesDirectToPpGpu  Whether forces are transferred directly to remote GPU memory
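As an illustration of the direct-to-GPU path that thread-MPI makes possible (PME and PP ranks sharing one process), here is a minimal sketch of a peer-to-peer force push between two devices, assuming CUDA and that peer access is enabled; the names are hypothetical and this is not the GROMACS implementation.

    #include <cuda_runtime.h>

    // Sketch of a peer-to-peer force push within one process (as with thread-MPI).
    // Hypothetical names; peer access between the two devices is assumed enabled.
    void sendForcesPeerToPeer(const float3* d_pmeForces,    // PME-side device buffer
                              float3*       d_ppForces,     // PP-side device buffer (remote GPU)
                              int           numAtoms,
                              cudaStream_t  pmeStream,
                              cudaEvent_t   forcesReadyEvent)
    {
        // Direct device-to-device copy into the PP rank's buffer; no host staging.
        cudaMemcpyAsync(d_ppForces, d_pmeForces, numAtoms * sizeof(float3),
                        cudaMemcpyDeviceToDevice, pmeStream);
        // Record an event the PP rank's stream can wait on before using the forces.
        cudaEventRecord(forcesReadyEvent, pmeStream);
    }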
Sets the location of the forces to be sent to each PP rank.

Parameters
    [in]  d_f  Force buffer in GPU memory