GROMACS 2026.0-dev-20251110-920b6d1

gmx::PmeForceSenderGpu::Impl Class Reference

Impl class stub.
Public Member Functions

    Impl (GpuEventSynchronizer *pmeForcesReady, MPI_Comm comm, const DeviceContext &deviceContext, gmx::ArrayRef< PpRanks > ppRanks)
        Creates the PME GPU force sender object.

    void setForceSendBuffer (DeviceBuffer< Float3 > d_f)
        Sets the location of the forces to be sent to each PP rank.

    void sendFToPpPeerToPeer (int ppRank, int numAtoms, bool sendForcesDirectToPpGpu)
        Sends forces to a PP rank (used with thread-MPI).

    void sendFToPpGpuAwareMpi (DeviceBuffer< RVec > sendbuf, int offset, int numBytes, int ppRank, MPI_Request *request)
        Sends forces to a PP rank (used with library MPI).

    void waitForEvents ()
gmx::PmeForceSenderGpu::Impl::Impl (GpuEventSynchronizer *pmeForcesReady, MPI_Comm comm, const DeviceContext &deviceContext, gmx::ArrayRef< PpRanks > ppRanks)

Creates the PME GPU force sender object, i.e. the PME-PP GPU communication object.

Parameters
    [in]  pmeForcesReady   Event synchronizer marked when the PME forces are ready on the GPU
    [in]  comm             Communicator used for the simulation
    [in]  deviceContext    GPU context
    [in]  ppRanks          List of PP ranks
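As orientation, a minimal construction sketch follows. Only the constructor signature is taken from this page; the header paths, the surrounding setup routine, and the variable names are illustrative assumptions. In the real tree the Impl class sits behind the PmeForceSenderGpu wrapper, so calling its constructor directly as done here is purely for illustration.

```cpp
// Minimal construction sketch (illustrative only).
#include "gromacs/ewald/pme_force_sender_gpu_impl.h" // assumed header location
#include "gromacs/utility/arrayref.h"                // assumed header location

void setupPmeForceSender(GpuEventSynchronizer*  pmeForcesReady,
                         MPI_Comm               simulationComm,
                         const DeviceContext&   deviceContext,
                         gmx::ArrayRef<PpRanks> ppRanks)
{
    // The event synchronizer is marked by the PME task when the forces are ready
    // on the GPU; the sender uses it to order the sends after the PME work.
    gmx::PmeForceSenderGpu::Impl forceSender(pmeForcesReady, simulationComm, deviceContext, ppRanks);

    // ... keep forceSender alive for the force-communication phase of each step ...
}
```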
void gmx::PmeForceSenderGpu::Impl::sendFToPpGpuAwareMpi (DeviceBuffer< RVec > sendbuf, int offset, int numBytes, int ppRank, MPI_Request *request)

Sends forces to a PP rank (used with library MPI). The PME data are sent directly from GPU memory using GPU-aware MPI.

Parameters
    [in]  sendbuf   Force buffer in GPU memory
    [in]  offset    Starting element in the buffer
    [in]  numBytes  Number of bytes to transfer
    [in]  ppRank    PP rank to receive the data
    [in]  request   MPI request used to track the status of the asynchronous MPI call
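A plausible caller for this path issues one asynchronous send per PP rank from the shared device buffer and then completes the requests, as in the sketch below. The per-rank offsets, atom counts, and variable names are assumptions; only the member signature and the MPI_Request handling come from the documentation above.

```cpp
// Sketch: one GPU-aware MPI send per PP rank (library-MPI build, illustrative).
#include <mpi.h>

#include <vector>

void sendForcesGpuAwareMpi(gmx::PmeForceSenderGpu::Impl& sender,
                           DeviceBuffer<gmx::RVec>       d_forces,
                           const std::vector<int>&       ppRankIds,   // assumed bookkeeping
                           const std::vector<int>&       atomOffsets, // assumed bookkeeping
                           const std::vector<int>&       atomCounts)  // assumed bookkeeping
{
    std::vector<MPI_Request> requests(ppRankIds.size());
    for (size_t i = 0; i < ppRankIds.size(); ++i)
    {
        // Each PP rank receives its own contiguous slice of the force buffer.
        const int numBytes = atomCounts[i] * static_cast<int>(sizeof(gmx::RVec));
        sender.sendFToPpGpuAwareMpi(d_forces, atomOffsets[i], numBytes, ppRankIds[i], &requests[i]);
    }
    // Complete all asynchronous sends before the force buffer is modified again.
    MPI_Waitall(static_cast<int>(requests.size()), requests.data(), MPI_STATUSES_IGNORE);
}
```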
void gmx::PmeForceSenderGpu::Impl::sendFToPpPeerToPeer (int ppRank, int numAtoms, bool sendForcesDirectToPpGpu)

Sends forces to a PP rank (used with thread-MPI). The PME synchronizer is sent directly to the peer device; with the HIP backend this uses a HIP memory copy. Not implemented with SYCL.

Parameters
    [in]  ppRank                   PP rank to receive the data
    [in]  numAtoms                 Number of atoms to send
    [in]  sendForcesDirectToPpGpu  Whether forces are transferred directly into remote GPU memory
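Under thread-MPI, a plausible driver loops over the PP ranks served from the same process, as in the sketch below. The rank list, the per-rank atom counts, and the pairing with waitForEvents() are assumptions for illustration.

```cpp
// Sketch: per-PP-rank sends on the thread-MPI (peer-to-peer) path (illustrative).
#include <vector>

void sendForcesPeerToPeer(gmx::PmeForceSenderGpu::Impl& sender,
                          const std::vector<int>&       ppRankIds,  // assumed bookkeeping
                          const std::vector<int>&       atomCounts, // assumed bookkeeping
                          bool                          sendForcesDirectToPpGpu)
{
    for (size_t i = 0; i < ppRankIds.size(); ++i)
    {
        // Depending on the flag, forces are written directly into the remote PP
        // GPU buffer or the PP rank consumes them via the shared synchronizer.
        sender.sendFToPpPeerToPeer(ppRankIds[i], atomCounts[i], sendForcesDirectToPpGpu);
    }
    // Assumed usage: block until the per-rank transfers/events have completed.
    sender.waitForEvents();
}
```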
void gmx::PmeForceSenderGpu::Impl::setForceSendBuffer (DeviceBuffer< Float3 > d_f)

Sets the location of the forces to be sent to each PP rank.

Parameters
    [in]  d_f  Force buffer in GPU memory
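Tying the members together, one per-step flow on the PME rank might look like the sketch below: register the current force buffer, then send to each PP rank over whichever path the build uses. The control flow and variable names are assumptions; only the member calls themselves are documented on this page.

```cpp
// Sketch: per-step force send from the PME rank (assumed control flow).
#include <vector>

void sendPmeForcesThisStep(gmx::PmeForceSenderGpu::Impl& sender,
                           DeviceBuffer<gmx::Float3>     d_forces,
                           const std::vector<int>&       ppRankIds,  // assumed bookkeeping
                           const std::vector<int>&       atomCounts, // assumed bookkeeping
                           bool                          sendForcesDirectToPpGpu)
{
    // Point the sender at this step's force buffer before issuing any sends;
    // presumably this needs repeating whenever the buffer is reallocated.
    sender.setForceSendBuffer(d_forces);

    // Thread-MPI builds use the peer-to-peer path; with library MPI the
    // sendFToPpGpuAwareMpi() variant sketched above would be used instead.
    for (size_t i = 0; i < ppRankIds.size(); ++i)
    {
        sender.sendFToPpPeerToPeer(ppRankIds[i], atomCounts[i], sendForcesDirectToPpGpu);
    }
}
```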