Gromacs 2026.1
#include <gromacs/ewald/pme_coordinate_receiver_gpu.h>
Manages receiving coordinates on PME-only ranks from their PP ranks.
For multi-GPU runs, the PME GPU can receive coordinates from multiple PP GPUs. Data from these distinct communications can be handled separately in the PME spline/spread kernel, enabling a pipeline that overlaps computation with communication.
Note that the PME rank always transfers coordinates from every PP rank on every step, even from empty domains.
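The pipelining described above can be sketched with a small, self-contained mock (the names `SenderRange`, `makeRanges`, and `pipelineSpread` are invented for illustration and are not GROMACS API): each PP sender owns a contiguous atom range, and the spread work for a range can be issued as soon as that sender's transfer completes, in whatever order the transfers finish.

```cpp
#include <vector>

// Simplified mock of the per-sender pipeline (not the GROMACS API):
// each PP sender owns a contiguous atom range.
struct SenderRange
{
    int begin; // first atom owned by this sender
    int end;   // one past the last atom owned by this sender
};

// Build contiguous per-sender ranges from per-sender atom counts
// (a running prefix sum over the counts).
std::vector<SenderRange> makeRanges(const std::vector<int>& counts)
{
    std::vector<SenderRange> ranges;
    int offset = 0;
    for (int c : counts)
    {
        ranges.push_back({ offset, offset + c });
        offset += c;
    }
    return ranges;
}

// Process senders in the order their transfers complete; each iteration
// stands in for launching a spread kernel on just that sender's range.
int pipelineSpread(const std::vector<SenderRange>& ranges,
                   const std::vector<int>&         completionOrder)
{
    int atomsSpread = 0;
    for (int senderIndex : completionOrder)
    {
        const SenderRange& r = ranges[senderIndex];
        atomsSpread += r.end - r.begin;
    }
    return atomsSpread;
}
```

The key design point this models is that no sender's work waits on the slowest transfer: each range is consumed independently, which is what allows the spline/spread computation to overlap with the remaining communication.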
Public Member Functions

PmeCoordinateReceiverGpu(MPI_Comm comm, const DeviceContext& deviceContext, gmx::ArrayRef<PpRanks> ppRanks)
    Creates the PME GPU coordinate receiver object.

void reinitCoordinateReceiver(DeviceBuffer<RVec> d_x)
    Re-initialize: set atom ranges and, for the thread-MPI case, send the coordinate buffer address to the PP ranks. Required after repartitioning, since atom ranges and buffer allocations may have changed.

void receiveCoordinatesSynchronizerFromPpPeerToPeer(int ppRank)
    Receive a coordinate synchronizer pointer from the PP ranks.

void launchReceiveCoordinatesFromPpGpuAwareMpi(DeviceBuffer<RVec> recvbuf, int numAtoms, int numBytes, int ppRank, int senderIndex)
    Used with library MPI: receives coordinates from PP ranks via GPU-aware MPI.

std::tuple<int, GpuEventSynchronizer*> receivePpCoordinateSendEvent(int senderIndex)
    Return the PP coordinate-transfer event received from the PP rank determined by senderIndex, for the consumer to enqueue.

int waitForCoordinatesFromAnyPpRank()
    Wait for coordinates from any PP rank.

DeviceStream* ppCommStream(int senderIndex)
    Return a pointer to the stream associated with the given PP rank sender index.

std::tuple<int, int> ppCommAtomRange(int senderIndex)
    Return the range of atoms involved in communication with the given PP rank sender index.

int ppCommNumRanksSendingParticles()
    Return the number of PP ranks contributing particles to PME-PP communication.

void insertAsDependencyIntoStream(int senderIndex, const DeviceStream& stream)
    Mark an event in the sender stream senderIndex (which must be valid) and enqueue it into stream.
gmx::PmeCoordinateReceiverGpu::PmeCoordinateReceiverGpu(MPI_Comm comm, const DeviceContext& deviceContext, gmx::ArrayRef<PpRanks> ppRanks)

Creates the PME GPU coordinate receiver object. (A constructor stub is compiled in builds without GPU support.)

Parameters:
    [in] comm           Communicator used for simulation
    [in] deviceContext  GPU context
    [in] ppRanks        List of PP ranks
void gmx::PmeCoordinateReceiverGpu::launchReceiveCoordinatesFromPpGpuAwareMpi(DeviceBuffer<RVec> recvbuf, int numAtoms, int numBytes, int ppRank, int senderIndex)

Used with library MPI: receives coordinates from PP ranks via GPU-aware MPI.

Parameters:
    [in] recvbuf      Coordinate buffer in GPU memory
    [in] numAtoms     Starting element in the buffer
    [in] numBytes     Number of bytes to transfer
    [in] ppRank       PP rank sending the data
    [in] senderIndex  Index of the PP rank within those involved in communication with this PME rank
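Since the transfer size is given in bytes while coordinates are stored as 3-component vectors, the two must be kept consistent. The sketch below uses a hypothetical stand-in for `gmx::RVec` (three real components, shown here in single precision; the actual GROMACS type and precision depend on the build) to show the arithmetic:

```cpp
#include <cstddef>

// Hypothetical stand-in for gmx::RVec in a single-precision build:
// three real components per atom.
struct RVec
{
    float x, y, z;
};

// Bytes needed to transfer `numAtomsToSend` coordinates as RVec data.
constexpr std::size_t coordinateBytes(int numAtomsToSend)
{
    return static_cast<std::size_t>(numAtomsToSend) * sizeof(RVec);
}
```

A double-precision build would use a 3-component double vector instead, doubling the byte count per atom.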
std::tuple<int, int> gmx::PmeCoordinateReceiverGpu::ppCommAtomRange(int senderIndex)

Returns the range of atoms involved in communication associated with a specific PP rank sender index.

Parameters:
    [in] senderIndex  Index of the sender PP rank.
DeviceStream* gmx::PmeCoordinateReceiverGpu::ppCommStream(int senderIndex)

Return a pointer to the stream associated with a specific PP rank sender index.

Parameters:
    [in] senderIndex  Index of the sender PP rank.
void gmx::PmeCoordinateReceiverGpu::receiveCoordinatesSynchronizerFromPpPeerToPeer(int ppRank)

Receive a coordinate synchronizer pointer from the PP ranks.

Parameters:
    [in] ppRank  PP rank to receive the synchronizer from.
std::tuple<int, GpuEventSynchronizer*> gmx::PmeCoordinateReceiverGpu::receivePpCoordinateSendEvent(int senderIndex)

Return the PP coordinate-transfer event received from the PP rank determined by senderIndex, for the consumer to enqueue. The returned sender index corresponds to a PP rank that transferred particles this step.

Parameters:
    [in] senderIndex  Index of the sender within the set of PP ranks
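The event hand-off can be illustrated with a minimal mock (the tuple shape follows the signature above; `MockEventSynchronizer`, `MockReceiver`, and their internals are invented for illustration and are not the GROMACS classes): the consumer receives a (senderIndex, event) pair and enqueues the event into its own stream as a dependency before consuming that sender's coordinates.

```cpp
#include <tuple>
#include <vector>

// Minimal mock of an event synchronizer (not the real GROMACS class).
struct MockEventSynchronizer
{
    bool enqueued = false;
    // Stand-in for making a consumer stream wait on this event.
    void enqueueWaitEvent() { enqueued = true; }
};

// Mock receiver holding one coordinate-transfer event per PP sender.
struct MockReceiver
{
    std::vector<MockEventSynchronizer> events;

    // Shape mirrors receivePpCoordinateSendEvent(senderIndex):
    // the consumer gets the sender index plus the event to enqueue.
    std::tuple<int, MockEventSynchronizer*> receivePpCoordinateSendEvent(int senderIndex)
    {
        return { senderIndex, &events[senderIndex] };
    }
};
```

Usage follows the same pattern as the real API is described to: unpack the tuple, then enqueue the event into the stream that will consume that sender's coordinates, so the spread work for that range cannot start before its transfer has completed.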
void gmx::PmeCoordinateReceiverGpu::reinitCoordinateReceiver(DeviceBuffer<RVec> d_x)

Re-initialize: set atom ranges and, for the thread-MPI case, send the coordinate buffer address to the PP ranks. This is required after repartitioning, since atom ranges and buffer allocations may have changed. (A stub is compiled in builds without GPU support.)

Parameters:
    [in] d_x  Coordinate buffer in GPU memory
int gmx::PmeCoordinateReceiverGpu::waitForCoordinatesFromAnyPpRank()

Wait for coordinates from any PP rank.
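A typical consumption pattern is to call the wait in a loop, once per sending PP rank, handling each sender as its transfer completes. The mock below is invented for illustration (the real method blocks on actual communication; here a pre-recorded completion order stands in, and the int return value is assumed to identify the sender that finished):

```cpp
#include <deque>
#include <vector>

// Mock of the wait-for-any pattern: transfers complete in arbitrary
// order, and each wait call hands back the sender index that finished.
struct MockWaiter
{
    std::deque<int> completions; // pre-recorded completion order

    int waitForCoordinatesFromAnyPpRank()
    {
        int senderIndex = completions.front();
        completions.pop_front();
        return senderIndex;
    }
};

// Drain all senders, recording the order in which they were handled.
// The loop bound plays the role of ppCommNumRanksSendingParticles().
std::vector<int> drainAll(MockWaiter& waiter, int numSenders)
{
    std::vector<int> handled;
    for (int i = 0; i < numSenders; i++)
    {
        handled.push_back(waiter.waitForCoordinatesFromAnyPpRank());
    }
    return handled;
}
```

Handling senders in completion order rather than in rank order is what keeps the pipeline busy: the earliest-arriving data is processed first while slower transfers are still in flight.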