Gromacs  2026.1
gmx::PmeCoordinateReceiverGpu Class Reference

#include <gromacs/ewald/pme_coordinate_receiver_gpu.h>

Description

Manages receiving coordinates on PME-only ranks from their PP ranks.

For multi-GPU runs, the PME GPU can receive coordinates from multiple PP GPUs. Data from these distinct communications can be handled separately in the PME spline/spread kernel, allowing pipelining which overlaps computation and communication.

Note that the PME rank always transfers coordinates from each PP rank each step, even from empty domains.

Public Member Functions

 PmeCoordinateReceiverGpu (MPI_Comm comm, const DeviceContext &deviceContext, gmx::ArrayRef< PpRanks > ppRanks)
 Creates PME GPU coordinate receiver object. More...
 
void reinitCoordinateReceiver (DeviceBuffer< RVec > d_x)
 Re-initialize: set atom ranges and, for the thread-MPI case, send the coordinates buffer address to the PP rank. This is required after repartitioning since atom ranges and buffer allocations may have changed. More...
 
void receiveCoordinatesSynchronizerFromPpPeerToPeer (int ppRank)
 Receive coordinate synchronizer pointer from the PP ranks. More...
 
void launchReceiveCoordinatesFromPpGpuAwareMpi (DeviceBuffer< RVec > recvbuf, int numAtoms, int numBytes, int ppRank, int senderIndex)
 Used with library MPI; receives coordinates from PP ranks. More...
 
std::tuple< int, GpuEventSynchronizer * > receivePpCoordinateSendEvent (int senderIndex)
 Return PP coordinate transfer event received from PP rank determined from senderIndex, for consumer to enqueue. More...
 
int waitForCoordinatesFromAnyPpRank ()
 Wait for coordinates from any PP rank. More...
 
DeviceStream * ppCommStream (int senderIndex)
 Return pointer to stream associated with specific PP rank sender index. More...
 
std::tuple< int, int > ppCommAtomRange (int senderIndex)
 Returns range of atoms involved in communication associated with specific PP rank sender index. More...
 
int ppCommNumRanksSendingParticles ()
 Return number of PP ranks contributing particles to PME-PP communication.
 
void insertAsDependencyIntoStream (int senderIndex, const DeviceStream &stream)
 Mark an event in the sender stream senderIndex (which must be valid) and enqueue it into stream.
 

Constructor & Destructor Documentation

gmx::PmeCoordinateReceiverGpu::PmeCoordinateReceiverGpu ( MPI_Comm  comm,
const DeviceContext &  deviceContext,
gmx::ArrayRef< PpRanks > ppRanks 
)

Creates PME GPU coordinate receiver object.

Constructor stub.

Parameters
[in] comm          Communicator used for simulation
[in] deviceContext GPU context
[in] ppRanks       List of PP ranks

Member Function Documentation

void gmx::PmeCoordinateReceiverGpu::launchReceiveCoordinatesFromPpGpuAwareMpi ( DeviceBuffer< RVec > recvbuf,
int  numAtoms,
int  numBytes,
int  ppRank,
int  senderIndex 
)

Used with library MPI; receives coordinates from PP ranks.

Parameters
[in] recvbuf     Coordinates buffer in GPU memory
[in] numAtoms    Starting element in buffer
[in] numBytes    Number of bytes to transfer
[in] ppRank      PP rank sending the data
[in] senderIndex Index of PP rank within those involved in communication with this PME rank
std::tuple< int, int > gmx::PmeCoordinateReceiverGpu::ppCommAtomRange ( int  senderIndex)

Returns range of atoms involved in communication associated with specific PP rank sender index.

Parameters
[in] senderIndex Index of sender PP rank.
DeviceStream * gmx::PmeCoordinateReceiverGpu::ppCommStream ( int  senderIndex)

Return pointer to stream associated with specific PP rank sender index.

Parameters
[in] senderIndex Index of sender PP rank.
void gmx::PmeCoordinateReceiverGpu::receiveCoordinatesSynchronizerFromPpPeerToPeer ( int  ppRank)

Receive coordinate synchronizer pointer from the PP ranks.

Parameters
[in] ppRank PP rank to receive the synchronizer from.
std::tuple< int, GpuEventSynchronizer * > gmx::PmeCoordinateReceiverGpu::receivePpCoordinateSendEvent ( int  senderIndex)

Return PP coordinate transfer event received from PP rank determined from senderIndex, for consumer to enqueue.

The returned sender index corresponds to a PP rank that transferred particles this step.

Parameters
[in] senderIndex Index of the sender within the set of PP ranks
Returns
tuple with index of sending PP rank and corresponding event.
void gmx::PmeCoordinateReceiverGpu::reinitCoordinateReceiver ( DeviceBuffer< RVec > d_x)

Re-initialize: set atom ranges and, for the thread-MPI case, send the coordinates buffer address to the PP rank. This is required after repartitioning since atom ranges and buffer allocations may have changed.

Init PME-PP GPU communication stub.

Parameters
[in] d_x Coordinates buffer in GPU memory
int gmx::PmeCoordinateReceiverGpu::waitForCoordinatesFromAnyPpRank ( )

Wait for coordinates from any PP rank.

Returns
rank of sending PP task
