Gromacs  2025.0-dev-20241011-013a99c
gmx::PmeCoordinateReceiverGpu::Impl Class Reference

#include <gromacs/ewald/pme_coordinate_receiver_gpu_impl.h>

Description

Class with interfaces and data for the CUDA version of the PME coordinate-receiving functionality.

Impl class stub.

Public Member Functions

 Impl (MPI_Comm comm, const DeviceContext &deviceContext, gmx::ArrayRef< const PpRanks > ppRanks)
 	Creates the PME GPU coordinate receiver object.
 
void 	reinitCoordinateReceiver (DeviceBuffer< RVec > d_x)
 	Re-initialize: set atom ranges and, for the thread-MPI case, send the coordinates buffer address to the PP ranks. This is required after repartitioning since atom ranges and buffer allocations may have changed.
 
void 	receiveCoordinatesSynchronizerFromPpPeerToPeer (int ppRank)
 	Receive the coordinate synchronizer pointer from the PP ranks.
 
void 	launchReceiveCoordinatesFromPpGpuAwareMpi (DeviceBuffer< RVec > recvbuf, int numAtoms, int numBytes, int ppRank, int senderIndex)
 	Used with library MPI; receives coordinates from the PP ranks.
 
std::tuple< int, GpuEventSynchronizer * > 	receivePpCoordinateSendEvent (int pipelineStage)
 	Return the PP coordinate transfer event received from the PP rank determined by the pipeline stage, for the consumer to enqueue.
 
int 	waitForCoordinatesFromAnyPpRank ()
 	Wait for coordinates from any PP rank.
 
DeviceStream * 	ppCommStream (int senderIndex)
 	Return a pointer to the stream associated with a specific PP rank sender index.
 
std::tuple< int, int > 	ppCommAtomRange (int senderIndex)
 	Returns the range of atoms involved in communication associated with a specific PP rank sender index.
 
int 	ppCommNumSenderRanks ()
 	Return the number of PP ranks involved in PME-PP communication.
 
void 	insertAsDependencyIntoStream (int senderIndex, const DeviceStream &stream)
 	Mark an event in the sender stream senderIndex and enqueue it into stream.
 

Constructor & Destructor Documentation

gmx::PmeCoordinateReceiverGpu::Impl::Impl (MPI_Comm comm, const DeviceContext &deviceContext, gmx::ArrayRef< const PpRanks > ppRanks)

Creates the PME GPU coordinate receiver object.

Parameters
    [in]  comm           Communicator used for simulation
    [in]  deviceContext  GPU context
    [in]  ppRanks        List of PP ranks
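
As an illustration only (not code from the GROMACS sources), a PME rank could wrap construction as follows; the communicator, device context, and PP-rank list are assumed to be supplied by the surrounding setup code:

    // Hypothetical helper: everything except the constructor call itself is
    // assumed; the constructor signature matches the documentation above.
    #include <memory>
    #include "gromacs/ewald/pme_coordinate_receiver_gpu_impl.h"

    std::unique_ptr<gmx::PmeCoordinateReceiverGpu::Impl>
    makeCoordinateReceiver(MPI_Comm                     comm,          // simulation communicator
                           const DeviceContext&         deviceContext, // GPU context of this PME rank
                           gmx::ArrayRef<const PpRanks> ppRanks)       // PP peers of this PME rank
    {
        return std::make_unique<gmx::PmeCoordinateReceiverGpu::Impl>(comm, deviceContext, ppRanks);
    }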

Member Function Documentation

void gmx::PmeCoordinateReceiverGpu::Impl::launchReceiveCoordinatesFromPpGpuAwareMpi (DeviceBuffer< RVec > recvbuf, int numAtoms, int numBytes, int ppRank, int senderIndex)

Used with library MPI; receives coordinates from the PP ranks.

Receive coordinate data using GPU-aware MPI.

Parameters
    [in]  recvbuf      Coordinates buffer in GPU memory
    [in]  numAtoms     Starting element in the buffer
    [in]  numBytes     Number of bytes to transfer
    [in]  ppRank       PP rank sending the data
    [in]  senderIndex  Index of the PP rank within those involved in communication with this PME rank

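A hedged sketch of how a caller might post one receive per sender, assuming the per-sender ranges from ppCommAtomRange() partition the device buffer d_x and that each PpRanks entry exposes the peer's MPI rank (the field name rankId is an assumption here):

    // Illustrative only: post a GPU-aware MPI receive for every PP sender.
    for (int senderIndex = 0; senderIndex < receiver.ppCommNumSenderRanks(); ++senderIndex)
    {
        auto [atomStart, atomEnd] = receiver.ppCommAtomRange(senderIndex);
        const int numBytes = static_cast<int>((atomEnd - atomStart) * sizeof(gmx::RVec));
        // Per the parameter documentation above, the numAtoms argument is the
        // starting element in the buffer, so the range start is passed there.
        receiver.launchReceiveCoordinatesFromPpGpuAwareMpi(
                d_x, atomStart, numBytes, ppRanks[senderIndex].rankId, senderIndex);
    }
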
std::tuple< int, int > gmx::PmeCoordinateReceiverGpu::Impl::ppCommAtomRange ( int  senderIndex)

Returns range of atoms involved in communication associated with specific PP rank sender index.

Parameters
    [in]  senderIndex  Index of the sender PP rank.

DeviceStream * gmx::PmeCoordinateReceiverGpu::Impl::ppCommStream ( int  senderIndex)

Return pointer to stream associated with specific PP rank sender index.

Parameters
    [in]  senderIndex  Index of the sender PP rank.

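For example, a pipelined consumer could pair this getter with ppCommAtomRange() to run per-sender work in that sender's stream (a sketch; the work launch is a hypothetical placeholder, not GROMACS API):

    // Sketch: per-sender stream and atom range drive a per-sender launch.
    DeviceStream* senderStream = receiver.ppCommStream(senderIndex);
    auto [atomStart, atomEnd]  = receiver.ppCommAtomRange(senderIndex);
    launchWorkForAtomRange(*senderStream, atomStart, atomEnd); // placeholder
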
void gmx::PmeCoordinateReceiverGpu::Impl::receiveCoordinatesSynchronizerFromPpPeerToPeer ( int  ppRank)

Receive coordinate synchronizer pointer from the PP ranks.

Parameters
    [in]  ppRank  PP rank to receive the synchronizer from.

std::tuple< int, GpuEventSynchronizer * > gmx::PmeCoordinateReceiverGpu::Impl::receivePpCoordinateSendEvent ( int  pipelineStage)

Return the PP coordinate transfer event received from the PP rank determined by the pipeline stage, for the consumer to enqueue.

Parameters
    [in]  pipelineStage  Stage of the pipeline corresponding to this transfer

Returns
    Tuple with the rank of the sending PP task and the corresponding event.

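In the thread-MPI peer-to-peer path this call is naturally paired with receiveCoordinatesSynchronizerFromPpPeerToPeer(): first receive each PP rank's synchronizer, then, stage by stage, fetch the recorded event and make the consumer stream wait on it. A sketch, assuming one pipeline stage per sender, a PpRanks field named rankId, and GpuEventSynchronizer's enqueueWaitEvent():

    // Illustrative pipeline: ppRanks, numStages and consumerStream are
    // assumed to be defined by the caller.
    for (const PpRanks& pp : ppRanks)
    {
        receiver.receiveCoordinatesSynchronizerFromPpPeerToPeer(pp.rankId);
    }
    for (int stage = 0; stage < numStages; ++stage)
    {
        auto [sendingPpRank, event] = receiver.receivePpCoordinateSendEvent(stage);
        event->enqueueWaitEvent(consumerStream); // consumer waits on the PP transfer
    }
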
void gmx::PmeCoordinateReceiverGpu::Impl::reinitCoordinateReceiver (DeviceBuffer< RVec > d_x)

Re-initialize: set atom ranges and, for the thread-MPI case, send the coordinates buffer address to the PP ranks. This is required after repartitioning since atom ranges and buffer allocations may have changed.

Parameters
    [in]  d_x  Coordinates buffer in GPU memory

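For example, a caller would typically refresh the receiver once after every repartitioning, before posting new receives (sketch; d_x is the caller-owned device coordinates buffer):

    // After repartitioning, atom ranges and buffer allocations may have
    // changed, so re-initialize before the next round of transfers.
    receiver.reinitCoordinateReceiver(d_x);
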
int gmx::PmeCoordinateReceiverGpu::Impl::waitForCoordinatesFromAnyPpRank ( )

Wait for coordinates from any PP rank.

Returns
    The rank of the sending PP task.
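
A sketch of a consumer loop that handles senders in completion order, overlapping dependent work with the remaining transfers (illustrative; the dependent work is left as a placeholder comment):

    // Process whichever PP rank's coordinates arrive first.
    for (int remaining = receiver.ppCommNumSenderRanks(); remaining > 0; --remaining)
    {
        const int sendingPpRank = receiver.waitForCoordinatesFromAnyPpRank();
        // Coordinates from sendingPpRank are now available; launch work
        // that depends on that rank's atom range here (placeholder).
    }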
