GROMACS 2024.4
gmx::PmePpCommGpu::Impl Class Reference

#include <gromacs/ewald/pme_pp_comm_gpu_impl.h>

Description

Class with interfaces and data for the GPU version of PME-PP communication.

Impl class stub.

Public Member Functions

 Impl (MPI_Comm comm, int pmeRank, gmx::HostVector< gmx::RVec > *pmeCpuForceBuffer, const DeviceContext &deviceContext, const DeviceStream &deviceStream, bool useNvshmem)
 Creates PME-PP GPU communication object.
 
void reinit (int size)
 Perform steps required when buffer size changes.
 
void receiveForceFromPme (Float3 *recvPtr, int recvSize, bool receivePmeForceToGpu)
 Pull force buffer directly from GPU memory on the PME rank to either GPU or CPU memory on the PP task, using CUDA memory copy or GPU-aware MPI.
 
void sendCoordinatesToPme (Float3 *sendPtr, int sendSize, GpuEventSynchronizer *coordinatesReadyOnDeviceEvent)
 Push coordinates buffer directly to GPU memory on the PME task, from either GPU or CPU memory on the PP task, using CUDA memory copy or GPU-aware MPI. If sending from the GPU, this method should be called after the local GPU coordinate buffer operations. The remote PME task will automatically wait for the data to be copied before commencing PME force calculations.
 
DeviceBuffer< Float3 > getGpuForceStagingPtr ()
 Return pointer to buffer used for staging PME force on GPU.
 
GpuEventSynchronizer * getForcesReadySynchronizer ()
 Return pointer to event recorded when forces are ready.
 
DeviceBuffer< uint64_t > getGpuForcesSyncObj ()
 Return pointer to NVSHMEM sync object used for staging PME force on GPU.
 

Constructor & Destructor Documentation

gmx::PmePpCommGpu::Impl::Impl(MPI_Comm                     comm,
                              int                          pmeRank,
                              gmx::HostVector<gmx::RVec>*  pmeCpuForceBuffer,
                              const DeviceContext&         deviceContext,
                              const DeviceStream&          deviceStream,
                              bool                         useNvshmem)

Creates PME-PP GPU communication object.

Parameters
  [in]  comm               Communicator used for simulation
  [in]  pmeRank            Rank of PME task
  [in]  pmeCpuForceBuffer  Buffer for PME force in CPU memory
  [in]  deviceContext      GPU context
  [in]  deviceStream       GPU stream
  [in]  useNvshmem         Whether NVSHMEM is used for the GPU communication
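
A minimal construction sketch on a PP rank; comm, pmeRank, deviceContext and deviceStream are assumed to be provided by the caller's setup code, and all variable names are illustrative rather than part of this class:

    // Illustrative construction on a PP rank; comm, pmeRank, deviceContext and
    // deviceStream are assumed to come from the simulation setup code.
    gmx::HostVector<gmx::RVec> pmeCpuForceBuffer; // CPU-side buffer for received PME forces

    gmx::PmePpCommGpu::Impl pmePpComm(comm,               // simulation communicator
                                      pmeRank,            // rank running the PME task
                                      &pmeCpuForceBuffer, // CPU buffer for PME forces
                                      deviceContext,      // GPU context of this PP rank
                                      deviceStream,       // GPU stream used for the transfers
                                      /*useNvshmem=*/ false);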

Member Function Documentation

void gmx::PmePpCommGpu::Impl::receiveForceFromPme(Float3* recvPtr,
                                                   int     recvSize,
                                                   bool    receivePmeForceToGpu)

Pull force buffer directly from GPU memory on the PME rank to either GPU or CPU memory on the PP task, using CUDA memory copy or GPU-aware MPI.

recvPtr should point to GPU or CPU memory if receivePmeForceToGpu is true or false, respectively. If receiving to GPU, this method should be called before the local GPU buffer operations. If receiving to CPU, it should be called before the forces are reduced with the other force contributions on the CPU. It will automatically wait for the remote PME force data to be ready.

Parameters
  [out]  recvPtr               CPU or GPU buffer to receive PME force data
  [in]   recvSize              Number of elements to receive
  [in]   receivePmeForceToGpu  Whether the receive is to GPU, otherwise CPU
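
A hedged usage sketch of both receive paths; d_forceStaging, h_pmeForces and numHomeAtoms are illustrative caller-side names, not part of this class:

    // GPU path: receive PME forces straight into device memory, before the
    // local GPU force-buffer reduction. d_forceStaging is an assumed raw
    // device pointer owned by the caller.
    pmePpComm.receiveForceFromPme(d_forceStaging, numHomeAtoms, /*receivePmeForceToGpu=*/ true);

    // CPU path: receive into host memory, before the forces are reduced with
    // the other force contributions on the CPU. h_pmeForces is an assumed
    // host pointer into the CPU force buffer.
    pmePpComm.receiveForceFromPme(h_pmeForces, numHomeAtoms, /*receivePmeForceToGpu=*/ false);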

void gmx::PmePpCommGpu::Impl::reinit(int size)

Perform steps required when buffer size changes.

Parameters
  [in]  size  Number of elements in buffer
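
For illustration, reinit() would be called whenever the local buffer size changes, for example after domain repartitioning; numHomeAtoms is an assumed caller-side count:

    // Re-size the communication buffers for the new local atom count.
    pmePpComm.reinit(numHomeAtoms);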

void gmx::PmePpCommGpu::Impl::sendCoordinatesToPme(Float3*               sendPtr,
                                                    int                   sendSize,
                                                    GpuEventSynchronizer* coordinatesReadyOnDeviceEvent)

Push coordinates buffer directly to GPU memory on the PME task, from either GPU or CPU memory on the PP task, using CUDA memory copy or GPU-aware MPI. If sending from the GPU, this method should be called after the local GPU coordinate buffer operations. The remote PME task will automatically wait for the data to be copied before commencing PME force calculations.

Parameters
  [in]  sendPtr                        Buffer with coordinate data
  [in]  sendSize                       Number of elements to send
  [in]  coordinatesReadyOnDeviceEvent  Event recorded when coordinates are available on device
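
A sketch of the send side, assuming the caller holds a device coordinate buffer and the event recorded when the coordinates became available on the GPU; d_coordinates, numHomeAtoms and xReadyOnDeviceEvent are illustrative names:

    // Send coordinates from device memory after the local GPU coordinate
    // buffer operations; the remote PME task waits for the copy internally.
    pmePpComm.sendCoordinatesToPme(d_coordinates,        // device (or host) coordinate buffer
                                   numHomeAtoms,         // number of elements to send
                                   xReadyOnDeviceEvent); // event recorded when coordinates are ready on the GPU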
