Gromacs 2020.4
gmx::PmePpCommGpu::Impl Class Reference

#include <gromacs/ewald/pme_pp_comm_gpu_impl.h>

Description

Class with interfaces and data for CUDA version of PME-PP Communication.

Public Member Functions

 Impl (MPI_Comm comm, int pmeRank)
 Creates PME-PP GPU communication object.
 
void reinit (int size)
 Perform steps required when buffer size changes.
 
void receiveForceFromPmeCudaDirect (void *recvPtr, int recvSize, bool receivePmeForceToGpu)
 Pull force buffer directly from GPU memory on PME rank to either GPU or CPU memory on PP task using CUDA memory copy.
 
void sendCoordinatesToPmeCudaDirect (void *sendPtr, int sendSize, bool sendPmeCoordinatesFromGpu, GpuEventSynchronizer *coordinatesReadyOnDeviceEvent)
 Push coordinates buffer directly to GPU memory on PME task, from either GPU or CPU memory on PP task using CUDA memory copy.
 
void * getGpuForceStagingPtr ()
 Return pointer to buffer used for staging PME force on GPU.
 
void * getForcesReadySynchronizer ()
 Return pointer to event recorded when forces are ready.
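
For orientation, below is a minimal sketch of how these members might be called together on a PP rank during one step. All caller-side names (ppCommunicator, d_coordinates, h_forces, coordinatesReadyEvent, numAtoms) are hypothetical placeholders, and in GROMACS itself this Impl class is normally reached through the public gmx::PmePpCommGpu wrapper rather than used directly; the include path simply follows the one shown above.

    // Illustrative sketch only: caller-side names are hypothetical, not GROMACS internals.
    #include "gromacs/ewald/pme_pp_comm_gpu_impl.h"

    void pmePpStepSketch(MPI_Comm              ppCommunicator,
                         int                   pmeRank,
                         int                   numAtoms,
                         void*                 d_coordinates,         // coordinate buffer in GPU memory
                         void*                 h_forces,              // force buffer in CPU memory
                         GpuEventSynchronizer* coordinatesReadyEvent) // as used in the class interface above
    {
        // One communication object per PP rank, re-used across steps.
        gmx::PmePpCommGpu::Impl pmePpComm(ppCommunicator, pmeRank);

        // (Re-)establish internal buffers for the current local atom count.
        pmePpComm.reinit(numAtoms);

        // Push coordinates straight from PP GPU memory to the PME rank.
        pmePpComm.sendCoordinatesToPmeCudaDirect(d_coordinates, numAtoms,
                                                 /*sendPmeCoordinatesFromGpu=*/true,
                                                 coordinatesReadyEvent);

        // ... PME rank computes the long-range forces ...

        // Pull the resulting PME forces into CPU memory before the CPU force reduction.
        pmePpComm.receiveForceFromPmeCudaDirect(h_forces, numAtoms,
                                                /*receivePmeForceToGpu=*/false);
    }

When forces are instead received to the GPU, getGpuForceStagingPtr() and getForcesReadySynchronizer() expose the staging buffer and the readiness event, presumably so that a GPU-side force reduction can consume them.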
 

Constructor & Destructor Documentation

gmx::PmePpCommGpu::Impl::Impl(MPI_Comm comm, int pmeRank)

Creates PME-PP GPU communication object.

Parameters
    [in]  comm     Communicator used for simulation
    [in]  pmeRank  Rank of PME task
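
As a small, hedged illustration of construction (the variables below are hypothetical placeholders; in a real run the communicator and PME rank come from the simulation's MPI and task-assignment setup):

    // Hypothetical setup-time construction on a PP rank.
    MPI_Comm simulationCommunicator = MPI_COMM_WORLD; // placeholder for the simulation communicator
    int      pmeTaskRank            = 1;              // placeholder for the peer PME task's rank
    gmx::PmePpCommGpu::Impl pmePpComm(simulationCommunicator, pmeTaskRank);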

Member Function Documentation

void gmx::PmePpCommGpu::Impl::receiveForceFromPmeCudaDirect(void* recvPtr, int recvSize, bool receivePmeForceToGpu)

Pull force buffer directly from GPU memory on PME rank to either GPU or CPU memory on PP task using CUDA memory copy.

recvPtr should be in GPU or CPU memory if receivePmeForceToGpu is true or false, respectively. If receiving to GPU, this method should be called before the local GPU buffer operations. If receiving to CPU, it should be called before the forces are reduced with the other force contributions on the CPU. It will automatically wait for remote PME force data to be ready.

Parameters
    [out]  recvPtr               GPU or CPU buffer to receive PME force data
    [in]   recvSize              Number of elements to receive
    [in]   receivePmeForceToGpu  Whether receive is to GPU, otherwise CPU
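
As a hedged sketch of the CPU-receive ordering described above (pmePpComm, h_totalForces, and the assumption that an "element" is one per-atom force vector are all illustrative):

    #include <vector>

    void reducePmeForcesOnCpu(gmx::PmePpCommGpu::Impl& pmePpComm,
                              std::vector<float>&      h_totalForces, // other CPU force contributions, 3 * numAtoms
                              int                      numAtoms)
    {
        // Assuming an "element" is one per-atom force vector (x, y, z).
        std::vector<float> h_pmeForces(3 * numAtoms);

        // Waits for the remote PME force data, then copies it into CPU memory.
        pmePpComm.receiveForceFromPmeCudaDirect(h_pmeForces.data(), numAtoms,
                                                /*receivePmeForceToGpu=*/false);

        // Only after the call returns is it safe to fold the PME contribution in.
        for (size_t i = 0; i < h_pmeForces.size(); ++i)
        {
            h_totalForces[i] += h_pmeForces[i];
        }
    }
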
void gmx::PmePpCommGpu::Impl::reinit(int size)

Perform steps required when buffer size changes.

Parameters
    [in]  size  Number of elements in buffer
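
A hedged sketch of when a caller might invoke this (all names are hypothetical): re-run reinit() whenever the local atom count changes, for example after repartitioning.

    // Hypothetical helper: refresh communication buffers only when the count changes.
    void resizeIfNeeded(gmx::PmePpCommGpu::Impl& pmePpComm, int& currentSize, int newSize)
    {
        if (newSize != currentSize)
        {
            pmePpComm.reinit(newSize); // performs the steps required for the new buffer size
            currentSize = newSize;
        }
    }
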
void gmx::PmePpCommGpu::Impl::sendCoordinatesToPmeCudaDirect(void* sendPtr, int sendSize, bool sendPmeCoordinatesFromGpu, GpuEventSynchronizer* coordinatesReadyOnDeviceEvent)

Push coordinates buffer directly to GPU memory on PME task, from either GPU or CPU memory on PP task using CUDA memory copy.

sendPtr should be in GPU or CPU memory if sendPmeCoordinatesFromGpu is true or false, respectively. If sending from GPU, this method should be called after the local GPU coordinate buffer operations. The remote PME task will automatically wait for data to be copied before commencing PME force calculations.

Parameters
    [in]  sendPtr                        Buffer with coordinate data
    [in]  sendSize                       Number of elements to send
    [in]  sendPmeCoordinatesFromGpu      Whether send is from GPU, otherwise CPU
    [in]  coordinatesReadyOnDeviceEvent  Event recorded when coordinates are available on device
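
A hedged sketch of the GPU-send path (d_coordinates and coordinatesReadyEvent are hypothetical; the event is assumed to have been recorded after the local GPU coordinate buffer operations, as required above):

    void sendCoordinatesSketch(gmx::PmePpCommGpu::Impl& pmePpComm,
                               void*                    d_coordinates,         // coordinates in GPU memory
                               int                      numAtoms,
                               GpuEventSynchronizer*    coordinatesReadyEvent) // recorded after GPU coordinate ops
    {
        // The PME rank waits for this copy to complete before starting its force calculation.
        pmePpComm.sendCoordinatesToPmeCudaDirect(d_coordinates, numAtoms,
                                                 /*sendPmeCoordinatesFromGpu=*/true,
                                                 coordinatesReadyEvent);
    }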

The documentation for this class was generated from the following file:
    gromacs/ewald/pme_pp_comm_gpu_impl.h