Gromacs  2022.2
gmx::PmeForceSenderGpu::Impl Class Reference

Description

Impl class stub.

Public Member Functions

 Impl (GpuEventSynchronizer *pmeForcesReady, MPI_Comm comm, const DeviceContext &deviceContext, gmx::ArrayRef< PpRanks > ppRanks)
 	Creates the PME GPU force sender object.

void setForceSendBuffer (DeviceBuffer< Float3 > d_f)
 	Sets the location of the forces to be sent to each PP rank.

void sendFToPpCudaDirect (int ppRank, int numAtoms, bool sendForcesDirectToPpGpu)
 	Sends forces to a PP rank (used with Thread-MPI).

void sendFToPpCudaMpi (DeviceBuffer< RVec > sendbuf, int offset, int numBytes, int ppRank, MPI_Request *request)
 	Sends forces to a PP rank (used with Lib-MPI).


Constructor & Destructor Documentation

gmx::PmeForceSenderGpu::Impl::Impl ( GpuEventSynchronizer *  pmeForcesReady,
MPI_Comm  comm,
const DeviceContext &  deviceContext,
gmx::ArrayRef< PpRanks >  ppRanks 
)

Creates the PME GPU force sender object.

Parameters
[in]	pmeForcesReady	Event synchronizer marked when PME forces are ready on the GPU
[in]	comm	Communicator used for simulation
[in]	deviceContext	GPU context
[in]	ppRanks	List of PP ranks

Member Function Documentation

void gmx::PmeForceSenderGpu::Impl::sendFToPpCudaDirect ( int  ppRank,
int  numAtoms,
bool  sendForcesDirectToPpGpu 
)

Sends forces to a PP rank (used with Thread-MPI).

Parameters
[in]	ppRank	PP rank to receive data
[in]	numAtoms	Number of atoms to send
[in]	sendForcesDirectToPpGpu	Whether forces are transferred directly to remote GPU memory

void gmx::PmeForceSenderGpu::Impl::sendFToPpCudaMpi ( DeviceBuffer< RVec >  sendbuf,
int  offset,
int  numBytes,
int  ppRank,
MPI_Request *  request 
)

Sends forces to a PP rank (used with Lib-MPI).

Parameters
[in]	sendbuf	Force buffer in GPU memory
[in]	offset	Starting element in buffer
[in]	numBytes	Number of bytes to transfer
[in]	ppRank	PP rank to receive data
[in]	request	MPI request to track asynchronous MPI call status

void gmx::PmeForceSenderGpu::Impl::setForceSendBuffer ( DeviceBuffer< Float3 >  d_f)

Sets the location of the forces to be sent to each PP rank.

Parameters
[in]	d_f	Force buffer in GPU memory

The documentation for this class was generated from the following files: