Gromacs  2022.2
gmx::PmeForceSenderGpu Class Reference

#include <gromacs/ewald/pme_force_sender_gpu.h>

Description

Manages sending forces from PME-only ranks to their PP ranks.

Classes

class  Impl
 Impl class stub. More...
 

Public Member Functions

 PmeForceSenderGpu (GpuEventSynchronizer *pmeForcesReady, MPI_Comm comm, const DeviceContext &deviceContext, gmx::ArrayRef< PpRanks > ppRanks)
 Creates PME GPU Force sender object. More...
 
void setForceSendBuffer (DeviceBuffer< RVec > d_f)
 Sets location of force to be sent to each PP rank. More...
 
void sendFToPpCudaDirect (int ppRank, int numAtoms, bool sendForcesDirectToPpGpu)
 Send force to PP rank (used with Thread-MPI) More...
 
void sendFToPpCudaMpi (DeviceBuffer< RVec > sendbuf, int offset, int numBytes, int ppRank, MPI_Request *request)
 Send force to PP rank (used with Lib-MPI) More...
 

Constructor & Destructor Documentation

gmx::PmeForceSenderGpu::PmeForceSenderGpu ( GpuEventSynchronizer *  pmeForcesReady,
MPI_Comm  comm,
const DeviceContext &  deviceContext,
gmx::ArrayRef< PpRanks >  ppRanks
)

Creates PME GPU Force sender object.

Constructor stub.

Parameters
[in]	pmeForcesReady	Event synchronizer marked when PME forces are ready on the GPU
[in]	comm	Communicator used for simulation
[in]	deviceContext	GPU context
[in]	ppRanks	List of PP ranks

Member Function Documentation

void gmx::PmeForceSenderGpu::sendFToPpCudaDirect ( int  ppRank,
int  numAtoms,
bool  sendForcesDirectToPpGpu 
)

Send force to PP rank (used with Thread-MPI)

Parameters
[in]	ppRank	PP rank to receive data
[in]	numAtoms	Number of atoms to send
[in]	sendForcesDirectToPpGpu	Whether forces are transferred directly to remote GPU memory
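
With thread-MPI, a PME-only rank might drive the per-rank sends along these lines. This is a hedged, non-compilable sketch: `sender` and the `PpRanks` field names are illustrative assumptions, not guaranteed by this reference.

```cpp
// Sketch only: assumes a constructed PmeForceSenderGpu ('sender') and the
// gmx::ArrayRef<PpRanks> passed at construction ('ppRanks'); field names
// on PpRanks are illustrative.
for (const auto& ppRank : ppRanks)
{
    // sendForcesDirectToPpGpu == true requests a direct write into the
    // receiving PP rank's GPU memory rather than staging on the host.
    sender.sendFToPpCudaDirect(ppRank.rankId, ppRank.numAtoms,
                               /*sendForcesDirectToPpGpu=*/true);
}
```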
void gmx::PmeForceSenderGpu::sendFToPpCudaMpi ( DeviceBuffer< RVec >  sendbuf,
int  offset,
int  numBytes,
int  ppRank,
MPI_Request *  request
)

Send force to PP rank (used with Lib-MPI)

Parameters
[in]	sendbuf	Force buffer in GPU memory
[in]	offset	Starting element in buffer
[in]	numBytes	Number of bytes to transfer
[in]	ppRank	PP rank to receive data
[in]	request	MPI request to track asynchronous MPI call status
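
With a library MPI, the send is asynchronous and completion is tracked through the caller-supplied `MPI_Request`. A hedged sketch, assuming a force buffer `d_forces`, a per-rank `offset`/`numAtoms` decomposition, and a single outstanding send (none of these names come from this reference):

```cpp
// Sketch only: one chunked, asynchronous send to one PP rank.
MPI_Request request;
const int   numBytes = numAtoms * sizeof(gmx::RVec); // bytes for this rank's slice
sender.sendFToPpCudaMpi(d_forces, offset, numBytes, ppRank, &request);
// The call returns immediately; wait on the request before reusing the buffer.
MPI_Wait(&request, MPI_STATUS_IGNORE);
```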
void gmx::PmeForceSenderGpu::setForceSendBuffer ( DeviceBuffer< RVec >  d_f)

Sets location of force to be sent to each PP rank.

Initializes PME-PP GPU communication (stub).

Parameters
[in]	d_f	Force buffer in GPU memory
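
Taken together, the documented members suggest the following lifecycle on a PME-only rank. This is a hedged sketch built only from the signatures above; variable names are illustrative and it will not compile outside the GROMACS tree.

```cpp
// Sketch only: construct once, register the force buffer, then send each step.
gmx::PmeForceSenderGpu sender(pmeForcesReady,  // GpuEventSynchronizer*, marked when PME forces are ready
                              comm,            // MPI_Comm used for the simulation
                              deviceContext,   // const DeviceContext&
                              ppRanks);        // gmx::ArrayRef<PpRanks>
sender.setForceSendBuffer(d_forces);           // DeviceBuffer<RVec> in GPU memory
// ... each MD step, after PME force computation, use sendFToPpCudaDirect
//     (thread-MPI) or sendFToPpCudaMpi (lib-MPI) per PP rank ...
```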

The documentation for this class was generated from the following files: