Gromacs  2020.4
gpu_utils.h File Reference
#include <cstdio>
#include <string>
#include <vector>
#include "gromacs/gpu_utils/gpu_macros.h"
#include "gromacs/utility/basedefinitions.h"

Description

Declare functions for detection and initialization for GPU devices.

Author
Szilard Pall pall.szilard@gmail.com
Mark Abraham mark.j.abraham@gmail.com

Enumerations

enum  GpuApiCallBehavior { Sync, Async }
 Enum which is only used to describe transfer calls at the moment.
 
enum  GpuTaskCompletion { Wait, Check }
 Types of actions associated with waiting for or checking the completion of GPU tasks.
 

Functions

bool canPerformGpuDetection ()
 Return whether GPUs can be detected. More...
 
bool isGpuDetectionFunctional (std::string *errorMessage)
 Return whether GPU detection is functioning correctly. More...
 
void findGpus (gmx_gpu_info_t *gpu_info)
 Find all GPUs in the system. More...
 
std::vector< int > getCompatibleGpus (const gmx_gpu_info_t &gpu_info)
 Return a container of the detected GPUs that are compatible. More...
 
const char * getGpuCompatibilityDescription (const gmx_gpu_info_t &gpu_info, int index)
 Return a string describing how compatible the GPU with given index is. More...
 
void free_gpu_info (const gmx_gpu_info_t *gpu_info)
 Frees the gpu_dev and dev_use array fields of gpu_info. More...
 
void init_gpu (const gmx_device_info_t *deviceInfo)
 Initializes the GPU described by deviceInfo. More...
 
void free_gpu (const gmx_device_info_t *deviceInfo)
 Frees up the CUDA GPU used by the active context at the time of calling. More...
 
gmx_device_info_t * getDeviceInfo (const gmx_gpu_info_t &gpu_info, int deviceId)
 Return a pointer to the device info for deviceId. More...
 
int get_current_cuda_gpu_device_id ()
 Returns the device ID of the CUDA GPU currently in use. More...
 
void get_gpu_device_info_string (char *s, const gmx_gpu_info_t &gpu_info, int index)
 Formats and returns a device information string for a given GPU. More...
 
size_t sizeof_gpu_dev_info ()
 Returns the size of the gpu_dev_info struct. More...
 
int gpu_info_get_stat (const gmx_gpu_info_t &info, int index)
 Get status of device with specified index.
 
bool buildSupportsNonbondedOnGpu (std::string *error)
 Check if GROMACS has been built with GPU support. More...
 
void startGpuProfiler ()
 Starts the GPU profiler if mdrun is being profiled. More...
 
void resetGpuProfiler ()
 Resets the GPU profiler if mdrun is being profiled. More...
 
void stopGpuProfiler ()
 Stops the CUDA profiler if mdrun is being profiled. More...
 
bool isHostMemoryPinned (const void *h_ptr)
 Tells whether the host buffer was pinned for non-blocking transfers. Only implemented for CUDA.
 
void setupGpuDevicePeerAccess (const std::vector< int > &gpuIdsToUse, const gmx::MDLogger &mdlog)
 Enable peer access between GPUs where supported. More...
 

Function Documentation

bool buildSupportsNonbondedOnGpu ( std::string *  error)

Check if GROMACS has been built with GPU support.

Parameters
[in]  error  Pointer to error string or nullptr.
Todo:
Move this to NB module once it exists.
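As an illustration only (not part of the header), a caller might gate GPU code paths on this check; the include path and the assumption that the error string is filled with a reason on failure are both guesses based on the parameter description above.

    // Minimal sketch, assuming the header is reachable under this path; adjust as needed.
    #include "gromacs/gpu_utils/gpu_utils.h"
    #include <cstdio>
    #include <string>

    static bool reportGpuBuildSupport()
    {
        std::string error;
        // Assumed: the error string describes why support is absent when false is returned.
        if (!buildSupportsNonbondedOnGpu(&error))
        {
            std::fprintf(stderr, "No GPU nonbonded support in this build: %s\n", error.c_str());
            return false;
        }
        return true;
    }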
bool canPerformGpuDetection ( )

Return whether GPUs can be detected.

Returns true when this is a build of GROMACS configured to support GPU usage, GPU detection is not disabled by an environment variable, and a valid device driver, ICD, and/or runtime was detected. Does not throw.

void findGpus ( gmx_gpu_info_t *  gpu_info )

Find all GPUs in the system.

Will detect every GPU supported by the device driver in use. Must only be called if canPerformGpuDetection() has returned true. This routine also checks the compatibility of each device and fills the gpu_info->gpu_dev array with the required information on each device: ID, device properties, status.

Note that this function leaves the GPU runtime API error state clean; at the moment this is implemented only in the CUDA flavor. TODO: check whether errors propagate in OpenCL as they do in CUDA and whether there is a mechanism to "clear" them.

Parameters
[in]  gpu_info  pointer to structure holding GPU information.
Exceptions
InternalError  if a GPU API returns an unexpected failure (the call to canDetectGpus() should always prevent this occurring)
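A minimal detection sketch tying canPerformGpuDetection() and findGpus() together. It assumes that a default-constructed gmx_gpu_info_t is valid input and that free_gpu_info() is the matching cleanup call; both are assumptions based on the descriptions on this page, not verified against the sources.

    // Sketch of the detection flow; error reporting via isGpuDetectionFunctional() omitted.
    gmx_gpu_info_t gpuInfo = {};      // assumed to be safely value-initializable
    if (canPerformGpuDetection())     // must be true before findGpus() is called
    {
        findGpus(&gpuInfo);           // fills gpuInfo.gpu_dev with ID, properties, status
        // ... query / use the detected devices here ...
        free_gpu_info(&gpuInfo);      // releases the gpu_dev and dev_use arrays
    }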
void free_gpu ( const gmx_device_info_t *  deviceInfo )

Frees up the CUDA GPU used by the active context at the time of calling.

If deviceInfo is nullptr, then it is understood that no device was selected so no context is active to be freed. Otherwise, the context is explicitly destroyed and therefore all data uploaded to the GPU is lost. This must only be called when none of this data is required anymore, because subsequent attempts to free memory associated with the context will otherwise fail.

Calls gmx_warning upon errors.

Parameters
[in]  deviceInfo  device info of the GPU to clean up for
void free_gpu_info ( const gmx_gpu_info_t *  gpu_info )

Frees the gpu_dev and dev_use array fields of gpu_info.

Parameters
[in]  gpu_info  pointer to structure holding GPU information
int get_current_cuda_gpu_device_id ( )

Returns the device ID of the CUDA GPU currently in use.

The GPU used is the one that is active at the time of the call in the active context.

Returns
device ID of the GPU in use at the time of the call
void get_gpu_device_info_string ( char *  s,
const gmx_gpu_info_t &  gpu_info,
int  index 
)

Formats and returns a device information string for a given GPU.

Given an index directly into the array of available GPUs (gpu_dev) returns a formatted info string for the respective GPU which includes ID, name, compute capability, and detection status.

Parameters
[out]  s         pointer to output string (has to be allocated externally)
[in]   gpu_info  Information about detected GPUs
[in]   index     an index directly into the array of available GPUs
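A short illustration of formatting a device string, with gpuInfo assumed to come from an earlier findGpus() call. The required size of the externally allocated buffer is not stated on this page, so the length below is only a guess.

    // Sketch: format an info line for the device at a given index into gpu_dev.
    char infoString[256];    // buffer size is a guess, not a documented limit
    int  deviceIndex = 0;    // example index into the array of available GPUs
    get_gpu_device_info_string(infoString, gpuInfo, deviceIndex);
    std::fprintf(stderr, "GPU #%d: %s\n", deviceIndex, infoString);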
std::vector<int> getCompatibleGpus ( const gmx_gpu_info_t &  gpu_info )

Return a container of the detected GPUs that are compatible.

This function filters the result of the detection for compatible GPUs, based on the previously run compatibility tests.

Parameters
[in]  gpu_info  Information detected about GPUs, including compatibility.
Returns
vector of IDs of GPUs already recorded as compatible
gmx_device_info_t* getDeviceInfo ( const gmx_gpu_info_t &  gpu_info,
int  deviceId 
)

Return a pointer to the device info for deviceId.

Parameters
[in]  gpu_info  GPU info of all detected devices in the system.
[in]  deviceId  ID for the GPU device requested.
Returns
Pointer to the device info for deviceId.
const char* getGpuCompatibilityDescription ( const gmx_gpu_info_t &  gpu_info,
int  index 
)

Return a string describing how compatible the GPU with given index is.

Parameters
[in]  gpu_info  Information about detected GPUs
[in]  index     index of GPU to ask about
Returns
A null-terminated C string describing the compatibility status, useful for error messages.
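The two calls above combine naturally into a report of the compatible devices. The sketch below assumes the IDs returned by getCompatibleGpus() can be used as indices for getGpuCompatibilityDescription(); this is an assumption, not something stated explicitly on this page.

    // Sketch: list compatible devices together with their compatibility status strings.
    const std::vector<int> compatibleGpus = getCompatibleGpus(gpuInfo);
    for (int id : compatibleGpus)
    {
        std::fprintf(stderr, "GPU %d: %s\n", id, getGpuCompatibilityDescription(gpuInfo, id));
    }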
void init_gpu ( const gmx_device_info_t *  deviceInfo )

Initializes the GPU described by deviceInfo.

TODO Doxygen complains about these - probably a Doxygen bug, since the patterns here are the same as elsewhere in this header.

Parameters
[in]  deviceInfo  device info of the GPU to initialize

Issues a fatal error for any critical errors that occur during initialization.
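A hedged sketch of a per-rank device lifecycle built from getDeviceInfo(), init_gpu() and free_gpu(); how deviceId is chosen is outside the scope of this header, so 0 is used purely as a placeholder.

    // Sketch: initialize one device, do GPU work, then tear the context down.
    gmx_device_info_t* deviceInfo = getDeviceInfo(gpuInfo, /*deviceId=*/0);
    init_gpu(deviceInfo);     // issues a fatal error on critical initialization failures
    // ... offload work to the device ...
    free_gpu(deviceInfo);     // destroys the active context; data on the GPU is lost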

bool isGpuDetectionFunctional ( std::string *  errorMessage)

Return whether GPU detection is functioning correctly.

Returns true when this is a build of GROMACS configured to support GPU usage, and a valid device driver, ICD, and/or runtime was detected.

This function is not intended to be called from build configurations that do not support GPUs, and there will be no descriptive message in that case.

Parameters
[out]  errorMessage  When returning false on a build configured with GPU support and non-nullptr was passed, the string contains a descriptive message about why GPUs cannot be detected.

Does not throw.
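A small example of consuming the error message; a hypothetical fragment, not taken from the GROMACS sources:

    // Sketch: probe detection and report the reason when it is not functional.
    std::string errorMessage;
    if (!isGpuDetectionFunctional(&errorMessage))
    {
        std::fprintf(stderr, "GPU detection not functional: %s\n", errorMessage.c_str());
    }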

void resetGpuProfiler ( )

Resets the GPU profiler if mdrun is being profiled.

When a profiler run is in progress (based on the presence of the NVPROF_ID env. var.), the profiler data is reset in order to eliminate the data collected from the preceding part of the run.

This function should typically be called at the mdrun counter reset time.

Note that this is implemented only for the CUDA API.

void setupGpuDevicePeerAccess ( const std::vector< int > &  gpuIdsToUse,
const gmx::MDLogger &  mdlog
)

Enable peer access between GPUs where supported.

Parameters
[in]  gpuIdsToUse  List of GPU IDs in use
[in]  mdlog        Logger object
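As a sketch only: peer access is worth requesting when more than one GPU is in use. Obtaining a gmx::MDLogger is outside the scope of this header, so one is simply taken as a parameter in this hypothetical helper.

    // Hypothetical helper; the multi-GPU guard is a design choice, not documented behavior.
    void enablePeerAccessIfUseful(const std::vector<int>& gpuIdsToUse, const gmx::MDLogger& mdlog)
    {
        if (gpuIdsToUse.size() > 1)
        {
            setupGpuDevicePeerAccess(gpuIdsToUse, mdlog);
        }
    }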
size_t sizeof_gpu_dev_info ( )

Returns the size of the gpu_dev_info struct.

The size of gpu_dev_info can be used for allocation and communication.

Returns
size in bytes of gpu_dev_info
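For illustration, the size can be used to reserve an opaque buffer, e.g. for communicating the record between ranks; how the struct is actually serialized is not covered on this page and is omitted below.

    // Sketch: reserve space for one gpu_dev_info record without knowing its layout.
    std::vector<char> deviceInfoBuffer(sizeof_gpu_dev_info());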
void startGpuProfiler ( )

Starts the GPU profiler if mdrun is being profiled.

When a profiler run is in progress (based on the presence of the NVPROF_ID env. var.), the profiler is started to begin collecting data during the rest of the run (or until stopGpuProfiler is called).

Note that this is implemented only for the CUDA API.

void stopGpuProfiler ( )

Stops the CUDA profiler if mdrun is being profiled.

This function can be called at cleanup, when it is desired to stop subsequent API calls from being traced/profiled, e.g. before uninitialization.

Note that this is implemented only for the CUDA API.
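Putting the three profiler hooks together, a profiled mdrun-style run might look like the sketch below; the placement of the calls is an illustration of the descriptions above, not prescribed API usage.

    // Sketch: profile only the production part of a run (CUDA builds, under nvprof).
    startGpuProfiler();      // begins collecting data when NVPROF_ID is set
    // ... setup / equilibration phase ...
    resetGpuProfiler();      // at counter-reset time, drop data from the phase above
    // ... production phase of interest ...
    stopGpuProfiler();       // stop tracing/profiling before uninitialization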