The OpenMP API includes an ever-growing number of run-time library routines.
These routines are used for a variety of purposes as shown in the table below:
Routine | Purpose |
---|---|
OMP_SET_NUM_THREADS | Sets the number of threads that will be used in the next parallel region |
OMP_GET_NUM_THREADS | Returns the number of threads that are currently in the team executing the parallel region from which it is called |
OMP_GET_MAX_THREADS | Returns the maximum value that can be returned by a call to the OMP_GET_NUM_THREADS function |
OMP_GET_THREAD_NUM | Returns the thread number, within the team, of the thread making this call |
OMP_GET_THREAD_LIMIT | Returns the maximum number of OpenMP threads available to a program |
OMP_GET_NUM_PROCS | Returns the number of processors that are available to the program |
OMP_IN_PARALLEL | Used to determine if the section of code which is executing is parallel or not |
OMP_SET_DYNAMIC | Enables or disables dynamic adjustment (by the run time system) of the number of threads available for execution of parallel regions |
OMP_GET_DYNAMIC | Used to determine if dynamic thread adjustment is enabled or not |
OMP_SET_NESTED | Used to enable or disable nested parallelism |
OMP_GET_NESTED | Used to determine if nested parallelism is enabled or not |
OMP_SET_SCHEDULE | Sets the loop scheduling policy when "runtime" is used as the schedule kind in the OpenMP directive |
OMP_GET_SCHEDULE | Returns the loop scheduling policy when "runtime" is used as the schedule kind in the OpenMP directive |
OMP_SET_MAX_ACTIVE_LEVELS | Sets the maximum number of nested parallel regions |
OMP_GET_MAX_ACTIVE_LEVELS | Returns the maximum number of nested parallel regions |
OMP_GET_LEVEL | Returns the current level of nested parallel regions |
OMP_GET_ANCESTOR_THREAD_NUM | Returns, for a given nested level of the current thread, the thread number of the ancestor thread |
OMP_GET_TEAM_SIZE | Returns, for a given nested level of the current thread, the size of the thread team |
OMP_GET_ACTIVE_LEVEL | Returns the number of nested, active parallel regions enclosing the task that contains the call |
OMP_IN_FINAL | Returns true if the routine is executed in the final task region; otherwise it returns false |
OMP_INIT_LOCK | Initializes a lock associated with the lock variable |
OMP_DESTROY_LOCK | Disassociates the given lock variable from any locks |
OMP_SET_LOCK | Acquires ownership of a lock |
OMP_UNSET_LOCK | Releases a lock |
OMP_TEST_LOCK | Attempts to set a lock, but does not block if the lock is unavailable |
OMP_INIT_NEST_LOCK | Initializes a nested lock associated with the lock variable |
OMP_DESTROY_NEST_LOCK | Disassociates the given nested lock variable from any locks |
OMP_SET_NEST_LOCK | Acquires ownership of a nested lock |
OMP_UNSET_NEST_LOCK | Releases a nested lock |
OMP_TEST_NEST_LOCK | Attempts to set a nested lock, but does not block if the lock is unavailable |
OMP_GET_WTIME | Provides a portable wall clock timing routine |
OMP_GET_WTICK | Returns a double-precision floating-point value equal to the number of seconds between successive clock ticks |
For C/C++, all of the run-time library routines are actual functions. For Fortran, some are functions and some are subroutines. For example:
Fortran:

    INTEGER FUNCTION OMP_GET_NUM_THREADS()

C/C++:

    #include <omp.h>
    int omp_get_num_threads(void)

Note that for C/C++, you usually need to include the <omp.h> header file.
Fortran routines are not case sensitive, but C/C++ routines are.
For the lock routines, the lock variable must have type omp_lock_t or type omp_nest_lock_t, depending on the function being used.