The RAM Class

As the name suggests, the RAM class provides an abstraction for the RAM a process can access.

Getting RAM

RuntimeView objects are containers of ResourceSet objects, which in turn are containers of hardware resources, including RAM. Assuming we have a RuntimeView object rv, we can get a RAM object like so:

// Get the RAM for rank 0
const auto& rank_0_ram = rv.at(0).ram();

// Get the RAM local to the current process
const auto& my_ram = rv.my_resource_set().ram();

The first call to ResourceSet::ram() retrieves a handle to the RAM local to rank 0. The second call retrieves the RAM instance local to the current process (which, on rank 0, is the same object as rank_0_ram). Once you have a RAM object you can query basic statistics, such as its total memory, through member functions, for example:

// How much total RAM do rank_0_ram and my_ram have?
const auto rank_0_total = rank_0_ram.total_space();
const auto my_total_ram = my_ram.total_space();
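
A typical reason to query the total space is a quick capacity check before committing to an in-memory algorithm. The sketch below uses a hypothetical bytes_needed value standing in for whatever the caller wants to allocate; only the my_total_ram query comes from the API above:

// Hypothetical: bytes some algorithm would like to allocate in one buffer
const std::size_t bytes_needed = 4ull * 1024 * 1024 * 1024; // 4 GiB

if(bytes_needed <= my_total_ram) {
    // The buffer fits in this process's RAM; proceed in-core
} else {
    // Not enough local RAM; fall back to an out-of-core strategy
}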

All-to-One MPI Operations

MPI operations are typically categorized by how many processes send and how many receive. In “all-to-one” MPI operations, every process computes some data and sends it to a single “root” process. For simplicity, this tutorial has process p “compute” three integers: p, p + 1, and p + 2. To collect all of these computed results into rank 0’s RAM we would do:

// Generate some process-specific data
const auto my_rank = rv.my_resource_set().mpi_rank();
std::vector<std::size_t> local_data{my_rank, my_rank + 1, my_rank + 2};

auto all_data = rank_0_ram.gather(local_data);

The resulting object all_data is a std::optional, which holds a value only if rank_0_ram is local to the current process. Testing whether all_data holds a value therefore tells us if the current process was the root of the gather, so we never need to track the root's rank separately. For example:

if(all_data) {
    // Only the root process (the owner of rank_0_ram) reaches this branch;
    // all_data holds the results gathered from every process.
    // TODO: use logger
} else {
    // Every non-root process takes this branch; it received no data.
    // TODO: use logger
}
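
Putting the pieces together, a complete program looks roughly like the sketch below. Treat it as a sketch rather than verbatim ParallelZone documentation: the umbrella header path and the construction of RuntimeView from argc/argv are assumptions on my part, and std::cout stands in for the logger the TODOs above refer to.

#include <cstddef>
#include <iostream>
#include <vector>
#include <parallelzone/parallelzone.hpp> // assumed umbrella header

int main(int argc, char** argv) {
    // Assumed: RuntimeView initializes the parallel runtime from argc/argv
    parallelzone::runtime::RuntimeView rv(argc, argv);

    // RAM local to rank 0 and the current process's MPI rank
    const auto& rank_0_ram = rv.at(0).ram();
    const auto my_rank     = rv.my_resource_set().mpi_rank();

    // Each process "computes" three integers: p, p + 1, p + 2
    std::vector<std::size_t> local_data{my_rank, my_rank + 1, my_rank + 2};

    // All-to-one: collect every process's data into rank 0's RAM
    auto all_data = rank_0_ram.gather(local_data);

    if(all_data) {
        // Only the root reaches this branch
        std::cout << "Gathered " << all_data->size() << " integers\n";
    }
    return 0;
}

When launched over several ranks (e.g., via mpirun), only the process holding rank_0_ram prints anything; with four ranks it would report 12 integers, assuming gather concatenates each process's vector in rank order.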