Rookie HPC

MPI_Iallgather

Definition

MPI_Iallgather is the non-blocking version of MPI_Allgather; it collects data from all processes in a given communicator and stores the gathered data in the receive buffer of each process. Unlike MPI_Allgather, however, it does not wait for the collection to complete and returns immediately. The user must therefore check for completion with MPI_Wait or MPI_Test before the buffers passed can safely be reused. MPI_Iallgather is a collective operation; all processes in the communicator must invoke this routine. Other variants of MPI_Iallgather are MPI_Igather, MPI_Igatherv and MPI_Iallgatherv. Refer to MPI_Allgather to see the blocking counterpart of MPI_Iallgather.
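Since completion can also be checked with MPI_Test, a common alternative to blocking in MPI_Wait is to poll the request between chunks of other work. The following is a minimal sketch of that pattern, not a definitive implementation; it sends one int per rank over MPI_COMM_WORLD and works for any number of processes (the do_some_work call is a hypothetical placeholder):

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int my_value = rank;
    // One receive slot per process; sized at runtime so any process count works
    int* buffer = malloc(size * sizeof(int));

    MPI_Request request;
    MPI_Iallgather(&my_value, 1, MPI_INT, buffer, 1, MPI_INT, MPI_COMM_WORLD, &request);

    // Poll with MPI_Test instead of blocking in MPI_Wait
    int done = 0;
    while(!done)
    {
        // do_some_work(); // hypothetical useful work overlapped with the gather
        MPI_Test(&request, &done, MPI_STATUS_IGNORE);
    }

    printf("Process %d received %d values.\n", rank, size);

    free(buffer);
    MPI_Finalize();
    return EXIT_SUCCESS;
}
```

Note that MPI_Test returns immediately whether or not the operation has completed, so the loop above makes progress on other work each iteration instead of idling inside MPI_Wait.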

int MPI_Iallgather(void* buffer_send,
                   int count_send,
                   MPI_Datatype datatype_send,
                   void* buffer_recv,
                   int count_recv,
                   MPI_Datatype datatype_recv,
                   MPI_Comm communicator,
                   MPI_Request* request);

Parameters

buffer_send
The buffer containing the data to send.
count_send
The number of elements in the send buffer.
datatype_send
The type of one send buffer element.
buffer_recv
The buffer in which to store the gathered data.
count_recv
The number of elements to receive from each process, not the total number of elements to receive from all processes altogether.
datatype_recv
The type of one receive buffer element.
communicator
The communicator in which the allgather takes place.
request
The variable in which to store the handle on the non-blocking operation.

Returned value

The error code returned from the non-blocking allgather.

MPI_SUCCESS
The routine successfully completed.

Example

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/**
 * @brief Illustrates how to use a gather in a non-blocking way.
 * @details This application is meant to be run with 3 MPI processes. Every MPI
 * process begins with a value, then every MPI process collects the entirety of
 * the data gathered and moves on immediately to do something else while the
 * gather progresses. They then wait for the gather to complete before printing
 * the data gathered. It can be visualised as follows:
 *
 * +-----------+  +-----------+  +-----------+
 * | Process 0 |  | Process 1 |  | Process 2 |
 * +-+-------+-+  +-+-------+-+  +-+-------+-+
 *   | Value |      | Value |      | Value |
 *   |   0   |      |  100  |      |  200  |
 *   +-------+      +-------+      +-------+
 *       |________      |      ________|
 *                |     |     |
 *             +-----+-----+-----+
 *             |  0  | 100 | 200 |
 *             +-----+-----+-----+
 *             |   Each process  |
 *             +-----------------+
 **/
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Get number of processes and check that 3 processes are used
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if(size != 3)
    {
        printf("This application is meant to be run with 3 MPI processes.\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // Get my rank
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // Define my value
    int my_value = my_rank * 100;
    printf("Process %d, my value = %d.\n", my_rank, my_value);

    // Issue the gather and move on immediately after, before the MPI_Iallgather completes
    int buffer[3];
    MPI_Request request;
    MPI_Iallgather(&my_value, 1, MPI_INT, buffer, 1, MPI_INT, MPI_COMM_WORLD, &request);

    // Do another job while the gather progresses
    // ...

    // Wait for the gather to complete before printing the values received
    MPI_Wait(&request, MPI_STATUS_IGNORE);
    printf("Values collected on process %d: %d, %d, %d.\n", my_rank, buffer[0], buffer[1], buffer[2]);

    MPI_Finalize();

    return EXIT_SUCCESS;
}
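The example above can be built and launched with an MPI toolchain. The commands below are a typical invocation, assuming an MPI implementation such as Open MPI or MPICH is installed and the source is saved as iallgather_example.c (the file name is an assumption):

```shell
# Compile with the MPI compiler wrapper
mpicc -o iallgather_example iallgather_example.c

# The example aborts unless exactly 3 processes are used
mpirun -n 3 ./iallgather_example
```

Since each process prints independently, the order of the output lines may vary from run to run.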