


MPI_Iallgather is the non-blocking version of MPI_Allgather; it collects data from all processes in a given communicator and stores the gathered data in the receive buffer of each process. Unlike MPI_Allgather, however, it does not wait for the collection to complete and returns immediately instead. The user must therefore check for completion with MPI_Wait or MPI_Test before the buffers passed can safely be reused. MPI_Iallgather is a collective operation; all processes in the communicator must invoke this routine. Other variants of MPI_Iallgather are MPI_Igather, MPI_Igatherv and MPI_Iallgatherv. Refer to MPI_Allgather to see the blocking counterpart of MPI_Iallgather.



int MPI_Iallgather(void* buffer_send,
                   int count_send,
                   MPI_Datatype datatype_send,
                   void* buffer_recv,
                   int count_recv,
                   MPI_Datatype datatype_recv,
                   MPI_Comm communicator,
                   MPI_Request* request);



buffer_send

The buffer containing the data to send. For intra-communicators, the "in place" option is specified by passing MPI_IN_PLACE as the value of buffer_send on all processes. In that case, count_send and datatype_send are ignored, and the contribution of each process is assumed to be already in the correct place in its receive buffer.


count_send

The number of elements in the send buffer.


datatype_send

The type of one send buffer element.


buffer_recv

The buffer in which to store the gathered data.


count_recv

The number of elements to receive from each process, not the total number of elements to receive from all processes altogether.


datatype_recv

The type of one receive buffer element.


communicator

The communicator in which the allgather takes place.


request

The variable in which to store the handle on the non-blocking operation.

Return value

The error code returned from the non-blocking allgather.




#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/**
 * @brief Illustrates how to use a gather in a non-blocking way.
 * @details This application is meant to be run with 3 MPI processes. Every MPI
 * process begins with a value, then every MPI process collects the entirety of
 * the data gathered and moves on immediately to do something else while the
 * gather progresses. They then wait for the gather to complete before printing
 * the data gathered. It can be visualised as follows:
 *
 * +-----------+  +-----------+  +-----------+
 * | Process 0 |  | Process 1 |  | Process 2 |
 * +-+-------+-+  +-+-------+-+  +-+-------+-+
 *   | Value |      | Value |      | Value |
 *   |   0   |      |  100  |      |  200  |
 *   +-------+      +-------+      +-------+
 *       |________      |      ________|
 *                |     |     |
 *             +-----+-----+-----+
 *             |  0  | 100 | 200 |
 *             +-----+-----+-----+
 *             |   Each process  |
 *             +-----------------+
 **/
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Get number of processes and check that 3 processes are used
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if(size != 3)
    {
        printf("This application is meant to be run with 3 MPI processes.\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // Get my rank
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // Define my value
    int my_value = my_rank * 100;
    printf("Process %d, my value = %d.\n", my_rank, my_value);

    // Issue the gather and move on immediately, before the MPI_Iallgather completes
    int buffer[3];
    MPI_Request request;
    MPI_Iallgather(&my_value, 1, MPI_INT, buffer, 1, MPI_INT, MPI_COMM_WORLD, &request);

    // Do another job while the gather progresses
    // ...

    // Wait for the gather to complete before printing the values received
    MPI_Wait(&request, MPI_STATUS_IGNORE);
    printf("Values collected on process %d: %d, %d, %d.\n", my_rank, buffer[0], buffer[1], buffer[2]);

    MPI_Finalize();

    return EXIT_SUCCESS;
}