MPI_Iscatter

Definition

MPI_Iscatter is the non-blocking version of MPI_Scatter; it dispatches data from one process, the root, to all processes in the same communicator. Unlike MPI_Scatter however, MPI_Iscatter returns immediately, before the buffers are guaranteed to have been dispatched or received. The user must therefore explicitly wait (MPI_Wait) or test (MPI_Test) for the completion of MPI_Iscatter before safely reusing the buffers passed. Also, MPI_Iscatter is a collective operation; all processes in the communicator must invoke this routine. Other variants of MPI_Iscatter are MPI_Scatter, MPI_Scatterv and MPI_Iscatterv. Refer to MPI_Scatter to see the blocking counterpart of MPI_Iscatter.

int MPI_Iscatter(const void* buffer_send,
                 int count_send,
                 MPI_Datatype datatype_send,
                 void* buffer_recv,
                 int count_recv,
                 MPI_Datatype datatype_recv,
                 int root,
                 MPI_Comm communicator,
                 MPI_Request* request);

Parameters

buffer_send
The buffer containing the data to dispatch from the root process. For non-root processes, the send parameters like this one are ignored.
count_send
The number of elements to send to each process, not the total number of elements in the send buffer (see the sketch after this parameter list). For non-root processes, the send parameters like this one are ignored.
datatype_send
The type of one send buffer element. For non-root processes, the send parameters like this one are ignored.
buffer_recv
The buffer in which to store the dispatched data.
count_recv
The number of elements in the receive buffer.
datatype_recv
The type of one receive buffer element.
root
The rank of the root process, which will dispatch the data to scatter.
communicator
The communicator in which the scatter takes place.
request
The variable in which to store the handle on the non-blocking operation.
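
As a minimal sketch of how these parameters fit together (not part of the original example; the per-process count of 2, the buffer contents and the use of MPI_COMM_WORLD are illustrative assumptions), the root below sends count_send elements to each process, so its send buffer must hold count_send * size elements while each receive buffer only holds count_send:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

// Hypothetical sketch: each process receives 2 ints, therefore the root's send
// buffer must contain 2 * size elements in total.
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int size;
    int my_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    int count_per_process = 2;
    int* send_buffer = NULL;
    int recv_buffer[2];

    if(my_rank == 0)
    {
        // Only the root allocates and fills the full send buffer.
        send_buffer = (int*)malloc(count_per_process * size * sizeof(int));
        for(int i = 0; i < count_per_process * size; i++)
        {
            send_buffer[i] = i * 10;
        }
    }

    MPI_Request request;
    MPI_Iscatter(send_buffer, count_per_process, MPI_INT,
                 recv_buffer, count_per_process, MPI_INT,
                 0, MPI_COMM_WORLD, &request);

    // Neither buffer may be reused or freed before the scatter completes.
    MPI_Wait(&request, MPI_STATUS_IGNORE);

    printf("Process %d received %d and %d.\n", my_rank, recv_buffer[0], recv_buffer[1]);

    if(my_rank == 0)
    {
        free(send_buffer);
    }

    MPI_Finalize();

    return EXIT_SUCCESS;
}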

Returned value

The error code returned from the non-blocking scatter.

MPI_SUCCESS
The routine successfully completed.
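
As a hedged illustration (not from the original page), the fragment below shows how the returned error code could be checked explicitly, using the same variables as in the root branch of the example that follows. Note that with the default error handler, MPI_ERRORS_ARE_FATAL, an error usually aborts the application rather than returning a code, so this check mainly matters when the error handler has been changed to MPI_ERRORS_RETURN.

// Hypothetical sketch: explicitly check the error code returned by MPI_Iscatter.
int error = MPI_Iscatter(buffer, 1, MPI_INT, &my_value, 1, MPI_INT,
                         root_rank, MPI_COMM_WORLD, &request);
if(error != MPI_SUCCESS)
{
    printf("MPI_Iscatter failed with error code %d.\n", error);
    MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
}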

Example

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/**
 * @brief Illustrates how to use a scatter in a non-blocking way.
 * @details This application is meant to be run with 4 MPI processes. Process 0
 * is designated as root and begins with a buffer containing all values, and
 * prints them. It then dispatches these values to all the processes in the same
 * communicator. Other processes just receive the dispatched value meant for them.
 * Finally, everybody prints the value received.
 *
 *                +-----------------------+
 *                |       Process 0       |
 *                +-----+-----+-----+-----+
 *                |  0  | 100 | 200 | 300 |
 *                +-----+-----+-----+-----+
 *                 /      |       |      \
 *                /       |       |       \
 *               /        |       |        \
 *              /         |       |         \
 *             /          |       |          \
 *            /           |       |           \
 * +-----------+ +-----------+ +-----------+ +-----------+
 * | Process 0 | | Process 1 | | Process 2 | | Process 3 |
 * +-+-------+-+ +-+-------+-+ +-+-------+-+ +-+-------+-+ 
 *   | Value |     | Value |     | Value |     | Value |   
 *   |   0   |     |  100  |     |  200  |     |  300  |   
 *   +-------+     +-------+     +-------+     +-------+   
 *                
 **/
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Get number of processes and check that 4 processes are used
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if(size != 4)
    {
        printf("This application is meant to be run with 4 processes.\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // Determine root's rank
    int root_rank = 0;

    // Get my rank
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // Define my value
    int my_value;

    // Request handle
    MPI_Request request;

    if(my_rank == root_rank)
    {
        int buffer[4] = {0, 100, 200, 300};
        printf("Values to scatter from process %d: %d, %d, %d, %d.\n", my_rank, buffer[0], buffer[1], buffer[2], buffer[3]);

        // Launch the scatter
        MPI_Iscatter(buffer, 1, MPI_INT, &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD, &request);
    }
    else
    {
        // Launch the scatter; the send arguments are ignored on non-root processes
        MPI_Iscatter(NULL, 1, MPI_INT, &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD, &request);
    }

    // Do some other job
    printf("Process %d issued the MPI_Iscatter and has moved on, printing this message.\n", my_rank);

    // Wait for the scatter to complete
    printf("Process %d waits for the MPI_Iscatter to complete.\n", my_rank);
    MPI_Wait(&request, MPI_STATUS_IGNORE);
    printf("The MPI_Wait completed, meaning that the MPI_Iscatter completed; process %d received value = %d.\n", my_rank, my_value);

    MPI_Finalize();

    return EXIT_SUCCESS;
}
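
The definition above notes that completion can also be detected with MPI_Test instead of MPI_Wait. As a minimal sketch (not part of the original example; the busy-polling loop is purely illustrative), the waiting step of the example could be replaced by the following fragment:

// Hypothetical variant of the waiting step: poll for completion with MPI_Test
// and interleave other work until the scatter has completed.
int completed = 0;
while(!completed)
{
    // Do some other useful work here while the scatter progresses...
    MPI_Test(&request, &completed, MPI_STATUS_IGNORE);
}
printf("The MPI_Iscatter completed; process %d received value = %d.\n", my_rank, my_value);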