MPI_Scatterv

Definition

MPI_Scatterv is a version of MPI_Scatter in which the data dispatched from the root process can vary in the number of elements sent to each process, as well as in the location from which these elements are read in the root process buffer. MPI_Scatterv is a collective operation; all processes in the communicator must invoke this routine. Other variants of MPI_Scatterv are MPI_Scatter, MPI_Iscatter and MPI_Iscatterv. Refer to MPI_Iscatterv to see the non-blocking counterpart of MPI_Scatterv.

int MPI_Scatterv(const void* buffer_send,
                 const int counts_send[],
                 const int displacements[],
                 MPI_Datatype datatype_send,
                 void* buffer_recv,
                 int count_recv,
                 MPI_Datatype datatype_recv,
                 int root,
                 MPI_Comm communicator);

Parameters

buffer_send
The buffer containing the data to dispatch from the root process. For non-root processes, the send parameters like this one are ignored.
counts_send
An array that contains the number of elements to send to each process, not the total number of elements in the send buffer. For non-root processes, the send parameters like this one are ignored.
displacements
An array containing the displacement to apply to the message sent to each process. Displacements are expressed in number of elements, not bytes. For non-root processes, the send parameters like this one are ignored. A common way of computing the counts and displacements is sketched after this parameter list.
datatype_send
The type of one send buffer element. For non-root processes, the send parameters like this one are ignored.
buffer_recv
The buffer in which to store the dispatched data.
count_recv
The number of elements in the receive buffer.
datatype_recv
The type of one receive buffer element.
root
The rank of the root process, which will dispatch the data to scatter.
communicator
The communicator in which the scatter takes place.
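The counts_send and displacements arrays are only significant on the root process. A common pattern is to spread a buffer as evenly as possible across all ranks, giving one extra element to the first ranks and deriving each displacement from the running sum of the counts. Below is a minimal sketch of that pattern, assuming MPI has already been initialised; the variable total_count and the way the remainder is distributed are illustrative choices, not part of the MPI API.

int size;
MPI_Comm_size(MPI_COMM_WORLD, &size);

// Total number of elements held in the root process buffer (illustrative value)
int total_count = 10;

// counts[i]: number of elements sent to rank i
// displacements[i]: index in the send buffer at which rank i's chunk starts
int* counts = (int*)malloc(size * sizeof(int));
int* displacements = (int*)malloc(size * sizeof(int));

int offset = 0;
for(int i = 0; i < size; i++)
{
    // Spread the remainder over the first (total_count % size) ranks
    counts[i] = total_count / size + (i < total_count % size ? 1 : 0);
    displacements[i] = offset;
    offset += counts[i];
}

On the root process, these arrays can then be passed directly as the counts_send and displacements arguments of MPI_Scatterv.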

Returned value

The error code returned from the variable scatter operation.

MPI_SUCCESS
The routine successfully completed.
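By default, MPI aborts the application when an error occurs rather than returning an error code to the caller. If the returned value is to be inspected, one option is to first switch the communicator's error handler to MPI_ERRORS_RETURN, as in the sketch below; buffer, counts, displacements, my_value and root_rank are assumed to be set up as in the root branch of the example further down.

// Ask MPI to return error codes instead of aborting (the default behaviour)
MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

int error = MPI_Scatterv(buffer, counts, displacements, MPI_INT,
                         &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD);
if(error != MPI_SUCCESS)
{
    char message[MPI_MAX_ERROR_STRING];
    int length;
    MPI_Error_string(error, message, &length);
    printf("MPI_Scatterv failed: %s\n", message);
}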

Example

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/**
 * @brief Illustrates how to use the variable version of a scatter.
 * @details A process is designated as root and begins with a buffer containing
 * all values, which it prints. It then dispatches these values to all the
 * processes in the same communicator. Other processes just receive the
 * dispatched value(s) meant for them. Finally, every process prints the
 * value(s) it received. This
 * application is designed to cover all cases:
 * - Different send counts
 * - Different displacements
 * This application is meant to be run with 3 processes.
 *
 *       +-----------------------------------------+
 *       |                Process 0                |
 *       +-----+-----+-----+-----+-----+-----+-----+
 *       | 100 |  0  | 101 | 102 |  0  |  0  | 103 |
 *       +-----+-----+-----+-----+-----+-----+-----+
 *         |            |     |                |
 *         |            |     |                |
 *         |            |     |                |
 *         |            |     |                |
 *         |            |     |                |
 *         |            |     |                |
 * +-----------+ +-------------------+ +-----------+
 * | Process 0 | |    Process 1      | | Process 2 |
 * +-+-------+-+ +-+-------+-------+-+ +-+-------+-+
 *   | Value |     | Value | Value |     | Value |
 *   |  100  |     |  101  |  102  |     |  103  |
 *   +-------+     +-------+-------+     +-------+ 
 *                
 **/
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Get number of processes and check that 3 processes are used
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if(size != 3)
    {
        printf("This application is meant to be run with 3 processes.\n");
        MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    }

    // Determine root's rank
    int root_rank = 0;

    // Get my rank
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    switch(my_rank)
    {
        case 0:
        {
            // Define my value
            int my_value;

            // Declare the buffer
            int buffer[7] = {100, 0, 101, 102, 0, 0, 103};

            // Declare the counts
            int counts[3] = {1, 2, 1};

            // Declare the displacements
            int displacements[3] = {0, 2, 6};

            printf("Values in the buffer of root process:");
            for(int i = 0; i < 7; i++)
            {
                printf(" %d", buffer[i]);
            }
            printf("\n");
            MPI_Scatterv(buffer, counts, displacements, MPI_INT, &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD);
            printf("Process %d received value %d.\n", my_rank, my_value);
            break;
        }
        case 1:
        {
            // Declare my values
            int my_values[2];

            MPI_Scatterv(NULL, NULL, NULL, MPI_INT, my_values, 2, MPI_INT, root_rank, MPI_COMM_WORLD);
            printf("Process %d received values %d and %d.\n", my_rank, my_values[0], my_values[1]);
            break;
        }
        case 2:
        {
            // Declare my value
            int my_value;

            MPI_Scatterv(NULL, NULL, NULL, MPI_INT, &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD);
            printf("Process %d received value %d.\n", my_rank, my_value);
            break;
        }
    }

    MPI_Finalize();

    return EXIT_SUCCESS;
}
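
As mentioned in the definition, MPI_Iscatterv is the non-blocking counterpart of MPI_Scatterv. A minimal sketch of how the root branch of the example above could use it is shown below; the only differences are the extra request argument and the MPI_Wait call, and all other variables are assumed to be declared as in the example.

MPI_Request request;

// Same arguments as MPI_Scatterv, plus a request handle to wait on later
MPI_Iscatterv(buffer, counts, displacements, MPI_INT,
              &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD, &request);

// Other work can be overlapped with the communication here

// Block until the scatter is complete before reading my_value
MPI_Wait(&request, MPI_STATUS_IGNORE);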