.\" -*- nroff -*-
.\" Copyright 2013 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.TH MPI_Neighbor_alltoallv 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Neighbor_alltoallv, MPI_Ineighbor_alltoallv\fP \- All processes send different amounts of data to, and receive different amounts of data from, all neighbors
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Neighbor_alltoallv(const void *\fIsendbuf\fP, const int \fIsendcounts\fP[],
	const int \fIsdispls\fP[], MPI_Datatype \fIsendtype\fP,
	void *\fIrecvbuf\fP, const int \fIrecvcounts\fP[],
	const int \fIrdispls\fP[], MPI_Datatype \fIrecvtype\fP, MPI_Comm \fIcomm\fP)

int MPI_Ineighbor_alltoallv(const void *\fIsendbuf\fP, const int \fIsendcounts\fP[],
	const int \fIsdispls\fP[], MPI_Datatype \fIsendtype\fP,
	void *\fIrecvbuf\fP, const int \fIrecvcounts\fP[],
	const int \fIrdispls\fP[], MPI_Datatype \fIrecvtype\fP, MPI_Comm \fIcomm\fP,
	MPI_Request *\fIrequest\fP)
.fi
.SH Fortran Syntax
.nf
INCLUDE 'mpif.h'
MPI_NEIGHBOR_ALLTOALLV(\fISENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
	RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR\fP)
	<type>	\fISENDBUF(*), RECVBUF(*)\fP
	INTEGER	\fISENDCOUNTS(*), SDISPLS(*), SENDTYPE\fP
	INTEGER	\fIRECVCOUNTS(*), RDISPLS(*), RECVTYPE\fP
	INTEGER	\fICOMM, IERROR\fP

MPI_INEIGHBOR_ALLTOALLV(\fISENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,
	RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, REQUEST, IERROR\fP)
	<type>	\fISENDBUF(*), RECVBUF(*)\fP
	INTEGER	\fISENDCOUNTS(*), SDISPLS(*), SENDTYPE\fP
	INTEGER	\fIRECVCOUNTS(*), RDISPLS(*), RECVTYPE\fP
	INTEGER	\fICOMM, REQUEST, IERROR\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1.2i
sendbuf
Starting address of send buffer.
.TP 1.2i
sendcounts
Integer array, where entry i specifies the number of elements to send
to neighbor i.
.TP 1.2i
sdispls
Integer array, where entry i specifies the displacement (offset from
\fIsendbuf\fP, in units of \fIsendtype\fP) from which to send data to
neighbor i.
.TP 1.2i
sendtype
Datatype of send buffer elements.
.TP 1.2i
recvcounts
Integer array, where entry j specifies the number of elements to
receive from neighbor j.
.TP 1.2i
rdispls
Integer array, where entry j specifies the displacement (offset from
\fIrecvbuf\fP, in units of \fIrecvtype\fP) to which data from neighbor j
should be written.
.TP 1.2i
recvtype
Datatype of receive buffer elements.
.TP 1.2i
comm
Communicator over which data is to be exchanged.
.SH OUTPUT PARAMETERS
.ft R
.TP 1.2i
recvbuf
Address of receive buffer.
.TP 1.2i
request
Request (handle, non-blocking only).
.ft R
.TP 1.2i
IERROR
Fortran only: Error status.
.SH DESCRIPTION
.ft R
MPI_Neighbor_alltoallv is a generalized collective operation in which all
processes send data to and receive data from all neighbors. It
adds flexibility to MPI_Neighbor_alltoall by allowing the user to specify data
to send and receive vector-style (via a displacement and element
count). The operation of this routine can be thought of as follows,
where each process performs 2n (n being the number of neighbors in
the topology of communicator \fIcomm\fP) independent point-to-point communications.
The neighbors and buffer layout are determined by the topology of \fIcomm\fP.
.sp
.nf
MPI_Cart_get(\fIcomm\fP, maxdims, dims, periods, coords);
for (dim = 0, i = 0 ; dim < maxdims ; ++dim) {
    MPI_Cart_shift(\fIcomm\fP, dim, 1, &r0, &r1);
    MPI_Isend(\fIsendbuf\fP + \fIsdispls\fP[i] * extent(\fIsendtype\fP),
              \fIsendcounts\fP[i], \fIsendtype\fP, r0, ..., \fIcomm\fP, ...);
    MPI_Irecv(\fIrecvbuf\fP + \fIrdispls\fP[i] * extent(\fIrecvtype\fP),
              \fIrecvcounts\fP[i], \fIrecvtype\fP, r0, ..., \fIcomm\fP, ...);
    ++i;
    MPI_Isend(\fIsendbuf\fP + \fIsdispls\fP[i] * extent(\fIsendtype\fP),
              \fIsendcounts\fP[i], \fIsendtype\fP, r1, ..., \fIcomm\fP, ...);
    MPI_Irecv(\fIrecvbuf\fP + \fIrdispls\fP[i] * extent(\fIrecvtype\fP),
              \fIrecvcounts\fP[i], \fIrecvtype\fP, r1, ..., \fIcomm\fP, ...);
    ++i;
}
.fi
.sp
Each process sends the k-th block of its local \fIsendbuf\fP to its
k-th neighbor, and receives into the l-th block of its local
\fIrecvbuf\fP the data sent by its l-th neighbor.
.sp
When a pair of processes exchanges data, each may pass different
element count and datatype arguments so long as the sender specifies
the same amount of data to send (in bytes) as the receiver expects
to receive.
.sp
Note that process i may send a different amount of data to process j
than it receives from process j. Also, a process may send entirely
different amounts of data to different processes in the communicator.
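.sp
As a non-normative illustration, the following sketch exchanges
differently sized blocks around a one-dimensional periodic ring, in
which every process has exactly two neighbors; the counts and
displacements are arbitrary example values.
.sp
.nf
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* Build a 1-D periodic ring: 2*ndims = 2 neighbor blocks,
       ordered negative direction first, then positive. */
    int size, dims[1] = {0}, periods[1] = {1};
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, 1, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    int rank;
    MPI_Comm_rank(cart, &rank);

    /* Send one int left and two right; by symmetry, receive two
       from the left neighbor and one from the right neighbor. */
    int sendbuf[3] = {rank, rank, rank}, recvbuf[3];
    int sendcounts[2] = {1, 2}, sdispls[2] = {0, 1};
    int recvcounts[2] = {2, 1}, rdispls[2] = {0, 2};

    MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                           recvbuf, recvcounts, rdispls, MPI_INT,
                           cart);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
.fi
.sp
The nonblocking variant, MPI_Ineighbor_alltoallv, takes the same
arguments plus a request handle; the operation must be completed
(e.g., with MPI_Wait) before the buffers are reused.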
.sp
.SH NEIGHBOR ORDERING
For a distributed graph topology, created with MPI_Dist_graph_create, the sequence of neighbors
in the send and receive buffers at each process is defined as the sequence returned by MPI_Dist_graph_neighbors
for destinations and sources, respectively. For a general graph topology, created with MPI_Graph_create, the order of
neighbors in the send and receive buffers is defined as the sequence of neighbors as returned by MPI_Graph_neighbors.
Note that distributed graph topologies should generally be preferred over general graph topologies.
For a Cartesian topology, created with MPI_Cart_create, the sequence of neighbors in the send and receive
buffers at each process is defined by the order of the dimensions, first the neighbor in the negative direction
and then in the positive direction with displacement 1. The numbers of sources and destinations in the
communication routines are 2*ndims with ndims defined in MPI_Cart_create. If a neighbor does not exist, i.e., at
the border of a Cartesian topology in the case of a non-periodic virtual grid dimension (i.e.,
periods[...]==false), then this neighbor is defined to be MPI_PROC_NULL.
If a neighbor in any of the functions is MPI_PROC_NULL, then the neighborhood collective communication behaves
like a point-to-point communication with MPI_PROC_NULL in this direction. That is, the buffer is still part of
the sequence of neighbors but it is neither communicated nor updated.
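.sp
The following non-normative sketch prints a process's neighbors in
the buffer order described above, assuming a Cartesian communicator
\fIcart\fP whose number of dimensions \fIndims\fP is known from
MPI_Cart_create (or queried with MPI_Cartdim_get):
.sp
.nf
int lo, hi;
for (int dim = 0; dim < ndims; ++dim) {
    /* With displacement 1, the "source" is the neighbor in the
       negative direction and the "destination" the neighbor in
       the positive direction; either may be MPI_PROC_NULL. */
    MPI_Cart_shift(cart, dim, 1, &lo, &hi);
    printf("block %d: rank %d, block %d: rank %d\n",
           2 * dim, lo, 2 * dim + 1, hi);
}
.fi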
.sp
.SH NOTES
.ft R
The MPI_IN_PLACE option for \fIsendbuf\fP is not meaningful for this operation.
.sp
The specification of counts and displacements should not cause
any location to be written more than once.
.sp
All arguments on all processes are significant. The \fIcomm\fP argument,
in particular, must describe the same communicator on all processes.
.sp
The offsets of \fIsdispls\fP and \fIrdispls\fP are measured in units
of \fIsendtype\fP and \fIrecvtype\fP, respectively. Compare this to
MPI_Neighbor_alltoallw, where these offsets are measured in bytes.
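.sp
A common idiom (shown here as a non-normative sketch, with
\fInneighbors\fP and the count arrays assumed to be already set up)
is to pack the variable-sized neighbor blocks contiguously by
computing the displacements as exclusive prefix sums of the counts:
.sp
.nf
sdispls[0] = rdispls[0] = 0;
for (int i = 1; i < nneighbors; ++i) {
    sdispls[i] = sdispls[i - 1] + sendcounts[i - 1];
    rdispls[i] = rdispls[i - 1] + recvcounts[i - 1];
}
.fi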
.SH ERRORS
.ft R
Almost all MPI routines return an error value; C routines as
the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
.SH SEE ALSO
.ft R
.nf
MPI_Neighbor_alltoall
MPI_Neighbor_alltoallw
MPI_Cart_create
MPI_Graph_create
MPI_Dist_graph_create