
Convert MPI_Gather.3in - MPI_Get_version.3in to md

Signed-off-by: Fangcong Yin (fyin2@nd.edu)
This commit is contained in:
Fangcong-Yin 2020-10-30 16:33:05 -04:00
parent f813656d24
Commit d85bf3ae1a
21 changed files with 1433 additions and 1431 deletions


@ -1,205 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2013 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Gather 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Gather, MPI_Igather\fP \- Gathers values from a group of processes.
.SH SYNOPSIS
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Gather(const void \fI*sendbuf\fP, int\fI sendcount\fP, MPI_Datatype\fI sendtype\fP,
void\fI *recvbuf\fP, int\fI recvcount\fP, MPI_Datatype\fI recvtype\fP, int \fIroot\fP,
MPI_Comm\fI comm\fP)
int MPI_Igather(const void \fI*sendbuf\fP, int\fI sendcount\fP, MPI_Datatype\fI sendtype\fP,
void\fI *recvbuf\fP, int\fI recvcount\fP, MPI_Datatype\fI recvtype\fP, int \fIroot\fP,
MPI_Comm\fI comm\fP, MPI_Request \fI*request\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GATHER(\fISENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
RECVTYPE, ROOT, COMM, IERROR\fP)
<type> \fISENDBUF(*), RECVBUF(*)\fP
INTEGER \fISENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT\fP
INTEGER \fICOMM, IERROR\fP
MPI_IGATHER(\fISENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
RECVTYPE, ROOT, COMM, REQUEST, IERROR\fP)
<type> \fISENDBUF(*), RECVBUF(*)\fP
INTEGER \fISENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT\fP
INTEGER \fICOMM, REQUEST, IERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Gather(\fIsendbuf\fP, \fIsendcount\fP, \fIsendtype\fP, \fIrecvbuf\fP, \fIrecvcount\fP, \fIrecvtype\fP,
\fIroot\fP, \fIcomm\fP, \fIierror\fP)
TYPE(*), DIMENSION(..), INTENT(IN) :: \fIsendbuf\fP
TYPE(*), DIMENSION(..) :: \fIrecvbuf\fP
INTEGER, INTENT(IN) :: \fIsendcount\fP, \fIrecvcount\fP, \fIroot\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIsendtype\fP, \fIrecvtype\fP
TYPE(MPI_Comm), INTENT(IN) :: \fIcomm\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
MPI_Igather(\fIsendbuf\fP, \fIsendcount\fP, \fIsendtype\fP, \fIrecvbuf\fP, \fIrecvcount\fP, \fIrecvtype\fP,
\fIroot\fP, \fIcomm\fP, \fIrequest\fP, \fIierror\fP)
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: \fIsendbuf\fP
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: \fIrecvbuf\fP
INTEGER, INTENT(IN) :: \fIsendcount\fP, \fIrecvcount\fP, \fIroot\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIsendtype\fP, \fIrecvtype\fP
TYPE(MPI_Comm), INTENT(IN) :: \fIcomm\fP
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1i
sendbuf
Starting address of send buffer (choice).
.TP 1i
sendcount
Number of elements in send buffer (integer).
.TP 1i
sendtype
Datatype of send buffer elements (handle).
.TP 1i
recvcount
Number of elements for any single receive (integer, significant only at
root).
.TP 1i
recvtype
Datatype of recvbuffer elements (handle, significant only at root).
.TP 1i
root
Rank of receiving process (integer).
.TP 1i
comm
Communicator (handle).
.SH OUTPUT PARAMETERS
.TP 1i
recvbuf
Address of receive buffer (choice, significant only at root).
.TP 1i
request
Request (handle, non-blocking only).
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
Each process (root process included) sends the contents of its send buffer to the root process. The root process receives the messages and stores them in rank order. The outcome is as if each of the n processes in the group (including the root process) had executed a call to
.sp
.nf
MPI_Send(sendbuf, sendcount, sendtype, root, \&...)
.fi
.sp
and the root had executed n calls to
.sp
.nf
MPI_Recv(recvbuf + i * recvcount * extent(recvtype), \
recvcount, recvtype, i, \&...)
.fi
.sp
where extent(recvtype) is the type extent obtained from a call to MPI_Type_extent().
.sp
An alternative description is that the n messages sent by the processes in the group are concatenated in rank order, and the resulting message is received by the root as if by a call to MPI_RECV(recvbuf, recvcount * n, recvtype, . . . ).
.sp
The receive buffer is ignored for all nonroot processes.
.sp
General, derived datatypes are allowed for both sendtype and recvtype. The
type signature of sendcount, sendtype on process i must be equal to the type signature of recvcount, recvtype at the root. This implies that the amount of data sent must be equal to the amount of data received, pairwise between each process and the root. Distinct type maps between sender and receiver are still allowed.
.sp
All arguments to the function are significant on process root, while on other processes, only arguments sendbuf, sendcount, sendtype, root, comm are significant. The arguments root and comm must have identical values on all processes.
.sp
The specification of counts and types should not cause any location on the root to be written more than once. Such a call is erroneous.
.sp
Note that the recvcount argument at the root indicates the number of items it receives from each process, not the total number of items it receives.
.sp
\fBExample 1:\fP Gather 100 ints from every process in group to root.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf;
\&...
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*100*sizeof(int));
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
.fi
.sp
.br
\fBExample 2:\fP Previous example modified -- only the root allocates memory for the receive buffer.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100];
int root, myrank, *rbuf;
\&...
MPI_Comm_rank( comm, &myrank);
if ( myrank == root) {
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*100*sizeof(int));
}
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
.fi
.sp
\fBExample 3:\fP Do the same as the previous example, but use a derived
datatype. Note that the type cannot be the entire set of gsize * 100 ints since type matching is defined pairwise between the root and each process in the gather.
.nf
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf;
MPI_Datatype rtype;
\&...
MPI_Comm_size( comm, &gsize);
MPI_Type_contiguous( 100, MPI_INT, &rtype );
MPI_Type_commit( &rtype );
rbuf = (int *)malloc(gsize*100*sizeof(int));
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 1, rtype, root, comm);
.fi
.SH USE OF IN-PLACE OPTION
When the communicator is an intracommunicator, you can perform a gather operation in-place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the root process \fIsendbuf\fR. In this case, \fIsendcount\fR and \fIsendtype\fR are ignored, and the contribution of the root process to the gathered vector is assumed to already be in the correct place in the receive buffer.
.sp
Note that MPI_IN_PLACE is a special kind of value; it has the same restrictions on its use as MPI_BOTTOM.
.sp
Because the in-place option converts the receive buffer into a send-and-receive buffer, a Fortran binding that includes INTENT must mark these as INOUT, not OUT.
.sp
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.sp
When the communicator is an inter-communicator, the root process in the first group gathers data from all the processes in the second group. The first group defines the root process. That process uses MPI_ROOT as the value of its \fIroot\fR argument. The remaining processes use MPI_PROC_NULL as the value of their \fIroot\fR argument. All processes in the second group use the rank of that root process in the first group as the value of their \fIroot\fR argument. The send buffer argument of the processes in the first group must be consistent with the receive buffer argument of the root process in the second group.
.sp
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
.sp
See the MPI man page for a full list of MPI error codes.
.SH SEE ALSO
.ft R
.sp
.nf
MPI_Gatherv
MPI_Scatter
MPI_Scatterv

ompi/mpi/man/man3/MPI_Gather.md (new file)

@ -0,0 +1,205 @@
# Name
`MPI_Gather`, `MPI_Igather` - Gathers values from a group of processes.
# Synopsis
## C Syntax
```c
#include <mpi.h>
int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype, int root,
MPI_Comm comm)
int MPI_Igather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype, int root,
MPI_Comm comm, MPI_Request *request)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
RECVTYPE, ROOT, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT
INTEGER COMM, IERROR
MPI_IGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT,
RECVTYPE, ROOT, COMM, REQUEST, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT
INTEGER COMM, REQUEST, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype,
root, comm, ierror)
TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: sendcount, recvcount, root
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Igather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype,
root, comm, request, ierror)
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
INTEGER, INTENT(IN) :: sendcount, recvcount, root
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(MPI_Comm), INTENT(IN) :: comm
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Input Parameters
* `sendbuf` : Starting address of send buffer (choice).
* `sendcount` : Number of elements in send buffer (integer).
* `sendtype` : Datatype of send buffer elements (handle).
* `recvcount` : Number of elements for any single receive (integer, significant only
at root).
* `recvtype` : Datatype of receive buffer elements (handle, significant only at root).
* `root` : Rank of receiving process (integer).
* `comm` : Communicator (handle).
# Output Parameters
* `recvbuf` : Address of receive buffer (choice, significant only at root).
* `request` : Request (handle, non-blocking only).
* `IERROR` : Fortran only: Error status (integer).
# Description
Each process (root process included) sends the contents of its send
buffer to the root process. The root process receives the messages and
stores them in rank order. The outcome is as if each of the n processes
in the group (including the root process) had executed a call to
```c
MPI_Send(sendbuf, sendcount, sendtype, root, ...)
```
and the root had executed n calls to
```c
MPI_Recv(recvbuf + i * recvcount * extent(recvtype), recvcount, recvtype, i, ...)
```
where extent(recvtype) is the type extent obtained from a call to
`MPI_Type_extent()`.

An alternative description is that the n messages sent by the processes
in the group are concatenated in rank order, and the resulting message
is received by the root as if by a call to `MPI_RECV(recvbuf, recvcount * n, recvtype, ...)`.

The receive buffer is ignored for all nonroot processes.

General, derived datatypes are allowed for both `sendtype` and `recvtype`.
The type signature of `sendcount`, `sendtype` on process i must be equal to
the type signature of `recvcount`, `recvtype` at the root. This implies that
the amount of data sent must be equal to the amount of data received,
pairwise between each process and the root. Distinct type maps between
sender and receiver are still allowed.

All arguments to the function are significant on process `root`, while on
other processes, only the arguments `sendbuf`, `sendcount`, `sendtype`, `root`, and `comm`
are significant. The arguments `root` and `comm` must have identical values
on all processes.

The specification of counts and types should not cause any location on
the root to be written more than once; such a call is erroneous.

Note that the `recvcount` argument at the root indicates the number of
items it receives from each process, not the total number of items it
receives.

**Example 1:** Gather 100 ints from every process in the group to the root.
```c
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf;
//...
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*100*sizeof(int));
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
```
**Example 2:** Previous example modified: only the root allocates
memory for the receive buffer.
```c
MPI_Comm comm;
int gsize,sendarray[100];
int root, myrank, *rbuf;
//...
MPI_Comm_rank( comm, &myrank);
if ( myrank == root) {
MPI_Comm_size( comm, &gsize);
rbuf = (int *)malloc(gsize*100*sizeof(int));
}
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);
```
**Example 3:** Do the same as the previous example, but use a derived
datatype. Note that the type cannot be the entire set of gsize * 100
ints since type matching is defined pairwise between the root and each
process in the gather.
```c
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf;
MPI_Datatype rtype;
//...
MPI_Comm_size( comm, &gsize);
MPI_Type_contiguous( 100, MPI_INT, &rtype );
MPI_Type_commit( &rtype );
rbuf = (int *)malloc(gsize*100*sizeof(int));
MPI_Gather( sendarray, 100, MPI_INT, rbuf, 1, rtype, root, comm);
```
# Use Of In-Place Option
When the communicator is an intracommunicator, you can perform a gather operation in-place (the output buffer is used as the input buffer). Use the variable `MPI_IN_PLACE` as the value of the root process `sendbuf`. In this case, `sendcount` and `sendtype` are ignored, and the contribution of the root process to the gathered vector is assumed to already be in the correct place in the receive buffer.

Note that `MPI_IN_PLACE` is a special kind of value; it has the same restrictions on its use as `MPI_BOTTOM`.

Because the in-place option converts the receive buffer into a send-and-receive buffer, a Fortran binding that includes `INTENT` must mark these as `INOUT`, not `OUT`.
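As an illustrative sketch (not part of the original man page; variable names follow the examples above), the in-place form at the root might look like:

```c
/* Sketch: in-place gather on an intracommunicator.
 * Only the root passes MPI_IN_PLACE; sendcount/sendtype are then
 * ignored, so 0 and MPI_DATATYPE_NULL are acceptable placeholders. */
int myrank, gsize, root = 0;
int sendarray[100], *rbuf = NULL;
MPI_Comm_rank(comm, &myrank);
MPI_Comm_size(comm, &gsize);
if (myrank == root) {
    rbuf = (int *)malloc(gsize * 100 * sizeof(int));
    /* The root's own 100 ints are assumed to already sit at
     * rbuf + root * 100 before the call. */
    MPI_Gather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
               rbuf, 100, MPI_INT, root, comm);
} else {
    /* recvbuf arguments are ignored on non-root processes */
    MPI_Gather(sendarray, 100, MPI_INT,
               NULL, 0, MPI_INT, root, comm);
}
```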
# When Communicator Is An Inter-Communicator
When the communicator is an inter-communicator, the root process in the first group gathers data from all the processes in the second group. The first group defines the root process. That process uses MPI_ROOT as the value of its `root` argument. The remaining processes use `MPI_PROC_NULL` as the value of their `root` argument. All processes in the second group use the rank of that root process in the first group as the value of their `root` argument. The send buffer argument of the processes in the first group must be consistent with the receive buffer argument of the root process in the second group.
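A hedged sketch of that calling pattern (assuming an inter-communicator `intercomm` created elsewhere, e.g. with `MPI_Intercomm_create`; the group-membership tests are placeholders):

```c
/* Sketch: gather across an inter-communicator. The root lives in
 * group A; data flows from every process of group B to that root. */
if (i_am_the_designated_root) {       /* root rank in group A */
    MPI_Gather(NULL, 0, MPI_INT,
               rbuf, 100, MPI_INT, MPI_ROOT, intercomm);
} else if (i_am_in_group_A) {         /* other group-A ranks */
    MPI_Gather(NULL, 0, MPI_INT,
               NULL, 0, MPI_INT, MPI_PROC_NULL, intercomm);
} else {                              /* group B: the senders */
    MPI_Gather(sendarray, 100, MPI_INT,
               NULL, 0, MPI_INT, root_rank_in_group_A, intercomm);
}
```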
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.

Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.

See the MPI man page for a full list of MPI error codes.
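For instance (a sketch, not part of the original page), a caller can opt into return-code handling like this:

```c
/* Sketch: ask for error codes instead of the default abort, then
 * inspect the result of a collective call. */
MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
int rc = MPI_Gather(sendarray, 100, MPI_INT,
                    rbuf, 100, MPI_INT, root, comm);
if (rc != MPI_SUCCESS) {
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(rc, msg, &len);
    fprintf(stderr, "MPI_Gather failed: %s\n", msg);
}
```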
# See Also
[`MPI_Gatherv`(3)](MPI_Gatherv.html)
[`MPI_Scatter`(3)](MPI_Scatter.html)
[`MPI_Scatterv`(3)](MPI_Scatterv.html)


@ -1,367 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2013 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Gatherv 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Gatherv, MPI_Igatherv\fP \- Gathers varying amounts of data from all processes to the root process
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Gatherv(const void *\fIsendbuf\fP, int\fI sendcount\fP, MPI_Datatype\fI sendtype\fP,
void\fI *recvbuf\fP, const int\fI recvcounts[]\fP, const int\fI displs[]\fP, MPI_Datatype\fI recvtype\fP,
int \fIroot\fP, MPI_Comm\fI comm\fP)
int MPI_Igatherv(const void *\fIsendbuf\fP, int\fI sendcount\fP, MPI_Datatype\fI sendtype\fP,
void\fI *recvbuf\fP, const int\fI recvcounts[]\fP, const int\fI displs[]\fP, MPI_Datatype\fI recvtype\fP,
int \fIroot\fP, MPI_Comm\fI comm\fP, MPI_Request \fI*request\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GATHERV(\fISENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS,
DISPLS, RECVTYPE, ROOT, COMM, IERROR\fP)
<type> \fISENDBUF(*), RECVBUF(*)\fP
INTEGER \fISENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*)\fP
INTEGER \fIRECVTYPE, ROOT, COMM, IERROR\fP
MPI_IGATHERV(\fISENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS,
DISPLS, RECVTYPE, ROOT, COMM, REQUEST, IERROR\fP)
<type> \fISENDBUF(*), RECVBUF(*)\fP
INTEGER \fISENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*)\fP
INTEGER \fIRECVTYPE, ROOT, COMM, REQUEST, IERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Gatherv(\fIsendbuf\fP, \fIsendcount\fP, \fIsendtype\fP, \fIrecvbuf\fP, \fIrecvcounts\fP, \fIdispls\fP,
\fIrecvtype\fP, \fIroot\fP, \fIcomm\fP, \fIierror\fP)
TYPE(*), DIMENSION(..), INTENT(IN) :: \fIsendbuf\fP
TYPE(*), DIMENSION(..) :: \fIrecvbuf\fP
INTEGER, INTENT(IN) :: \fIsendcount\fP, \fIrecvcounts(*)\fP, \fIdispls(*)\fP, \fIroot\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIsendtype\fP, \fIrecvtype\fP
TYPE(MPI_Comm), INTENT(IN) :: \fIcomm\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
MPI_Igatherv(\fIsendbuf\fP, \fIsendcount\fP, \fIsendtype\fP, \fIrecvbuf\fP, \fIrecvcounts\fP, \fIdispls\fP,
\fIrecvtype\fP, \fIroot\fP, \fIcomm\fP, \fIrequest\fP, \fIierror\fP)
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: \fIsendbuf\fP
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: \fIrecvbuf\fP
INTEGER, INTENT(IN) :: \fIsendcount\fP, \fIroot\fP
INTEGER, INTENT(IN), ASYNCHRONOUS :: \fIrecvcounts(*)\fP, \fIdispls(*)\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIsendtype\fP, \fIrecvtype\fP
TYPE(MPI_Comm), INTENT(IN) :: \fIcomm\fP
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1i
sendbuf
Starting address of send buffer (choice).
.TP 1i
sendcount
Number of elements in send buffer (integer).
.TP 1i
sendtype
Datatype of send buffer elements (handle).
.TP 1i
recvcounts
Integer array (of length group size) containing the number of elements that
are received from each process (significant only at root).
.TP 1i
displs
Integer array (of length group size). Entry i specifies the displacement
relative to recvbuf at which to place the incoming data from process i (significant only at root).
.TP 1i
recvtype
Datatype of recv buffer elements (significant only at root) (handle).
.TP 1i
root
Rank of receiving process (integer).
.TP 1i
comm
Communicator (handle).
.SH OUTPUT PARAMETERS
.ft R
.TP 1i
recvbuf
Address of receive buffer (choice, significant only at root).
.TP 1i
request
Request (handle, non-blocking only).
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
MPI_Gatherv extends the functionality of MPI_Gather by allowing a varying count of data from each process, since recvcounts is now an array. It also allows more flexibility as to where the data is placed on the root, by providing the new argument, displs.
.sp
The outcome is as if each process, including the root process, sends a message to the root,
.sp
.nf
MPI_Send(sendbuf, sendcount, sendtype, root, \&...)
.fi
.sp
and the root executes n receives,
.sp
.nf
MPI_Recv(recvbuf + displs[i] * extent(recvtype), \\
recvcounts[i], recvtype, i, \&...)
.fi
.sp
Messages are placed in the receive buffer of the root process in rank order, that is, the data sent from process j is placed in the jth portion of the receive buffer recvbuf on process root. The jth portion of recvbuf begins at offset displs[j] elements (in terms of recvtype) into recvbuf.
.sp
The receive buffer is ignored for all nonroot processes.
.sp
The type signature implied by sendcount, sendtype on process i must be equal to the type signature implied by recvcounts[i], recvtype at the root. This implies that the amount of data sent must be equal to the amount of data received, pairwise between each process and the root. Distinct type maps between sender and receiver are still allowed, as illustrated in Example 2, below.
.sp
All arguments to the function are significant on process root, while on other processes, only arguments sendbuf, sendcount, sendtype, root, comm are significant. The arguments root and comm must have identical values on all processes.
.sp
The specification of counts, types, and displacements should not cause any location on the root to be written more than once. Such a call is erroneous.
.sp
\fBExample 1:\fP Now have each process send 100 ints to root, but place
each set (of 100) stride ints apart at receiving end. Use MPI_Gatherv and
the displs argument to achieve this effect. Assume stride >= 100.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf, stride;
int *displs,i,*rcounts;
\&...
MPI_Comm_size(comm, &gsize);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100;
}
MPI_Gatherv(sendarray, 100, MPI_INT, rbuf, rcounts,
displs, MPI_INT, root, comm);
.fi
.sp
Note that the program is erroneous if stride < 100.
.sp
\fBExample 2:\fP Same as Example 1 on the receiving side, but send the 100
ints from the 0th column of a 100 * 150 int array, in C.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100][150];
int root, *rbuf, stride;
MPI_Datatype stype;
int *displs,i,*rcounts;
\&...
MPI_Comm_size(comm, &gsize);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100;
}
/* Create datatype for 1 column of array
*/
MPI_Type_vector(100, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
MPI_Gatherv(sendarray, 1, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
.fi
.sp
\fBExample 3:\fP Process i sends (100-i) ints from the ith column of a 100
x 150 int array, in C. It is received into a buffer with stride, as in the
previous two examples.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank;
MPI_Datatype stype;
int *displs,i,*rcounts;
\&...
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank( comm, &myrank );
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100-i; /* note change from previous example */
}
/* Create datatype for the column we are sending
*/
MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
/* sptr is the address of start of "myrank" column
*/
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 1, stype, rbuf, rcounts, displs, MPI_INT,
root, comm);
.fi
.sp
Note that a different amount of data is received from each process.
.sp
\fBExample 4:\fP Same as Example 3, but done in a different way at the sending end. We create a datatype that causes the correct striding at the sending end so that we read a column of a C array.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank, disp[2], blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts;
\&...
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank( comm, &myrank );
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100-i;
}
/* Create datatype for one int, with extent of entire row
*/
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_struct( 2, blocklen, disp, type, &stype );
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 100-myrank, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
.fi
.sp
\fBExample 5:\fP Same as Example 3 at sending side, but at receiving side
we make the stride between received blocks vary from block to block.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, *stride, myrank, bufsize;
MPI_Datatype stype;
int *displs,i,*rcounts,offset;
\&...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
stride = (int *)malloc(gsize*sizeof(int));
\&...
/* stride[i] for i = 0 to gsize-1 is set somehow
*/
/* set up displs and rcounts vectors first
*/
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
offset = 0;
for (i=0; i<gsize; ++i) {
displs[i] = offset;
offset += stride[i];
rcounts[i] = 100-i;
}
/* the required buffer size for rbuf is now easily obtained
*/
bufsize = displs[gsize-1]+rcounts[gsize-1];
rbuf = (int *)malloc(bufsize*sizeof(int));
/* Create datatype for the column we are sending
*/
MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 1, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
.fi
.sp
\fBExample 6:\fP Process i sends num ints from the ith column of a 100 x
150 int array, in C. The complicating factor is that the various values of num are not known to root, so a separate gather must first be run to find these out. The data is placed contiguously at the receiving end.
.sp
.nf
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank, disp[2], blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts,num;
\&...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
/* First, gather nums to root
*/
rcounts = (int *)malloc(gsize*sizeof(int));
MPI_Gather( &num, 1, MPI_INT, rcounts, 1, MPI_INT, root, comm);
/* root now has correct rcounts, using these we set
* displs[] so that data is placed contiguously (or
* concatenated) at receive end
*/
displs = (int *)malloc(gsize*sizeof(int));
displs[0] = 0;
for (i=1; i<gsize; ++i) {
displs[i] = displs[i-1]+rcounts[i-1];
}
/* And, create receive buffer
*/
rbuf = (int *)malloc(gsize*(displs[gsize-1]+rcounts[gsize-1])
*sizeof(int));
/* Create datatype for one int, with extent of entire row
*/
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_struct( 2, blocklen, disp, type, &stype );
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, num, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
.fi
.SH USE OF IN-PLACE OPTION
The in-place option operates in the same way as it does for MPI_Gather. When the communicator is an intracommunicator, you can perform a gather operation in-place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the root process \fIsendbuf\fR. In this case, \fIsendcount\fR and \fIsendtype\fR are ignored, and the contribution of the root process to the gathered vector is assumed to already be in the correct place in the receive buffer.
.sp
Note that MPI_IN_PLACE is a special kind of value; it has the same restrictions on its use as MPI_BOTTOM.
.sp
Because the in-place option converts the receive buffer into a send-and-receive buffer, a Fortran binding that includes INTENT must mark these as INOUT, not OUT.
.sp
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.sp
When the communicator is an inter-communicator, the root process in the first group gathers data from all the processes in the second group. The first group defines the root process. That process uses MPI_ROOT as the value of its \fIroot\fR argument. The remaining processes use MPI_PROC_NULL as the value of their \fIroot\fR argument. All processes in the second group use the rank of that root process in the first group as the value of their \fIroot\fR argument. The send buffer argument of the processes in the first group must be consistent with the receive buffer argument of the root process in the second group.
.sp
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
.SH SEE ALSO
.ft R
.sp
.nf
MPI_Gather
MPI_Scatter
MPI_Scatterv

ompi/mpi/man/man3/MPI_Gatherv.md (new file)

@ -0,0 +1,379 @@
# Name
`MPI_Gatherv`, `MPI_Igatherv` - Gathers varying amounts of data from all
processes to the root process
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Gatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype,
int root, MPI_Comm comm)
int MPI_Igatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype,
int root, MPI_Comm comm, MPI_Request *request)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS,
DISPLS, RECVTYPE, ROOT, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*)
INTEGER RECVTYPE, ROOT, COMM, IERROR
MPI_IGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS,
DISPLS, RECVTYPE, ROOT, COMM, REQUEST, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*)
INTEGER RECVTYPE, ROOT, COMM, REQUEST, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs,
recvtype, root, comm, ierror)
TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: sendcount, recvcounts(*), displs(*), root
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Igatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs,
recvtype, root, comm, request, ierror)
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
INTEGER, INTENT(IN) :: sendcount, root
INTEGER, INTENT(IN), ASYNCHRONOUS :: recvcounts(*), displs(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(MPI_Comm), INTENT(IN) :: comm
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Input Parameters
* `sendbuf` : Starting address of send buffer (choice).
* `sendcount` : Number of elements in send buffer (integer).
* `sendtype` : Datatype of send buffer elements (handle).
* `recvcounts` : Integer array (of length group size) containing the number of
elements that are received from each process (significant only at
root).
* `displs` : Integer array (of length group size). Entry i specifies the
displacement relative to recvbuf at which to place the incoming data
from process i (significant only at root).
* `recvtype` : Datatype of recv buffer elements (significant only at root)
(handle).
* `root` : Rank of receiving process (integer).
* `comm` : Communicator (handle).
# Output Parameters
* `recvbuf` : Address of receive buffer (choice, significant only at root).
* `request` : Request (handle, non-blocking only).
* `IERROR` : Fortran only: Error status (integer).
# Description
`MPI_Gatherv` extends the functionality of `MPI_Gather` by allowing a
varying count of data from each process, since `recvcounts` is now an
array. It also allows more flexibility as to where the data is placed on
the root, by providing the new argument, `displs`.
The outcome is as if each process, including the root process, sends a
message to the root,
```c
MPI_Send(sendbuf, sendcount, sendtype, root, ...)
```
and the root executes n receives,
```c
MPI_Recv(recvbuf + displs[i] * extent(recvtype),
         recvcounts[i], recvtype, i, ...)
```
Messages are placed in the receive buffer of the root process in rank
order, that is, the data sent from process j is placed in the jth
portion of the receive buffer `recvbuf` on process root. The jth portion
of `recvbuf` begins at offset displs[j] elements (in terms of `recvtype`)
into `recvbuf`.
The receive buffer is ignored for all nonroot processes.
The type signature implied by `sendcount`, `sendtype` on process i must be
equal to the type signature implied by `recvcounts[i]`, `recvtype` at the
root. This implies that the amount of data sent must be equal to the
amount of data received, pairwise between each process and the root.
Distinct type maps between sender and receiver are still allowed, as
illustrated in Example 2, below.
All arguments to the function are significant on process `root`, while on
other processes, only arguments `sendbuf`, `sendcount`, `sendtype`, `root`, `comm`
are significant. The arguments `root` and `comm` must have identical values
on all processes.
The specification of counts, types, and displacements should not cause
any location on the `root` to be written more than once. Such a call is
erroneous.
Example 1: Now have each process send 100 ints to `root`, but place
each set (of 100) stride ints apart at receiving end. Use `MPI_Gatherv`
and the `displs` argument to achieve this effect. Assume stride >= 100.
```c
MPI_Comm comm;
int gsize,sendarray[100];
int root, *rbuf, stride;
int *displs,i,*rcounts;
// ...
MPI_Comm_size(comm, &gsize);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100;
}
MPI_Gatherv(sendarray, 100, MPI_INT, rbuf, rcounts,
displs, MPI_INT, root, comm);
```
Note that the program is erroneous if stride < 100.
Example 2: Same as Example 1 on the receiving side, but send the 100
ints from the 0th column of a 100 x 150 int array, in C.
```c
MPI_Comm comm;
int gsize,sendarray[100][150];
int root, *rbuf, stride;
MPI_Datatype stype;
int *displs,i,*rcounts;
// ...
MPI_Comm_size(comm, &gsize);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100;
}
/* Create datatype for 1 column of array
*/
MPI_Type_vector(100, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
MPI_Gatherv(sendarray, 1, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
```
Example 3: Process i sends (100-i) ints from the ith column of a 100
x 150 int array, in C. It is received into a buffer with stride, as in
the previous two examples.
```c
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank;
MPI_Datatype stype;
int *displs,i,*rcounts;
// ...
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank( comm, &myrank );
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100-i; /* note change from previous example */
}
/* Create datatype for the column we are sending
*/
MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
/* sptr is the address of start of "myrank" column
*/
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 1, stype, rbuf, rcounts, displs, MPI_INT,
root, comm);
```
Note that a different amount of data is received from each process.
Example 4: Same as Example 3, but done in a different way at the
sending end. We create a datatype that causes the correct striding at
the sending end so that we read a column of a C array.
```c
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank, disp[2], blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts;
// ...
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank( comm, &myrank );
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100-i;
}
/* Create datatype for one int, with extent of entire row
*/
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_struct( 2, blocklen, disp, type, &stype );
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 100-myrank, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
```
Example 5: Same as Example 3 at sending side, but at receiving side
we make the stride between received blocks vary from block to block.
```c
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, *stride, myrank, bufsize;
MPI_Datatype stype;
int *displs,i,*rcounts,offset;
// ...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
stride = (int *)malloc(gsize*sizeof(int));
// ...
/* stride[i] for i = 0 to gsize-1 is set somehow
*/
/*set up displs and rcounts vectors first
*/
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
offset = 0;
for (i=0; i<gsize; ++i) {
displs[i] = offset;
offset += stride[i];
rcounts[i] = 100-i;
}
/* the required buffer size for rbuf is now easily obtained
*/
bufsize = displs[gsize-1]+rcounts[gsize-1];
rbuf = (int *)malloc(bufsize*sizeof(int));
/* Create datatype for the column we are sending
*/
MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 1, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
```
Example 6: Process i sends num ints from the ith column of a 100 x
150 int array, in C. The complicating factor is that the various values
of num are not known to `root`, so a separate gather must first be run to
find these out. The data is placed contiguously at the receiving end.
```c
MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root, *rbuf, stride, myrank, disp[2], blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts,num;
// ...
MPI_Comm_size( comm, &gsize);
MPI_Comm_rank( comm, &myrank );
/*First, gather nums to root
*/
rcounts = (int *)malloc(gsize*sizeof(int));
MPI_Gather( &num, 1, MPI_INT, rcounts, 1, MPI_INT, root, comm);
/* root now has correct rcounts, using these we set
* displs[] so that data is placed contiguously (or
* concatenated) at receive end
*/
displs = (int *)malloc(gsize*sizeof(int));
displs[0] = 0;
for (i=1; i<gsize; ++i) {
displs[i] = displs[i-1]+rcounts[i-1];
}
/* And, create receive buffer
*/
rbuf = (int *)malloc(gsize*(displs[gsize-1]+rcounts[gsize-1])
*sizeof(int));
/* Create datatype for one int, with extent of entire row
*/
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_struct( 2, blocklen, disp, type, &stype );
MPI_Type_commit( &stype );
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, num, stype, rbuf, rcounts,
displs, MPI_INT, root, comm);
```
# Use Of In-Place Option
The in-place option operates in the same way as it does for `MPI_Gather`.
When the communicator is an intracommunicator, you can perform a gather
operation in-place (the output buffer is used as the input buffer). Use
the variable `MPI_IN_PLACE` as the value of the root process `sendbuf`. In
this case, `sendcount` and `sendtype` are ignored, and the contribution
of the `root` process to the gathered vector is assumed to already be in
the correct place in the receive buffer.
Note that `MPI_IN_PLACE` is a special kind of value; it has the same
restrictions on its use as `MPI_BOTTOM`.
Because the in-place option converts the receive buffer into a
send-and-receive buffer, a Fortran binding that includes INTENT must
mark these as INOUT, not OUT.
# When Communicator Is An Inter-Communicator
When the communicator is an inter-communicator, the `root` process in the
first group gathers data from all the processes in the second group. The
first group defines the root process. That process uses `MPI_ROOT` as the
value of its `root` argument. The remaining processes use `MPI_PROC_NULL`
as the value of their `root` argument. All processes in the second group
use the rank of that root process in the first group as the value of
their `root` argument. The send buffer argument of the processes in the
first group must be consistent with the receive buffer argument of the
`root` process in the second group.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
# See Also
[`MPI_Gather`(3)](MPI_Gather.html)
[`MPI_Scatter`(3)](MPI_Scatter.html)
[`MPI_Scatterv`(3)](MPI_Scatterv.html)

@@ -1,136 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright 2014 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Get 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get\fP, \fBMPI_Rget\fP \- Copies data from the target memory to the origin.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
MPI_Get(void *\fIorigin_addr\fP, int \fIorigin_count\fP, MPI_Datatype
\fIorigin_datatype\fP, int \fItarget_rank\fP, MPI_Aint \fItarget_disp\fP,
int \fItarget_count\fP, MPI_Datatype \fItarget_datatype\fP, MPI_Win \fIwin\fP)
MPI_Rget(void *\fIorigin_addr\fP, int \fIorigin_count\fP, MPI_Datatype
\fIorigin_datatype\fP, int \fItarget_rank\fP, MPI_Aint \fItarget_disp\fP,
int \fItarget_count\fP, MPI_Datatype \fItarget_datatype\fP, MPI_Win \fIwin\fP,
MPI_Request *\fIrequest\fP)
.fi
.SH Fortran Syntax (see FORTRAN 77 NOTES)
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET(\fIORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR\fP)
<type> \fIORIGIN_ADDR\fP(*)
INTEGER(KIND=MPI_ADDRESS_KIND) \fITARGET_DISP\fP
INTEGER \fIORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR\fP
MPI_RGET(\fIORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, REQUEST, IERROR\fP)
<type> \fIORIGIN_ADDR\fP(*)
INTEGER(KIND=MPI_ADDRESS_KIND) \fITARGET_DISP\fP
INTEGER \fIORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_COUNT, TARGET_DATATYPE, WIN, REQUEST, IERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get(\fIorigin_addr\fP, \fIorigin_count\fP, \fIorigin_datatype\fP, \fItarget_rank\fP,
\fItarget_disp\fP, \fItarget_count\fP, \fItarget_datatype\fP, \fIwin\fP, \fIierror\fP)
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: \fIorigin_addr\fP
INTEGER, INTENT(IN) :: \fIorigin_count\fP, \fItarget_rank\fP, \fItarget_count\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIorigin_datatype\fP, \fItarget_datatype\fP
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: \fItarget_disp\fP
TYPE(MPI_Win), INTENT(IN) :: \fIwin\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
MPI_Rget(\fIorigin_addr\fP, \fIorigin_count\fP, \fIorigin_datatype\fP, \fItarget_rank\fP,
\fItarget_disp\fP, \fItarget_count\fP, \fItarget_datatype\fP, \fIwin\fP, \fIrequest,\fP
\fIierror\fP)
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: \fIorigin_addr\fP
INTEGER, INTENT(IN) :: \fIorigin_count\fP, \fItarget_rank\fP, \fItarget_count\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIorigin_datatype\fP, \fItarget_datatype\fP
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: \fItarget_disp\fP
TYPE(MPI_Win), INTENT(IN) :: \fIwin\fP
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1i
origin_addr
Initial address of origin buffer (choice).
.TP 1i
origin_count
Number of entries in origin buffer (nonnegative integer).
.TP 1i
origin_datatype
Data type of each entry in origin buffer (handle).
.TP 1i
target_rank
Rank of target (nonnegative integer).
.TP 1i
target_disp
Displacement from window start to the beginning of the target buffer (nonnegative integer).
.TP 1i
target_count
Number of entries in target buffer (nonnegative integer).
.TP 1i
target datatype
datatype of each entry in target buffer (handle)
.TP 1i
win
window object used for communication (handle)
.SH OUTPUT PARAMETER
.ft R
.TP li
request
MPI_Rget: RMA request
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
\fBMPI_Get\fP copies data from the target memory to the origin, similar to MPI_Put, except that the direction of data transfer is reversed. The \fIorigin_datatype\fP may not specify overlapping entries in the origin buffer. The target buffer must be contained within the target window, and the copied data must fit, without truncation, in the origin buffer. Only processes within the same node can access the target window.
.sp
\fBMPI_Rget\fP is similar to \fBMPI_Get\fP, except that it allocates a communication request object and associates it with the request handle (the argument \fIrequest\fP) that can be used to wait or test for completion. The completion of an MPI_Rget operation indicates that the data is available in the origin buffer. If \fIorigin_addr\fP points to memory attached to a window, then the data becomes available in the private copy of this window.
.SH FORTRAN 77 NOTES
.ft R
The MPI standard prescribes portable Fortran syntax for
the \fITARGET_DISP\fP argument only for Fortran 90. FORTRAN 77
users may use the non-portable syntax
.sp
.nf
INTEGER*MPI_ADDRESS_KIND \fITARGET_DISP\fP
.fi
.sp
where MPI_ADDRESS_KIND is a constant defined in mpif.h
and gives the length of the declared integer in bytes.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
.SH SEE ALSO
.ft R
.sp
MPI_Put

131 ompi/mpi/man/man3/MPI_Get.md (regular file)
@@ -0,0 +1,131 @@
# Name
`MPI_Get`, `MPI_Rget` - Copies data from the target memory to the
origin.
# Syntax
## C Syntax
```c
#include <mpi.h>
MPI_Get(void *origin_addr, int origin_count, MPI_Datatype
origin_datatype, int target_rank, MPI_Aint target_disp,
int target_count, MPI_Datatype target_datatype, MPI_Win win)
MPI_Rget(void *origin_addr, int origin_count, MPI_Datatype
origin_datatype, int target_rank, MPI_Aint target_disp,
int target_count, MPI_Datatype target_datatype, MPI_Win win,
MPI_Request *request)
```
## Fortran Syntax (See Fortran 77 Notes)
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR)
<type> ORIGIN_ADDR(*)
INTEGER(KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR
MPI_RGET(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, REQUEST, IERROR)
<type> ORIGIN_ADDR(*)
INTEGER(KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK,
TARGET_COUNT, TARGET_DATATYPE, WIN, REQUEST, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get(origin_addr, origin_count, origin_datatype, target_rank,
target_disp, target_count, target_datatype, win, ierror)
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: origin_addr
INTEGER, INTENT(IN) :: origin_count, target_rank, target_count
TYPE(MPI_Datatype), INTENT(IN) :: origin_datatype, target_datatype
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: target_disp
TYPE(MPI_Win), INTENT(IN) :: win
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Rget(origin_addr, origin_count, origin_datatype, target_rank,
target_disp, target_count, target_datatype, win, request,
ierror)
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: origin_addr
INTEGER, INTENT(IN) :: origin_count, target_rank, target_count
TYPE(MPI_Datatype), INTENT(IN) :: origin_datatype, target_datatype
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: target_disp
TYPE(MPI_Win), INTENT(IN) :: win
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Input Parameters
* `origin_addr` : Initial address of origin buffer (choice).
* `origin_count` : Number of entries in origin buffer (nonnegative integer).
* `origin_datatype` : Data type of each entry in origin buffer (handle).
* `target_rank` : Rank of target (nonnegative integer).
* `target_disp` : Displacement from window start to the beginning of the target buffer
(nonnegative integer).
* `target_count` : Number of entries in target buffer (nonnegative integer).
* `target_datatype` : Datatype of each entry in target buffer (handle).
* `win` : Window object used for communication (handle).
# Output Parameter
* `request` : `MPI_Rget` only: RMA request (handle).
* `IERROR` : Fortran only: Error status (integer).
# Description
`MPI_Get` copies data from the target memory to the origin, similar to
`MPI_Put`, except that the direction of data transfer is reversed. The
`origin_datatype` may not specify overlapping entries in the origin
buffer. The target buffer must be contained within the target window,
and the copied data must fit, without truncation, in the origin buffer.
Only processes within the same node can access the target window.
`MPI_Rget` is similar to `MPI_Get`, except that it allocates a
communication `request` object and associates it with the `request` handle
(the argument `request`) that can be used to wait or test for
completion. The completion of an `MPI_Rget` operation indicates that the
data is available in the origin buffer. If `origin_addr` points to
memory attached to a window, then the data becomes available in the
private copy of this window.
# Fortran 77 Notes
The MPI standard prescribes portable Fortran syntax for the
`TARGET_DISP` argument only for Fortran 90. FORTRAN 77 users may use the
non-portable syntax
```fortran
INTEGER*MPI_ADDRESS_KIND TARGET_DISP
```
where `MPI_ADDRESS_KIND` is a constant defined in mpif.h and gives the
length of the declared integer in bytes.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
# See Also
[`MPI_Put`(3)](MPI_Put.html)

@@ -1,191 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2013-2014 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" $COPYRIGHT$
.TH MPI_Get_accumulate 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get_accumulate\fP, \fBMPI_Rget_accumulate\fP \- Combines the contents of the origin buffer with that of a target buffer and returns the target buffer value.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Get_accumulate(const void *\fIorigin_addr\fP, int \fIorigin_count\fP,
MPI_Datatype \fIorigin_datatype\fP, void *\fIresult_addr\fP,
int \fIresult_count\fP, MPI_Datatype \fIresult_datatype\fP,
int \fItarget_rank\fP, MPI_Aint \fItarget_disp\fP, int \fItarget_count\fP,
MPI_Datatype \fItarget_datatype\fP, MPI_Op \fIop\fP, MPI_Win \fIwin\fP)
int MPI_Rget_accumulate(const void *\fIorigin_addr\fP, int \fIorigin_count\fP,
MPI_Datatype \fIorigin_datatype\fP, void *\fIresult_addr\fP,
int \fIresult_count\fP, MPI_Datatype \fIresult_datatype\fP,
int \fItarget_rank\fP, MPI_Aint \fItarget_disp\fP, int \fItarget_count\fP,
MPI_Datatype \fItarget_datatype\fP, MPI_Op \fIop\fP, MPI_Win \fIwin\fP,
MPI_Request *\fIrequest\fP)
.fi
.SH Fortran Syntax (see FORTRAN 77 NOTES)
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_ACCUMULATE(\fIORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, RESULT_ADDR,
RESULT_COUNT, RESULT_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT,
TARGET_DATATYPE, OP, WIN, IERROR\fP)
<type> \fIORIGIN_ADDR\fP, \fIRESULT_ADDR\fP(*)
INTEGER(KIND=MPI_ADDRESS_KIND) \fITARGET_DISP\fP
INTEGER \fIORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_COUNT, TARGET_DATATYPE,
TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, IERROR \fP
MPI_RGET_ACCUMULATE(\fIORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, RESULT_ADDR,
RESULT_COUNT, RESULT_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT,
TARGET_DATATYPE, OP, WIN, REQUEST, IERROR\fP)
<type> \fIORIGIN_ADDR\fP, \fIRESULT_ADDR\fP(*)
INTEGER(KIND=MPI_ADDRESS_KIND) \fITARGET_DISP\fP
INTEGER \fIORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_COUNT, TARGET_DATATYPE,
TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, REQUEST, IERROR \fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get_accumulate(\fIorigin_addr\fP, \fIorigin_count\fP, \fIorigin_datatype\fP, \fIresult_addr\fP,
\fIresult_count\fP, \fIresult_datatype\fP, \fItarget_rank\fP, \fItarget_disp\fP,
\fItarget_count\fP, \fItarget_datatype\fP, \fIop\fP, \fIwin\fP, \fIierror\fP)
TYPE(*), DIMENSION(..), INTENT(IN) :: \fIorigin_addr\fP
TYPE(*), DIMENSION(..) :: \fIresult_addr\fP
INTEGER, INTENT(IN) :: \fIorigin_count\fP, \fIresult_count\fP, \fItarget_rank\fP, \fItarget_count\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIorigin_datatype\fP, \fItarget_datatype\fP, \fIresult_datatype\fP
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: \fItarget_dist\fP
TYPE(MPI_Op), INTENT(IN) :: \fIop\fP
TYPE(MPI_Win), INTENT(IN) :: \fIwin\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
MPI_Rget_accumulate(\fIorigin_addr\fP, \fIorigin_count\fP, \fIorigin_datatype\fP,
\fIresult_addr\fP, \fIresult_count\fP, \fIresult_datatype\fP, \fItarget_rank\fP,
\fItarget_disp\fP, \fItarget_count\fP, \fItarget_datatype\fP, \fIop\fP, \fIwin\fP, \fIrequest\fP,
\fIierror\fP)
TYPE(*), DIMENSION(..), INTENT(IN) :: \fIorigin_addr\fP
TYPE(*), DIMENSION(..) :: \fIresult_addr\fP
INTEGER, INTENT(IN) :: \fIorigin_count\fP, \fIresult_count\fP, \fItarget_rank\fP, \fItarget_count\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIorigin_datatype\fP, \fItarget_datatype\fP, \fIresult_datatype\fP
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: \fItarget_dist\fP
TYPE(MPI_Op), INTENT(IN) :: \fIop\fP
TYPE(MPI_Win), INTENT(IN) :: \fIwin\fP
TYPE(MPI_Request), INTENT(OUT) :: \fIrequest\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1i
origin_addr
Initial address of buffer (choice).
.ft R
.TP 1i
origin_count
Number of entries in buffer (nonnegative integer).
.ft R
.TP 1i
origin_datatype
Data type of each buffer entry (handle).
.ft R
.TP
result_addr
Initial address of result buffer (choice).
.ft R
.TP
result_count
Number of entries in result buffer (nonnegative integer).
.ft R
.TP
result_datatype
Data type of each result buffer entry (handle).
.ft R
.TP 1i
target_rank
Rank of target (nonnegative integer).
.ft R
.TP 1i
target_disp
Displacement from start of window to beginning of target buffer (nonnegative integer).
.ft R
.TP 1i
target_count
Number of entries in target buffer (nonnegative integer).
.ft R
.TP 1i
target_datatype
Data type of each entry in target buffer (handle).
.ft R
.TP 1i
op
Reduce operation (handle).
.ft R
.TP 1i
win
Window object (handle).
.SH OUTPUT PARAMETER
.ft R
.TP 1i
MPI_Rget_accumulate: RMA request
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
\fBMPI_Get_accumulate\fP is a function used for one-sided MPI communication that adds the contents of the origin buffer (as defined by \fIorigin_addr\fP, \fIorigin_count\fP, and \fIorigin_datatype\fP) to the buffer specified by the arguments \fItarget_count\fP and \fItarget_datatype\fP, at offset \fItarget_disp\fP, in the target window specified by \fItarget_rank\fP and \fIwin\fP, using the operation \fIop\fP. \fBMPI_Get_accumulate\fP returns in the result buffer \fIresult_addr\fP the contents of the target buffer before the accumulation.
.sp
Any of the predefined operations for MPI_Reduce, as well as MPI_NO_OP, can be used. User-defined functions cannot be used. For example, if \fIop\fP is MPI_SUM, each element of the origin buffer is added to the corresponding element in the target, replacing the former value in the target.
.sp
Each datatype argument must be a predefined data type or a derived data type, where all basic components are of the same predefined data type. Both datatype arguments must be constructed from the same predefined data type. The operation \fIop\fP applies to elements of that predefined type. The \fItarget_datatype\fP argument must not specify overlapping entries, and the target buffer must fit in the target window.
.sp
A new predefined operation, MPI_REPLACE, is defined. It corresponds to the associative function f(a, b) =b; that is, the current value in the target memory is replaced by the value supplied by the origin.
.sp
A new predefined operation, MPI_NO_OP, is defined. It corresponds to the associative function f(a, b) = a; that is, the current value in the target memory is returned in the result buffer at the origin and no operation is performed on the target buffer.
.sp
\fBMPI_Rget_accumulate\fP is similar to \fBMPI_Get_accumulate\fP, except that it allocates a communication request object and associates it with the request handle (the argument \fIrequest\fP) that can be used to wait or test for completion. The completion of an \fBMPI_Rget_accumulate\fP operation indicates that the data is available in the result buffer and the origin buffer is free to be updated. It does not indicate that the operation has been completed at the target window.
.SH FORTRAN 77 NOTES
.ft R
The MPI standard prescribes portable Fortran syntax for
the \fITARGET_DISP\fP argument only for Fortran 90. FORTRAN 77
users may use the non-portable syntax
.sp
.nf
INTEGER*MPI_ADDRESS_KIND \fITARGET_DISP\fP
.fi
.sp
where MPI_ADDRESS_KIND is a constant defined in mpif.h
and gives the length of the declared integer in bytes.
.SH NOTES
The generic functionality of \fBMPI_Get_accumulate\fP might limit the performance of fetch-and-increment or fetch-and-add calls that might be supported by special hardware operations. MPI_Fetch_and_op thus allows for a fast implementation of a commonly used subset of the functionality of \fBMPI_Get_accumulate\fP.
.sp
MPI_Get is a special case of \fBMPI_Get_accumulate\fP, with the operation MPI_NO_OP. Note, however, that MPI_Get and \fBMPI_Get_accumulate\fP have different constraints on concurrent updates.
.sp
It is the user's responsibility to guarantee that, when
using the accumulate functions, the target displacement argument is such
that accesses to the window are properly aligned according to the data
type arguments in the call to the \fBMPI_Get_accumulate\fP function.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler
may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
.SH SEE ALSO
.ft R
.sp
MPI_Put
MPI_Get
MPI_Accumulate
MPI_Fetch_and_op
.br
MPI_Reduce

193 ompi/mpi/man/man3/MPI_Get_accumulate.md (regular file)
@@ -0,0 +1,193 @@
# Name
`MPI_Get_accumulate`, `MPI_Rget_accumulate` - Combines the contents
of the origin buffer with that of a target buffer and returns the target
buffer value.
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Get_accumulate(const void *origin_addr, int origin_count,
MPI_Datatype origin_datatype, void *result_addr,
int result_count, MPI_Datatype result_datatype,
int target_rank, MPI_Aint target_disp, int target_count,
MPI_Datatype target_datatype, MPI_Op op, MPI_Win win)
int MPI_Rget_accumulate(const void *origin_addr, int origin_count,
MPI_Datatype origin_datatype, void *result_addr,
int result_count, MPI_Datatype result_datatype,
int target_rank, MPI_Aint target_disp, int target_count,
MPI_Datatype target_datatype, MPI_Op op, MPI_Win win,
MPI_Request *request)
```
## Fortran Syntax (See Fortran 77 Notes)
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_ACCUMULATE(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, RESULT_ADDR,
RESULT_COUNT, RESULT_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT,
TARGET_DATATYPE, OP, WIN, IERROR)
<type> ORIGIN_ADDR(*), RESULT_ADDR(*)
INTEGER(KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, RESULT_COUNT, RESULT_DATATYPE,
TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, IERROR
MPI_RGET_ACCUMULATE(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, RESULT_ADDR,
RESULT_COUNT, RESULT_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT,
TARGET_DATATYPE, OP, WIN, REQUEST, IERROR)
<type> ORIGIN_ADDR(*), RESULT_ADDR(*)
INTEGER(KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, RESULT_COUNT, RESULT_DATATYPE,
TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, REQUEST, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get_accumulate(origin_addr, origin_count, origin_datatype, result_addr,
result_count, result_datatype, target_rank, target_disp,
target_count, target_datatype, op, win, ierror)
TYPE(*), DIMENSION(..), INTENT(IN) :: origin_addr
TYPE(*), DIMENSION(..) :: result_addr
INTEGER, INTENT(IN) :: origin_count, result_count, target_rank, target_count
TYPE(MPI_Datatype), INTENT(IN) :: origin_datatype, target_datatype, result_datatype
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: target_disp
TYPE(MPI_Op), INTENT(IN) :: op
TYPE(MPI_Win), INTENT(IN) :: win
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Rget_accumulate(origin_addr, origin_count, origin_datatype,
result_addr, result_count, result_datatype, target_rank,
target_disp, target_count, target_datatype, op, win, request,
ierror)
TYPE(*), DIMENSION(..), INTENT(IN) :: origin_addr
TYPE(*), DIMENSION(..) :: result_addr
INTEGER, INTENT(IN) :: origin_count, result_count, target_rank, target_count
TYPE(MPI_Datatype), INTENT(IN) :: origin_datatype, target_datatype, result_datatype
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: target_disp
TYPE(MPI_Op), INTENT(IN) :: op
TYPE(MPI_Win), INTENT(IN) :: win
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Input Parameters
* `origin_addr` : Initial address of buffer (choice).
* `origin_count` : Number of entries in buffer (nonnegative integer).
* `origin_datatype` : Data type of each buffer entry (handle).
* `result_addr` : Initial address of result buffer (choice).
* `result_count` : Number of entries in result buffer (nonnegative integer).
* `result_datatype` : Data type of each result buffer entry (handle).
* `target_rank` : Rank of target (nonnegative integer).
* `target_disp` : Displacement from start of window to beginning of target buffer
(nonnegative integer).
* `target_count` : Number of entries in target buffer (nonnegative integer).
* `target_datatype` : Data type of each entry in target buffer (handle).
* `op` : Reduce operation (handle).
* `win` : Window object (handle).
# Output Parameters
* `request` : `MPI_Rget_accumulate` only: RMA request (handle).
* `IERROR` : Fortran only: Error status (integer).
# Description
`MPI_Get_accumulate` is a function used for one-sided MPI
communication that adds the contents of the origin buffer (as defined by
`origin_addr`, `origin_count`, and `origin_datatype`) to the buffer
specified by the arguments `target_count` and `target_datatype`, at
offset `target_disp`, in the target window specified by `target_rank`
and `win`, using the operation `op`. `MPI_Get_accumulate` returns in
the result buffer `result_addr` the contents of the target buffer before
the accumulation.
Any of the predefined operations for `MPI_Reduce`, as well as `MPI_NO_OP`,
can be used. User-defined functions cannot be used. For example, if `op`
is `MPI_SUM`, each element of the origin buffer is added to the
corresponding element in the target, replacing the former value in the
target.
Each datatype argument must be a predefined data type or a derived data
type, where all basic components are of the same predefined data type.
Both datatype arguments must be constructed from the same predefined
data type. The operation `op` applies to elements of that predefined
type. The `target_datatype` argument must not specify overlapping
entries, and the target buffer must fit in the target window.
A new predefined operation, `MPI_REPLACE`, is defined. It corresponds to
the associative function f(a, b) = b; that is, the current value in the
target memory is replaced by the value supplied by the origin.
A new predefined operation, `MPI_NO_OP`, is defined. It corresponds to the
associative function f(a, b) = a; that is, the current value in the
target memory is returned in the result buffer at the origin, and no
operation is performed on the target buffer.
`MPI_Rget_accumulate` is similar to `MPI_Get_accumulate`, except
that it allocates a communication request object and associates it with
the request handle (the argument request) that can be used to wait or
test for completion. The completion of an `MPI_Rget_accumulate`
operation indicates that the data is available in the result buffer and
the origin buffer is free to be updated. It does not indicate that the
operation has been completed at the target window.
# Fortran 77 Notes
The MPI standard prescribes portable Fortran syntax for the
`TARGET_DISP` argument only for Fortran 90. FORTRAN 77 users may use the
non-portable syntax
```fortran
INTEGER*MPI_ADDRESS_KIND TARGET_DISP
```
where MPI_ADDRESS_KIND is a constant defined in mpif.h and gives the
length of the declared integer in bytes.
# Notes
The generic functionality of `MPI_Get_accumulate` might limit the
performance of fetch-and-increment or fetch-and-add calls that might be
supported by special hardware operations. `MPI_Fetch_and_op` thus allows
for a fast implementation of a commonly used subset of the functionality
of `MPI_Get_accumulate`.
`MPI_Get` is a special case of `MPI_Get_accumulate`, with the operation
`MPI_NO_OP`. Note, however, that `MPI_Get` and `MPI_Get_accumulate` have
different constraints on concurrent updates.
It is the user's responsibility to guarantee that, when using the
accumulate functions, the target displacement argument is such that
accesses to the window are properly aligned according to the data type
arguments in the call to the `MPI_Get_accumulate` function.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
# See Also
[`MPI_Put`(3)](MPI_Put.html)
[`MPI_Get`(3)](MPI_Get.html)
[`MPI_Accumulate`(3)](MPI_Accumulate.html)
[`MPI_Fetch_and_op`(3)](MPI_Fetch_and_op.html)
[`MPI_Reduce`(3)](MPI_Reduce.html)
@@ -1,86 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2013 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Get_address 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get_address\fP \- Gets the address of a location in memory.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Get_address(const void *\fIlocation\fP, MPI_Aint *\fIaddress\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_ADDRESS(\fILOCATION, ADDRESS, IERROR\fP)
<type> \fILOCATION\fP(*)
INTEGER(KIND=MPI_ADDRESS_KIND) \fIADDRESS\fP
INTEGER \fIIERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get_address(\fIlocation\fP, \fIaddress\fP, \fIierror\fP)
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: \fIlocation\fP
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT) :: \fIaddress\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1i
location
Location in caller memory (choice).
.SH OUTPUT PARAMETERS
.ft R
.TP 1i
address
Address of location (integer).
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
MPI_Get_address returns the byte address of a location in memory.
.sp
Example: Using MPI_Get_address for an array.
.sp
.nf
EAL A(100,100)
.fi
.br
INTEGER I1, I2, DIFF
.br
CALL MPI_GET_ADDRESS(A(1,1), I1, IERROR)
.br
CALL MPI_GET_ADDRESS(A(10,10), I2, IERROR)
.br
DIFF = I2 - I1
.br
! The value of DIFF is 909*sizeofreal; the values of I1 and I2 are
.br
! implementation dependent.
.fi
.SH NOTES
.ft R
Current Fortran MPI codes will run unmodified and will port to any system. However, they may fail if addresses larger than 2^32 - 1 are used in the program. New codes should be written so that they use the new functions. This provides compatibility with C and avoids errors on 64-bit architectures. However, such newly written codes may need to be (slightly) rewritten to port to old Fortran 77 environments that do not support KIND declarations.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
ompi/mpi/man/man3/MPI_Get_address.md (new file)
@@ -0,0 +1,84 @@
# Name
`MPI_Get_address` - Gets the address of a location in memory.
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Get_address(const void *location, MPI_Aint *address)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_ADDRESS(LOCATION, ADDRESS, IERROR)
<type> LOCATION(*)
INTEGER(KIND=MPI_ADDRESS_KIND) ADDRESS
INTEGER IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get_address(location, address, ierror)
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: location
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT) :: address
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Input Parameters
* `location` : Location in caller memory (choice).
# Output Parameters
* `address` : Address of location (integer).
* `IERROR` : Fortran only: Error status (integer).
# Description
`MPI_Get_address` returns the byte `address` of a location in memory.
Example: Using `MPI_Get_address` for an array.
```fortran
REAL A(100,100)
INTEGER I1, I2, DIFF
CALL MPI_GET_ADDRESS(A(1,1), I1, IERROR)
CALL MPI_GET_ADDRESS(A(10,10), I2, IERROR)
DIFF = I2 - I1
! The value of DIFF is 909*sizeofreal; the values of I1 and I2 are
! implementation dependent.
```
# Notes
Current Fortran MPI codes will run unmodified and will port to any
system. However, they may fail if addresses larger than 2^32 - 1 are
used in the program. New codes should be written so that they use the
new functions. This provides compatibility with C and avoids errors on
64-bit architectures. However, such newly written codes may need to be
(slightly) rewritten to port to old Fortran 77 environments that do not
support KIND declarations.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
@@ -1,95 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2013 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Get_count 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get_count \fP \- Gets the number of top-level elements received.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Get_count(const MPI_Status *\fIstatus\fP, MPI_Datatype\fI datatype\fP,
int\fI *count\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_COUNT(\fISTATUS, DATATYPE, COUNT, IERROR\fP)
INTEGER \fISTATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get_count(\fIstatus\fP, \fIdatatype\fP, \fIcount\fP, \fIierror\fP)
TYPE(MPI_Status), INTENT(IN) :: \fIstatus\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIdatatype\fP
INTEGER, INTENT(OUT) :: \fIcount\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1i
status
Return status of receive operation (status).
.TP 1i
datatype
Datatype of each receive buffer element (handle).
.SH OUTPUT PARAMETERS
.ft R
.TP 1i
count
Number of received elements (integer).
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
Returns the number of entries received. (We count entries, each of type
datatype, not bytes.) The datatype argument should match the argument
provided by the receive call that set the status variable. (As explained in Section 3.12.5 in the MPI-1 Standard, "Use of General Datatypes in Communication," MPI_Get_count may, in certain situations, return the value MPI_UNDEFINED.)
.sp
The datatype argument is passed to MPI_Get_count to improve performance. A message might be received without counting the number of elements it contains, and the count value is often not needed. Also, this allows the same function to be used after a call to MPI_Probe.
.SH NOTES
If the size of the datatype is zero, this routine will return a count of
zero. If the amount of data in
.I status
is not an exact multiple of the
size of
.I datatype
(so that
.I count
would not be integral), a
.I count
of
.I MPI_UNDEFINED
is returned instead.
.SH ERRORS
If the value to be returned is larger than can fit into the
.I count
parameter, an MPI_ERR_TRUNCATE error is raised.
.sp
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
.SH SEE ALSO
.ft R
.sp
MPI_Get_elements
ompi/mpi/man/man3/MPI_Get_count.md (new file)
@@ -0,0 +1,87 @@
# Name
`MPI_Get_count` - Gets the number of top-level elements received.
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Get_count(const MPI_Status *status, MPI_Datatype datatype,
int *count)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get_count(status, datatype, count, ierror)
TYPE(MPI_Status), INTENT(IN) :: status
TYPE(MPI_Datatype), INTENT(IN) :: datatype
INTEGER, INTENT(OUT) :: count
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Input Parameters
* `status` : Return status of receive operation (status).
* `datatype` : Datatype of each receive buffer element (handle).
# Output Parameters
* `count` : Number of received elements (integer).
* `IERROR` : Fortran only: Error status (integer).
# Description
Returns the number of entries received. (We count entries, each of type
`datatype`, not bytes.) The `datatype` argument should match the argument
provided by the receive call that set the `status` variable. (As explained
in Section 3.12.5 in the MPI-1 Standard, "Use of General Datatypes in
Communication," `MPI_Get_count` may, in certain situations, return the
value `MPI_UNDEFINED`.)
The `datatype` argument is passed to `MPI_Get_count` to improve performance.
A message might be received without counting the number of elements it
contains, and the `count` value is often not needed. Also, this allows the
same function to be used after a call to `MPI_Probe`.
# Notes
If the size of the `datatype` is zero, this routine will return a `count` of
zero. If the amount of data in `status` is not an exact multiple of the
size of `datatype` (so that `count` would not be integral), a `count` of
`MPI_UNDEFINED` is returned instead.
# Errors
If the value to be returned is larger than can fit into the `count`
parameter, an `MPI_ERR_TRUNCATE` error is raised.
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
# See Also
[`MPI_Get_elements`(3)](MPI_Get_elements.html)
@@ -1,117 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2013 Los Alamos National Security, LLC. All rights reserved.
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Get_elements 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get_elements, MPI_Get_elements_x\fP \- Returns the number of basic elements in a data type.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Get_elements(const MPI_Status *\fIstatus\fP, MPI_Datatype\fI datatype\fP,
int\fI *count\fP)
int MPI_Get_elements_x(const MPI_Status *\fIstatus\fP, MPI_Datatype\fI datatype\fP,
MPI_Count\fI *count\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_ELEMENTS(\fISTATUS, DATATYPE, COUNT, IERROR\fP)
INTEGER \fISTATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR\fP
MPI_GET_ELEMENTS_X(\fISTATUS, DATATYPE, COUNT, IERROR\fP)
INTEGER \fISTATUS(MPI_STATUS_SIZE), DATATYPE\fP
INTEGER(KIND=MPI_COUNT_KIND) \fICOUNT\fP
INTEGER \fIIERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get_elements(\fIstatus\fP, \fIdatatype\fP, \fIcount\fP, \fIierror\fP)
TYPE(MPI_Status), INTENT(IN) :: \fIstatus\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIdatatype\fP
INTEGER, INTENT(OUT) :: \fIcount\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
MPI_Get_elements_x(\fIstatus\fP, \fIdatatype\fP, \fIcount\fP, \fIierror\fP)
TYPE(MPI_Status), INTENT(IN) :: \fIstatus\fP
TYPE(MPI_Datatype), INTENT(IN) :: \fIdatatype\fP
INTEGER(KIND = MPI_COUNT_KIND), INTENT(OUT) :: \fIcount\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH INPUT PARAMETERS
.ft R
.TP 1i
status
Return status of receive operation (status).
.TP 1i
datatype
Datatype used by receive operation (handle).
.SH OUTPUT PARAMETERS
.ft R
count Number of received basic elements (integer).
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
MPI_Get_elements and MPI_Get_elements_x behave different from MPI_Get_count, which returns the number of "top-level entries" received, i.e., the number of "copies" of type datatype. MPI_Get_count may return any integer value k, where 0 =< k =< count. If MPI_Get_count returns k, then the number of basic elements received (and the value returned by MPI_Get_elements and MPI_Get_elements_x) is n * k, where n is the number of basic elements in the type map of datatype. If the number of basic elements received is not a multiple of n, that is, if the receive operation has not received an integral number of datatype "copies," then MPI_Get_count returns the value MPI_UNDEFINED. For both functions, if the \fIcount\fP parameter cannot express the value to be returned (e.g., if the parameter is too small to hold the output value), it is set to MPI_UNDEFINED.
.sp
\fBExample:\fP Usage of MPI_Get_count and MPI_Get_element:
.sp
.nf
\&...
CALL MPI_TYPE_CONTIGUOUS(2, MPI_REAL, Type2, ierr)
CALL MPI_TYPE_COMMIT(Type2, ierr)
\&...
CALL MPI_COMM_RANK(comm, rank, ierr)
IF(rank.EQ.0) THEN
CALL MPI_SEND(a, 2, MPI_REAL, 1, 0, comm, ierr)
CALL MPI_SEND(a, 3, MPI_REAL, 1, 0, comm, ierr)
ELSE
CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
CALL MPI_GET_COUNT(stat, Type2, i, ierr) ! returns i=1
CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr) ! returns i=2
CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
CALL MPI_GET_COUNT(stat, Type2, i, ierr) ! returns i=MPI_UNDEFINED
CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr) ! returns i=3
END IF
.fi
.sp
The function MPI_Get_elements can also be used after a probe to find the number of elements in the probed message. Note that the two functions MPI_Get_count and MPI_Get_elements return the same values when they are used with primitive data types.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
.SH FORTRAN 77 NOTES
.ft R
The MPI standard prescribes portable Fortran syntax for
the \fICOUNT\fP argument of MPI_Get_elements_x only for
Fortran 90. FORTRAN 77 users may use the non-portable syntax
.sp
.nf
INTEGER*MPI_COUNT_KIND \fICOUNT\fP
.fi
.sp
where MPI_COUNT_KIND is a constant defined in mpif.h
and gives the length of the declared integer in bytes.
.SH SEE ALSO
.ft R
.sp
MPI_Get_count
ompi/mpi/man/man3/MPI_Get_elements.md (new file)
@@ -0,0 +1,132 @@
# Name
`MPI_Get_elements`, `MPI_Get_elements_x` - Returns the number of basic
elements in a data type.
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Get_elements(const MPI_Status *status, MPI_Datatype datatype,
int *count)
int MPI_Get_elements_x(const MPI_Status *status, MPI_Datatype datatype,
MPI_Count *count)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_ELEMENTS(STATUS, DATATYPE, COUNT, IERROR)
INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR
MPI_GET_ELEMENTS_X(STATUS, DATATYPE, COUNT, IERROR)
INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE
INTEGER(KIND=MPI_COUNT_KIND) COUNT
INTEGER IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get_elements(status, datatype, count, ierror)
TYPE(MPI_Status), INTENT(IN) :: status
TYPE(MPI_Datatype), INTENT(IN) :: datatype
INTEGER, INTENT(OUT) :: count
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Get_elements_x(status, datatype, count, ierror)
TYPE(MPI_Status), INTENT(IN) :: status
TYPE(MPI_Datatype), INTENT(IN) :: datatype
INTEGER(KIND = MPI_COUNT_KIND), INTENT(OUT) :: count
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Input Parameters
* `status` : Return status of receive operation (status).
* `datatype` : Datatype used by receive operation (handle).
# Output Parameters
* `count` : Number of received basic elements (integer).
* `IERROR` : Fortran only: Error status (integer).
# Description
`MPI_Get_elements` and `MPI_Get_elements_x` behave differently from
`MPI_Get_count`, which returns the number of "top-level entries"
received, i.e., the number of "copies" of type `datatype`. `MPI_Get_count`
may return any integer value k, where 0 <= k <= count. If
`MPI_Get_count` returns k, then the number of basic elements received (and
the value returned by `MPI_Get_elements` and `MPI_Get_elements_x`) is n *
k, where n is the number of basic elements in the type map of `datatype`.
If the number of basic elements received is not a multiple of n, that
is, if the receive operation has not received an integral number of
`datatype` "copies," then `MPI_Get_count` returns the value `MPI_UNDEFINED`.
For both functions, if the `count` parameter cannot express the value to
be returned (e.g., if the parameter is too small to hold the output
value), it is set to `MPI_UNDEFINED`.
Example: Usage of `MPI_Get_count` and `MPI_Get_elements`:
```fortran
...
CALL MPI_TYPE_CONTIGUOUS(2, MPI_REAL, Type2, ierr)
CALL MPI_TYPE_COMMIT(Type2, ierr)
...
CALL MPI_COMM_RANK(comm, rank, ierr)
IF(rank.EQ.0) THEN
CALL MPI_SEND(a, 2, MPI_REAL, 1, 0, comm, ierr)
CALL MPI_SEND(a, 3, MPI_REAL, 1, 0, comm, ierr)
ELSE
CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
CALL MPI_GET_COUNT(stat, Type2, i, ierr) ! returns i=1
CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr) ! returns i=2
CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
CALL MPI_GET_COUNT(stat, Type2, i, ierr) ! returns i=MPI_UNDEFINED
CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr) ! returns i=3
END IF
```
The function `MPI_Get_elements` can also be used after a probe to find the
number of elements in the probed message. Note that the two functions
`MPI_Get_count` and `MPI_Get_elements` return the same values when they are
used with primitive data types.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
# Fortran 77 Notes
The MPI standard prescribes portable Fortran syntax for the COUNT
argument of `MPI_Get_elements_x` only for Fortran 90. FORTRAN 77 users may
use the non-portable syntax
```fortran
INTEGER*MPI_COUNT_KIND COUNT
```
where `MPI_COUNT_KIND` is a constant defined in mpif.h and gives the
length of the declared integer in bytes.
# See Also
[`MPI_Get_count`(3)](MPI_Get_count.html)
@@ -1,89 +0,0 @@
.\" -*- nroff -*-
.\" Copyright (c) 2010-2012 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Get_library_version 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get_library_version\fP \- Returns a string of the current Open MPI version
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Get_library_version(char \fI*version\fP, int \fI*resultlen\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_LIBRARY_VERSION(\fIVERSION\fP, \fIRESULTLEN\fP, \fIIERROR\fP)
CHARACTER*(*) \fINAME\fP
INTEGER \fIRESULTLEN\fP, \fIIERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get_library_version(\fIversion\fP, \fIresulten\fP, \fIierror\fP)
CHARACTER(LEN=MPI_MAX_LIBRARY_VERSION_STRING), INTENT(OUT) :: \fIversion\fP
INTEGER, INTENT(OUT) :: \fIresultlen\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH OUTPUT PARAMETERS
.ft R
.TP 1i
version
A string containing the Open MPI version (string).
.ft R
.TP 1i
resultlen
Length (in characters) of result returned in \fIversion\fP (integer).
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
This routine returns a string representing the version of the MPI
library. The version argument is a character string for maximum
flexibility.
.sp
The number of characters actually written is returned in the output
argument, \fIresultlen\fP. In C, a '\\0' character is additionally
stored at \fIversion[resultlen]\fP. The \fIresultlen\fP cannot be
larger than (MPI_MAX_LIBRARY_VERSION_STRING - 1). In Fortran, version
is padded on the right with blank characters. The \fIresultlen\fP
cannot be larger than MPI_MAX_LIBRARY_VERSION_STRING.
.SH NOTE
.ft R
The \fIversion\fP string that is passed must be at least
MPI_MAX_LIBRARY_VERSION_STRING characters long.
.sp
MPI_Get_library_version is one of the few functions that can be called
before MPI_Init and after MPI_Finalize.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
MPI_Comm_set_errhandler; the predefined error handler
MPI_ERRORS_RETURN may be used to cause error values to be
returned. Note that MPI does not guarantee that an MPI program can
continue past an error.
.SH SEE ALSO
.ft R
.nf
MPI_Get_version
@@ -0,0 +1,78 @@
# Name
`MPI_Get_library_version` - Returns a string of the current Open MPI
version
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Get_library_version(char *version, int *resultlen)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_LIBRARY_VERSION(VERSION, RESULTLEN, IERROR)
CHARACTER*(*) VERSION
INTEGER RESULTLEN, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get_library_version(version, resultlen, ierror)
CHARACTER(LEN=MPI_MAX_LIBRARY_VERSION_STRING), INTENT(OUT) :: version
INTEGER, INTENT(OUT) :: resultlen
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Output Parameters
* `version` : A string containing the Open MPI version (string).
* `resultlen` : Length (in characters) of result returned in `version` (integer).
* `IERROR` : Fortran only: Error status (integer).
# Description
This routine returns a string representing the `version` of the MPI
library. The `version` argument is a character string for maximum
flexibility.
The number of characters actually written is returned in the output
argument, `resultlen`. In C, a `'\0'` character is additionally stored
at `version[resultlen]`. The `resultlen` cannot be larger than
(`MPI_MAX_LIBRARY_VERSION_STRING` - 1). In Fortran, `version` is padded
on the right with blank characters. The `resultlen` cannot be larger
than `MPI_MAX_LIBRARY_VERSION_STRING`.
# Note
The `version` string that is passed must be at least
`MPI_MAX_LIBRARY_VERSION_STRING` characters long.
`MPI_Get_library_version` is one of the few functions that can be called
before `MPI_Init` and after `MPI_Finalize`.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
# See Also
[`MPI_Get_version`(3)](MPI_Get_version.html)
@@ -1,69 +0,0 @@
.\" -*- nroff -*-
.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Get_processor_name 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get_processor_name \fP \- Gets the name of the processor.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Get_processor_name(char *\fIname\fP, int *\fIresultlen\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_PROCESSOR_NAME(\fINAME, RESULTLEN, IERROR\fP)
CHARACTER*(*) \fINAME\fP
INTEGER \fIRESULTLEN, IERROR \fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get_processor_name(\fIname\fP, \fIresultlen\fP, \fIierror\fP)
CHARACTER(LEN=MPI_MAX_PROCESSOR_NAME), INTENT(OUT) :: \fIname\fP
INTEGER, INTENT(OUT) :: \fIresultlen\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH OUTPUT PARAMETERS
.ft R
.TP 1i
name
A unique specifier for the actual (as opposed to virtual) node.
.TP 1i
resultlen
Length (in characters) of result returned in name.
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
This routine returns the name of the processor on which it was called at the moment of the call. The name is a character string for maximum flexibility. From this value it must be possible to identify a specific piece of hardware. The argument name must represent storage that is at least MPI_MAX_PROCESSOR_NAME characters long.
.sp
The number of characters actually written is returned in the output
argument, resultlen.
.sp
.SH NOTES
.ft R
The user must provide at least MPI_MAX_PROCESSOR_NAME space to write the processor name; processor names can be this long. The user should examine the output argument, resultlen, to determine the actual length of the name.
.sp
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.

ompi/mpi/man/man3/MPI_Get_processor_name.md (new file, 71 lines)
@@ -0,0 +1,71 @@
# Name
`MPI_Get_processor_name` - Gets the name of the processor.
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Get_processor_name(char *name, int *resultlen)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_PROCESSOR_NAME(NAME, RESULTLEN, IERROR)
CHARACTER*(*) NAME
INTEGER RESULTLEN, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get_processor_name(name, resultlen, ierror)
CHARACTER(LEN=MPI_MAX_PROCESSOR_NAME), INTENT(OUT) :: name
INTEGER, INTENT(OUT) :: resultlen
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Output Parameters
* `name` : A unique specifier for the actual (as opposed to virtual) node.
* `resultlen` : Length (in characters) of result returned in name.
* `IERROR` : Fortran only: Error status (integer).
# Description
This routine returns the `name` of the processor on which it was called at
the moment of the call. The `name` is a character string for maximum
flexibility. From this value it must be possible to identify a specific
piece of hardware. The argument `name` must represent storage that is at
least `MPI_MAX_PROCESSOR_NAME` characters long.
The number of characters actually written is returned in the output
argument, `resultlen`.
# Notes
The user must provide at least `MPI_MAX_PROCESSOR_NAME` characters of
storage to write the processor name; processor names can be this long.
The user should examine the output argument, `resultlen`, to determine
the actual length of the name.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.

ompi/mpi/man/man3/MPI_Get_version.3in (deleted file)
@@ -1,65 +0,0 @@
.\" -*- nroff -*-
.\" Copyright (c) 2010-2012 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.\" Copyright (c) 2020 Google, LLC. All rights reserved.
.\" $COPYRIGHT$
.TH MPI_Get_version 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Get_version\fP \- Returns the version of the standard corresponding to the current implementation.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Get_version(int \fI*version\fP, int \fI*subversion\fP)
.fi
.SH Fortran Syntax
.nf
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_VERSION(\fIVERSION\fP, \fISUBVERSION\fP, \fIIERROR\fP)
INTEGER \fIVERSION\fP, \fISUBVERSION\fP, \fIIERROR\fP
.fi
.SH Fortran 2008 Syntax
.nf
USE mpi_f08
MPI_Get_version(\fIversion\fP, \fIsubversion\fP, \fIierror\fP)
INTEGER, INTENT(OUT) :: \fIversion\fP, \fIsubversion\fP
INTEGER, OPTIONAL, INTENT(OUT) :: \fIierror\fP
.fi
.SH OUTPUT PARAMETERS
.ft R
.TP 1i
version
The major version number of the corresponding standard (integer).
.ft R
.TP 1i
subversion
The minor version number of the corresponding standard (integer).
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
Since Open MPI is MPI 3.1 compliant, this function will return a version value of 3 and a subversion value of 1 for this release.
.SH NOTE
.ft R
MPI_Get_version is one of the few functions that can be called before MPI_Init and after MPI_Finalize.
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.

ompi/mpi/man/man3/MPI_Get_version.md (new file, 62 lines)
@@ -0,0 +1,62 @@
# Name
`MPI_Get_version` - Returns the version of the standard corresponding
to the current implementation.
# Syntax
## C Syntax
```c
#include <mpi.h>
int MPI_Get_version(int *version, int *subversion)
```
## Fortran Syntax
```fortran
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_GET_VERSION(VERSION, SUBVERSION, IERROR)
INTEGER VERSION, SUBVERSION, IERROR
```
## Fortran 2008 Syntax
```fortran
USE mpi_f08
MPI_Get_version(version, subversion, ierror)
INTEGER, INTENT(OUT) :: version, subversion
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
```
# Output Parameters
* `version` : The major version number of the corresponding standard (integer).
* `subversion` : The minor version number of the corresponding standard (integer).
* `IERROR` : Fortran only: Error status (integer).
# Description
Since Open MPI is MPI 3.1 compliant, this function will return a `version`
value of 3 and a `subversion` value of 1 for this release.
# Note
`MPI_Get_version` is one of the few functions that can be called before
`MPI_Init` and after `MPI_Finalize`.
# Errors
Almost all MPI routines return an error value; C routines as the value
of the function and Fortran routines in the last argument.
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for
I/O function errors. The error handler may be changed with
`MPI_Comm_set_errhandler`; the predefined error handler `MPI_ERRORS_RETURN`
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.

@@ -40,7 +40,17 @@ MD_FILES = \
MPI_File_write_shared.md \
MPI_Finalize.md \
MPI_Finalized.md \
MPI_Free_mem.md
MPI_Free_mem.md \
MPI_Gather.md \
MPI_Gatherv.md \
MPI_Get.md \
MPI_Get_accumulate.md \
MPI_Get_address.md \
MPI_Get_count.md \
MPI_Get_elements.md \
MPI_Get_library_version.md \
MPI_Get_processor_name.md \
MPI_Get_version.md
TEMPLATE_FILES = \
MPI_Abort.3in \
@@ -188,19 +198,9 @@ TEMPLATE_FILES = \
MPI_File_write_at_all.3in \
MPI_File_write_at_all_begin.3in \
MPI_File_write_at_all_end.3in \
MPI_Gather.3in \
MPI_Igather.3in \
MPI_Gatherv.3in \
MPI_Igatherv.3in \
MPI_Get.3in \
MPI_Get_accumulate.3in \
MPI_Get_address.3in \
MPI_Get_count.3in \
MPI_Get_elements.3in \
MPI_Get_elements_x.3in \
MPI_Get_library_version.3in \
MPI_Get_processor_name.3in \
MPI_Get_version.3in \
MPI_Graph_create.3in \
MPI_Graphdims_get.3in \
MPI_Graph_get.3in \