.\" Copyright 2010 Cisco Systems, Inc. All rights reserved.
.\" Copyright 2006-2008 Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.TH MPI_Bcast 3 "#OMPI_DATE#" "#PACKAGE_VERSION#" "#PACKAGE_NAME#"
.SH NAME
\fBMPI_Bcast\fP \- Broadcasts a message from the process with rank \fIroot\fP to all other processes of the group.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Bcast(void \fI*buffer\fP, int\fI count\fP, MPI_Datatype\fI datatype\fP,
    int\fI root\fP, MPI_Comm\fI comm\fP)
.fi
.SH Fortran Syntax
.nf
INCLUDE 'mpif.h'
MPI_BCAST(\fIBUFFER\fP,\fI COUNT\fP, \fIDATATYPE\fP,\fI ROOT\fP,\fI COMM\fP,\fI IERROR\fP)
<type> \fIBUFFER\fP(*)
INTEGER \fICOUNT\fP,\fI DATATYPE\fP,\fI ROOT\fP,\fI COMM\fP,\fI IERROR\fP
.fi
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Bcast(void* \fIbuffer\fP, int \fIcount\fP,
    const MPI::Datatype& \fIdatatype\fP, int \fIroot\fP) const = 0
.fi
.SH Java Syntax
.nf
import mpi.*;
void MPI.COMM_WORLD.Bcast(Object \fIbuffer\fP, int \fIoffset\fP, int \fIcount\fP,
    MPI.Datatype \fIdatatype\fP, int \fIroot\fP)
.fi
.SH INPUT/OUTPUT PARAMETERS
.ft R
.TP 1i
buffer
Starting address of buffer (choice).
.TP 1i
offset
Offset of starting point in buffer (Java only).
.TP 1i
count
Number of entries in buffer (integer).
.TP 1i
datatype
Data type of buffer (handle).
.TP 1i
root
Rank of broadcast root (integer).
.TP 1i
comm
Communicator (handle).
.SH OUTPUT PARAMETER
.ft R
.TP 1i
IERROR
Fortran only: Error status (integer).
.SH DESCRIPTION
.ft R
MPI_Bcast broadcasts a message from the process with rank root to all processes of the group, itself included. It is called by all members of the group using the same arguments for comm and root. On return, the contents of root's communication buffer have been copied to all processes.
.sp
General, derived datatypes are allowed for datatype. The type signature of count, datatype on any process must be equal to the type signature of count, datatype at the root. This implies that the amount of data sent must be equal to the amount received, pairwise between each process and the root. MPI_Bcast and all other data-movement collective routines make this restriction. Distinct type maps between sender and receiver are still allowed.
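.sp
For instance, in the following hypothetical fragment the type signatures match (100 ints on every process) even though the root's buffer is strided, so the type maps differ:
.nf

/* Illustrative fragment only: the root broadcasts every other element
   of big[], the other processes receive into a packed array */
MPI_Datatype strided;
int big[200], packed[100], rank, root = 0;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
/* 100 blocks of one int, taking every other element of big[] */
MPI_Type_vector(100, 1, 2, MPI_INT, &strided);
MPI_Type_commit(&strided);

if (rank == root)
    MPI_Bcast(big, 1, strided, root, MPI_COMM_WORLD);
else
    MPI_Bcast(packed, 100, MPI_INT, root, MPI_COMM_WORLD);

MPI_Type_free(&strided);
.fi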
.sp
\fBExample:\fR Broadcast 100 ints from process 0 to every process in the group.
.nf
MPI_Comm comm;
int array[100];
int root=0;
\&...
MPI_Bcast( array, 100, MPI_INT, root, comm);
.fi
.sp
As in many of our sample code fragments, we assume that some of the variables (such as comm in the example above) have been assigned appropriate values.
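.sp
For reference, a complete, self-contained version of the fragment above might look like the following; it assumes the broadcast is over MPI_COMM_WORLD, since comm is left unassigned in the fragment:
.nf

#include <mpi.h>

int main(int argc, char *argv[])
{
    int array[100];
    int rank, i;
    int root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Only the root's buffer contents matter before the call */
    if (rank == root) {
        for (i = 0; i < 100; ++i)
            array[i] = i;
    }

    /* On return, every process holds the same 100 ints */
    MPI_Bcast(array, 100, MPI_INT, root, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
.fi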
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.ft R
When the communicator is an inter-communicator, the root process in the first group broadcasts data to all the processes in the second group. The first group defines the root process. That process uses MPI_ROOT as the value of its \fIroot\fR argument. The remaining processes in the first group use MPI_PROC_NULL as the value of their \fIroot\fR argument. All processes in the second group use the rank of that root process in the first group as the value of their \fIroot\fR argument. The receive buffer arguments of the processes in the second group must be consistent with the send buffer argument of the root process in the first group.
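.sp
As an illustration, assume an inter-communicator \fIintercomm\fR has already been created (for example with MPI_Intercomm_create), that \fIin_first_group\fR is nonzero for processes in the first group, that \fIfirst_group_rank\fR is a process's rank within that group, and that rank 0 of the first group is chosen as the broadcast root; these names are placeholders set up elsewhere. The calls might then look like this sketch:
.nf

/* Sketch only: intercomm, in_first_group, and first_group_rank
   are assumed to be set up elsewhere */
int data[10];

if (in_first_group) {
    if (first_group_rank == 0)
        MPI_Bcast(data, 10, MPI_INT, MPI_ROOT, intercomm);       /* the root itself */
    else
        MPI_Bcast(data, 10, MPI_INT, MPI_PROC_NULL, intercomm);  /* other first-group processes */
} else {
    /* second group: every process passes the root's rank in the first group */
    MPI_Bcast(data, 10, MPI_INT, 0, intercomm);
}
.fi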
.sp
.SH NOTES
This function does not support the in-place option.
.sp
.SH ERRORS
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is
called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.