/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
 *                         University Research and Technology
 *                         Corporation.  All rights reserved.
 * Copyright (c) 2004-2008 The University of Tennessee and The University
 *                         of Tennessee Research Foundation.  All rights
 *                         reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * Copyright (c) 2006-2012 Cisco Systems, Inc.  All rights reserved.
 * Copyright (c) 2007      Los Alamos National Security, LLC.  All rights
 *                         reserved.
 * Copyright (c) 2008      Sun Microsystems, Inc.  All rights reserved.
 * Copyright (c) 2009      University of Houston.  All rights reserved.
 * Copyright (c) 2014      Intel, Inc.  All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/**
 * @file
 *
 * Interface into the MPI portion of the Open MPI Run Time Environment
 */

#ifndef OMPI_MPI_MPIRUNTIME_H
#define OMPI_MPI_MPIRUNTIME_H

#include "ompi_config.h"

#include "opal/class/opal_list.h"
#include "opal/class/opal_hash_table.h"

BEGIN_C_DECLS

/** forward type declaration */
struct ompi_communicator_t;
/** forward type declaration */
struct opal_thread_t;
/** forward type declaration */
struct ompi_predefined_datatype_t;

/* Global variables and symbols for the MPI layer */

/** Did mpi start to initialize? */
OMPI_DECLSPEC extern bool ompi_mpi_init_started;
/** Is mpi initialized? */
OMPI_DECLSPEC extern bool ompi_mpi_initialized;
/** Has mpi been finalized? */
OMPI_DECLSPEC extern bool ompi_mpi_finalized;
/** Has the RTE been initialized? */
OMPI_DECLSPEC extern bool ompi_rte_initialized;
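/*
 * Illustrative sketch (not part of the original header): an abort/error
 * path can consult these state flags to decide what it may legitimately
 * claim.  The helper name and messages are hypothetical; only the use of
 * ompi_rte_initialized is the point.
 *
 *   static void example_report_startup_failure(void)
 *   {
 *       if (ompi_rte_initialized) {
 *           // runtime is up: remote processes can be cleaned up normally
 *       } else {
 *           // runtime never started: we cannot guarantee that error
 *           // messages were aggregated or that other processes were killed
 *       }
 *   }
 */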

/** Do we have multiple threads? */
OMPI_DECLSPEC extern bool ompi_mpi_thread_multiple;
/** Thread level requested via \c MPI_Init_thread() */
OMPI_DECLSPEC extern int ompi_mpi_thread_requested;
/** Thread level provided by Open MPI */
OMPI_DECLSPEC extern int ompi_mpi_thread_provided;
/** Identifier of the main thread */
OMPI_DECLSPEC extern struct opal_thread_t *ompi_mpi_main_thread;
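/*
 * Illustrative sketch (not part of the original header): query-style MPI
 * calls can be answered directly from these globals.  The helper below is
 * hypothetical, and opal_thread_self_compare() is assumed to compare the
 * calling thread against a stored opal_thread_t.
 *
 *   int example_query_thread(int *provided, int *is_main)
 *   {
 *       *provided = ompi_mpi_thread_provided;
 *       *is_main  = (int) opal_thread_self_compare(ompi_mpi_main_thread);
 *       return MPI_SUCCESS;
 *   }
 */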

/*
 * These variables are for the MPI F03 bindings (F03 must bind Fortran
 * variables to symbols; it cannot bind Fortran variables to the
 * address of a C variable).
 */
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_character_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_logical_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_logical1_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_logical2_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_logical4_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_logical8_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_integer_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_integer1_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_integer2_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_integer4_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_integer8_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_integer16_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_real_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_real4_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_real8_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_real16_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_dblprec_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_cplex_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_complex8_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_complex16_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_complex32_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_dblcplex_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_2real_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_2dblprec_addr;
OMPI_DECLSPEC extern struct ompi_predefined_datatype_t *ompi_mpi_2integer_addr;

OMPI_DECLSPEC extern struct ompi_status_public_t *ompi_mpi_status_ignore_addr;
OMPI_DECLSPEC extern struct ompi_status_public_t *ompi_mpi_statuses_ignore_addr;
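/*
 * Illustrative sketch (not part of the original header): each *_addr pointer
 * is an ordinary C symbol that the Fortran bindings can reference.  A
 * hypothetical init-time assignment might look like the line below;
 * ompi_mpi_integer is an assumed name for the predefined C datatype object.
 *
 *   ompi_mpi_integer_addr = &ompi_mpi_integer;
 */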

/** Bitflags to be used for the modex exchange for the various thread
 *  levels. Required to support heterogeneous environments */
#define OMPI_THREADLEVEL_SINGLE_BF     0x00000001
#define OMPI_THREADLEVEL_FUNNELED_BF   0x00000002
#define OMPI_THREADLEVEL_SERIALIZED_BF 0x00000004
#define OMPI_THREADLEVEL_MULTIPLE_BF   0x00000008

#define OMPI_THREADLEVEL_SET_BITFLAG(threadlevelin,threadlevelout) { \
    if ( MPI_THREAD_SINGLE == threadlevelin ) {                      \
        threadlevelout |= OMPI_THREADLEVEL_SINGLE_BF;                \
    } else if ( MPI_THREAD_FUNNELED == threadlevelin ) {             \
        threadlevelout |= OMPI_THREADLEVEL_FUNNELED_BF;              \
    } else if ( MPI_THREAD_SERIALIZED == threadlevelin ) {           \
        threadlevelout |= OMPI_THREADLEVEL_SERIALIZED_BF;            \
    } else if ( MPI_THREAD_MULTIPLE == threadlevelin ) {             \
        threadlevelout |= OMPI_THREADLEVEL_MULTIPLE_BF;              \
    }}

#define OMPI_THREADLEVEL_IS_MULTIPLE(threadlevel) (threadlevel & OMPI_THREADLEVEL_MULTIPLE_BF)
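/*
 * Illustrative sketch (not part of the original header): the requested
 * thread level is folded into a bitflag for the modex payload and tested on
 * the receiving side.  The variable names are hypothetical.
 *
 *   int my_flags = 0, peer_flags = 0;
 *   OMPI_THREADLEVEL_SET_BITFLAG(MPI_THREAD_MULTIPLE, my_flags);
 *   // ... exchange my_flags for peer_flags via the modex ...
 *   if (OMPI_THREADLEVEL_IS_MULTIPLE(peer_flags)) {
 *       // the peer initialized with MPI_THREAD_MULTIPLE support
 *   }
 */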

/** Do we want to be warned on fork or not? */
OMPI_DECLSPEC extern bool ompi_warn_on_fork;

/** In ompi_mpi_init: a list of all memory associated with calling
    MPI_REGISTER_DATAREP so that we can free it during
    MPI_FINALIZE. */
OMPI_DECLSPEC extern opal_list_t ompi_registered_datareps;
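/*
 * Illustrative sketch (not part of the original header): the
 * MPI_REGISTER_DATAREP glue can append its allocations to this list so that
 * MPI_FINALIZE can release them.  example_datarep_item_t is a hypothetical
 * opal_list_item_t subclass; only the list usage pattern is the point.
 *
 *   example_datarep_item_t *item = OBJ_NEW(example_datarep_item_t);
 *   opal_list_append(&ompi_registered_datareps, &item->super);
 *
 *   // later, in ompi_mpi_finalize():
 *   opal_list_item_t *li;
 *   while (NULL != (li = opal_list_remove_first(&ompi_registered_datareps))) {
 *       OBJ_RELEASE(li);
 *   }
 */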

/** In ompi_mpi_init: the lists of Fortran 90 matching datatypes.
 *  We need these lists and hashtables in order to satisfy the new
 *  requirements introduced in MPI 2.1 Sect. 10.2.5,
 *  MPI_TYPE_CREATE_F90_xxxx, page 295, line 47.
 */
extern opal_hash_table_t ompi_mpi_f90_integer_hashtable;
extern opal_hash_table_t ompi_mpi_f90_real_hashtable;
extern opal_hash_table_t ompi_mpi_f90_complex_hashtable;
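/*
 * Illustrative sketch (not part of the original header): per the MPI 2.1
 * advice to implementors, MPI_TYPE_CREATE_F90_REAL-style glue can key one of
 * these hash tables on the (p, r) pair so that repeated calls hand back the
 * same unnamed, predefined handle.  The key packing and the constructor name
 * example_build_f90_real() are hypothetical.
 *
 *   uint64_t key = (((uint64_t) p) << 32) | (uint32_t) r;
 *   ompi_datatype_t *dt = NULL;
 *   if (OPAL_SUCCESS !=
 *       opal_hash_table_get_value_uint64(&ompi_mpi_f90_real_hashtable,
 *                                        key, (void **) &dt)) {
 *       dt = example_build_f90_real(p, r);
 *       opal_hash_table_set_value_uint64(&ompi_mpi_f90_real_hashtable,
 *                                        key, dt);
 *   }
 *   *newtype = dt;
 */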

/** version string of ompi */
OMPI_DECLSPEC extern const char ompi_version_string[];

OMPI_DECLSPEC void ompi_warn_fork(void);

/**
 * Determine the thread level
 *
 * @param requested Thread support that is requested (IN)
 * @param provided Thread support that is provided (OUT)
 */
void ompi_mpi_thread_level(int requested, int *provided);

/**
 * Initialize the Open MPI MPI environment
 *
 * @param argc argc, typically from main() (IN)
 * @param argv argv, typically from main() (IN)
 * @param requested Thread support that is requested (IN)
 * @param provided Thread support that is provided (OUT)
 *
 * @returns MPI_SUCCESS if successful
 * @returns Error code if unsuccessful
 *
 * Initialize all support code needed for MPI applications.  This
 * function should only be called by MPI applications (including
 * singletons).  If this function is called, ompi_init() and
 * ompi_rte_init() should *not* be called.
 *
 * It is permissible to pass in (0, NULL) for (argc, argv).
 */
int ompi_mpi_init(int argc, char **argv, int requested, int *provided);
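/*
 * Illustrative sketch (not part of the original header): the MPI_Init /
 * MPI_Init_thread entry points are expected to funnel into ompi_mpi_init();
 * (0, NULL) is acceptable when argc/argv are unavailable.  The wrapper name
 * is hypothetical.
 *
 *   int example_init_thread(int *argc, char ***argv,
 *                           int required, int *provided)
 *   {
 *       if (NULL == argc || NULL == argv) {
 *           return ompi_mpi_init(0, NULL, required, provided);
 *       }
 *       return ompi_mpi_init(*argc, *argv, required, provided);
 *   }
 */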

/**
 * Finalize the Open MPI MPI environment
 *
 * @returns MPI_SUCCESS if successful
 * @returns Error code if unsuccessful
 *
 * Should be called after all MPI functionality is complete (usually
 * during MPI_FINALIZE).
 */
int ompi_mpi_finalize(void);

/**
 * Abort the processes of comm
 */
OMPI_DECLSPEC int ompi_mpi_abort(struct ompi_communicator_t* comm,
                                 int errcode, bool kill_remote_of_intercomm);

/**
 * Do a preconnect of MPI connections (i.e., force connections to
 * be made if they will be made).
 */
int ompi_init_preconnect_mpi(void);
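/*
 * Illustrative sketch (not part of the original header): one simple way to
 * force connections to be established eagerly is to bounce a tiny message
 * between the local rank and every peer during startup.  The loop below is
 * schematic only; the real implementation may choose a different pattern.
 *
 *   for (int peer = 0; peer < nprocs; ++peer) {
 *       if (peer == my_rank) continue;
 *       // a zero- or one-byte send plus the matching receive is enough
 *       // to open the connection to 'peer'
 *   }
 */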

END_C_DECLS

#endif /* OMPI_MPI_MPIRUNTIME_H */