Absoft has determined exactly what the problem is (private members in
derived data types when compiled with -g), but does not yet have a
timeline for fixing it.
Add a temporary override for Absoft until they are able to fix their
compiler. This switch will at least allow us to MTT test the rest of
the mpi_f08 functionality with Absoft.
This commit was SVN r27184.
The following Trac tickets were found above:
Ticket 3248 --> https://svn.open-mpi.org/trac/ompi/ticket/3248
that causes MPI jobs to abort if there is not enough registered memory
available (vs. just warning).
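As a rough sketch of the difference in behavior (everything below is
illustrative; it is not the actual openib BTL code or MCA parameter):

    /* Illustrative only -- not the real registered-memory check. */
    #include <stdio.h>
    #include <stdlib.h>

    static void check_registered_memory(long long avail, long long needed,
                                        int abort_on_shortfall)
    {
        if (avail < needed) {
            fprintf(stderr, "only %lld of %lld bytes of registered memory "
                    "are available\n", avail, needed);
            if (abort_on_shortfall) {
                exit(1);    /* new behavior: abort the MPI job */
            }
            /* old behavior: warn and keep going */
        }
    }

    int main(void)
    {
        check_registered_memory(1024, 4096, 1);   /* aborts */
        return 0;
    }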
This commit was SVN r27140.
The following Trac tickets were found above:
Ticket 3258 --> https://svn.open-mpi.org/trac/ompi/ticket/3258
The Open MPI configure automatically adds the -D_REENTRANT flag to
CPPFLAGS. This causes one of the PGI STL headers to include the omp.h
header - unfortunately the fake one located in tools/vtwrapper/ instead
of the real one. As a result, several OpenMP symbols were undefined and
the compiler aborted.
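For illustration only (the compile line and paths below are made up, not
the actual build layout): if the stub omp.h that ships in tools/vtwrapper/
sits earlier on the include path than the real OpenMP header, an include
like this resolves to the stub and the OpenMP declarations the PGI headers
expect are missing:

    /* Illustrative compile line: the first -I wins, so the stub omp.h
     * from tools/vtwrapper/ is found instead of the real one:
     *
     *     cc -D_REENTRANT -Itools/vtwrapper -I/path/to/real/include test.c
     */
    #include <omp.h>

    int main(void)
    {
        omp_lock_t lock;    /* undeclared type if the stub omp.h was used */
        (void) lock;
        return 0;
    }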
This commit was SVN r27130.
* More "assumed shape" -> "assumed rank" text fixes
* Don't put a comment after "#endif" in .F90 files; gfortran hates that
* Fix OMPI_PROCEDURE to work properly (i.e., OMPI_HAVE_PROCEDURE ->
OMPI_FORTRAN_HAVE_PROCEDURE), and add all the required "use ::
mpi_f08_interface_callbacks" now that OMPI_PROCEDURE is working
This commit was SVN r27119.
The project includes the following components and frameworks:
- ML Collective component
- NETPATTERNS and COMMPATTERNS common components
- BCOL framework
- SBGP framework
Note: By default, the ML collective component is disabled. In order to
enable the new collectives, the user should bump up the priority of the
ml component (coll_ml_priority).
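For example, something like this on the command line (the value 90 is
arbitrary, it only needs to be higher than the priorities of the other
coll components, and my_mpi_app is a placeholder):

    shell$ mpirun --mca coll_ml_priority 90 -np 4 ./my_mpi_app

Setting OMPI_MCA_coll_ml_priority=90 in the environment before launching
is the equivalent way to set the same parameter.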
=============================================
Primary Contributors (in alphabetical order):
Ishai Rabinovich (Mellanox)
Joshua S. Ladd (ORNL / Mellanox)
Manjunath Gorentla Venkata (ORNL)
Mike Dubman (Mellanox)
Noam Bloch (Mellanox)
Pavel (Pasha) Shamis (ORNL / Mellanox)
Richard Graham (ORNL / Mellanox)
Vasily Filipov (Mellanox)
This commit was SVN r27078.
technically this is a necessary thing to do, it wasn't a tragedy that
we didn't have it, because err was initialized to 0 at the beginning of
the functions where this problem occurred. Also, OMPI will likely
abort if one of the MCA_PML_CALLs actually incurs an error (or, even
if it doesn't, MPI doesn't define the behavior anyway ;-) ).
But looking forward to an FT-aware world, fixing this issue is a Good
Thing. Many thanks to Hristo Iliev for pointing out the issue.
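A minimal sketch of the pattern in question (the stub below stands in
for an MCA_PML_CALL() invocation; it is not the real PML API):

    #include <stdio.h>

    #define OMPI_SUCCESS 0

    /* Hypothetical stand-in for an MCA_PML_CALL(...) invocation. */
    static int pml_call_stub(void)
    {
        return OMPI_SUCCESS;
    }

    static int some_collective(void)
    {
        int err = OMPI_SUCCESS;   /* previously, this initialization was all
                                     that kept err "correct" */

        err = pml_call_stub();    /* now: capture the return code...        */
        if (OMPI_SUCCESS != err) {
            return err;           /* ...and propagate it to the caller      */
        }
        return OMPI_SUCCESS;
    }

    int main(void)
    {
        printf("%d\n", some_collective());
        return 0;
    }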
This commit was SVN r27070.
- OMPI_SUCCESS
- OMPI_ERROR
- OMPI_ERR_RESOURCE_BUSY
If an "OMPI_ERR_OUT_OF_RESOURCE" occurs, the request is added to the pending list, and will be handled later. An error message
should not be printed to the user in this case. This is not an error, but rather a notification of a possible valid condition.
Only in the case of "OMPI_ERROR" should it be printed to the user.
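A rough sketch of that handling (the helper name and error-code values
below are hypothetical, not the actual code):

    #include <stdio.h>

    #define OMPI_SUCCESS              0
    #define OMPI_ERROR               -1
    #define OMPI_ERR_OUT_OF_RESOURCE -2   /* illustrative values only */

    /* Hypothetical helper: queue the request to be retried later. */
    static void append_to_pending_list(void)
    {
    }

    static void handle_return_code(int rc)
    {
        switch (rc) {
        case OMPI_ERR_OUT_OF_RESOURCE:
            append_to_pending_list();         /* handled later; no message  */
            break;                            /* is printed                 */
        case OMPI_ERROR:
            fprintf(stderr, "request failed: %d\n", rc);  /* only this case */
            break;                                        /* reaches user   */
        default:
            break;                            /* e.g. OMPI_SUCCESS          */
        }
    }

    int main(void)
    {
        handle_return_code(OMPI_ERR_OUT_OF_RESOURCE);
        handle_return_code(OMPI_ERROR);
        return 0;
    }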
This commit was SVN r27065.
btl_openib_connect_udcm when notifying not to listen to an fd to ensure
that the main thread does not continue until the service thread has
processed the message.
Adds the ability to send a message to the openib async thread to tell it
to ignore the ERR state on a specific QP. Adds this call to
udcm_module_finalize so that when we set the error state on the QP, it
doesn't cause the openib async thread to abort the MPI program
prematurely.
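A minimal, self-contained sketch of the "notify the service thread and
do not continue until it has processed the message" pattern mentioned
above (the pipes and names are illustrative; this is not the actual
udcm code):

    #include <pthread.h>
    #include <unistd.h>

    static int cmd_fd[2];   /* main thread    -> service thread */
    static int ack_fd[2];   /* service thread -> main thread    */

    static void *service_thread(void *arg)
    {
        char cmd;
        (void) arg;
        read(cmd_fd[0], &cmd, 1);    /* receive "stop listening on fd"   */
        /* ... remove the fd from the poll set here ... */
        write(ack_fd[1], &cmd, 1);   /* ack only after the work is done  */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        char byte = 'S', ack;

        pipe(cmd_fd);
        pipe(ack_fd);
        pthread_create(&tid, NULL, service_thread, NULL);

        write(cmd_fd[1], &byte, 1);  /* notify the service thread        */
        read(ack_fd[0], &ack, 1);    /* block until it has processed the */
                                     /* message, then continue           */
        pthread_join(tid, NULL);
        return 0;
    }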
Fixes trac:3161
This commit was SVN r27064.
The following Trac tickets were found above:
Ticket 3161 --> https://svn.open-mpi.org/trac/ompi/ticket/3161
structure
* Minor optimization: if MPI_Test returns flag == .FALSE., don't copy
over the request/status to the OUT variables (see the sketch below)
* Update comments about .TRUE./.FALSE. compiler values
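Sketched against the C MPI_Test binding (this is only an illustration,
not the actual mpi_f08 wrapper code):

    #include <mpi.h>

    /* Copy the status back to the caller's OUT argument only when
     * MPI_Test reports completion (flag != 0).                      */
    static int test_and_maybe_copy(MPI_Request *req, int *flag,
                                   MPI_Status *caller_status)
    {
        MPI_Status local;
        int rc = MPI_Test(req, flag, &local);

        if (MPI_SUCCESS == rc && *flag) {
            *caller_status = local;   /* skip the copy when flag is false */
        }
        return rc;
    }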
This commit was SVN r27041.
forever.
Don't copy a value back to the user's buffer unless the FLAG was set
to .TRUE. (i.e., indicating that we found the key).
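The general "only copy back when the flag is true" pattern, sketched
here with the C MPI_Info_get binding purely for illustration (this may
not be the routine the commit actually touched):

    #include <stdio.h>
    #include <mpi.h>

    static int get_info_value(MPI_Info info, const char *key,
                              char *user_buf, int user_buflen)
    {
        char tmp[MPI_MAX_INFO_VAL + 1];
        int flag = 0;
        int rc = MPI_Info_get(info, key, MPI_MAX_INFO_VAL, tmp, &flag);

        if (MPI_SUCCESS == rc && flag) {
            /* Only touch the user's buffer when the key was found. */
            snprintf(user_buf, (size_t) user_buflen, "%s", tmp);
        }
        return rc;
    }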
This commit was SVN r27040.
vals.
* Note that the pre-defined Info objects don't need to have fortran
indexes assigned; they should already be assigned in the
constructor. So add an assert() to ensure that this really happens
properly (see the sketch below).
* Add MPI_INFO_ENV to the Fortran interfaces
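The invariant being asserted, sketched with hypothetical names (this is
not the real Info object layout):

    #include <assert.h>

    /* Hypothetical stand-in for the real Info object type. */
    struct example_info {
        int f_to_c_index;   /* Fortran handle assigned by the constructor */
    };

    static void register_predefined(struct example_info *info)
    {
        /* The constructor should already have assigned a valid Fortran
         * index, so just verify the invariant rather than assign it.    */
        assert(info->f_to_c_index >= 0);
    }

    int main(void)
    {
        struct example_info env_info = { 0 };
        register_predefined(&env_info);
        return 0;
    }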
This commit was SVN r27039.