openmpi/ompi
Nathan Hjelm 8445c885ce pml/cm: update for request changes
This fixes a hang caused by the request refactor work. The cm pml was
not updated and was hanging in most cases.

Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
2016-05-25 15:35:32 -06:00
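
The hang described above is characteristic of a completion-mechanism change: if the request refactor (#1422, also visible in the communicator and mpi entries below) moves completion signaling onto a shared synchronization object, a component still completing requests the old way never wakes its waiters. Below is a minimal sketch of that shared-sync pattern, using hypothetical names (sync_obj_t, request_complete) rather than Open MPI's actual internal API:

```c
/* A minimal sketch, with hypothetical names (not Open MPI's internal
 * API), of a shared completion-synchronization pattern: waiters attach
 * one sync object to every pending request, and whichever request
 * completes signals it. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct sync_obj {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             pending;      /* requests still outstanding */
} sync_obj_t;

typedef struct request {
    sync_obj_t   *sync;           /* attached by the waiter, else NULL */
    volatile bool complete;
} request_t;

/* Waiter side: block on one sync object that covers many requests. */
static void sync_wait(sync_obj_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->pending > 0) {
        pthread_cond_wait(&s->cond, &s->lock);
    }
    pthread_mutex_unlock(&s->lock);
}

/* Progress-engine side: a component that only sets `complete` and
 * never signals the sync object leaves the waiter blocked forever --
 * the kind of hang the commit above fixes. */
static void request_complete(request_t *req)
{
    req->complete = true;
    if (NULL != req->sync) {
        pthread_mutex_lock(&req->sync->lock);
        if (0 == --req->sync->pending) {
            pthread_cond_signal(&req->sync->cond);
        }
        pthread_mutex_unlock(&req->sync->lock);
    }
}
```

The design point is that one sync object can cover many requests, so a wait-any style wait blocks on a single condition variable instead of one per request.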
attribute more c99 updates 2015-06-25 10:14:13 -06:00
class Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
communicator Refactor the request completion (#1422) 2016-05-24 18:20:51 -05:00
contrib Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
datatype Fix MPI datatype args. 2016-05-24 23:36:54 -04:00
debuggers Merge pull request #1640 from jsquyres/pr/mpir-cleanup 2016-05-05 21:23:30 -04:00
dpm ompi/dpm: retrieves OPAL_PMIX_ARCH in heterogeneous mode 2016-02-22 11:01:06 +09:00
errhandler Per request from Jeff, aggregate all help messages during MPI_Init thru MPI_Finalize as long as the RTE is available 2016-04-15 13:37:22 -07:00
etc Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
file io: do not cast away the const modifier when this is not necessary 2015-09-09 09:18:58 +09:00
group ompi/group: fix sparse group proc reference counting 2016-04-27 15:55:13 -06:00
include fortran: add missing constants for MPI_WIN_CREATE_FLAVOR and MPI_WIN_MODEL 2016-03-14 10:19:21 +09:00
info ompi: fixup hostname max length usage 2016-04-25 07:08:23 +02:00
mca pml/cm: update for request changes 2016-05-25 15:35:32 -06:00
message Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
mpi Refactor the request completion (#1422) 2016-05-24 18:20:51 -05:00
mpiext configury: remove the --enable-mpi-profiling option 2015-10-13 08:52:35 +09:00
op op: allow user operations in ompi_3buff_op_reduce 2015-10-02 10:35:21 -06:00
patterns Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
peruse more c99 updates 2015-06-25 10:14:13 -06:00
proc ompi_proc_complete_init_single: make the subroutine public 2016-02-22 11:01:06 +09:00
request request: fix compilation error 2016-05-25 09:52:23 -06:00
runtime When direct-launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides a non-blocking fence function, so we push those calls into a separate event thread (reusing the OPAL async thread rather than creating another one) and let the MPI thread spin in wait_for_completion; see the sketch after this listing. This also restores the "lazy" completion during MPI_Finalize to minimize CPU utilization. 2016-05-14 16:37:00 -07:00
tools Merge pull request #1552 from kmroz/wip-hostname-len-cleanup-1 2016-05-02 09:44:18 -04:00
win win: add support for the accumulate_ordering info key (example after this listing) 2016-05-24 11:13:30 -06:00
Makefile.am use-mpi extensions do not have a .la lib, so the fortran module should not depend on them. 2016-05-03 11:54:35 -04:00
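
The pattern the runtime entry above describes can be illustrated with plain pthreads: the blocking fence runs on a helper thread while the caller spins on a completion flag and keeps driving the progress engine. This is a simplified sketch; blocking_fence and progress_mpi are stand-ins, not the actual ORTE/OPAL calls:

```c
/* Sketch of offloading a blocking RTE fence to a helper thread so the
 * MPI thread can keep progressing while it waits for completion. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool fence_done;

/* Stand-in for the blocking PMI/PMIx fence (SLURM and Cray expose
 * only blocking variants, per the commit message above). */
static void blocking_fence(void) { /* collective sync with the RTE */ }

/* Stand-in for driving the MPI progress engine. */
static void progress_mpi(void) { }

static void *fence_thread(void *arg)
{
    (void) arg;
    blocking_fence();                 /* block in the helper thread */
    atomic_store(&fence_done, true);  /* wake the spinning MPI thread */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    atomic_init(&fence_done, false);
    pthread_create(&tid, NULL, fence_thread, NULL);
    /* The MPI thread spins in a wait-for-completion loop, keeping the
     * progress engine running so MPI traffic flows during the barrier. */
    while (!atomic_load(&fence_done)) {
        progress_mpi();
    }
    pthread_join(&tid, NULL);
    return 0;
}
```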
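
For the win entry above: accumulate_ordering is a standard MPI-3 window info key whose value is a comma-separated subset of rar, raw, war, waw (the default is all four). A minimal example of passing it at window creation; the choice of "raw,waw" here is arbitrary, just to show a relaxed setting:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Info info;
    MPI_Win  win;
    int     *base;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* Relaxing the ordering lets the implementation reorder
     * same-origin accumulate operations to the same target location. */
    MPI_Info_set(info, "accumulate_ordering", "raw,waw");

    MPI_Win_allocate((MPI_Aint) sizeof(int), sizeof(int), info,
                     MPI_COMM_WORLD, &base, &win);

    /* ... RMA epochs using MPI_Accumulate / MPI_Get_accumulate ... */

    MPI_Win_free(&win);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```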