openmpi/ompi/runtime
Latest commit: 2016-05-14 16:37:00 -07:00
help-mpi-runtime.txt init/finalize: extensions 2015-10-15 12:39:15 -04:00
Makefile.am mpi: infrastructure to gracefully disable MPI dyn procs 2015-10-14 13:42:56 -07:00
mpiruntime.h init/finalize: extensions 2015-10-15 12:39:15 -04:00
ompi_cr.c Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
ompi_cr.h Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
ompi_info_support.c opal: fix multiple bugs in MCA and opal 2015-04-07 19:13:20 -06:00
ompi_info_support.h tools: Add oshmem_info utility 2013-10-12 19:03:32 +00:00
ompi_mpi_abort.c debuggers: remove some useless code 2016-05-05 14:22:55 -07:00
ompi_mpi_dynamics.c mpi: infrastructure to gracefully disable MPI dyn procs 2015-10-14 13:42:56 -07:00
ompi_mpi_finalize.c When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize CPU utilization. 2016-05-14 16:37:00 -07:00
ompi_mpi_init.c When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize CPU utilization. 2016-05-14 16:37:00 -07:00
ompi_mpi_params.c ompi_mpi_params.c: set mpi_add_procs_cutoff default to 0 2016-02-09 09:41:36 -08:00
ompi_mpi_preconnect.c Purge whitespace from the repo 2015-06-23 20:59:57 -07:00
params.h Remove extraneous declaration. 2015-12-19 01:34:48 -05:00