openmpi / opal / mca / pmix
Ralph Castain
55923eacd3
Stealing some pieces of Josh Hursey's PR #1583 and modifying them a bit, allow the opal/pmix external component to handle both the PMIx 1.1.4 and PMIx 2.0 versions. Automatically detect the version of the target external library and adjust the only two APIs that changed (PMIx_Init and PMIx_Finalize).
...
Rename temp vars in .m4 to avoid conflict with Travis
2016-05-27 08:06:31 -07:00
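As a rough illustration of the dual-version support this commit describes, here is a minimal sketch of such a shim, assuming a configure-time test has defined a macro when the external library is PMIx 2.0. The macro name OPAL_PMIX_EXTERNAL_V2 and the wrapper function names are hypothetical, not the component's actual symbols:

```c
/* Hypothetical shim: the actual component detects the library
 * version at configure time (in the .m4 logic this commit touches);
 * OPAL_PMIX_EXTERNAL_V2 is an assumed macro name. */
#include <pmix.h>

static pmix_proc_t myproc;

int opal_pmix_external_init(void)
{
    pmix_status_t rc;
#if OPAL_PMIX_EXTERNAL_V2
    /* PMIx 2.0 added an info array to PMIx_Init */
    rc = PMIx_Init(&myproc, NULL, 0);
#else
    /* The PMIx 1.1.4 signature takes only the proc */
    rc = PMIx_Init(&myproc);
#endif
    return (PMIX_SUCCESS == rc) ? 0 : -1;
}

int opal_pmix_external_finalize(void)
{
    pmix_status_t rc;
#if OPAL_PMIX_EXTERNAL_V2
    rc = PMIx_Finalize(NULL, 0);
#else
    rc = PMIx_Finalize();
#endif
    return (PMIX_SUCCESS == rc) ? 0 : -1;
}
```

Keeping the divergence behind two small wrappers means the rest of the component can stay identical for both library versions.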
base
Fix registration of error handlers through the pmix120 component. A thread-shift operation was hanging on the sync_event_base, which made it dependent on someone calling opal_progress. Unfortunately, a process in "sleep" or spinning outside the MPI library won't do that, and so we never complete errhandler registration.
2016-03-02 15:01:01 -08:00
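The fix amounts to moving the operation onto an event base that has its own progress thread. A minimal sketch of that thread-shift pattern, written against plain libevent with hypothetical names (the real code uses OPAL's event wrappers and its existing async thread):

```c
/* Sketch only: hypothetical names, plain libevent in place of
 * OPAL's event wrappers and async progress thread. */
#include <event2/event.h>
#include <event2/thread.h>
#include <pthread.h>
#include <stdbool.h>

static struct event_base *async_base;   /* driven by its own thread */

static void *progress_thread(void *arg)
{
    /* Loops independently of opal_progress, so posted callbacks fire
     * even while the app sleeps or spins outside the MPI library. */
    event_base_loop(async_base, EVLOOP_NO_EXIT_ON_EMPTY);
    return NULL;
}

static void start_async_base(void)
{
    pthread_t tid;
    evthread_use_pthreads();             /* base is touched from two threads */
    async_base = event_base_new();
    pthread_create(&tid, NULL, progress_thread, NULL);
}

/* The errhandler-registration callback, run on the progress thread. */
static void register_cb(evutil_socket_t fd, short flags, void *cbdata)
{
    /* ... perform the registration ... */
    *(volatile bool *)cbdata = true;     /* signal completion */
}

/* Thread-shift: hand the registration to the progress thread. */
static void shift_registration(volatile bool *done)
{
    struct event *ev = event_new(async_base, -1, 0, register_cb, (void *)done);
    event_active(ev, 0, 0);
}
```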
cray
Merge pull request #1668 from rhc54/topic/slurm
2016-05-16 12:23:19 -07:00
external
Stealing some pieces of Josh Hursey's PR #1583 and modifying them a bit, allow the opal/pmix external component to handle both the PMIx 1.1.4 and PMIx 2.0 versions. Automatically detect the version of the target external library and adjust the only two APIs that changed (PMIx_Init and PMIx_Finalize).
2016-05-27 08:06:31 -07:00
isolated
Fix a number of issues, some of which have lingered for a long time:
2016-03-01 06:53:00 -08:00
pmix114
pmix: update .gitignore
2016-05-23 11:58:07 +09:00
s1
When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize CPU utilization.
2016-05-14 16:37:00 -07:00
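A minimal sketch of that pattern, with hypothetical names: opal_async_base and blocking_fence stand in for OPAL's async event base and the real blocking PMI calls (PMI_Barrier for the s1 component, PMI2_KVS_Fence for s2):

```c
/* Sketch only: push the blocking fence onto the async event thread
 * and let the MPI thread spin while calling opal_progress. */
#include <event2/event.h>
#include <stdbool.h>

extern int opal_progress(void);              /* OPAL's progress engine */
extern struct event_base *opal_async_base;   /* assumed handle to the async base */

typedef struct {
    volatile bool active;                    /* cleared when the fence returns */
    int status;
} fence_tracker_t;

static int blocking_fence(void) { return 0; } /* placeholder for the PMI call */

/* Runs on the async event thread, so blocking here is harmless. */
static void fence_cb(evutil_socket_t fd, short flags, void *cbdata)
{
    fence_tracker_t *trk = cbdata;
    trk->status = blocking_fence();
    trk->active = false;                     /* release the MPI thread */
}

/* Called on the MPI thread during an RTE-level barrier. */
int fence_with_progress(void)
{
    fence_tracker_t trk = { .active = true, .status = 0 };
    struct event *ev = event_new(opal_async_base, -1, 0, fence_cb, &trk);

    event_active(ev, 0, 0);                  /* shift the fence off this thread */
    while (trk.active) {
        opal_progress();                     /* MPI layer keeps progressing */
    }
    event_free(ev);
    return trk.status;
}
```

Reusing the existing OPAL async thread, as the message notes, avoids spawning one extra thread per fence.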
s2
When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize CPU utilization.
2016-05-14 16:37:00 -07:00
Makefile.am
Integrate PMIx 1.0 with OMPI.
2015-08-29 16:04:10 -07:00
pmix_server.h
Add pmix120 component, update the error handling functions in the PMIx API.
2015-12-28 23:15:44 +09:00
pmix_types.h
Per user request, add some missing data and definitions:
2016-05-09 08:39:01 -07:00
pmix.h
When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize CPU utilization.
2016-05-14 16:37:00 -07:00