openmpi/opal/mca/pmix
History: latest commit 12ecf972af by Ralph Castain (2016-06-01 14:15:24 -07:00)
Split the pmix external component into one for the 1.1.4 release, and another for the upcoming 2.0 release. Clean up the configury so the components look for a series-specific function instead of running a program.
NOTE: the changes for the 2.0 series are not yet in the PMIx master.
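The "series-specific function" approach boils down to a link-time probe. A minimal sketch, assuming a conftest-style C program such as a configure link check would compile; the probed symbol is a placeholder, not necessarily what the actual configury checks for:

```c
/* conftest-style probe (sketch): if this links against the installed
 * libpmix, the library belongs to the expected series. A link test
 * works when cross-compiling, whereas a test that must compile AND
 * run a program on the build host does not. */
#include <pmix.h>

int main(void)
{
    /* Taking the function's address forces the linker to resolve the
     * symbol; PMIx_Init here is a placeholder for whatever function is
     * unique to the targeted PMIx series. */
    void (*probe)(void) = (void (*)(void))PMIx_Init;
    return probe == 0;
}
```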
base: Fix registration of error handlers through the pmix120 component. A thread-shift operation was hanging on the sync_event_base, which made it dependent on someone calling opal_progress. Unfortunately, a process in "sleep" or spinning outside the MPI library won't do that, and so we never complete errhandler registration. (2016-03-02 15:01:01 -08:00)
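The hang described above is the classic trap of queuing work on an event base that nobody pumps. A minimal libevent sketch of the fix, assuming illustrative names (async_base, register_errhandler_cb) rather than OPAL's actual globals:

```c
#include <event2/event.h>
#include <event2/thread.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int done;

static void register_errhandler_cb(evutil_socket_t fd, short what, void *arg)
{
    (void)fd; (void)what; (void)arg;
    /* ... perform the actual registration, then signal the waiter ... */
    atomic_store(&done, 1);
}

static void *progress_thread(void *arg)
{
    /* This base is pumped continuously, independent of MPI calls, so
     * queued work completes even if the process sleeps or spins
     * outside the MPI library. */
    event_base_loop((struct event_base *)arg, EVLOOP_NO_EXIT_ON_EMPTY);
    return NULL;
}

int main(void)
{
    struct timeval now = { 0, 0 };
    pthread_t tid;

    evthread_use_pthreads();          /* make loopbreak safe across threads */
    struct event_base *async_base = event_base_new();
    pthread_create(&tid, NULL, progress_thread, async_base);

    /* Thread-shift onto the async base, NOT a sync base that only runs
     * when someone calls the progress function. */
    event_base_once(async_base, -1, EV_TIMEOUT, register_errhandler_cb,
                    NULL, &now);

    while (!atomic_load(&done))
        ;                             /* registration now completes even here */

    event_base_loopbreak(async_base);
    pthread_join(tid, NULL);
    event_base_free(async_base);
    return 0;
}
```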
cray: Merge pull request #1668 from rhc54/topic/slurm (2016-05-16 12:23:19 -07:00)
ext20: Split the pmix external component into one for the 1.1.4 release, and another for the upcoming 2.0 release. Clean up the configury so the components look for a series-specific function instead of running a program. (2016-06-01 14:15:24 -07:00)
ext114: Split the pmix external component into one for the 1.1.4 release, and another for the upcoming 2.0 release. Clean up the configury so the components look for a series-specific function instead of running a program. (2016-06-01 14:15:24 -07:00)
isolated: Fix a number of issues, some of which have lingered for a long time: (2016-03-01 06:53:00 -08:00)
pmix114: pmix: update .gitignore (2016-05-23 11:58:07 +09:00)
s1: When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize cpu utilization. (2016-05-14 16:37:00 -07:00)
s2: When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize cpu utilization. (2016-05-14 16:37:00 -07:00)
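A pthreads sketch of that pattern, with illustrative names (blocking_pmi_fence, mpi_progress) standing in for the real SLURM/Cray fence and OPAL's progress engine; the real code reuses the existing OPAL async thread rather than spawning one, as the message above notes:

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

static atomic_int fence_active;

/* Placeholders: a blocking PMI fence (SLURM/Cray expose no non-blocking
 * variant) and the MPI-layer progress engine. */
static void blocking_pmi_fence(void) { /* e.g. a PMI barrier; blocks */ }
static void mpi_progress(void)       { /* drive the MPI progress engine */ }

static void *fence_thread(void *arg)
{
    (void)arg;
    blocking_pmi_fence();            /* blocks until all procs arrive */
    atomic_store(&fence_active, 0);  /* signal the spinning MPI thread */
    return NULL;
}

/* The MPI thread spins here, driving progress the whole time, so the
 * MPI layer keeps moving during the RTE-level barrier. */
static void wait_for_completion(void)
{
    while (atomic_load(&fence_active)) {
        mpi_progress();
        sched_yield();   /* a "lazy" variant would sleep briefly instead,
                            minimizing cpu use during MPI_Finalize */
    }
}

int main(void)
{
    pthread_t tid;
    atomic_store(&fence_active, 1);
    /* Stands in for pushing the blocking call onto the existing OPAL
     * async event thread. */
    pthread_create(&tid, NULL, fence_thread, NULL);
    wait_for_completion();
    pthread_join(tid, NULL);
    return 0;
}
```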
Makefile.am: Integrate PMIx 1.0 with OMPI. (2015-08-29 16:04:10 -07:00)
pmix_server.h: Split the pmix external component into one for the 1.1.4 release, and another for the upcoming 2.0 release. Clean up the configury so the components look for a series-specific function instead of running a program. (2016-06-01 14:15:24 -07:00)
pmix_types.h: Per user request, add some missing data and definitions: (2016-05-09 08:39:01 -07:00)
pmix.h: When direct launching applications, we must allow the MPI layer to progress during RTE-level barriers. Neither SLURM nor Cray provides non-blocking fence functions, so push those calls into a separate event thread (use the OPAL async thread for this purpose so we don't create another one) and let the MPI thread spin in wait_for_completion. This also restores the "lazy" completion during MPI_Finalize to minimize cpu utilization. (2016-05-14 16:37:00 -07:00)