openmpi / orte / mca / oob / tcp
Latest commit: 0a91fdf85f by Ralph Castain, 2014-08-19 19:48:24 +00:00
    If an initial address fails to connect, record that fact and attempt the next address for that proc. If nothing succeeds, then declare failure.
    cmr=v1.8.2:reviewer=edgar This commit was SVN r32553.
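The failover behavior this commit describes is simple to picture: keep a list of candidate addresses per peer, mark each address that fails to connect, and declare the peer unreachable only once the list is exhausted. Below is a minimal C sketch of that pattern, assuming blocking connects; the types and helpers here (peer_t, peer_addr_t, connect_to_peer) are hypothetical illustrations, and the actual event-driven code in oob_tcp_connection.c differs in detail.

    /* Sketch only: hypothetical stand-ins for a peer and its candidate
     * addresses; the real ORTE structures in oob_tcp_peer.h are more
     * involved and the real connect path is non-blocking. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    typedef struct {
        struct sockaddr_in addr;   /* one candidate address for the peer */
        bool failed;               /* set once a connect() to it has failed */
    } peer_addr_t;

    typedef struct {
        peer_addr_t *addrs;
        size_t naddrs;
    } peer_t;

    /* Returns a connected socket, or -1 if every address failed. */
    static int connect_to_peer(peer_t *peer)
    {
        for (size_t i = 0; i < peer->naddrs; i++) {
            if (peer->addrs[i].failed) {
                continue;              /* already known bad - skip it */
            }
            int sd = socket(AF_INET, SOCK_STREAM, 0);
            if (sd < 0) {
                return -1;
            }
            if (connect(sd, (struct sockaddr *)&peer->addrs[i].addr,
                        sizeof(peer->addrs[i].addr)) == 0) {
                return sd;             /* success - use this address */
            }
            /* record the failure and move on to the next address */
            peer->addrs[i].failed = true;
            close(sd);
        }
        return -1;                     /* nothing succeeded: declare failure */
    }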
Files (most recent commit touching each):

configure.m4 (2013-02-28 01:35:55 +00:00)
    Remove the old configure option for disabling full rte support - we now use the OMPI rte framework for such purposes

help-oob-tcp.txt (2014-05-01 19:48:21 +00:00)
    Try again to get an error message printed when a daemon fails to successfully report back to mpirun. In this case, there is no guaranteed way for the daemon to output the error report itself - we don't have a connection back to the HNP, and we have tied stderr off to /dev/null (for good reasons). So the HNP has to detect the failure itself and report it.

Makefile.am (2014-05-08 14:32:24 +00:00)
    Use the correct abstraction layer name for the data dirs

oob_tcp_component.c (2014-07-30 15:46:02 +00:00)
    Some small leak cleanups

oob_tcp_connection.c (2014-08-19 19:48:24 +00:00)
    If an initial address fails to connect, record that fact and attempt the next address for that proc. If nothing succeeds, then declare failure.

oob_tcp_hdr.h (2013-12-23 21:39:49 +00:00; see the fixed-width header sketch after this listing)
    Don't use "size_t" for the nbytes field in the header - use uint32_t to ensure that ntohl/htonl correctly match it

oob_tcp_listener.h (2013-08-22 16:37:40 +00:00)
    As per the RFC, bring in the ORTE async progress code and the rewrite of OOB:

oob_tcp.c, oob_tcp.h, oob_tcp_common.c, oob_tcp_common.h, oob_tcp_component.h, oob_tcp_connection.h, oob_tcp_listener.c, oob_tcp_peer.h, oob_tcp_ping.h, oob_tcp_sendrecv.c, oob_tcp_sendrecv.h (all 2014-06-06 02:24:17 +00:00; see the progress-thread sketch after this listing)
    Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
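The oob_tcp_hdr.h change above is about fixed-width wire formats: htonl() and ntohl() convert exactly 32 bits, while size_t is 64 bits wide on most modern platforms, so round-tripping a size_t through them can silently truncate the value. A minimal sketch of the idea follows; msg_hdr_t is an illustrative stand-in, not the real oob_tcp_hdr_t.

    /* Why the byte count must be a fixed-width uint32_t: the field then
     * matches what htonl()/ntohl() actually convert, on every platform. */
    #include <stdint.h>
    #include <arpa/inet.h>

    typedef struct {
        uint32_t nbytes;   /* payload length: always exactly 32 bits */
    } msg_hdr_t;

    static void hdr_hton(msg_hdr_t *hdr)
    {
        hdr->nbytes = htonl(hdr->nbytes);   /* width matches the field exactly */
    }

    static void hdr_ntoh(msg_hdr_t *hdr)
    {
        hdr->nbytes = ntohl(hdr->nbytes);
    }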
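The repeated 2014-06-06 commit message describes the architectural shift: one TCP module instead of several driving NICs in parallel, with the option to run that single module in its own progress thread. The sketch below shows that shape using plain pthreads and poll() for illustration; the real component drives a libevent event base, and none of these names are actual ORTE symbols.

    /* Sketch: a single module whose event loop runs either inline or in a
     * dedicated progress thread, depending on how it is started. */
    #include <pthread.h>
    #include <poll.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        int listen_sd;           /* the module's single listening socket */
        atomic_bool running;     /* cleared to ask the loop to exit */
        bool threaded;           /* true if a progress thread was spawned */
        pthread_t thread;
    } tcp_module_t;

    static void *progress_loop(void *arg)
    {
        tcp_module_t *mod = (tcp_module_t *)arg;
        struct pollfd pfd = { .fd = mod->listen_sd, .events = POLLIN };
        while (mod->running) {
            /* short timeout so the loop notices a shutdown request */
            if (poll(&pfd, 1, 100) > 0 && (pfd.revents & POLLIN)) {
                /* accept()/recv() and message dispatch would happen here */
            }
        }
        return NULL;
    }

    /* If so directed, spin the loop up in its own thread; otherwise the
     * caller drives progress inline. */
    static int module_start(tcp_module_t *mod, bool own_thread)
    {
        mod->running = true;
        mod->threaded = own_thread;
        if (own_thread) {
            return pthread_create(&mod->thread, NULL, progress_loop, mod);
        }
        progress_loop(mod);
        return 0;
    }

    static void module_stop(tcp_module_t *mod)
    {
        mod->running = false;
        if (mod->threaded) {
            pthread_join(mod->thread, NULL);
        }
    }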