ports / openmpi
openmpi / orte / mca / oob / tcp
History

Ralph Castain · 5cdbc00136 · 2014-08-30 19:33:46 +00:00
Re-enable the usock oob component. Ensure the TCP component promotes messages for other procs to the OOB base so that other components have a chance to send the relay. Seems to be passing MTT, so let's see how it works for others.
This commit was SVN r32650.
configure.m4
Remove the old configure option for disabling full rte support - we now use the OMPI rte framework for such purposes
2013-02-28 01:35:55 +00:00
help-oob-tcp.txt
Try again to get an error message printed when a daemon fails to successfully report back to mpirun. In this case, there is no guaranteed way for the daemon to output the error report itself - we don't have a connection back to the HNP, and we have tied stderr off to /dev/null (for good reasons). So the HNP has to detect the failure itself and report it.
2014-05-01 19:48:21 +00:00
Makefile.am
Use the correct abstraction layer name for the data dirs
2014-05-08 14:32:24 +00:00
oob_tcp_common.c
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
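The single-module change described above amounts to one event base servicing all OOB/TCP sockets, optionally dispatched from a dedicated progress thread. A minimal sketch of that pattern using libevent and pthreads; the function names and the use_own_thread flag are illustrative, not the actual ORTE interfaces:

/* One event base drives every OOB/TCP socket. If a progress thread is
 * requested, the base is dispatched from its own pthread instead of the
 * caller's event loop. Illustrative only; not the ORTE API. */
#include <pthread.h>
#include <event2/event.h>

static void *oob_progress_thread(void *arg)
{
    struct event_base *base = (struct event_base *)arg;
    event_base_dispatch(base);   /* runs while registered events remain */
    return NULL;
}

static int oob_start_progress(struct event_base *base, int use_own_thread,
                              pthread_t *tid)
{
    if (!use_own_thread) {
        return 0;                /* sockets serviced by the main event loop */
    }
    return pthread_create(tid, NULL, oob_progress_thread, base);
}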
oob_tcp_common.h
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
oob_tcp_component.c
Per the PMIx RFC:
2014-08-21 18:56:47 +00:00
oob_tcp_component.h
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
oob_tcp_connection.c
Track down the last piece of the connection problem. It appears that
2014-08-20 16:55:36 +00:00
oob_tcp_connection.h
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
oob_tcp_hdr.h
Don't use "size_t" for the nbytes field in the header - use uint32_t to ensure that ntohl/htonl correctly match it
2013-12-23 21:39:49 +00:00
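The oob_tcp_hdr.h fix above is a byte-order issue: htonl()/ntohl() convert exactly 32 bits, while size_t is commonly 64-bit, so only a uint32_t length field matches those calls without truncation. A minimal sketch with an illustrative header struct (not the real ORTE oob_tcp header):

#include <stdint.h>
#include <arpa/inet.h>

/* Illustrative header only; the real oob_tcp header carries more fields. */
typedef struct {
    uint32_t type;    /* message type */
    uint32_t nbytes;  /* payload length: fixed 32-bit width */
} example_hdr_t;

static void hdr_hton(example_hdr_t *hdr)
{
    hdr->type   = htonl(hdr->type);
    hdr->nbytes = htonl(hdr->nbytes);  /* uint32_t matches htonl exactly */
}

static void hdr_ntoh(example_hdr_t *hdr)
{
    hdr->type   = ntohl(hdr->type);
    hdr->nbytes = ntohl(hdr->nbytes);
}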
oob_tcp_listener.c
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
oob_tcp_listener.h
As per the RFC, bring in the ORTE async progress code and the rewrite of OOB:
2013-08-22 16:37:40 +00:00
oob_tcp_peer.h
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
oob_tcp_ping.h
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
oob_tcp_sendrecv.c
Re-enable the usock oob component. Ensure the TCP component promotes messages for other procs to the OOB base so that other components have a chance to send the relay. Seems to be passing MTT, so let's see how it works for others.
2014-08-30 19:33:46 +00:00
oob_tcp_sendrecv.h
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00
oob_tcp.c
Remove an unnecessary optimization that can cause more trouble than it's worth - just try all the addresses that are given to us.
2014-08-20 20:58:07 +00:00
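The oob_tcp.c change above drops address-selection heuristics in favor of simply walking every address the peer published. A minimal synchronous sketch of that pattern with standard sockets; connect_to_peer() and the flat address array are hypothetical, and the real component drives its connections asynchronously:

/* Try each published address in turn until one connects. */
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

static int connect_to_peer(const struct sockaddr_in *addrs, int naddrs)
{
    for (int i = 0; i < naddrs; i++) {
        int sd = socket(AF_INET, SOCK_STREAM, 0);
        if (sd < 0) {
            continue;
        }
        if (connect(sd, (const struct sockaddr *)&addrs[i],
                    sizeof(addrs[i])) == 0) {
            return sd;              /* first address that works wins */
        }
        close(sd);                  /* failed: move on to the next one */
    }
    return -1;                      /* no address was reachable */
}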
oob_tcp.h
Now that the BTLs are moving down to OPAL and becoming available to ORTE, there no longer is a need/desire to push performance in the OOB/TCP component. So we don't need multiple modules driving NICs in parallel, and can drop all the complicated distribution logic. Fall back to the simplified single module model, but retain the ability to run that module in its own progress thread if so directed.
2014-06-06 02:24:17 +00:00