When opening a conduit, check for the transport preference in the following order:
(1) The rml_ofi_transports MCA parameter. This parameter holds the list of transports (currently "ethernet" and "fabric" are valid); "fabric" takes higher priority if provided (see the example invocation below).
(2) The ORTE_RML_TRANSPORT_TYPE key with value "ethernet" or "fabric"; "fabric" takes higher priority.
If a specific provider is required, use the ORTE_RML_OFI_PROV_NAME key with a value of "socket", "OPA", or any other provider supported on the system.
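For example, a user could express the transport preference via the MCA parameter on the command line (an illustrative invocation; the application name is a placeholder):

    # prefer fabric transports, falling back to ethernet
    mpirun -mca rml_ofi_transports fabric,ethernet -np 2 ./my_app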
modified: ../orte/mca/rml/ofi/rml_ofi.h
modified: ../orte/mca/rml/ofi/rml_ofi_component.c
modified: ../orte/mca/rml/ofi/rml_ofi_send.c
On send_msg, choose the provider on the local and peer nodes according to the following rules (see the sketch after this list):
1. If the user specified the transport for this conduit (even giving us a prioritized list of candidates), then the one we selected is the _only_ one we will use. If the remote peer has a matching endpoint, we use it; otherwise, we error out.
2. If the user didn't specify a transport, then we look for matches against _all_ of our available transports, starting with fabric and then going to Ethernet, taking the first one that matches.
3. If we can't find any match, we error out.
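A minimal C sketch of these rules; the peer_info_t type, the find_matching_endpoint() helper, and the argument layout are all hypothetical illustrations, not the actual rml_ofi data structures:

    #include <stdbool.h>
    #include <stddef.h>
    #include "orte/constants.h"

    /* Hypothetical types/helpers for illustration only */
    typedef struct peer_info peer_info_t;
    extern bool find_matching_endpoint(peer_info_t *peer,
                                       const char *transport, int *idx);

    /* local_providers is assumed to be ordered fabric-first, then ethernet */
    static int select_provider(const char *user_transport,
                               char **local_providers, int num_local,
                               peer_info_t *peer, int *provider_idx)
    {
        if (NULL != user_transport) {
            /* Rule 1: the user-selected transport is the only candidate */
            if (find_matching_endpoint(peer, user_transport, provider_idx)) {
                return ORTE_SUCCESS;
            }
            return ORTE_ERROR;  /* Rule 3: peer has no matching endpoint */
        }
        /* Rule 2: try every local transport in priority order */
        for (int i = 0; i < num_local; i++) {
            if (find_matching_endpoint(peer, local_providers[i], provider_idx)) {
                return ORTE_SUCCESS;
            }
        }
        return ORTE_ERROR;  /* Rule 3: nothing matched */
    }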
modified: ../orte/mca/rml/ofi/rml_ofi.h
modified: ../orte/mca/rml/ofi/rml_ofi_component.c
modified: ../orte/mca/rml/ofi/rml_ofi_send.c
send_msg() -> Fixed the case where the local provider chosen at the time of opening the conduit
is not present on the peer (destination) node
modified: ../orte/mca/rml/ofi/rml_ofi.h
modified: ../orte/mca/rml/ofi/rml_ofi_send.c
Signed-off-by: Anandhi Jayakumar <anandhi.s.jayakumar@intel.com>
Remove the opal_ignore from the RML/OFI component, but disable that component unless the user specifically requests it via the "rml_ofi_desired=1" MCA param. This lets us test-compile in various environments without interfering with operations while we continue to debug.
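For example, an illustrative invocation that opts in to the component:

    # explicitly enable the still-in-debug RML/OFI component
    mpirun -mca rml_ofi_desired 1 -np 2 ./my_app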
Fix an error when computing the number of infos during server init
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
This now passes the loop test, and so we believe it resolves the random hangs in finalize.
Changes in PMIx master that are included here:
* Fixed a bug in the PMIx_Get logic
* Fixed self-notification procedure
* Made pmix_output functions thread safe
* Fixed a number of thread safety issues
* Updated configury to use 'uname -n' when hostname is unavailable
Work on cleaning up the event handler thread safety problem
Rarely used functions, but protect them anyway
Fix the last part of the intercomm problem
Ensure we don't cover any PMIx calls with the framework-level lock.
Protect against a NULL argv in comm_spawn
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Clean up the way we look for matching OFI addresses by using the opal_net_samenetwork helper function. This now works for multi-network environments, but only with the socket provider
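An illustrative use of the helper, assuming the struct sockaddr-based signature in opal/util/net.h; the wrapper function and hard-coded /24 prefix are example scaffolding, not the rml_ofi code:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdbool.h>
    #include "opal/util/net.h"

    /* Example scaffolding: check whether two IPv4 addresses share a /24 */
    static bool on_same_subnet(const char *ip1, const char *ip2)
    {
        struct sockaddr_in a1 = {0}, a2 = {0};
        a1.sin_family = a2.sin_family = AF_INET;
        inet_pton(AF_INET, ip1, &a1.sin_addr);
        inet_pton(AF_INET, ip2, &a2.sin_addr);
        /* true if both addresses fall within the same 24-bit prefix */
        return opal_net_samenetwork((struct sockaddr *)&a1,
                                    (struct sockaddr *)&a2, 24);
    }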
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Start updating the various mappers to the new procedure. Remove the stale lama component as it is now very out-of-date. Bring round_robin and PPR online, and modify the mindist component (though we cannot test/debug it).
Remove unneeded test
Fix memory corruption by re-initializing the variable to NULL inside the loop
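The general bug pattern being fixed (a generic illustration, not the actual ORTE code): a pointer freed inside a loop must be reset to NULL so a later iteration or cleanup path cannot free or dereference the stale value:

    /* Generic illustration of the pattern, not the ORTE source */
    char *tmp = NULL;
    for (int i = 0; i < n; i++) {
        if (NULL != names[i]) {
            tmp = strdup(names[i]);
            /* ... use tmp ... */
            free(tmp);
            tmp = NULL;  /* the fix: without this, a later free(tmp)
                          * on the stale pointer corrupts memory */
        }
    }
    free(tmp);  /* safe: free(NULL) is a no-op */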
Resolve the race condition identified by @ggouaillardet by resetting the
mapped flag within the same event where it was set. There is no need to
retain the flag beyond that point as it isn't used again.
Add a new job attribute ORTE_JOB_FULLY_DESCRIBED to indicate that all the job information (including locations and binding) is included in the launch message. Thus, the backend daemons do not need to do any map computation for the job. Use this for the seq, rankfile, and mindist mappers until someone decides to update them (a sketch of the attribute usage follows).
Note that this maintains functionality, but means that users of those three mappers will see larger launch messages and poorer launch scaling than users of the other mappers.
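A sketch of how the attribute might be set and checked, assuming ORTE's orte_set_attribute/orte_get_attribute helpers; the surrounding jdata variable is illustrative:

    /* In the mapper: mark the job as fully described (illustrative) */
    orte_set_attribute(&jdata->attributes, ORTE_JOB_FULLY_DESCRIBED,
                       ORTE_ATTR_GLOBAL, NULL, OPAL_BOOL);

    /* In the backend daemon: skip map computation for such jobs */
    if (orte_get_attribute(&jdata->attributes, ORTE_JOB_FULLY_DESCRIBED,
                           NULL, OPAL_BOOL)) {
        /* all proc locations and bindings arrived in the launch message */
    }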
Have the mindist module add procs to the job's proc array as it is a fully described module
Protect the hnp-not-in-allocation case
Per the patch suggested by Gilles: protect the HNP node when it gets added in the absence of any other allocation or hostfile
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Fix the --nolocal option by ensuring we always check/remove the HNP from the list of available nodes if the flag is set
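For example (an illustrative command line; the hostfile and application names are placeholders):

    # do not place application procs on the node where mpirun (the HNP) runs
    mpirun --nolocal -np 8 --hostfile myhosts ./my_app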
Ensure that the HNP node is included as available when nothing else is given
Signed-off-by: Ralph Castain <rhc@open-mpi.org>