Fine tuning of flux component
Fix a few minor issues with the initial cut:
* Job id could be obtained from the PMI kvsname as with SLURM,
but it is simpler to getenv(FLUX_JOB_ID)
* Flux pmi-1 doesn't define PMI_BOOL, PMI_TRUE, PMI_FALSE
* Flux pmi-1 maps the deprecated PMI_Get_kvs_domain_id() to
PMI_KVS_Get_my_name() internally, so just call that instead.
* Drop residual slurm references.
Add wrappers for PMI functions so that if HAVE_FLUX_PMI_LIBRARY
is not defined, the component can dlopen libpmi.so at the location
specified by the FLUX_PMI_LIBRARY_PATH env variable, which adds
flexibility. If HAVE_FLUX_PMI_LIBRARY is defined, link with
libpmi.so at build time in the usual way.
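A minimal sketch of the runtime-open path, assuming FLUX_PMI_LIBRARY_PATH
holds the path handed to dlopen() (the wrapper names here are illustrative,
not the component's actual identifiers):

    #include <dlfcn.h>
    #include <stdlib.h>

    /* PMI-1 entry point resolved at runtime when HAVE_FLUX_PMI_LIBRARY is
     * not defined; with it defined, the component calls PMI_Init() etc.
     * directly against the library linked at build time. */
    static int (*wrap_pmi_init)(int *spawned);

    static int flux_open_pmi_library(void)
    {
        const char *path = getenv("FLUX_PMI_LIBRARY_PATH");
        void *dso;

        if (NULL == path)
            return -1;                        /* not launched under Flux */
        dso = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
        if (NULL == dso)
            return -1;
        *(void **)&wrap_pmi_init = dlsym(dso, "PMI_Init");
        return (NULL == wrap_pmi_init) ? -1 : 0;
    }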
Update configury for flux component
Update m4 so the configure options work as follows:
--with-flux-pmi
Build Flux PMI support (default: yes)
--with-flux-pmi-library
Link Flux PMI support with PMI library at build
time. Otherwise the library is opened at runtime at
location specified by FLUX_PMI_LIBRARY_PATH environment
variable. Use this option to enable Flux support when
building statically or without dlopen support (default: no)
If the latter option is provided, the library/header is located at
build time using the pkg-config module 'flux-pmi'. Otherwise there
is no library/header dependency.
Handle the case where ompi is configured with --disable-dlopen
or --enable-static. In those cases, don't build the component
unless --with-flux-pmi-library is provided.
It is fatal if the user explicitly requests --with-flux-pmi but
it cannot be built (e.g. due to --disable-dlopen).
Add a schizo/flux component
Update schizo/flux component
Eliminate slurm-specific use cases.
Since the module is only loaded if FLUX_JOB_ID is set, there are
only two cases to handle:
1) App was launched indirectly through mpirun. This is not yet
supported with Flux, but the hook remains in case this mode is
supported in the future.
2) App was launched directly by Flux, with Flux providing
CPU binding, if any.
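A hedged sketch of that two-way decision (the helper name and the
mpirun-detection flag are hypothetical; the real component's selection
logic differs):

    #include <stdbool.h>
    #include <stdlib.h>

    /* Illustrative only: the component loads because FLUX_JOB_ID is set,
     * so all that remains is distinguishing the two launch modes. */
    static void flux_classify_launch(bool launched_via_mpirun)
    {
        if (NULL == getenv("FLUX_JOB_ID"))
            return;               /* component would not have been loaded */

        if (launched_via_mpirun) {
            /* case 1: indirect launch - not yet supported by Flux;
             * the hook is kept for the future */
        } else {
            /* case 2: direct launch - Flux already applied any CPU
             * binding, so leave binding alone */
        }
    }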
Fix up white space in pmix/flux component
Drop non-blocking fence from pmix:flux component
The flux PMI-1 library is not thread safe; therefore,
register a regular blocking fence callback instead of the
thread-shifting fencenb().
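A minimal sketch of that registration, assuming the OPAL pmix module
table exposes separate fence and fence_nb slots (the field and type names
shown are assumptions drawn from that framework, and the fence body is
simplified):

    /* Assumes the usual OPAL pmix component headers. Leaving .fence_nb
     * unset keeps the non-thread-safe PMI-1 library off the
     * thread-shifted path. */
    static int flux_fence(opal_list_t *procs, int collect_data)
    {
        /* simplified: a PMI-1 fence maps naturally onto PMI_Barrier() */
        return (PMI_SUCCESS == PMI_Barrier()) ? OPAL_SUCCESS : OPAL_ERROR;
    }

    const opal_pmix_base_module_t opal_pmix_flux_module = {
        .fence    = flux_fence,
        .fence_nb = NULL,     /* dropped: would thread-shift into PMI-1 */
        /* ... other callbacks unchanged ... */
    };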
pmix/flux component avoids extra PMI_KVS_Gets
Keys stored into the base cache under the wildcard
rank are not intended to be part of the global key namespace.
These keys therefore should not trigger a PMI_KVS_Get() if they
are not found in the cache.
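A hedged sketch of the resulting lookup order (cache_lookup() is a
hypothetical stand-in for the component's cache search; PMI_KVS_Get() is
the standard PMI-1 call):

    /* Assumes the OPAL and PMI-1 headers. Wildcard-rank keys are local
     * bookkeeping, so a cache miss for them must not fall through to the
     * global PMI key space. */
    static int cache_lookup(const char *key, char *value, int len);

    static int flux_fetch(int wildcard_rank, const char *kvsname,
                          const char *key, char *value, int len)
    {
        if (OPAL_SUCCESS == cache_lookup(key, value, len))
            return OPAL_SUCCESS;
        if (wildcard_rank)
            return OPAL_ERR_NOT_FOUND;        /* never published globally */
        return (PMI_SUCCESS == PMI_KVS_Get(kvsname, key, value, len))
                   ? OPAL_SUCCESS : OPAL_ERR_NOT_FOUND;
    }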
Minor pmix/flux component cleanup
pmix/flux: drop code for fetching unused pmix_id
pmix/flux: err_exit must return error
Problem: in flux_init(), although 'ret' (variable holding
err_exit return code) is initialized to OPAL_ERROR, the
variable is reused as a temporary result code, so if there are
some successes followed by a failure that doesn't set 'ret',
flux_init() could return success with PMI not initialized.
Ensure that a "goto err_exit" returns OPAL_ERROR if 'ret'
is not set to some other error code.
pmix/flux: don't mix OPAL_ and PMI_ return codes
Problem: flux_init() can return both PMI_ and OPAL_ return
codes. Although OPAL_SUCCESS and PMI_SUCCESS are both defined
as 0, other codes are not compatible.
Ensure that flux_init() consistently uses 'rc' for PMI_
return codes and 'ret' for OPAL_ return codes.
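A condensed sketch of the corrected pattern, covering both this fix and
the err_exit one above (the OPAL-level setup step and its name are purely
illustrative):

    static int illustrative_opal_setup(void);   /* hypothetical */

    static int flux_init(void)
    {
        int ret = OPAL_ERROR;    /* OPAL_ code ultimately returned */
        int rc;                  /* PMI_ codes kept in their own variable */
        char kvsname[256];       /* real code sizes this via
                                  * PMI_KVS_Get_name_length_max() */
        int spawned;

        rc = PMI_Init(&spawned);
        if (PMI_SUCCESS != rc)
            goto err_exit;                   /* 'ret' is still OPAL_ERROR */

        ret = illustrative_opal_setup();
        if (OPAL_SUCCESS != ret)
            goto err_exit;

        rc = PMI_KVS_Get_my_name(kvsname, (int)sizeof(kvsname));
        if (PMI_SUCCESS != rc) {
            ret = OPAL_ERROR;        /* translate, don't leak a PMI_ code */
            goto err_exit;
        }
        return OPAL_SUCCESS;

    err_exit:
        /* an earlier step may have left 'ret' at OPAL_SUCCESS */
        return (OPAL_SUCCESS == ret) ? OPAL_ERROR : ret;
    }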
pmix/flux: factor out repeated code for cache put
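A hedged sketch of the kind of helper such a refactor yields (the helper
name is hypothetical; opal_value_t and the list routines are the usual
OPAL ones):

    #include <string.h>

    /* Hypothetical helper: one place builds the opal_value_t for a string
     * key/value pair and appends it to the component's cache list. */
    static int cache_put_string(opal_list_t *cache, const char *key,
                                const char *val)
    {
        opal_value_t *kv = OBJ_NEW(opal_value_t);

        if (NULL == kv)
            return OPAL_ERR_OUT_OF_RESOURCE;
        kv->key = strdup(key);
        kv->type = OPAL_STRING;
        kv->data.string = strdup(val);
        opal_list_append(cache, &kv->super);
        return OPAL_SUCCESS;
    }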
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Update ORTE support for dynamic PMIx operations e.g., PMIx_Spawn
Update to track master
Ensure that --disable-pmix-dstore actually disables the dstore. Sync to a few debugger updates.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
Fix an issue that can be reproduced with two nodes:
n0$ mpirun --host n1:1 --mca oob_tcp_static_ipv4_ports 1234 -np 1 --mca routed radix --mca oob tcp true
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Silence a warning in orted_submit
Protect against a free'd value in an error path when forming oob tcp connections
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
The problem was observed with the direct modex used with the recursive
doubling algorithm (used for collective ID calculation prior to
d52a2d081e9598a9ac9a50fb4b013a6d2a72375b), which has a pairwise nature,
so counter-connections are highly likely.
The following scenario was uncovering the issue:
* ranks `x` and `y` want to communicate with each other, `x` < `y`;
* rank `x` initiates the connection and sends the ack;
* rank `y` starts to `connect()` and gets the ack from `x`;
* `y` identifies that it already started connecting and `y` > `x`, so it rejects the incoming connection.
* `x` sees that its connection was rejected in `mca_oob_tcp_peer_recv_connect_ack()` when trying to
read the message header using `tcp_peer_recv_blocking()`, which calls `mca_oob_tcp_peer_close()`
and effectively flushes all the messages in peer->send_queue.
* `y` sends the ack to `x` and the connection is established; however, all the messages for the peer
at `x` have vanished (except the front one in peer->send_msg).
This commit introduces a "nack" function used on `y`'s side to tell `x` that `y` has
priority and `x`'s connection should be closed. This avoids having to "guess" about the
unexpectedly closed connection.
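A heavily simplified sketch of the decision on `y`'s side (the function
names are illustrative, not the actual oob/tcp API):

    #include <stdbool.h>
    #include <unistd.h>

    /* illustrative helpers, not the real oob/tcp routines */
    static void send_nack(int sd);
    static void accept_connection(int sd);

    /* When a connect ack arrives from a peer we are simultaneously
     * connecting to, pick the winner deterministically and tell the loser
     * explicitly, so `x` never has to infer anything from a dropped socket
     * and its send_queue stays intact. */
    static void handle_connect_ack(int my_rank, int peer_rank, int sd,
                                   bool already_connecting)
    {
        if (already_connecting && my_rank > peer_rank) {
            send_nack(sd);            /* "drop your connect attempt" */
            close(sd);
            return;
        }
        accept_connection(sd);
    }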
Signed-off-by: Artem Polyakov <artpol84@gmail.com>
PR open-mpi/ompi#2432 introduced a regression where configuring
and building with --disable-dlopen caused a build failure owing
to unresolved alps lli symbols in the libopen-pal shared library.
This commit fixes this problem.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
There is no need for a configure option either, so remove the
--enable-orte-static-ports configure option. When decoding the daemon
nidmap, mark new daemons as ALIVE by default; we will discover dead
ones as we go.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
display the hop node used to send a message
(if the message is sent directly, then the hop is the destination)
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
* By default, make sure that we are using the short hostnames and not
the fully qualified hostnames when running under LSF.
* Related to commit open-mpi/ompi@d26dd2c20e
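For example, shortening a fully qualified hostname amounts to truncating
at the first dot (a generic illustration, not the component's exact code):

    #include <string.h>

    /* "node01.cluster.example.com" -> "node01" (modified in place) */
    static void shorten_hostname(char *hostname)
    {
        char *dot = strchr(hostname, '.');
        if (NULL != dot)
            *dot = '\0';
    }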
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>