`struct mca_pml_ob1_comm_proc_t`, which is allocated per
connected rank in a communicator, had two padding holes, after
`expected_sequence` and after `send_sequence`, caused by member alignment.
By changing the order of the members, the size of
`mca_pml_ob1_comm_proc_t` is reduced by 8 bytes on 64-bit
architectures.
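For illustration, a minimal sketch of the general technique; the two
member names match the commit, but the surrounding layout is
hypothetical, not the real ob1 structure:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: on LP64, each small member wedged between
 * 8-byte-aligned pointers is followed by padding. */
struct unordered {
    void    *p1;                 /* offset  0                     */
    uint16_t expected_sequence;  /* offset  8, 6 bytes of padding */
    void    *p2;                 /* offset 16                     */
    int32_t  send_sequence;      /* offset 24, 4 bytes of padding */
};                               /* sizeof == 32                  */

/* Grouping the small members lets them share one padded slot. */
struct reordered {
    void    *p1;                 /* offset  0                     */
    void    *p2;                 /* offset  8                     */
    int32_t  send_sequence;      /* offset 16                     */
    uint16_t expected_sequence;  /* offset 20, 2 bytes of padding */
};                               /* sizeof == 24: 8 bytes saved   */

int main(void)
{
    printf("%zu -> %zu\n", sizeof(struct unordered),
           sizeof(struct reordered));
    return 0;
}
```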
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
It turns out that there is an incompatibility between the Cray PMI
library and the default configuration for building Open MPI (master).
To work around this, we now disable use of aprun for direct launch
of Open MPI jobs except under specific conditions.
The problem is that there are now (on master) packages getting
initialized that do not work properly across a fork operation.
As part of a constructor in the Cray PMI library, a fork operation
is done to simplify use of shared memory between the
processes in a job on the same node. This ends up thoroughly
messing up the Open MPI initialization process in the case
that dlopen support is enabled. The initialization process gets
about halfway through when the PMIx framework is opened and
components are loaded, which triggers the Cray PMI constructor
and hence the fork operation.
There are two workarounds for this:
1) configure Open MPI for Cray XE/XC systems using aprun with the
--disable-dlopen option
2) set the PMI_NO_FORK environment variable in the shell in which
the aprun command is run.
Without taking these measures, an Open MPI job will just hang at
job startup in the first attempt to "thread-shift" the PMIx
fence_nb operation. Additional hangs occur at shutdown if this
problem is worked around, again due to the insertion of a fork
operation halfway through the Open MPI initialization procedure.
This commit detects if the conditions that bring out the hang
situation are present, and if so, prints out a message and
aborts the job launch.
Note that on systems using Slurm, the PMI_NO_FORK environment variable
is set as part of the srun job launch, hence this issue is avoided
on those systems.
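A minimal sketch of the detection logic, assuming the
OPAL_ENABLE_DLOPEN_SUPPORT configure macro and a hypothetical helper
name; the real component's reporting and abort path is richer:

```c
#include <stdlib.h>

/* If dlopen support is built in and PMI_NO_FORK is not set, opening
 * the pmix framework will trigger the Cray PMI constructor's fork and
 * hang the job, so report the problem and abort the launch. */
static int cray_pmi_hang_conditions_present(void)
{
#if OPAL_ENABLE_DLOPEN_SUPPORT
    if (NULL == getenv("PMI_NO_FORK")) {
        return 1;   /* caller prints a message and aborts */
    }
#endif
    return 0;
}
```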
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
This fixes a bug with this component that was reported in-house. It is triggered when the amount of data assigned to different aggregators differs significantly, leading to different numbers of internal iterations being required to handle it.
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
A file might have been removed by another task between
readdir() and stat(), so simply ignore stat() failures.
That typically occurs when one task is removing the job_session_dir
and another task is still removing its proc_session_dir.
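A self-contained sketch of the pattern, with a hypothetical cleanup
loop standing in for the session-directory code:

```c
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

/* Iterate over a directory, tolerating entries that vanish between
 * readdir() and stat() because another task removed them. */
static void scan_dir(const char *path)
{
    DIR *dir = opendir(path);
    if (NULL == dir) {
        return;
    }
    struct dirent *entry;
    while (NULL != (entry = readdir(dir))) {
        char full[4096];
        struct stat st;
        snprintf(full, sizeof(full), "%s/%s", path, entry->d_name);
        if (0 != stat(full, &st)) {
            continue;   /* entry already removed: simply ignore it */
        }
        /* ... process the entry ... */
    }
    closedir(dir);
}
```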
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
opal_convertor_pack() might pack fewer bytes than requested,
so always set frag->segments[0].seg_len.
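A sketch of the pattern, assuming the usual opal_convertor_pack()
calling convention; `frag`, `reserve` and the segment layout are
illustrative and would need the OMPI headers:

```c
uint32_t iov_count = 1;
size_t max_data = payload_size;   /* bytes we would like to pack */
struct iovec iov = {
    .iov_base = (char *) frag->segments[0].seg_addr.pval + reserve,
    .iov_len  = payload_size,
};

opal_convertor_pack(convertor, &iov, &iov_count, &max_data);

/* On return, max_data holds the bytes actually packed, which may be
 * less than requested, so always set the segment length from it. */
frag->segments[0].seg_len = reserve + max_data;
```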
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
PR open-mpi/ompi#2432 introduced a regression where configuring
and building with --disable-dlopen caused a build failure owing
to unresolved alps lli symbols in the libopen-pal shared library.
This commit fixes this problem.
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
Reasons for removal are:
- the function is only used by the shmem_lock code
- only a subset of the function is used by the shmem_lock code
- in the general case, the function is not correct
Signed-off-by: Alex Mikheev <alexm@mellanox.com>
Protect the mca_coll_libnbc_component.active_requests list with
the new mca_coll_libnbc_component.lock mutex.
Thanks to Jie Hu for the report.
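A sketch of the resulting pattern, using OPAL's thread macros; the
append call and the cast are illustrative:

```c
/* Serialize all accesses to the shared active_requests list. */
OPAL_THREAD_LOCK(&mca_coll_libnbc_component.lock);
opal_list_append(&mca_coll_libnbc_component.active_requests,
                 (opal_list_item_t *) request);
OPAL_THREAD_UNLOCK(&mca_coll_libnbc_component.lock);
```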
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>
Enhance the Cray pmix component to set some OMPI-internal
environment variables that are used to set key/value pairs
on the MPI_INFO_ENV object. This allows more of the
ompi-tests ibm unit tests to pass when using aprun/srun
direct launch and Cray PMI.
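For instance, a hypothetical sketch using OPAL's opal_setenv() helper;
the specific variable name is an assumption, not a list of what the
component actually sets:

```c
/* Hypothetical: publish the application name so that MPI_INFO_ENV
 * can expose it under the "command" key. */
opal_setenv("OMPI_COMMAND", app_name, true, &environ);
```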
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
- instead of coll_base_comm_get_reqs(2) for irecv/isend, use only
  one request allocated on the stack and do an irecv/send (see the
  sketch below)
- instead of ompi_request_wait_all(2), simply ompi_request_wait
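A sketch of the resulting pattern, with illustrative buffer, peer and
tag names:

```c
ompi_request_t *req;

/* one receive request on the stack instead of two requests from
 * coll_base_comm_get_reqs() */
MCA_PML_CALL(irecv(rbuf, rcount, rdtype, peer, tag, comm, &req));

/* blocking send replaces the former isend */
MCA_PML_CALL(send(sbuf, scount, sdtype, peer, tag,
                  MCA_PML_BASE_SEND_STANDARD, comm));

/* wait on the single outstanding receive */
ompi_request_wait(&req, MPI_STATUS_IGNORE);
```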
Signed-off-by: Gilles Gouaillardet <gilles@rist.or.jp>