There were two bugs in osc/rdma when using threads:
- Deadlock in ompi_osc_rdma_start_atomic. This occurs because
ompi_osc_rdma_frag_alloc is called with the module lock held. To fix the
issue the module lock is now recursive (a minimal sketch of the pattern
follows this list). In the future I will add a new lock to protect just
the current rdma fragment.
- Do not drop the lock in ompi_osc_rdma_frag_alloc when calling
ompi_osc_rdma_frag_complete. Not only is it unnecessary, but dropping
the lock at this point can allow a competing thread to corrupt the
state.
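A minimal sketch of the recursive-lock pattern, using a plain pthread
mutex instead of the actual OPAL lock type; the function names are
illustrative only:

    #include <pthread.h>

    static pthread_mutex_t module_lock;

    /* Called with module_lock already held by start_atomic(); with a
     * recursive mutex the re-acquisition below succeeds instead of
     * deadlocking. */
    static void frag_alloc(void)
    {
        pthread_mutex_lock(&module_lock);
        /* ... carve a fragment out of the current rdma buffer ... */
        pthread_mutex_unlock(&module_lock);
    }

    static void start_atomic(void)
    {
        pthread_mutex_lock(&module_lock);
        frag_alloc();
        pthread_mutex_unlock(&module_lock);
    }

    static void init_lock(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&module_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }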
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
We should invoke OBJ_CONSTRUCT/OBJ_DESTRUCT only on regular requests
(which are embedded inside UCX requests) and for the completed request.
Persistent requests are already constructed/destructed by the free list.
This fixes an assertion in ompi_request_destruct.
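A hedged sketch of the rule above; the struct and field names are
illustrative, not the actual pml/ucx types:

    #include <stdbool.h>
    #include "opal/class/opal_object.h"   /* OBJ_CONSTRUCT / OBJ_DESTRUCT */
    #include "ompi/request/request.h"     /* ompi_request_t */

    /* Illustrative container: a regular ompi_request_t embedded in a
     * UCX-level request. */
    typedef struct example_ucx_request {
        ompi_request_t ompi;   /* embedded regular request */
        bool persistent;       /* persistent requests come from a free list */
    } example_ucx_request_t;

    static void example_request_init(example_ucx_request_t *req)
    {
        if (!req->persistent) {
            /* regular request: construct the embedded ompi_request_t here;
             * persistent requests were already constructed by the free list */
            OBJ_CONSTRUCT(&req->ompi, ompi_request_t);
        }
    }

    static void example_request_cleanup(example_ucx_request_t *req)
    {
        if (!req->persistent) {
            OBJ_DESTRUCT(&req->ompi);
        }
    }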
Without this modification, gfortran throws the following error
if these variables are used for `MPI_DIST_GRAPH_CREATE_ADJACENT`.
Error: There is no specific subroutine for the generic
'mpi_dist_graph_create_adjacent' at (1)
`MPI_ARGVS_NULL` should be a two-dimensional array.
Without this modification, gfortran throws the following error
if `MPI_ARGVS_NULL` is used for `MPI_COMM_SPAWN_MULTIPLE`.
Error: There is no specific subroutine for the generic
'mpi_comm_spawn_multiple' at (1)
During component finalize, mtl-portals4 would blindly release
resources without testing whether the handle was valid. This used to be
OK, but resource allocation is now delayed until add_procs(). If
mtl-portals4 is deselected, it will be finalized without
add_procs() ever being called. This commit ensures that invalid
handles are not released.
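A hedged sketch of the guard described above; the function name is
illustrative, only the Portals4 calls and constants are real API:

    #include <portals4.h>

    /* Release the network interface only if it was actually initialized.
     * If the component was deselected before add_procs(), ni_h still holds
     * PTL_INVALID_HANDLE and must not be passed to PtlNIFini(). */
    static void example_portals4_finalize(ptl_handle_ni_t ni_h)
    {
        if (!PtlHandleIsEqual(ni_h, PTL_INVALID_HANDLE)) {
            PtlNIFini(ni_h);
        }
    }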
Writing to the pml_monitoring_flush variable will set the filename of
the output file.
Stopping a session for the pml_monitoring_flush will force the
generation of the monitoring output file (as long as the filename
is not NULL).
To reset the monitoring, one has to bind the pml_monitoring_flush to a
session.
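A hedged sketch of driving this from an application through the MPI tool
information interface; the variable class passed to MPI_T_pvar_get_index
and the exact write/stop semantics are assumptions based on the
description above:

    #include <mpi.h>

    void example_dump_monitoring(const char *filename)
    {
        int idx, count, provided;
        MPI_T_pvar_session session;
        MPI_T_pvar_handle  handle;

        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_pvar_get_index("pml_monitoring_flush",
                             MPI_T_PVAR_CLASS_GENERIC, &idx);
        MPI_T_pvar_session_create(&session);
        MPI_T_pvar_handle_alloc(session, idx, NULL, &handle, &count);

        MPI_T_pvar_start(session, handle);
        MPI_T_pvar_write(session, handle, filename); /* set the output filename */
        MPI_T_pvar_stop(session, handle);            /* forces the dump         */

        MPI_T_pvar_handle_free(session, &handle);
        MPI_T_pvar_session_free(&session);
        MPI_T_finalize();
    }

The counter performance variables mentioned below can be read with
MPI_T_pvar_read using the same session/handle pattern.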
The message counts and sizes can be accessed using the performance
variables "pml_monitoring_messages_count" and
"pml_monitoring_messages_size".
Per Brice's suggestion, make all data counts and message lengths
uint64_t.
counting (or not) the collective traffic as a separate entity. Such a
PML is needed simply because the PMPI interface doesn't allow us to
identify the collective-generated traffic.
to continue current default behavior.
Also add an MCA param pmix_base_collect_data to direct that the blocking
fence shall return all data to each process. Obviously, this param has
no effect if async_modex is used.
This commit fixes a bug that can occur on Cray Gemini networks. If
multiple registrations are used for the local state then we lose the
atomicity guarantees. To avoid issues like this, use only a single
registration handle for all local state on a node.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit fixes the following:
- CIDs 1328491, 1328492: Dead code caused by typos in a prior
commit.
- Fix the calculation of dynamic memory regions. This was causing
incorrect RMA range errors when accessing the last partial page of
an attachment.
- Fix a SEGV when using dynamic memory windows with local state (all
processes on the same node).
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
When using dynamic memory windows the displacement becomes a
pointer. Since the high bit may be set on valid pointers on some
platforms, the check for disp > 0 is invalid. This commit adds the
window flavor to ompi_win_t and disables the displacement check when
operating on dynamic memory windows.
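A hedged sketch of the resulting check; the ompi_win_t field name is an
assumption and error handling is reduced to a plain return code:

    #include "ompi/win/win.h"   /* ompi_win_t; the w_flavor field is assumed */

    static int example_check_target_disp(ompi_win_t *win, MPI_Aint target_disp)
    {
        /* For MPI_WIN_CREATE_DYNAMIC windows the displacement is an absolute
         * address, so a signed "disp >= 0" check would reject valid addresses
         * whose high bit is set.  Only validate the other flavors. */
        if (MPI_WIN_FLAVOR_DYNAMIC != win->w_flavor && target_disp < 0) {
            return MPI_ERR_DISP;
        }
        return MPI_SUCCESS;
    }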
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit fixes a bug that occurs when a post message comes in when
sending complete messages or while waiting for all outgoing messages
to flush. In that case the post message might get incorrectly
associated with the ending sync object.
References open-mpi/ompi#1012
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit adds protection to the group, ob1, and bml endpoint lookup
code. For ob1 and the bml a lock has been added. For performance
reasons the lock is only held if a bml or ob1 endpoint does not
exist. ompi_group_dense_lookup now uses opal_atomic_cmpset to ensure
the proc is only retained by the thread that actually updates the
group.
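A hedged sketch of the compare-and-swap retain pattern, written with C11
atomics in place of opal_atomic_cmpset and with illustrative types:

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct example_proc {
        atomic_int refcount;
        /* ... */
    } example_proc_t;

    /* Install `resolved` into the group's proc slot.  Only the thread whose
     * compare-and-swap succeeds performs the retain, so the reference count
     * is bumped exactly once no matter how many threads race here. */
    static example_proc_t *
    example_dense_lookup(_Atomic(example_proc_t *) *slot, example_proc_t *resolved)
    {
        example_proc_t *expected = NULL;
        if (atomic_compare_exchange_strong(slot, &expected, resolved)) {
            atomic_fetch_add(&resolved->refcount, 1); /* we updated the group */
            return resolved;
        }
        return expected; /* another thread installed and retained it first */
    }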
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit fixes several bugs in the osc/rdma component:
- Complete aggregated requests immediately. Completion of RMA
requests indicates local completion anyway. This fixes a hang in
the c_reqops test.
- Correctly mark Rget_accumulate requests.
- Set the local base flag correctly on the local peer.
- Clear or set the no locks flag on the window if the value is
changed by MPI_Win_set_info (see the example after this list).
- Actually update the target when using MPI_OP_REPLACE.
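The no locks bullet refers to the standard MPI_Win_set_info path; an
illustrative application-side example of changing the value at run time:

    #include <mpi.h>

    /* Toggle the "no_locks" info key on an existing window; with this fix
     * osc/rdma updates its internal no-locks flag to match the new value. */
    static void example_set_no_locks(MPI_Win win, int no_locks)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "no_locks", no_locks ? "true" : "false");
        MPI_Win_set_info(win, info);
        MPI_Info_free(&info);
    }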
Fixes open-mpi/ompi#1010
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Currently, MPI fails with MPI_ERR_ARG. This is counterintuitive since
MPI_IN_PLACE is a legitimate parameter. MPI_IN_PLACE might not be
correctly implemented by all the non-blocking modules (libnbc, ...), so
fail with MPI_ERR_INTERN for the time being.
This commit changes the priority of mtl components to be relative to
pml/ob1 and updates the mtl interface to expose this priority. cm now
sets its own priority based on the priority of the selected mtl
component.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
The mca_base_select function uses returned priorities to select the
best component/module. This priority may be of use to the caller so
pass that information back in an optional argument. If the priority is
not needed pass NULL.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit removes code that checks the ob1 priority vs the previous
priority. The previous priority is meaningless here and may only cause
ob1 to disable itself when it shouldn't.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This patch removes a priority check that disables cm if the previous
pml had higher priority. The check was incorrect as coded and is
unnecessary as we finalize all but one pml anyway.
Fixes open-mpi/ompi#1035
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Proposed extensions for Open MPI (a usage sketch follows this list):
- If MPI_INITIALIZED is invoked and MPI is only partially initialized,
wait until MPI is fully initialized before returning.
- If MPI_FINALIZED is invoked and MPI is only partially finalized,
wait until MPI is fully finalized before returning.
- If the ompi_mpix_allow_multi_init MCA param is true, allow MPI_INIT
and MPI_INIT_THREAD to be invoked multiple times without error (MPI
will be safely initialized only the first time it is invoked).
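A hedged usage sketch of these extensions; the MCA parameter name is
taken from the text above and the described behavior is the proposal,
not standard MPI:

    #include <mpi.h>
    #include <stdio.h>

    /* Run with something like:
     *   mpirun --mca ompi_mpix_allow_multi_init 1 ./a.out
     * Under the proposed extension the second MPI_Init is a harmless no-op;
     * without it, calling MPI_Init twice is an error. */
    int main(int argc, char **argv)
    {
        int flag;

        MPI_Init(&argc, &argv);
        MPI_Init(&argc, &argv);   /* tolerated only with the extension enabled */

        MPI_Initialized(&flag);   /* per the first extension, returns only once
                                   * MPI is fully initialized */
        printf("initialized: %d\n", flag);

        MPI_Finalize();
        return 0;
    }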
Once a FIN control message is appended to the pending list,
the ob1 PML attempts to send the FIN again in the `mca_pml_ob1_process_pending_packets` function.
But if the PML fails to send the FIN again, the `mca_pml_ob1_send_fin`
function creates a new `mca_pml_ob1_pckt_pending_t` object and the
old object is not returned to the free list.
Add ompi_mpi_dynamics_disable() function to disable MPI dynamic
process functionality (i.e., such that if MPI_COMM_SPAWN/etc. are
invoked, you'll get a show_help error explaining that MPI dynamic
process functionality is disabled in this environment -- instead of a
potentially-cryptic network or hardware error).
Fixes #984
This commit fixes an issue identified by @rolfv. The local peer was
not being correctly initialized when running with a single process on
a node.
This fixes open-mpi/ompi#1010
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit adds implementations of gather and igather using
Portals4 triggered operations. The default algorithm is linear,
but a binomial algorithm can be selected using the MCA parameter
coll_portals4_use_binomial_gather_algorithm.
when profiling is built.
This prevents oshmem subroutines from being wrapped twice by third-party
tools (e.g. once in oshmem and once in MPI).
see discussion starting at http://www.open-mpi.org/community/lists/devel/2015/08/17842.php
Thanks to Bert Wesarg for bringing this to our attention
This patch fixes the issues identified by @ggouaillardet in the IBM
tests (collectives and topologies). It also improves the memory usage of
OMPI, as a communicator without collective communications will never
allocate the array of requests needed to coordinate the basic collective
algorithms. This ticket replaced #790.
Fix code paths that didn't convert the MPI datatype to the
corresponding Portals4 datatype.
Thanks to Nicolas Chevalier (@shawone) for finding this bug and
submitting a patch.
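A hedged sketch of the kind of conversion involved; the helper name and
the handful of types handled are illustrative, not the component's
actual conversion table:

    #include <mpi.h>
    #include <portals4.h>

    /* Map a few common MPI datatypes to their Portals4 atomic datatypes
     * (assuming a 32-bit int).  Returns 0 on success, -1 if the caller
     * must fall back to a non-Portals4 path. */
    static int example_mpi_to_ptl_datatype(MPI_Datatype dtype,
                                           ptl_datatype_t *ptl_dt)
    {
        if (MPI_INT == dtype) {
            *ptl_dt = PTL_INT32_T;
        } else if (MPI_FLOAT == dtype) {
            *ptl_dt = PTL_FLOAT;
        } else if (MPI_DOUBLE == dtype) {
            *ptl_dt = PTL_DOUBLE;
        } else {
            return -1;
        }
        return 0;
    }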