Fixes a wrong answer from MPI_Ireduce when the red_sched_chain()
path was taken (which only happens for np <= 4 and message size >= 64k).
The way libnbc treats MPI_IN_PLACE is to set sbuf == rbuf, and
whether an algorithm will work cleanly or not after that depends on the
details.
In this case the last steps of the algorithm amounted to
(the right neighbor sends us the reduction results from ranks 1..n-1):
  recv into rbuf from the right neighbor
  add the contribution from our sbuf into rbuf
This would be fine in general, but if sbuf == rbuf, that recv overwrites
sbuf. I changed it to recv into a tmpbuf when MPI_IN_PLACE is used.
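A minimal sketch of the corrected last step, assuming an integer MPI_SUM
reduction and blocking calls for illustration; the real code builds a libnbc
schedule rather than calling MPI_Recv directly, and the names (right, tag)
are made up here:
```c
#include <stdlib.h>
#include <mpi.h>

/* Receive the partial reduction from the right neighbor into a scratch
 * buffer, then fold in the local data that still lives in rbuf. */
static int chain_last_step_in_place(int *rbuf, int count, int right,
                                    int tag, MPI_Comm comm)
{
    int *tmpbuf = malloc((size_t)count * sizeof(int));
    if (NULL == tmpbuf) {
        return MPI_ERR_NO_MEM;
    }

    /* With MPI_IN_PLACE, sbuf == rbuf, so receiving straight into rbuf
     * would overwrite our own contribution.  Receive into tmpbuf instead. */
    MPI_Recv(tmpbuf, count, MPI_INT, right, tag, comm, MPI_STATUS_IGNORE);

    for (int i = 0; i < count; ++i) {
        rbuf[i] += tmpbuf[i];   /* rbuf still holds the local data */
    }

    free(tmpbuf);
    return MPI_SUCCESS;
}
```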
Signed-off-by: Geoffrey Paulsen <gpaulsen@us.ibm.com>
MPI_Allgatherv with MPI_IN_PLACE reads data from the wrong location.
The code located the MPI_IN_PLACE send buffer as
```c
send_buf = (char*)rbuf;
for (i = 0; i < rank; ++i) {
    send_buf += ((ptrdiff_t)rcounts[i] * extent);
}
```
when it should be
```c
send_buf = (char*)rbuf;
send_buf += ((ptrdiff_t)disps[rank] * extent);
```
because disps[] specifies where each rank's data lives in the v-style buffers.
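A small caller-side example where the displacements are strided rather than
prefix sums of the counts, so summing rcounts[] would point at the wrong
block; the buffer layout and count value are made up for illustration:
```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, count = 4;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rcounts[size], disps[size];
    for (int i = 0; i < size; ++i) {
        rcounts[i] = count;
        disps[i]   = i * 2 * count;   /* strided placement: leaves gaps */
    }

    int rbuf[size * 2 * count];
    for (int i = 0; i < size * 2 * count; ++i) rbuf[i] = -1;
    for (int i = 0; i < count; ++i) rbuf[disps[rank] + i] = rank; /* my block */

    /* With MPI_IN_PLACE, each rank's contribution must be read from
     * rbuf + disps[rank] * extent, not from the sum of the counts. */
    MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                   rbuf, rcounts, disps, MPI_INT, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```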
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
When a file is opened a second time for shared file pointer operations,
avoid setting the create and exclusive flags.
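A hedged sketch of the idea, written as a standalone helper rather than the
actual ompio internals (the function name and parameters are hypothetical):
```c
#include <mpi.h>

/* When the file is re-opened internally for the shared file pointer,
 * drop the create and exclusive bits so the second open cannot fail
 * because the file already exists. */
static int reopen_for_sharedfp(MPI_Comm comm, const char *filename,
                               int user_amode, MPI_File *shfh)
{
    int amode = user_amode & ~(MPI_MODE_CREATE | MPI_MODE_EXCL);
    return MPI_File_open(comm, filename, amode, MPI_INFO_NULL, shfh);
}
```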
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
It looks like disabling the lazy_open flag for the sharedfp components
revealed a bug that led to a crash in file_close in some tests. Make
sure the SHAREDFP_IS_SET flag is set correctly (and not overwritten again),
and use it to avoid a double free of the communicator.
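A hedged sketch of the guard, using illustrative names rather than the real
ompio structures:
```c
#include <mpi.h>

#define SHAREDFP_IS_SET 0x1   /* illustrative flag value */

struct file_handle {
    int      flags;
    MPI_Comm sharedfp_comm;
};

/* Free the sharedfp communicator exactly once: the flag is set when the
 * component creates the communicator and cleared on the first close. */
static void close_sharedfp(struct file_handle *fh)
{
    if (fh->flags & SHAREDFP_IS_SET) {
        MPI_Comm_free(&fh->sharedfp_comm);  /* handle becomes MPI_COMM_NULL */
        fh->flags &= ~SHAREDFP_IS_SET;      /* guard against a second free */
    }
}
```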
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
Revert the logic of io_ompio_sharedfp_lazy_open. The user now has to explicitly
disable the shared fp in order for the structures not to be allocated.
Otherwise, resetting the shared fp (e.g. in case the file was opened
in append mode) will not work correctly and the code could deadlock.
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
Fixes a bug reported on the mailing list: ompio only repositioned the individual
file pointer when the file was opened in append mode. Also set the shared file
pointer to point to the end of the file, just like the individual
file pointer.
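The intended behaviour, sketched with standard MPI calls instead of the ompio
internals (the helper name is hypothetical; note that MPI_File_seek_shared is
collective):
```c
#include <mpi.h>

/* When a file is opened in append mode, both the individual and the
 * shared file pointer should start at the end of the file. */
static int position_pointers_for_append(MPI_File fh)
{
    int rc;
    rc = MPI_File_seek(fh, 0, MPI_SEEK_END);           /* individual fp */
    if (MPI_SUCCESS != rc) {
        return rc;
    }
    return MPI_File_seek_shared(fh, 0, MPI_SEEK_END);  /* shared fp too */
}
```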
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
There are only five places in the non-daemon code paths where opal_hwloc_topology is currently referenced:
* shared memory BTLs (sm, smcuda). I have added a code path to those components that uses the location string
instead of the topology itself, if available, thus avoiding instantiating the topology (see the sketch below)
* openib BTL. This uses the distance matrix. At present, I haven't developed a method
for replacing that reference. Thus, this component will instantiate the topology
* usnic BTL. Uses the distance matrix.
* treematch TOPO component. Does some complex tree-based algorithm, so it will instantiate
the topology
* ess base functions. If a process is direct launched and not bound at launch, this
code attempts to bind it. Thus, procs in this scenario will instantiate the
topology
Note that instantiating the topology on complex chips such as KNL can consume
megabytes of memory.
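A purely illustrative sketch of the location-string idea used for the sm/smcuda
BTLs; the "host:socket:core" format and the helper are hypothetical and not the
OPAL API, but they show how on-node peers can be identified by string
comparison alone, without ever loading the hwloc topology:
```c
#include <stdio.h>
#include <string.h>

/* Two peers share a node if the host portion of their location strings
 * (everything before the first ':') matches. */
static int same_node(const char *loc_a, const char *loc_b)
{
    const char *colon_a = strchr(loc_a, ':');
    const char *colon_b = strchr(loc_b, ':');
    size_t len_a = colon_a ? (size_t)(colon_a - loc_a) : strlen(loc_a);
    size_t len_b = colon_b ? (size_t)(colon_b - loc_b) : strlen(loc_b);

    return len_a == len_b && 0 == strncmp(loc_a, loc_b, len_a);
}

int main(void)
{
    printf("%d\n", same_node("node17:0:3", "node17:1:5"));  /* 1: same host */
    printf("%d\n", same_node("node17:0:3", "node42:0:3"));  /* 0: different */
    return 0;
}
```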
Fix pernode binding policy
Properly handle the unbound case
Correct pointer usage
Do not free static error messages!
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
* When using `MPI_Put` with `MPI_Win_lock_all`, a hang is possible since
the `put` is waiting on `eager_send_active` to become `true`, but
that variable might not be reset in the case of `MPI_Win_lock_all`,
depending on other incoming events (e.g., `post` or ACKs of lock
requests). See the sketch below.
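An illustrative usage pattern of the kind that could hang (hypothetical buffer
layout; needs at least two ranks):
```c
#include <mpi.h>

/* Passive-target epoch opened with MPI_Win_lock_all, a single MPI_Put and
 * a flush; progress must not depend on an unrelated incoming event. */
int main(int argc, char **argv)
{
    int rank, value;
    MPI_Win win;
    int winbuf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Win_create(&winbuf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_lock_all(0, win);
    if (0 == rank) {
        value = 42;
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);  /* to rank 1 */
        MPI_Win_flush(1, win);
    }
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```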
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
* When using `MPI_Win_lock`/`MPI_Win_unlock` with `MPI_Get` and non-contiguous
datatypes, it is possible that the unlock finishes too early, before
the data is actually present in the recv buffer (see the sketch below).
* We need to wait for the irecv to complete before unlocking the target.
This commit waits for the outgoing fragment counts to become equal
before unlocking.
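An illustrative pattern of the kind that exposed the problem (hypothetical
strided datatype and buffer sizes; needs at least two ranks):
```c
#include <mpi.h>

/* A non-contiguous origin datatype means the unlock must not return
 * until the underlying irecv has actually delivered all of the data. */
int main(int argc, char **argv)
{
    int rank;
    MPI_Win win;
    MPI_Datatype strided;
    int winbuf[8], getbuf[8] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < 8; ++i) winbuf[i] = rank * 100 + i;

    MPI_Win_create(winbuf, sizeof(winbuf), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* 4 blocks of 1 int with stride 2: non-contiguous on the origin side. */
    MPI_Type_vector(4, 1, 2, MPI_INT, &strided);
    MPI_Type_commit(&strided);

    if (0 == rank) {
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Get(getbuf, 1, strided, 1, 0, 4, MPI_INT, win);
        MPI_Win_unlock(1, win);
        /* getbuf must be fully populated here, not some time later. */
    }

    MPI_Type_free(&strided);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```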
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
* If the user uses PSCW synchronization after a Fence, then the previous
epoch is not reset, which can cause the PSCW to transfer data before
it is ready, leading to wrong answers (see the sketch below).
* This commit resets `eager_send_active` in the start call.
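An illustrative sequence of the kind affected: a fence epoch followed by a
PSCW epoch on the same window (hypothetical buffer and peer choice; needs at
least two ranks):
```c
#include <mpi.h>

/* The start call must clear the state left over from the fence so the
 * put cannot race ahead of the matching post on the target. */
int main(int argc, char **argv)
{
    int rank, value;
    MPI_Win win;
    MPI_Group world_group, peer_group;
    int winbuf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Win_create(&winbuf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Epoch 1: fence synchronization. */
    MPI_Win_fence(0, win);
    MPI_Win_fence(0, win);

    /* Epoch 2: post-start-complete-wait between rank 0 and rank 1. */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    if (0 == rank) {
        int target = 1;
        MPI_Group_incl(world_group, 1, &target, &peer_group);
        MPI_Win_start(peer_group, 0, win);
        value = 7;
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_complete(win);
        MPI_Group_free(&peer_group);
    } else if (1 == rank) {
        int origin = 0;
        MPI_Group_incl(world_group, 1, &origin, &peer_group);
        MPI_Win_post(peer_group, 0, win);
        MPI_Win_wait(win);
        MPI_Group_free(&peer_group);
    }

    MPI_Group_free(&world_group);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```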
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
According to MPI-3.1 p.52 and p.53 (cited below), a request
created by `MPI_*_INIT` but not yet started by `MPI_START` or
`MPI_STARTALL` is inactive; therefore `MPI_WAIT` and its friends
must return immediately if such a request is passed.
The current implementation hangs in `MPI_WAIT` and its friends
in such a case because a persistent request is initialized with
`req_complete = REQUEST_PENDING`. This commit fixes the
initialization.
Also, this commit fixes the internal requests used in `MPI_PROBE`
and `MPI_IPROBE`, which were wrongly marked as persistent.
MPI-3.1 p.52:
We shall use the following terminology: A null handle is a handle
with value MPI_REQUEST_NULL. A persistent request and the handle
to it are inactive if the request is not associated with any ongoing
communication (see Section 3.9). A handle is active if it is neither
null nor inactive. An empty status is a status which is set to return
tag = MPI_ANY_TAG, source = MPI_ANY_SOURCE, error = MPI_SUCCESS, and
is also internally configured so that calls to MPI_GET_COUNT,
MPI_GET_ELEMENTS, and MPI_GET_ELEMENTS_X return count = 0 and
MPI_TEST_CANCELLED returns false. We set a status variable to empty
when the value returned by it is not significant. Status is set in
this way so as to prevent errors due to accesses of stale information.
MPI-3.1 p.53:
One is allowed to call MPI_WAIT with a null or inactive request
argument. In this case the operation returns immediately with empty
status.
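A small self-contained example of the required behaviour (the buffer and tag
are arbitrary): waiting on a persistent request that has never been started
must return immediately with an empty status:
```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf = 0, count;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    MPI_Send_init(&buf, 1, MPI_INT, 0, 0, MPI_COMM_SELF, &req);

    /* Inactive request: no MPI_Start has been issued, so this must not
     * hang and the returned status must be empty. */
    MPI_Wait(&req, &status);
    MPI_Get_count(&status, MPI_INT, &count);
    printf("returned immediately, empty status: count=%d\n", count); /* 0 */

    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}
```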
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
Adds the new API hcoll_context_free, which resolves the issues
observed with the ctx cache and group_destroy_notify.
Signed-off-by: Valentin Petrov <valentinp@mellanox.com>
`struct mca_pml_ob1_comm_proc_t`, which is allocated per
connected rank in a communicator, had two padding holes after
`expected_sequence` and `send_sequence` due to alignment.
By changing the order of the members, the size of
`mca_pml_ob1_comm_proc_t` is reduced by 8 bytes on 64-bit
architectures.
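An illustrative layout (not the real pml_ob1 definitions) showing where the
8 bytes go on an LP64 ABI:
```c
#include <stdint.h>
#include <stdio.h>

/* Each 2-byte sequence number followed by an 8-byte pointer forces
 * 6 bytes of padding; grouping the small members removes both holes. */
struct proc_padded {
    uint16_t expected_sequence;   /* 6 bytes of padding follow */
    void    *frags;
    uint16_t send_sequence;       /* 6 more bytes of padding */
    void    *endpoint;
};                                /* 32 bytes on x86-64 */

struct proc_packed {
    void    *frags;
    void    *endpoint;
    uint16_t expected_sequence;   /* the two uint16_t share one word */
    uint16_t send_sequence;
};                                /* 24 bytes on x86-64 */

int main(void)
{
    printf("padded: %zu bytes, packed: %zu bytes\n",
           sizeof(struct proc_padded), sizeof(struct proc_packed));
    return 0;
}
```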
Signed-off-by: KAWASHIMA Takahiro <t-kawashima@jp.fujitsu.com>
This fixes a bug reported in-house that occurs with this component. It is triggered when the amount of data assigned to different aggregators differs widely, leading to a different number of internal iterations being required to handle it.
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>