Returns a string name (either a resolved name, or the IPv4/IPv6
address as a string if unresolvable). The caller is responsible for
freeing the string.
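A minimal caller-side sketch of the contract (the resolver name here
is hypothetical, not the actual function):

    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical resolver following the contract described above */
    extern char *resolve_peer_name(const void *addr);

    void show_peer(const void *addr)
    {
        char *name = resolve_peer_name(addr); /* name or "1.2.3.4" */
        if (NULL != name) {
            printf("peer: %s\n", name);
            free(name);  /* the caller owns, and must free, the string */
        }
    }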
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
Somehow the flag indicating whether to gather performance data on
collective I/O operations was accidentally changed to 1. It should be
0 (false) by default.
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
Never got around to making this sharedfp component into anything
usable. It can easily be restored if necessary.
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
The plfs components are, as far as I know, not utilized by anybody at
this point. They are easy to bring back if we want to.
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
We have a small number of requirements for contributions (e.g.,
"Signed-off-by"), so let's make sure that people have an easy way of
knowing these things.
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
This commit fixes the case where a local client asks for a key from a
process on a remote node. The local server does not have a commit
count for remote ranks; that count is maintained by another PMIx
server, so the commit count should be ignored for remote requests.
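A hypothetical sketch of the rule this implements (all names are
illustrative, not actual PMIx internals):

    #include <stdbool.h>

    /* Decide whether a key request must wait on the local commit
     * count.  Only requests for locally-maintained ranks are gated;
     * for remote ranks the local count is meaningless and is
     * ignored. */
    static bool request_must_wait(bool rank_is_local,
                                  unsigned req_commit_count,
                                  unsigned local_commit_count)
    {
        if (!rank_is_local) {
            return false;  /* the fix: never gate remote requests */
        }
        return req_commit_count > local_commit_count;
    }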
Signed-off-by: Boris Karasev <karasev.b@gmail.com>
This commit is a large update to the osc/rdma component. Included in
this commit:
- Add support for using hardware atomics for fetch-and-op and single
count accumulate when using the accumulate lock. This will improve
the performance of these operations even when not setting the
single intrinsic info key (see the usage sketch after this list).
- Rework how large accumulates are done. They now block on the get
operation to fix some bugs discovered by an IBM one-sided test. I
may roll back some of the changes if the underlying bug in the
original design is discovered. There appears to be no real
difference (on the hardware this was tested with) in performance,
so it's probably a non-issue. References #2530.
- Add support for an additional lock-all algorithm: on-demand. The
on-demand algorithm will attempt to acquire the peer lock when
starting an RMA operation. The lock algorithm default has not
changed. The algorithm can be selected by setting the
osc_rdma_locking_mode MCA variable. The valid values are two_level
and on_demand.
- Make use of the btl_flush function if available. This can improve
performance with some btls.
- When using btl_flush do not keep track of the number of put
operations. This reduces the number of atomic operations in the
critical path.
- Make the window buffers more friendly to multi-threaded
applications. This was done by dropping support for multiple
buffers per MPI window. I intend to re-add that support once the
underlying performance bug in the old buffering scheme is
fixed.
- Fix a bug in request completion in the accumulate, get, and put
paths. This also helps with #2530.
- General code cleanup and fixes.
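For context, this is the kind of single-count fetch-and-op that can
now map to hardware atomics (plain MPI usage, not component
internals):

    #include <mpi.h>

    /* atomically add 1 to a long at displacement 0 in the target's
     * window and fetch the previous value */
    long increment_remote_counter(MPI_Win win, int target)
    {
        long one = 1, prev = 0;
        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
        MPI_Fetch_and_op(&one, &prev, MPI_LONG, target, 0,
                         MPI_SUM, win);
        MPI_Win_unlock(target, win);
        return prev;
    }

The on-demand locking mode can be selected at run time with, e.g.,
"mpirun --mca osc_rdma_locking_mode on_demand".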
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Data sieving has to occur for any provided offset that is greater
than or equal to zero for this implementation to work correctly.
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
* The `MPIR_PROCDESC` structure needs to be visible even in optimized
builds so that debuggers can attach to `mpirun` and properly read the
`MPIR_proctable`.
* In the v2.0.x and v2.x series this structure resided in the `orterun`
directory, which included the `CFLAGS` fix applied here. This code
moved in the v3.x series, but the `CFLAGS` did not move with it,
causing this issue.
- Instead of applying the debug `CFLAGS` globally to libopen-rte,
only apply them to the `orted_submit.c` compile, which contains the
MPIR symbols.
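For reference, the structure that must remain visible is the standard
process descriptor from the MPIR Process Acquisition Interface:

    typedef struct {
        char *host_name;        /* host the process is running on */
        char *executable_name;  /* executable image the process runs */
        int   pid;              /* pid of the MPI process */
    } MPIR_PROCDESC;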
Signed-off-by: Joshua Hursey <jhursey@us.ibm.com>
This commit fixes an issue observed with romio314 and the hdf5 1.10.x
testsuite. The ADIOI_Datatype_iscontig() routine in
romio314/src/io_romio314_module.c will now report a datatype of size 0
as contiguous, even if the extent of the datatype is non-zero. This
avoids a segmentation fault observed in the ADIOI_Flatten routine, and
fixes this particular issue with the hdf5 1.10.x testsuite in Open MPI
with romio314.
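A sketch of the shape of the change (not the verbatim patch):

    #include <mpi.h>

    void ADIOI_Datatype_iscontig(MPI_Datatype datatype, int *flag)
    {
        int size;
        MPI_Type_size(datatype, &size);
        if (0 == size) {
            *flag = 1;  /* zero-size type: report contiguous, so
                         * ADIOI_Flatten() is never entered for it */
            return;
        }
        /* ... existing contiguity checks on the datatype ... */
    }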
Signed-off-by: Edgar Gabriel <egabriel@central.uh.edu>
As discussed at https://github.com/mpi-forum/mpi-issues/issues/77#issuecomment-369663119,
the conversion to double in MPI_Wtime decreases the range
and accuracy of the resulting timer. By setting the timer to
0 at first use we basically maintain the accuracy for
194 days, even for gettimeofday.
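A minimal sketch of the idea (not the actual Open MPI timer code):

    #include <stdbool.h>
    #include <sys/time.h>

    /* Subtract a base captured on first use so the value stays small
     * and the double mantissa keeps microsecond resolution for a long
     * stretch (~194 days) instead of losing bits to a large epoch. */
    static double wtime_sketch(void)
    {
        static struct timeval base;
        static bool have_base = false;
        struct timeval now;

        gettimeofday(&now, NULL);
        if (!have_base) {
            base = now;
            have_base = true;
        }
        return (double) (now.tv_sec - base.tv_sec) +
               (double) (now.tv_usec - base.tv_usec) * 1.0e-6;
    }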
Signed-off-by: George Bosilca <bosilca@icl.utk.edu>
The NAG Fortran check only matched "nagfor" exactly, and failed if a
path to nagfor was provided. Also change "-pthread" into
"-Wl,-pthread".
Signed-off-by: Themos Tsikas <themos.tsikas@nag.co.uk>
Signed-off-by: Jeff Squyres <jsquyres@cisco.com>
The current code path for PMIx_Resolve_peers and PMIx_Resolve_nodes executes a threadshift in the preg components themselves. This is done to ensure thread safety when called from the user level. However, it causes a thread stall when someone attempts to call the regex functions from _inside_ the PMIx code base, should the call occur from within an event.
Accordingly, move the threadshift to the client-level functions and make the preg components just execute their algorithms. Create a new pnet/test component to verify that the preg code can be safely accessed; set that component to be selected only when the user directly specifies it. The new component will be used to validate various logical extensions during development, and can then be discarded.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>
(cherry picked from commit 456ac7f7af3d9ba09888e3c899eb001daaa24aef)
This is a point-in-time update that includes support for several new PMIx features, mostly focused on debuggers and "instant on":
* initial prototype support for PMIx-based debuggers. For the moment, this is restricted to using the DVM. Supports direct launch of apps under debugger control, and indirect launch using prun as the intermediate launcher. Includes the ability for debuggers to control the environment of both the launcher and the spawned app procs. Work continues on completing support for indirect launch.
* IO forwarding for tools. Output of apps launched under tool control is directed to the tool and displayed there; this includes support for XML formatting and output to files. Stdin can be forwarded from the tool to the apps, but this hasn't been implemented in ORTE yet.
* Fabric integration for "instant on". Enable collection of network "blobs" to be delivered to network libraries on compute nodes prior to local proc spawn. Infrastructure is in place - implementation will come later.
* Harvesting and forwarding of envars. Enable network plugins to harvest envars and include them in the launch msg for setting the environment prior to local proc spawn. Currently, only OmniPath is supported. PMIx MCA params control which envars are included, and also allow envars to be excluded.
Signed-off-by: Ralph Castain <rhc@open-mpi.org>