- Use opal hash table instead of list for group lookup.
- Code cleanup/refactoring. The group cache is now part
of the proc_group.
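A minimal sketch of the idea, assuming the cache is keyed by a 64-bit value derived from the group parameters; the opal_hash_table_* calls are the existing OPAL API, while the key scheme and the standalone cache variable are illustrative (the commit itself keeps the cache inside the proc_group):

    #include "opal/constants.h"
    #include "opal/class/opal_hash_table.h"

    /* Illustrative group-lookup cache built on opal_hash_table_t. */
    static opal_hash_table_t group_cache;

    static void group_cache_init(void)
    {
        OBJ_CONSTRUCT(&group_cache, opal_hash_table_t);
        opal_hash_table_init(&group_cache, 1024 /* initial table size */);
    }

    static void *group_cache_find(uint64_t key)
    {
        void *group = NULL;
        /* O(1) lookup instead of walking a list of groups. */
        if (OPAL_SUCCESS != opal_hash_table_get_value_uint64(&group_cache, key, &group)) {
            return NULL;
        }
        return group;
    }

    static int group_cache_insert(uint64_t key, void *group)
    {
        return opal_hash_table_set_value_uint64(&group_cache, key, group);
    }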
Signed-off-by: Alex Mikheev <alexm@mellanox.com>
Store oshmem-related per-proc data in an oshmem_proc_data_t struct
that is placed in the padding section of an ompi_proc_t.
This data can be accessed via the OSHMEM_PROC_DATA(proc) macro.
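A rough sketch of how this layout can work, assuming ompi_proc_t reserves a 'padding' byte array for upper layers; the struct fields shown here are placeholders, only the macro name comes from the commit:

    /* Illustrative oshmem-specific per-proc data overlaid on the padding
     * region reserved at the end of ompi_proc_t. */
    typedef struct oshmem_proc_data_t {
        int   num_transports;   /* placeholder fields, not the real layout */
        void *transport_ids;
    } oshmem_proc_data_t;

    /* Reinterpret the proc's padding area as oshmem data (assumes the
     * padding array is large enough to hold oshmem_proc_data_t). */
    #define OSHMEM_PROC_DATA(proc) \
        ((oshmem_proc_data_t *)((proc)->padding))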
Fixes open-mpi/ompi#2023
Fix oshmem_group_proc_{init,create} so they use the number of procs in oshmem_comm_world.
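A hedged sketch of that fix; oshmem_proc_group_create's signature and the communicator handling are assumptions made for illustration:

    #include "oshmem/constants.h"
    #include "oshmem/proc/proc.h"
    #include "ompi/communicator/communicator.h"

    /* Hypothetical helper: size the predefined "all PEs" group from
     * oshmem_comm_world rather than a separately tracked proc count. */
    static int build_group_all(void)
    {
        int world_size = ompi_comm_size(oshmem_comm_world);

        oshmem_group_all = oshmem_proc_group_create(0,          /* pe_start  */
                                                    1,          /* pe_stride */
                                                    world_size  /* pe_size   */);
        return (NULL == oshmem_group_all) ? OSHMEM_ERROR : OSHMEM_SUCCESS;
    }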
Thanks to Debendra Das for the report and to Josh Ladd for the guidance.
Fixes open-mpi/ompi#1966
Since the introduction of on-demand proc allocation,
the check became erroneous and irrelevant.
Moreover, it completely breaks OpenSHMEM support in OMPI.
Signed-off-by: Pavel Shamis (Pasha) <pasharesearch@gmail.com>
1324740 - Resource leak
1304562 - Unchecked return value
1340514 - Dereference before null check
1340515 - Use of untrusted scalar value
1340516 - Use of untrusted string value
ompi has a new mpi_add_procs_cutoff parameter that can control
the creation of ompi_proc_t objects, but we must be sure that all
ompi_proc_t objects exist during oshmem_group_all creation.
This could probably be done in a more flexible way later.
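One possible way to enforce that, sketched with assumptions: touch every world proc once before building oshmem_group_all so that any on-demand ompi_proc_t is instantiated. ompi_proc_world() is an existing helper; whether it is the right hook for this purpose is an assumption of the sketch:

    #include <stdlib.h>
    #include "oshmem/constants.h"
    #include "ompi/proc/proc.h"

    /* Sketch: make sure every ompi_proc_t exists before oshmem_group_all
     * is created, independent of the mpi_add_procs_cutoff setting. */
    static int ensure_world_procs(void)
    {
        size_t nprocs = 0;
        ompi_proc_t **procs = ompi_proc_world(&nprocs);

        if (NULL == procs) {
            return OSHMEM_ERROR;
        }
        /* Only the side effect matters here; the array itself is not needed. */
        free(procs);
        return OSHMEM_SUCCESS;
    }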
Signed-off-by: Igor Ivanov <Igor.Ivanov@itseez.com>
Most of the oshmem_proc functionality duplicates ompi_proc. In addition,
the current logic does not allow oshmem initialization without ompi
startup.
This refactoring avoids the code duplication, decreases memory usage,
and makes oshmem easier to support.
Now oshmem_proc is a transparent ompi_proc structure that can be
extended with oshmem-specific data.
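A simplified sketch of the "transparent" layout; the alias is the gist of it, while the accessor below is only an illustration of how oshmem code can operate directly on ompi_proc_t fields:

    #include "ompi/proc/proc.h"

    /* oshmem_proc_t is just ompi_proc_t; oshmem-specific data is attached
     * separately (see the OSHMEM_PROC_DATA sketch above). */
    typedef struct ompi_proc_t oshmem_proc_t;

    /* Illustrative accessor: the vpid of the proc name doubles as the
     * OpenSHMEM PE number (field names from ompi_proc_t/opal_proc_t). */
    static inline int oshmem_proc_pe_of(oshmem_proc_t *proc)
    {
        return (int)proc->super.proc_name.vpid;
    }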
Signed-off-by: Igor Ivanov <Igor.Ivanov@itseez.com>
WHAT: Open our low-level communication infrastructure by moving all necessary components (btl/rcache/allocator/mpool) down in OPAL
All the components required for inter-process communication are currently deeply integrated into the OMPI layer. Several groups/institutions have expressed interest in a more generic communication infrastructure, without all the OMPI layer dependencies. This communication layer should be made available at a different software level, available to all layers in the Open MPI software stack. As an example, our ORTE layer could replace the current OOB and instead use the BTL directly, gaining access to more reactive network interfaces than TCP. Similarly, external software libraries could take advantage of our highly optimized AM (active message) communication layer for their own purposes.

UTK, with support from Sandia, developed a version of Open MPI where the entire communication infrastructure has been moved down to OPAL (btl/rcache/allocator/mpool). Most of the moved components have been updated to match the new schema, with a few exceptions (mainly BTLs that I have no way of compiling/testing). Thus, the completion of this RFC is tied to being able to complete this move for all BTLs. For this we need help from the rest of the Open MPI community, especially those supporting some of the BTLs. A non-exhaustive list of BTLs that qualify here is: mx, portals4, scif, udapl, ugni, usnic.
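For code that used the BTL framework directly, the most visible effect of the move is that the headers and symbols now live under OPAL instead of OMPI; a minimal illustration (the before/after include paths are the real locations, the snippet is otherwise just an example):

    /* Before the move: the BTL framework lived in the OMPI layer. */
    /* #include "ompi/mca/btl/btl.h" */

    /* After the move: the same framework is reachable from OPAL, so layers
     * below MPI (e.g. ORTE) and external libraries can use it as well. */
    #include "opal/mca/btl/btl.h"
    #include "opal/mca/mpool/mpool.h"
    #include "opal/mca/rcache/rcache.h"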
This commit was SVN r32317.
oshmem collectives can now select MPI collective modules as the provider.
Developed by Elena, reviewed by Miked
cmr=v1.7.5:reviewer=ompi-rm1.7
This commit was SVN r30808.