* datatype: Fix an incorrect datatype name of `MPI_UNSIGNED`
The name of the predefined datatype for C `unsigned int` returned by
`MPI_TYPE_GET_NAME` should be `MPI_UNSIGNED`, not `MPI_UNSIGNED_INT`.
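For illustration, a minimal check (a hypothetical test program, not part of the commit):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char name[MPI_MAX_OBJECT_NAME];
    int len;

    MPI_Init(&argc, &argv);
    MPI_Type_get_name(MPI_UNSIGNED, name, &len);
    printf("%s\n", name);   /* prints "MPI_UNSIGNED" after this fix */
    MPI_Finalize();
    return 0;
}
```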
* datatype: Fix incorrect datatype names of `MPI_C_BOOL` and `MPI_CXX_*`
The names of the predefined datatypes returned by `MPI_TYPE_GET_NAME` are:
after this commit (correct)  | before this commit (incorrect)
-----------------------------+-------------------------------
MPI_C_BOOL                   | MPI_BOOL
MPI_CXX_BOOL                 | MPI_BOOL
MPI_CXX_FLOAT_COMPLEX        | MPI_C_FLOAT_COMPLEX
MPI_CXX_DOUBLE_COMPLEX       | MPI_C_DOUBLE_COMPLEX
MPI_CXX_LONG_DOUBLE_COMPLEX  | MPI_C_LONG_DOUBLE_COMPLEX
* datatype: Fix an incorrect datatype name of `MPI_2DOUBLE_PRECISION`
The name of the predefined datatype for a pair of Fortran `double precision`
values returned by `MPI_TYPE_GET_NAME` should be `MPI_2DOUBLE_PRECISION`,
not `MPI_2DBLPREC`.
This bug was caused by setting the name to `opal_datatype_t::name`
instead of `ompi_datatype_t::name`.
* datatype: Fix `MPI_UNSIGNED_CHAR` internal flag
`MPI_UNSIGNED_CHAR` is an integer type.
* ompi/cxx: Fix C++ `MPI::LONG_DOUBLE_INT` definition
Just a typo fix. Without this fix, `MPI::MAX_LOC` and `MPI::MIN_LOC`
cannot be used with `MPI::LONG_DOUBLE_INT` in C++ programs.
I know the C++ binding is obsolete, but fixing this is harmless.
* Add FUJITSU copyright
This commit adds the following symbols:
MPI_Alloc_mem_cptr_f
MPI_Alloc_mem_cptr_f08
PMPI_Alloc_mem_cptr_f
PMPI_Alloc_mem_cptr_f08
These are implemented in the same way as other `_cptr` routines.
`sendcounts`, `sdispls`, and `sendtype(s)` must be ignored
if `MPI_IN_PLACE` is specified for `sendbuf`.
This commit makes the parameter-check code the same as in the blocking
`ALLTOALL{V|W}` functions.
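A minimal sketch of the call pattern the check must now accept (an illustrative program, not part of the commit; the NULL / `MPI_DATATYPE_NULL` send arguments are used only to show that they are ignored):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int i, size;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    int *buf    = malloc(size * sizeof(int));
    for (i = 0; i < size; ++i) {
        counts[i] = 1;
        displs[i] = i;
        buf[i]    = i;
    }

    /* With MPI_IN_PLACE as sendbuf, sendcounts/sdispls/sendtype are ignored,
     * so the parameter check must not reject them. */
    MPI_Ialltoallv(MPI_IN_PLACE, NULL, NULL, MPI_DATATYPE_NULL,
                   buf, counts, displs, MPI_INT, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(counts); free(displs); free(buf);
    MPI_Finalize();
    return 0;
}
```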
This commit rewrites both the mpool and rcache frameworks. Summary of
changes:
- Before this change a significant portion of the rcache
functionality lived in mpool components. This meant that it was
impossible to add a new memory pool to use with rdma networks
(ugni, openib, etc) without duplicating the functionality of an
existing mpool component. All the registration functionality has
been removed from the mpool and placed in the rcache framework.
- All registration cache mpool components (udreg, grdma, gpusm,
rgpusm) have been changed to rcache components. rcaches are
allocated and released in the same way mpool components were.
- It is now valid to pass NULL as the resources argument when
creating an rcache. At this time the gpusm and rgpusm components
support this. All other rcache components require non-NULL
resources.
- A new mpool component has been added: hugepage. This component
supports huge page allocations on linux.
- Memory pools are now allocated using "hints". Each mpool component
is queried with the hints and returns a priority. The current hints
supported are NULL (uses posix_memalign/malloc), page_size=x (huge
page mpool), and mpool=x.
- The sm mpool has been moved to common/sm. This reflects that the sm
mpool is specialized and not meant for any general
allocations. This mpool may be moved back into the mpool framework
if there is any objection.
- The opal_free_list_init arguments have been updated. The unused0
argument is now used to pass in the registration cache module. The
mpool registration flags are now rcache registration flags.
- All components have been updated to make use of the new framework
interfaces.
As this commit makes significant changes to both the mpool and rcache
frameworks, both versions have been bumped to 3.0.0.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Fix CID 1327338: Resource leak (RESOURCE_LEAK):
Confirmed that the c_info array was being leaked. Free the array
before returning.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit removes the --with-mpi-thread-multiple option and forces
MPI_THREAD_MULTIPLE support. This cleans up an abstraction violation
in opal where OMPI_ENABLE_THREAD_MULTIPLE determines whether the
opal_using_threads is meaningful. To reduce the performance hit on
MPI_THREAD_SINGLE programs an OPAL_UNLIKELY is used for the
check on opal_using_threads in OPAL_THREAD_* macros.
This commit does not clean up the arguments to the various functions
that take an argument indicating whether multi-threading support is
enabled. That should be
done at a later time.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Nowhere in the standard does it say that it is invalid to pass
MPI_BOTTOM to MPI_Get_address, yet we were returning an error. This
commit removes the error check on NULL == location.
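For illustration, a call that is now accepted (a trivial example, not from the commit):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Aint addr;

    MPI_Init(&argc, &argv);
    /* Previously this returned MPI_ERR_ARG; it is now accepted. */
    MPI_Get_address(MPI_BOTTOM, &addr);
    printf("address of MPI_BOTTOM: %ld\n", (long) addr);
    MPI_Finalize();
    return 0;
}
```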
Fixes open-mpi/ompi#1355.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Since OS X 10.11 (aka El Capitan), DYLD_LIBRARY_PATH is no longer
propagated to children, so try to dlopen libmpi with the full path,
using the directory of libmpi_java.
Fixes open-mpi/ompi#1220
Thanks to Alexander Daryin for reporting this
Remove excessive parameter check to avoid premature exit from the collective.
MPI standard says:
The type signature associated with sendcount, sendtype, at a process must be equal to
the type signature associated with recvcount, recvtype at any other process. This implies
that the amount of data sent must be equal to the amount of data received, pairwise between
every pair of processes.
In the case of an inter-communicator we have two groups of processes, and the "left" group may call
MPI_Alltoall(NULL, 0, MPI_INT, buf, 10, MPI_INT, comm, ...);
and the right one:
MPI_Alltoall(buf,10,MPI_INT, NULL, 0, MPI_INT, comm, ...);
This is legal even though one of the groups will receive 0 bytes from the others.
This was triggered by the MPICH/coll test called icalltoall.
Instead of solely relying on the out value definitions in
MPI_Waitsome.3, explicitly copy this text here.
Note that the original text in this man page was copied verbatim from
the MPI spec; we've now added a bit more text (copied from
MPI_Waitsome.3in) that explains the out values so that users don't
have to cross-reference to another man page.
Thanks to Eric Schnetter for the suggestion.
Fixes open-mpi/ompi#1153
This commit changes the way ompi_proc_t's are retained/released by
ompi_group_t's. Before this change ompi_proc_t's were retained once
for the group and then once for each retain of a group. This method
adds unnecessary overhead (need to traverse the group list each time
the group is retained) and causes problems when using an async
add_procs.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Without this modification, gfortran throws the following error
if these variables are used for `MPI_DIST_GRAPH_CREATE_ADJACENT` or
`MPI_DIST_GRAPH_CREATE`.
Error: There is no specific subroutine for the generic
'mpi_dist_graph_create_adjacent' at (1)
When using dynamic memory windows the displacement becomes a
pointer. Since the high bit may be set on valid pointers on some
platforms the check for disp > 0 is invalid. This commit adds the
window flavor to ompi_win_t and disables the displacement check when
operating on dynamic memory windows.
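A sketch of why the old check breaks (an illustrative two-process program, not from the commit):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;
    MPI_Aint disp = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {           /* needs at least two processes */
        MPI_Finalize();
        return 0;
    }

    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (1 == rank) {
        /* The target attaches local memory and publishes its absolute address. */
        MPI_Win_attach(win, &value, sizeof(value));
        MPI_Get_address(&value, &disp);
    }
    /* For a dynamic window the displacement is an absolute address, which may
     * look "negative" when interpreted as a signed integer, so a disp > 0
     * check would wrongly reject it on some platforms. */
    MPI_Bcast(&disp, 1, MPI_AINT, 1, MPI_COMM_WORLD);

    MPI_Win_lock_all(0, win);
    if (0 == rank) {
        int one = 1;
        MPI_Put(&one, 1, MPI_INT, 1, disp, 1, MPI_INT, win);
    }
    MPI_Win_unlock_all(win);

    MPI_Barrier(MPI_COMM_WORLD);
    if (1 == rank) {
        MPI_Win_detach(win, &value);
        printf("value = %d\n", value);
    }
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```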
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Currently, MPI fails with MPI_ERR_ARG. This is counterintuitive since
MPI_IN_PLACE is a legitimate parameter. MPI_IN_PLACE might not be correctly
implemented by all the non-blocking modules (libnbc, ...), so fail with
MPI_ERR_INTERN for the time being.
Proposed extensions for Open MPI:
- If MPI_INITIALIZED is invoked and MPI is only partially initialized,
wait until MPI is fully initialized before returning.
- If MPI_FINALIZED is invoked and MPI is only partially finalized,
wait until MPI is fully finalized before returning.
- If the ompi_mpix_allow_multi_init MCA param is true, allow MPI_INIT
and MPI_INIT_THREAD to be invoked multiple times without error (MPI
will be safely initialized only the first time it is invoked).
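A sketch of the proposed multi-init behavior described above (illustrative only; it relies on the ompi_mpix_allow_multi_init MCA parameter being set to true, and is erroneous per the MPI standard without the extension):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* With the proposed extension enabled, this second call is a harmless
     * no-op instead of an error; MPI is initialized only once. */
    MPI_Init(&argc, &argv);

    MPI_Finalize();
    return 0;
}
```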
Add ompi_mpi_dynamics_disable() function to disable MPI dynamic
process functionality (i.e., such that if MPI_COMM_SPAWN/etc. are
invoked, you'll get a show_help error explaining that MPI dynamic
process functionality is disabled in this environment -- instead of a
potentially-cryptic network or hardware error).
Fixes #984
when profiling is built.
This prevents oshmem subroutines from being wrapped twice by third-party
tools (e.g., once in oshmem and once in MPI).
See the discussion starting at http://www.open-mpi.org/community/lists/devel/2015/08/17842.php
Thanks to Bert Wesarg for bringing this to our attention
This commit adds man pages for the MPI_Win_allocate and MPI_Win_allocate_shared
MPI-3 functions. The man page for MPI_Win_create has also been updated to
indicate support for the same_size and same_disp_unit info keys.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
- MPI_Compare_and_swap
- MPI_Fetch_and_op
- MPI_Raccumulate
- MPI_Win_detach
Thanks to Michael Knobloch and Takahiro Kawashima for bringing this
to our attention
Bring Slurm PMI-1 component online
Bring the s2 component online
Little cleanup - let the various PMIx modules set the process name during init, and then just raise it up to the ORTE level. Required as the different PMI environments all pass the jobid in different ways.
Bring the OMPI pubsub/pmi component online
Get comm_spawn working again
Ensure we always provide a cpuset, even if it is NULL
pmix/cray: adjust cray pmix component for pmix
Make changes so cray pmix can work within the integrated
ompi/pmix framework.
Bring singletons back online. Implement the comm_spawn operation using pmix - not tested yet
Cleanup comm_spawn - procs now starting, error in connect_accept
Complete integration
Ensure that the ompi/pompi versions are defined for platforms that don't have
weak symbols. Also make fortran/mpif-h/profile build a separate
sizeof library, just like fortran/mpif-h does.
Per http://www.open-mpi.org/community/lists/devel/2015/08/17775.php,
some compilers don't like it when there's a .f90 file that only
contains comments (and no actual Fortran code). So if OMPI determines
that the Fortran compiler does not support enough Fortran mojo to
support MPI_SIZEOF, generate at least one dummy Fortran subroutine
that can be compiled in an otherwise barren Fortran landscape that is
devoid of life and hope.
This pull request adds an arraylist of type Buffer to
the Request class. Whenever a request object is created
that has associated buffers, the buffers should be added
to this array list so the Java garbage collector does
not dispose of the buffers prematurely.
This is a more robust expansion on the idea first proposed by
@ggouaillardet
Fixes #369
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Fortran uses objects (ompi_f08_mpi_comm_world, mpi_fortran_bottom, ...) that are defined in C.
Some compilers have different requirements on how these objects should be aligned.
Smaller alignment in C can lead to several confusing warnings from the linker, so try to
find the alignment expected by the Fortran compiler, and inform the C compiler.
The libmpi_*.la fortran libraries make some direct calls to
libopen-pal.la. In many (most?) cases, having libmpi_* link
against libmpi is sufficient (because libmpi pulls in libopen-pal).
But when building RPMs on SLES, some compiler/linker flags are used
that seem to make this implicit linking not sufficient -- we get
missing opal symbols when creating libmpi_mpifh.la. So link in
open-pal directly (vs. indirectly), which solves the problem.
The prior code was checking string constants (which are #defines from
configure) against NULL. They can never be NULL, so the checks were
overly-defensive. If the preprocessor macros do not exist, we'll get
a different compiler error. So remove the dead code.
This fixes CID 72349.
A helper method in Request.java could cause a crash
if the request array that was passed contained nulls.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Added Cloneable to the implemented interface list as per
Coverity suggestion. The required methods were already
implemented, but it was not explicitly stated. This is
an intent revealing change.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Includes java bindings for MPI_GET_ELEMENTS_X and
MPI_STATUS_SET_ELEMENTS_X. This PR also adds the
Count object which represents MPI_Count.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Only define the unique fortran symbol depending on:
- CAPS
- PLAIN
- SINGLE_UNDERSCORE
- DOUBLE_UNDERSCORE
and bind the f08 symbol to the uniquely defined C symbol (sketched below).
Use real data structures to make the code simpler.
(perl script written by Jeff)
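As a rough illustration of the mangling scheme described above, a sketch with assumed macro names (the FORTRAN_* guards and the function name are placeholders, not Open MPI's actual configure macros or generated symbols):

```c
#include <mpi.h>

/* Pick exactly one C symbol name matching the Fortran compiler's mangling
 * convention; the mpi_f08 interface is then bound to that single symbol. */
#if   defined(FORTRAN_CAPS)
#  define mpi_barrier_f_sym MPI_BARRIER
#elif defined(FORTRAN_PLAIN)
#  define mpi_barrier_f_sym mpi_barrier
#elif defined(FORTRAN_SINGLE_UNDERSCORE)
#  define mpi_barrier_f_sym mpi_barrier_
#else /* FORTRAN_DOUBLE_UNDERSCORE */
#  define mpi_barrier_f_sym mpi_barrier__
#endif

void mpi_barrier_f_sym(MPI_Fint *comm, MPI_Fint *ierr)
{
    *ierr = (MPI_Fint) MPI_Barrier(MPI_Comm_f2c(*comm));
}
```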
There are a few places where adding the @param that javadoc wants for a
variable does not make sense, so I added suppression statements
in those areas.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Bindings for the MPI_WIN_FLUSH_LOCAL, MPI_WIN_FLUSH_LOCAL_ALL, MPI_WIN_ALLOCATE, MPI_WIN_ALLOCATE_SHARED, and MPI_COMM_SPLIT_TYPE. Also added several necessary constants.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
Java bindings for the following functions: MPI_RACCUMULATE, MPI_GET_ACCUMULATE, MPI_RGET_ACCUMULATE, MPI_WIN_LOCK_ALL, MPI_WIN_UNLOCK_ALL, MPI_WIN_SYNC, MPI_WIN_FLUSH, MPI_WIN_FLUSH_ALL, MPI_COMPARE_AND_SWAP, and MPI_FETCH_AND_OP. Also includes Java bindings for the Operations MPI_REPLACE and MPI_NO_OP.
Signed-off-by: Nathaniel Graham <ngraham@lanl.gov>
@ggouaillardet was right -- we should have put the
ompi_buffer_detach_f08() function in the use-mpi-f08 directory to
begin with. Putting it in the mpif-h directory made it complicated as
to whether the function would be built or not (e.g., whether weak
symbols were supported or not, whether the profiling layer was
disabled or not, ...etc.).
Just put it in the use-mpi-f08 directory and always build it (when the
mpi_f08 module is built, of course), and keep it simple.
Since there is no profiling version of the f08 buffer_detach function
(or, more specifically, the Fortran compiler does the name mangling of
MPI and PMPI to the back-end C function for us), ensure that it is
only compiled once.
Also, per Gilles' observation, the f08-related #pragmas are no longer
relevant.
Add an mpi_f08-specific implementation for MPI_BUFFER_DETACH.
Per MPI-3.1:3.6, p45, the buffer argument is ignored in
MPI_BUFFER_DETACH for mpif.h and the mpi module. But in the mpi_f08
module, the buffer argument is treated like it is in the C binding.
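For reference, the C behavior that the mpi_f08 binding now mirrors (a small illustrative program, not part of the commit):

```c
#include <mpi.h>
#include <stdlib.h>
#include <assert.h>

int main(int argc, char **argv)
{
    int size;
    void *detached;
    char *buf;

    MPI_Init(&argc, &argv);

    buf = malloc(1024 + MPI_BSEND_OVERHEAD);
    MPI_Buffer_attach(buf, 1024 + MPI_BSEND_OVERHEAD);

    /* In C (and now in mpi_f08), detach returns the address and size of the
     * buffer that was attached; in mpif.h and the mpi module the buffer
     * argument is ignored (MPI-3.1:3.6, p45). */
    MPI_Buffer_detach(&detached, &size);
    assert(detached == (void *) buf);

    free(buf);
    MPI_Finalize();
    return 0;
}
```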
This commit does two things. It removes checks for C99 required
headers (stdlib.h, string.h, signal.h, etc). Additionally it removes
definitions for required C99 types (intptr_t, int64_t, int32_t, etc).
Signed-off-by: Nathan Hjelm <hjelmn@me.com>
The definition of MPI_T_pvar_get_index was incorrect. This commit
fixes the definition and adds a missing return code.
Signed-off-by: Nathan Hjelm <hjelmn@me.com>
CID 71734 Self assignment (NO_EFFECT)
This code has no effect. The original author of the offending code
does not remember why the self-assignment is there. Fortran
MPI_Win_get_attr tests are working with or without it so remove the
code.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit adds support for MPI_Aint_add and MPI_Aint_diff. These
functions are implemented as macros in C (explicitly allowed by
MPI-3.1). The fortran implementations are a similar mess to the
MPI_Wtime implementations.
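For illustration, typical usage of the two new functions (a trivial example, not from the commit):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int array[16];
    MPI_Aint base, tenth, diff;

    MPI_Init(&argc, &argv);

    MPI_Get_address(array, &base);
    tenth = MPI_Aint_add(base, 10 * sizeof(int));  /* address of array[10] */
    diff  = MPI_Aint_diff(tenth, base);            /* 10 * sizeof(int) */
    printf("diff = %ld\n", (long) diff);

    MPI_Finalize();
    return 0;
}
```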
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
CID 1047284 Uninitialized scalar variable (UNINIT)
CID 1047285 Uninitialized scalar variable (UNINIT)
CID 1047286 Uninitialized scalar variable (UNINIT)
If a performance variable session has no handles we should be returning MPI_SUCCESS
for MPI_T_pvar_start, MPI_T_pvar_stop, and MPI_T_pvar_reset. The code was returning
an unitialized value. This commit also updates the error code to return the proper
error on failure.
Signed-off-by: Nathan Hjelm <hjelmn@me.com>
Talked to @ggouaillardet about this code. It was not intended to be committed to
master. Removing to fix coverity issue.
CID 1270134 Unchecked return value
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit also adds protection against negative error codes in ompi
error code functions. There is one outstanding issue. There is a
negative MPI error code defined in mpi.h. This will need to be fixed
separately.
This commit fixes coverity IDs 1271533 and 1270156.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
The sargs array and its elements were malloced but not freed. Note
that strings passed to NewStringUTF are copied into Java's heap and it
is the caller's responsibility to free the original string.
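A sketch of the ownership rule (a hypothetical helper, not the actual binding code):

```c
#include <jni.h>
#include <stdlib.h>
#include <string.h>

/* NewStringUTF copies the bytes into the JVM heap, so the C string must
 * still be freed by the caller; the array holding such strings likewise. */
static jstring make_jstring(JNIEnv *env, const char *src)
{
    char *copy = strdup(src);               /* stands in for the malloc'ed sargs element */
    jstring result = (*env)->NewStringUTF(env, copy);
    free(copy);                             /* safe: the jstring owns its own copy */
    return result;
}
```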
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
This commit fixes several valgrind errors. Included:
- installdirs did not correctly reinitialize all pointers to NULL
at close. This causes valgrind errors on a subsequent call to
opal_init_tool.
- several opal strings were leaked by opal_deregister_params which
was setting them to NULL instead of letting them be freed by the
MCA variable system.
- move opal_net_init to AFTER the variable system is initialized and
opal's MCA variables have been registered. opal_net_init uses a
variable registered by opal_register_params!
- do not leak ompi_mpi_main_thread when it is allocated by
MPI_T_init_thread.
- do not overwrite ompi_mpi_main_thread if it is already set (by
MPI_T_init_thread).
- mca_base_var: read_files was overwriting mca_base_var_file_list
even if it was non-NULL.
- mca_base_var: set all file global variables to initial states on
finalize.
- btl/vader: decrement enumerator reference count to ensure that it
is freed.
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
There were a number of bugs in the framework/variable code that
affected deregistration:
- Frameworks could be erroneously closed if separately registered and
opened then subsequently closed. This was a bug in the original
design which only reference counted opens but not
registrations. This would cause undefined behavior if
MPI_T_finalize actually calls ompi_info_close_components as
intended. Now both registrations and opens are reference counted
and frameworks/components are not torn down until the matching
number of close calls have been made.
- group_find_by_name did not pass the invalidok flags down
to mca_base_var_group_get_internal correctly.
- Group deregistration caused the group to be completely reset. This
does not match the behavior required by MPI_T as it could reduce
the number of variables/subgroups in a group.
This commit also updates MPI_T_finalize to call
ompi_info_close_components as originally intended.
Closes #374
Signed-off-by: Nathan Hjelm <hjelmn@lanl.gov>
Remove some outdated discussion of "color" -- looks like this was a
copy-n-paste from the MPI_Comm_split man page. Also make some minor
updates to some Open MPI-specific key text.
Thanks to @eschnett for raising the issue.
Fixes #437.
Please verify your components have been updated correctly. Keep in
mind that in terms of threading:
OPAL_FREE_LIST_GET -> opal_free_list_get_st
OPAL_FREE_LIST_RETURN -> opal_free_list_return_st
I used the opal_using_threads() variant anytime it appeared that multiple
threads could be operating on the free list. If this is not the case,
update to _st. If multiple threads are always in use, change to _mt.
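A hedged sketch of the three variants referred to above (usage pattern assumed from the names in this note; check opal/class/opal_free_list.h for the exact prototypes):

```c
#include "opal/class/opal_free_list.h"

static void example(opal_free_list_t *list)
{
    opal_free_list_item_t *item;

    item = opal_free_list_get_st(list);   /* caller guarantees single-threaded use */
    opal_free_list_return_st(list, item);

    item = opal_free_list_get_mt(list);   /* always thread-safe */
    opal_free_list_return_mt(list, item);

    item = opal_free_list_get(list);      /* checks opal_using_threads() at run time */
    opal_free_list_return(list, item);
}
```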