openmpi/ompi/runtime
Brian Barrett 8b778903d8 Fix longstanding issue with our multi-project support. Rather than using
pkg{data,lib,include}dir, use our own ompi{data,lib,include}dir, which is
always set to {datadir,libdir,includedir}/openmpi.  This keeps help files from
landing in prefix/share/open-rte when building without Open MPI but in
prefix/share/openmpi when building with Open MPI; they now always install under prefix/share/openmpi.

This commit was SVN r30140.
2014-01-07 22:11:15 +00:00
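In Automake terms, the change described above amounts to something like the following minimal sketch; the variable placement and the use of help-mpi-runtime.txt as the installed file are illustrative assumptions, not the actual Open MPI configury.

```
# Minimal Makefile.am-style sketch of the directory change described above;
# exact placement in the Open MPI build glue is assumed, not taken from the commit.
# Project-specific install directories replace Automake's pkg*dir variables,
# whose value follows the package name and therefore differs between ORTE-only
# and full Open MPI builds.
ompidatadir    = $(datadir)/openmpi
ompilibdir     = $(libdir)/openmpi
ompiincludedir = $(includedir)/openmpi

# With a custom *dir variable defined, ordinary Automake primaries can target
# it, so a help file always installs under $(prefix)/share/openmpi.
dist_ompidata_DATA = help-mpi-runtime.txt
```

Because the ompi* directories are fixed to the openmpi subdirectory rather than tracking the Automake package name, the same fragments behave identically whether or not the Open MPI layer is part of the build.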
help-mpi-runtime.txt Replace tabs with spaces, fix some error messages. 2013-03-01 19:13:06 +00:00
Makefile.am Fix longstanding issue with our multi-project support. Rather than using 2014-01-07 22:11:15 +00:00
mpiruntime.h Per RFC, add initial support for the MPI 3.0 tools interface. 2013-04-24 15:59:23 +00:00
ompi_cr.c MCA/base: Add new MCA variable system 2013-03-27 21:09:41 +00:00
ompi_cr.h Move the RTE framework change into the trunk. With this change, all non-CR 2013-01-27 23:25:10 +00:00
ompi_info_support.c tools: Add oshmem_info utility 2013-10-12 19:03:32 +00:00
ompi_info_support.h tools: Add oshmem_info utility 2013-10-12 19:03:32 +00:00
ompi_module_exchange.c Revert r29917 and replace it with a fix that resolves the thread deadlock while retaining the desired debug info. In an earlier commit, we had changed the modex accordingly: 2013-12-17 03:26:00 +00:00
ompi_module_exchange.h Move the RTE framework change into the trunk. With this change, all non-CR 2013-01-27 23:25:10 +00:00
ompi_mpi_abort.c Per this email thread: 2013-12-18 17:57:37 +00:00
ompi_mpi_finalize.c Fix minor MPI thread memory leak / fix valgrind still-reachable warning. 2013-12-24 11:05:51 +00:00
ompi_mpi_init.c Due to MPI_Comm_idup we can no longer use the communicator's CID as 2013-10-03 01:11:28 +00:00
ompi_mpi_params.c Fix so we do not get warnings when running on a system without CUDA software installed but with CUDA-aware support compiled in. 2013-12-20 20:39:25 +00:00
ompi_mpi_preconnect.c MCA/base: Add new MCA variable system 2013-03-27 21:09:41 +00:00
params.h Since the calls to "PMI get" scale by number of procs (not nodes), it makes more sense to have the MCA param be the cutoff based on number of procs. Also, it occurred to me that this shouldn't impact the nidmap process as that is built and circulated when we launch via mpirun, not during direct launch. 2013-08-22 03:40:26 +00:00