- remove windows socket initialization (it's already in the TCP component)
- protect all used header files (see the sketch below)
- remove the unused ones.
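A minimal illustration of what "protecting" an include means here, assuming
the usual autoconf-generated HAVE_* macros; the exact set of headers guarded
in the TCP code may differ:

    #ifdef HAVE_UNISTD_H
    #include <unistd.h>
    #endif
    #ifdef HAVE_SYS_SOCKET_H
    #include <sys/socket.h>
    #endif
    #ifdef HAVE_NETINET_IN_H
    #include <netinet/in.h>
    #endif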
This commit was SVN r8434.
empty for non-threaded builds, but somehow, just by moving the code around a
little and removing two calls to lock/unlock, the latency for TCP
went down by 2 microseconds ...
This commit was SVN r8426.
number of recv system calls by caching the data. Each endpoint has a buffer
(the size is an MCA parameter) that can be used as a cache. Before each receive
operation this buffer is added at the end of the iovec list. Any data that is
not expected by the fragment goes into this cache. If the cache contains data,
all subsequent receives just memcpy the data into the BTL buffers.
The only drawback is that we will spin in the receive handler until all the
cached data has been read by the PML layer. This limitation comes from the fact
that the event library is unable to call us if there are no events on the
socket. Therefore we are unable to keep the data in the cache until the next
pass through the progress engine.
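A rough sketch of the caching idea, assuming a simplified endpoint structure;
the names (struct endpoint, recv_with_cache, CACHE_SIZE) are illustrative
stand-ins, not the actual BTL symbols:

    #include <string.h>
    #include <sys/uio.h>

    #define CACHE_SIZE 4096                /* stand-in for the MCA parameter */

    struct endpoint {
        int    sd;                         /* connected socket */
        char   cache[CACHE_SIZE];          /* per-endpoint receive cache */
        size_t cache_len;                  /* bytes currently in the cache */
    };

    /* Receive into the fragment's iovecs; surplus bytes land in the cache. */
    static ssize_t recv_with_cache(struct endpoint *ep,
                                   struct iovec *frag_iov, int frag_cnt,
                                   size_t frag_bytes)
    {
        if (ep->cache_len > 0) {
            /* Cached data available: no system call, just memcpy it out. */
            size_t copied = 0;
            for (int i = 0; i < frag_cnt && copied < ep->cache_len; i++) {
                size_t n = frag_iov[i].iov_len;
                if (n > ep->cache_len - copied) n = ep->cache_len - copied;
                memcpy(frag_iov[i].iov_base, ep->cache + copied, n);
                copied += n;
            }
            memmove(ep->cache, ep->cache + copied, ep->cache_len - copied);
            ep->cache_len -= copied;
            return (ssize_t)copied;
        }

        /* No cached data: append the cache buffer at the end of the iovec
         * list so a single readv() can also pull in unexpected data. */
        struct iovec iov[frag_cnt + 1];
        memcpy(iov, frag_iov, frag_cnt * sizeof(struct iovec));
        iov[frag_cnt].iov_base = ep->cache;
        iov[frag_cnt].iov_len  = CACHE_SIZE;

        ssize_t got = readv(ep->sd, iov, frag_cnt + 1);
        if (got > (ssize_t)frag_bytes) {
            ep->cache_len = (size_t)got - frag_bytes;  /* surplus stays cached */
            got = (ssize_t)frag_bytes;
        }
        return got;
    }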
This commit was SVN r8398.
the script did not complete successfully
- ROMIO likes to default to using the system compiler if no compiler is
specified. This can lead to using a different compiler for ROMIO than
for Open MPI, which is not always a good thing. Reset the default to
behave the same way Open MPI does.
This commit was SVN r8380.
- when processing the out-of-order list, reset match to NULL on each iteration
- check the matched request type and, if it is a probe, complete the probe and
queue the fragment on the unexpected list (see the sketch below)
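A hedged sketch of the control flow these two fixes imply; every type and
helper here (frag, req, try_match(), complete_probe(), and so on) is an
illustrative stub, not a real PML symbol:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct frag { struct frag *next; } frag;
    typedef struct req  { bool is_probe; } req;

    /* Stubs standing in for the real list/matching/delivery routines. */
    static frag *next_frag(frag **list) { frag *f = *list; if (f) *list = f->next; return f; }
    static req  *try_match(frag *f)         { (void)f; return NULL; }
    static void  complete_probe(req *r)     { (void)r; }
    static void  append_unexpected(frag *f) { (void)f; }
    static void  deliver(req *r, frag *f)   { (void)r; (void)f; }

    static void process_out_of_order(frag **list)
    {
        frag *f;
        while (NULL != (f = next_frag(list))) {
            req *match = NULL;               /* reset on every iteration */
            match = try_match(f);
            if (NULL == match) {
                continue;                    /* still out of order */
            }
            if (match->is_probe) {
                complete_probe(match);       /* finish the probe ...      */
                append_unexpected(f);        /* ... but keep the fragment */
            } else {
                deliver(match, f);
            }
        }
    }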
This commit was SVN r8339.
the page addresses three situations:
1. if the array of requests are all REQUEST_NULL, set flag=true and
index=MPI_UNDEFINED
2. if the array of requests are not all REQUEST_NULL, if any of them
finished, set flag=true and index=whatever_the_index_was
3. if the array of requests are not all REQUEST_NULL, if none of them
finished, set flag=false and index=MPI_UNDEFINED
With regard to the index value, we are currently doing 1 and 2, but
not 3. More specifically, index should be set to MPI_UNDEFINED if no
requests are found to have completed (regardless of whether they are
REQUEST_NULL or not).
This patch fixes this problem.
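A small usage sketch of case 3, assuming the call in question is MPI_Testany
(which returns the flag/index pair described above): when nothing has
completed, flag must be 0 and index must come back as MPI_UNDEFINED.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int buf[2] = { 0, 0 };
        int flag = 1, index = 0;
        MPI_Request reqs[2];

        /* Post two receives that nothing in this sketch will satisfy. */
        MPI_Irecv(&buf[0], 1, MPI_INT, MPI_ANY_SOURCE, 42, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&buf[1], 1, MPI_INT, MPI_ANY_SOURCE, 43, MPI_COMM_WORLD, &reqs[1]);

        /* Case 3: no request has completed, so flag == 0 and
         * index == MPI_UNDEFINED. */
        MPI_Testany(2, reqs, &index, &flag, MPI_STATUS_IGNORE);
        if (0 == flag && MPI_UNDEFINED == index) {
            printf("nothing completed: index == MPI_UNDEFINED\n");
        }

        /* Cancel and release the still-pending requests. */
        for (int i = 0; i < 2; i++) {
            MPI_Cancel(&reqs[i]);
            MPI_Wait(&reqs[i], MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }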
This commit was SVN r8319.
functionality. This is a global registration, so all components have to be made
aware of the registration -- therefore, do things at the component level, not the
module level.
This commit was SVN r8314.
case of:
sizeof(MPI_Flogical) != sizeof(int)
and
Fortran value of .TRUE. != 1
as is often the case.
- Check in configure the value of .TRUE. and the C type corresponding to
  LOGICAL, and check that the Fortran compiler does not do anything strange
  with arrays of logicals
- Convert all occurrences of logicals in the Fortran wrappers, but only
  when it is needed (see the sketch after this list).
*Please note* Implementation of MPI_Cart_sub needed special treatment.
- Output these values in ompi_info -a
- Clean up prototypes_mpi.h to have just a single definition, thereby
  removing the need for prototypes_pmpi.h
- configured, compiled and tested with an F90 program that uses
  MPI_Cart_create and MPI_Cart_get:
linux ia32, gcc (no testing, as no f90)
linux ia32, gcc --disable-mpi-f77 --disable-mpi-f90 (had a bug there)
linux ia32, icc-8.1
linux opteron, gcc-3.3.5, pgcc, pathccx/pathf90 (tested just
pgi-compiler)
linux em64t, gcc, icc-8.1 (tested just icc)
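A minimal sketch of the conversion step mentioned above. The type
ompi_fortran_logical_t and the macro OMPI_FORTRAN_VALUE_TRUE are placeholders
for whatever configure actually detects; the real wrappers may do this
in place rather than through a helper like this one.

    #include <stdlib.h>

    typedef int ompi_fortran_logical_t;   /* placeholder for the detected C type */
    #define OMPI_FORTRAN_VALUE_TRUE 1     /* placeholder: some compilers use -1  */

    /* Translate an array of Fortran LOGICALs into C ints (0/1) before
     * handing it to the C MPI routine. */
    static int *fortran_logicals_to_ints(const ompi_fortran_logical_t *src, int n)
    {
        int *dst = malloc((size_t)n * sizeof(int));
        if (NULL == dst) return NULL;
        for (int i = 0; i < n; i++) {
            dst[i] = (OMPI_FORTRAN_VALUE_TRUE == src[i]) ? 1 : 0;
        }
        return dst;                        /* caller frees after the MPI call */
    }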
This commit was SVN r8254.
increase, the previous connection code was broken. It can take as much as 60 seconds to connect
64 processes. Now we do not create the connections when we add the procs but only when we send
them the first message (see the sketch below). Now it takes only 1.6 seconds to set up a 64-process MPI job over MX (doing a two-step barrier in order to ensure that we create all the connections).
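A bare-bones sketch of the connect-on-first-send idea; endpoint_t,
endpoint_connect() and do_send() are illustrative stubs, not the actual
MX BTL routines:

    #include <stddef.h>

    typedef enum { EP_CLOSED, EP_CONNECTED } ep_state_t;

    typedef struct {
        ep_state_t state;                  /* nothing connected in add_procs() */
        /* ... peer address, MX endpoint handle, ... */
    } endpoint_t;

    /* Stubs standing in for the real connection and send code. */
    static int endpoint_connect(endpoint_t *ep) { (void)ep; return 0; }
    static int do_send(endpoint_t *ep, const void *buf, size_t len)
    { (void)ep; (void)buf; (void)len; return 0; }

    static int send_to_peer(endpoint_t *ep, const void *buf, size_t len)
    {
        if (EP_CLOSED == ep->state) {
            /* First message to this peer: only now establish the connection. */
            if (0 != endpoint_connect(ep)) {
                return -1;
            }
            ep->state = EP_CONNECTED;
        }
        return do_send(ep, buf, len);
    }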
This commit was SVN r8252.
displacement (for both the inter- and intra-communicator versions). The
displacements in scatterv are given in multiples of the sendtype.
This fix should probably make it to v1.0.1 as well?
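A small usage example of that rule: the displs[] passed to MPI_Scatterv are
element offsets in units of sendtype, not byte offsets.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *sendbuf = NULL;
        int *counts = NULL, *displs = NULL;
        if (0 == rank) {
            sendbuf = malloc(size * sizeof(double));
            counts  = malloc(size * sizeof(int));
            displs  = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++) {
                sendbuf[i] = (double)i;
                counts[i]  = 1;
                displs[i]  = i;   /* element offset, NOT i * sizeof(double) */
            }
        }

        double recv = 0.0;
        MPI_Scatterv(sendbuf, counts, displs, MPI_DOUBLE,
                     &recv, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        printf("rank %d received %g\n", rank, recv);

        if (0 == rank) { free(sendbuf); free(counts); free(displs); }
        MPI_Finalize();
        return 0;
    }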
This commit was SVN r8251.