allocation logic is completely done outside the data-type engine (in the PML), there is
no need for any special case inside the data-type engine. There are fewer arguments for
ompi_convertor_pack and ompi_convertor_unpack as well (the last field, free_after, is
no longer required, as there is no memory allocated in the engine itself). This change
affects all components using datatypes. I tested most of them, but it might happen that I
missed some ... If that's the case please let me know (don't shoot the pianist!!).
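For reference, the pack call now has roughly this shape (a hedged sketch with
illustrative parameter names; the authoritative prototype lives in the datatype
headers, and ompi_convertor_unpack mirrors it):

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/uio.h>            /* struct iovec */

    struct ompi_convertor_t;        /* opaque here; defined by the engine */

    /* The former trailing "free_after" output argument is gone: the engine
     * never allocates memory on the caller's behalf anymore, so there is
     * nothing for the caller to conditionally free. */
    int32_t ompi_convertor_pack(struct ompi_convertor_t *convertor,
                                struct iovec *iov,
                                uint32_t *iov_count,
                                size_t *max_data);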
This commit was SVN r12331.
the default decision functions (for broadcast, reduce and barrier) are based on a
high-performance network (not TCP). They should give good performance (really good) for
any network having the following characteristics: low latency (5 microseconds) and good
bandwidth (more than 1 Gb/s).
+ Cleanup of the reduce algorithms, plus 2 new algorithms (binary and binomial). Now most
of the reduce algorithms use a generic tree-based function for completing the reduce.
+ Added macros for computing the trees (they are used for bcast and reduce right now);
a sketch of the idea follows this list.
+ Allow the use of all 5 topologies.
+ Jelena's implementation of a binary tree that can be used for non-commutative operations.
Right now only the tree-building function is there; it will get activated soon.
+ Some other minor cleanups.
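As a rough illustration of what such a tree computation looks like, here is a
minimal, self-contained sketch of a binomial tree, assuming ranks are renumbered
relative to the root; the actual macros differ in naming and detail:

    #include <stdio.h>

    /* Compute the binomial-tree parent and children of vrank, where
     * vrank = (rank - root + size) % size renumbers ranks so the root is 0.
     * Illustrative only, not the real tree-building macros. */
    static void binomial_tree(int vrank, int size,
                              int *parent, int children[], int *nchildren)
    {
        int lsb = vrank & -vrank;               /* lowest set bit; 0 for the root */
        *parent = (0 == vrank) ? -1 : (vrank ^ lsb);
        *nchildren = 0;
        for (int mask = 1; mask < size && (0 == vrank || mask < lsb); mask <<= 1) {
            if ((vrank | mask) < size)          /* skip children past the last rank */
                children[(*nchildren)++] = vrank | mask;
        }
    }

    int main(void)
    {
        int parent, children[32], n;
        for (int v = 0; v < 8; ++v) {
            binomial_tree(v, 8, &parent, children, &n);
            printf("vrank %d: parent %d, %d child(ren)\n", v, parent, n);
        }
        return 0;
    }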
This commit was SVN r12326.
Give a more intelligible error message when someone passes -nolocal and the only available node is the local node.
This commit was SVN r12325.
The following Trac tickets were found above:
Ticket 487 --> https://svn.open-mpi.org/trac/ompi/ticket/487
A segfault would occur in mca_pml_ob1_recv_request_progress() when trying to prepare the convertor for unpacking, because the request's req_proc field was NULL.
It turns out we weren't setting the req_proc field in the MCA_PML_OB1_CHECK_SPECIFIC_AND_WILD_RECEIVES_FOR_MATCH macro. Rather than just setting it there, I removed the other place where req_proc was being set and instead took care of all the cases at once in mca_pml_ob1_recv_frag_match().
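In spirit, the consolidation looks like this (stand-in types and names, not the
actual ob1 code):

    /* Every matched fragment, wild or specific, funnels through the single
     * match function, so setting the field once here guarantees it is never
     * NULL when the convertor is later prepared for unpacking. */
    struct proc;
    typedef struct { struct proc *req_proc; } toy_recv_request_t;

    static void toy_recv_frag_match(toy_recv_request_t *match, struct proc *proc)
    {
        match->req_proc = proc;
    }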
This commit was SVN r12323.
parameter. As an optimisation, only this BTL is used to send packets
through, instead of trying to send packets through all BTLs. But the
code was actually wrong: it simply used the provided bml_btl, which may represent a
different endpoint from the packet's destination. The fixed code checks whether the
packet's destination is reachable through the BTL, finds the appropriate bml_btl, and
only then tries to send through the correct bml_btl.
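Schematically, the corrected lookup does something like this (a self-contained
sketch with stand-in types; the real bml/ob1 structures differ):

    #include <stddef.h>

    typedef struct { int id; } btl_t;
    typedef struct { btl_t *btl; } bml_btl_t;
    typedef struct { bml_btl_t *btls; size_t num_btls; } endpoint_t;

    /* Return the endpoint's own bml_btl for this BTL, or NULL if the
     * destination is not reachable through it, instead of blindly
     * reusing a caller-supplied bml_btl for a different endpoint. */
    static bml_btl_t *find_bml_btl(endpoint_t *ep, btl_t *btl)
    {
        for (size_t i = 0; i < ep->num_btls; ++i) {
            if (ep->btls[i].btl == btl)
                return &ep->btls[i];
        }
        return NULL;    /* caller must pick another BTL */
    }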
This commit was SVN r12319.
is done to ensure alignment so that strictly aligned CPUs (like SPARC) do not
SIGBUS. This may benefit other platforms too.
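The general trick, for illustration (a minimal sketch, not the exact code
touched here):

    #include <stddef.h>

    /* Round offset up to the next multiple of align (a power of two) before
     * placing a typed value, so strict-alignment CPUs such as SPARC never
     * perform a misaligned load and SIGBUS. */
    static size_t align_up(size_t offset, size_t align)
    {
        return (offset + align - 1) & ~(align - 1);
    }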
This commit fixes trac:494.
This commit was SVN r12312.
The following Trac tickets were found above:
Ticket 494 --> https://svn.open-mpi.org/trac/ompi/ticket/494
If you want to look at our launch and MPI process startup times, you can do so with two MCA params:
OMPI_MCA_orte_timing: set it to anything non-zero and you will get the launch time for different steps in the job launch procedure. The degree of detail depends on the launch environment. rsh will provide you with the average, min, and max launch time for the daemons. SLURM block-launches the daemons, so you only get the time to launch the daemons and the total time to launch the job. Ditto for bproc. TM looks more like rsh. Only those four environments are currently supported - anyone interested in extending this capability to other environments is welcome to do so. In all cases, you also get the time to set up the job for launch.
OMPI_MCA_ompi_timing: set it to anything non-zero and you will get the time for mpi_init to reach the compound registry command, the time to execute that command, the time to go from our stage1 barrier to the stage2 barrier, and the time to go from the stage2 barrier to the end of mpi_init. This will be output for each process, so you'll have to compile any statistics on your own. Note: if someone develops a nice parser to do so, it would be really appreciated if you could/would share!
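For example (a hedged usage sketch; ./my_app and the process count are placeholders):

    export OMPI_MCA_orte_timing=1    # launch-side timings
    export OMPI_MCA_ompi_timing=1    # per-process mpi_init timings
    mpirun -np 4 ./my_app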
This commit was SVN r12302.
packing a sockaddr_in, as there are endianness and padding issues
with sending a sockaddr_in over the wire. Note that sin_port and sin_addr are
already in network byte order, which is why we pack them as a byte
string.
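A minimal sketch of the idea (an illustrative helper, not the actual packing call):

    #include <netinet/in.h>
    #include <string.h>

    /* Pack only the fields that matter, as raw bytes.  sin_port and sin_addr
     * are already in network byte order, so no htons/htonl is needed; the
     * struct's padding and field layout are never sent. */
    static void pack_sockaddr_in(const struct sockaddr_in *sa,
                                 unsigned char out[6])
    {
        memcpy(out,     &sa->sin_port, sizeof(sa->sin_port));  /* 2 bytes */
        memcpy(out + 2, &sa->sin_addr, sizeof(sa->sin_addr));  /* 4 bytes */
    }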
Refs trac:493
This commit was SVN r12301.
The following Trac tickets were found above:
Ticket 493 --> https://svn.open-mpi.org/trac/ompi/ticket/493
"this is bogus" kind of answer. Passing in bad error codes should
only happen in erroneous sections of the OMPI code base, but still,
it's far more social to print a message saying, "hey, you messed up!"
rather than seg faulting.
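The pattern, sketched with stand-in names (not the actual OMPI error-string API):

    #include <stdio.h>

    static const char *err_table[] = { "success", "out of resource", "bad param" };
    #define NUM_ERR_CODES ((int)(sizeof(err_table) / sizeof(err_table[0])))

    /* Validate the code before indexing; complain instead of segfaulting. */
    static const char *err_string(int errcode)
    {
        if (errcode < 0 || errcode >= NUM_ERR_CODES) {
            fprintf(stderr, "hey, you messed up: bogus error code %d\n", errcode);
            return "unknown error";
        }
        return err_table[errcode];
    }

    int main(void)
    {
        puts(err_string(-42));   /* erroneous caller: a message, not a crash */
        return 0;
    }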
Reviewed by Edgar.
This commit was SVN r12295.
mentioned in the comment, the completion/callback of the triggered
send operation can happen before the call returns. If this happens,
if the pipeline depth is 0 before we triggered the send operation, and
this is the last send operation of the request, then the completion detection
code will decrement the pipeline depth and check it for equality to 0.
Because (0 - 1) != 0, the pml completion function for this request will
*not* be called.
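A toy model of the race (hypothetical names; the real code lives in the ob1 pml):

    #include <stdio.h>

    /* The transport may run the completion callback before send_op()
     * returns (synchronous completion, the worst case). */
    static int pipeline_depth = 0;

    static void completion_cb(void)
    {
        if (0 == --pipeline_depth)
            printf("request complete\n");
    }

    static void send_op(void)
    {
        completion_cb();         /* completes before the caller resumes */
    }

    int main(void)
    {
        /* Count the operation *before* triggering it.  Incrementing after
         * send_op() would let the callback compute 0 - 1 = -1, and since
         * -1 != 0 the completion would never be detected. */
        pipeline_depth++;
        send_op();
        return 0;
    }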
This is part 2 of the fix for ticket #246.
This commit was SVN r12292.
renamed the register fields in the thread state structures. Support compiling
with either the old or new names, keying off the UNIX03 define (which is what
the 10.5 headers do).
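Schematically (a self-contained toy, not Apple's actual headers; srr0 is the
classic PPC program-counter field, used here purely for illustration):

    /* Pick the field spelling based on the UNIX03 define, the same knob
     * the 10.5 headers key off. */
    #ifdef UNIX03
    typedef struct { unsigned long __srr0; } toy_thread_state_t;
    #define TOY_PC(ss) ((ss).__srr0)   /* new, underscore-prefixed name */
    #else
    typedef struct { unsigned long srr0; } toy_thread_state_t;
    #define TOY_PC(ss) ((ss).srr0)     /* pre-10.5 name */
    #endif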
Refs trac:450
This commit was SVN r12285.
The following Trac tickets were found above:
Ticket 450 --> https://svn.open-mpi.org/trac/ompi/ticket/450
Only close off stdout/stderr from the daemons if we are not debugging the slurm pls and --debug-daemons was not passed.
This commit was SVN r12276.
The following Trac tickets were found above:
Ticket 352 --> https://svn.open-mpi.org/trac/ompi/ticket/352
Get the ordering right so that a singleton can start.
Protect the rmgr copy app_context function from NULL fields.
Tell the mapper it is okay for there not to be a pre-existing mapping plan for a parent when dynamically spawning processes.
This commit was SVN r12257.
but reset everything else. Once initialized, the condition (and the attached
mutex) should be kept alive as long as possible if we want to be able to
retrieve all the information.
This commit was SVN r12253.
Fix the persistent daemon problem where it was exiting when a job completed. The problem was that the persistent daemon would order the job daemons to exit. They would then send an 'ack' back to the persistent daemon - but the ack consisted of an echo of the "exit" command, which was recv'd by the wrong listener, who treated it as a properly sent cmd... and exited.
This commit was SVN r12243.