1. no -np provided - put one proc/node across all allocated nodes
2. -np N provided, N > #nodes - we print a pretty error message and exit
3. -np N provided, N <= #nodes - put one proc/node across N nodes
I also added a new orte constant (ORTE_ERR_SILENT) that allows us to pass up the chain that an error was encountered, but NOT print ORTE_ERROR_LOG messages. This is intended for cases where the error we encounter is NOT an orte error, but rather one associated with incorrect user input (e.g., case 2 above). In such cases, there is no point in printing an ORTE_ERROR_LOG chain of messages, since it isn't an orte error.
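A minimal sketch of how ORTE_ERR_SILENT is meant to be used; the mapper function, message text, and constant values below are placeholders, not the actual orte source:

    #include <stdio.h>

    /* Placeholder codes for this sketch; the real values come from orte's
     * constants header (ORTE_SUCCESS and the new ORTE_ERR_SILENT). */
    #define ORTE_SUCCESS      0
    #define ORTE_ERR_SILENT (-100)

    /* Hypothetical mapper: on a user-input error (case 2 above) it prints its
     * own friendly message and returns ORTE_ERR_SILENT so callers up the chain
     * skip the usual ORTE_ERROR_LOG output. */
    static int map_one_proc_per_node(int np, int num_nodes)
    {
        if (np > num_nodes) {
            fprintf(stderr,
                    "Requested %d processes, but only %d nodes are allocated\n",
                    np, num_nodes);
            return ORTE_ERR_SILENT;  /* not an orte error: suppress the log chain */
        }
        /* cases 1 and 3: map one proc per node across np (or all) nodes ... */
        return ORTE_SUCCESS;
    }

    int main(void)
    {
        int rc = map_one_proc_per_node(8, 4);
        if (ORTE_SUCCESS != rc && ORTE_ERR_SILENT != rc) {
            /* only "real" orte errors would be reported via ORTE_ERROR_LOG here */
        }
        return 0;
    }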
This commit was SVN r12821.
I found only two places that were looking at the tokens:
1. the odls - we used the tokens to separately process the globals container data from everything else. In this case, I left the subscription that returned the globals data alone, but "stripped" the subscription that returned the launch data for the procs. These subscriptions have nothing to do with the xcast message.
2. the pml_base_modex - the callback function was getting process names from the returned tokens. Actually, this function was doing a very bad thing - it was assuming that the first token returned was *always* the process name. This is currently true, but is one of those assumptions that someone could easily have changed - and suddenly found the system inexplicably failing. I modified the function to (a) get the name sent back to us, (b) "strip" the value structures of tokens and segment strings, and (c) correctly obtain process names from the returned values. I also reindented the heck out of the code so it was legible (at least, to my old eyes).
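A sketch of the difference between the old and new lookups, using a hypothetical value type in place of the real registry return structures:

    /* Hypothetical registry value for illustration only; the real callback
     * works on the gpr's returned value structures. */
    typedef struct {
        char **tokens;     /* subscription tokens (may now be stripped away) */
        int    num_tokens;
        char  *proc_name;  /* the name explicitly sent back inside the value */
    } value_t;

    /* Fragile (old behavior): assume the process name is always token 0. */
    static const char *name_from_tokens(const value_t *val)
    {
        return val->tokens[0];  /* silently wrong if token order ever changes */
    }

    /* Robust (new behavior): use the name carried in the value itself, so the
     * callback still works when tokens and segment strings are stripped. */
    static const char *name_from_value(const value_t *val)
    {
        return val->proc_name;
    }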
This commit was SVN r12813.
This commit fixes several aspects regarding MPI conformance of requests.
* Eliminate the last argument of ompi_errhandler_request_invoke(); we
''always'' want to invoke the back-end exception handler with the
real error code.
* Make it clear in comments that we only invoke the ''first''
exception in a given array of requests, even if there's more than
one request with a non-MPI_SUCCESS value for MPI_ERROR.
* Defer the freeing of requests upon exception in the back-end
functions to MPI_WAIT* and MPI_TEST* until later; the requests are
kept so that we know what handler to invoke when we actually invoke
the exception. After figuring that out, ''then'' we free requests
with pending exceptions on them.
* Clean up return codes from the back-end MPI_TEST* and MPI_WAIT*
functions.
* Slightly modify ompi_errcode_get_mpi_code() to return unity if it
receives an MPI error code (vs. an OMPI error code).
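A rough sketch of the completion path these rules describe, using simplified stand-in types rather than the real ompi_request_t machinery:

    #include <stddef.h>

    /* Simplified stand-ins for this sketch only; not the real ompi types. */
    #define MPI_SUCCESS 0

    typedef struct {
        int   error;        /* the status MPI_ERROR value for this request */
        void *errhandler;   /* handler of the comm/win/file behind the request */
        int   pending_free; /* set when the request carries a pending exception */
    } request_t;

    static void invoke_errhandler(void *eh, int code) { (void)eh; (void)code; }
    static void free_request(request_t *r) { r->pending_free = 0; }

    /* Completion path for the MPI_WAIT and MPI_TEST families, as described
     * above: find the first failed request, invoke its handler with the real
     * error code, and only then free the requests with pending exceptions. */
    static int complete_all(request_t *reqs, size_t count)
    {
        int   first_error   = MPI_SUCCESS;
        void *first_handler = NULL;

        for (size_t i = 0; i < count; ++i) {
            if (MPI_SUCCESS != reqs[i].error) {
                first_error   = reqs[i].error;   /* only the first exception */
                first_handler = reqs[i].errhandler;
                break;
            }
        }

        if (MPI_SUCCESS != first_error) {
            invoke_errhandler(first_handler, first_error);
        }

        for (size_t i = 0; i < count; ++i) {   /* deferred until after invoke */
            if (reqs[i].pending_free) {
                free_request(&reqs[i]);
            }
        }
        return first_error;
    }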
This commit was SVN r12810.
The following Trac tickets were found above:
Ticket 659 --> https://svn.open-mpi.org/trac/ompi/ticket/659
usually is ok on little-endian systems, as the upper 32 bits will likely
be ignored, but on 32-bit big-endian systems, lval is complete junk.
Use ival in 32-bit mode, lval in 64-bit mode.
Mixing of 32- and 64-bit architectures won't work without more changes.
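A minimal sketch of the fix; only the ival/lval member names come from the code in question, while the union layout and the SIZEOF_VOID_P configure-style macro are assumptions:

    #include <stdint.h>

    union arg_value {
        int32_t ival;
        int64_t lval;
    };

    int64_t read_arg(const union arg_value *v)
    {
    #if SIZEOF_VOID_P == 8
        return v->lval;   /* 64-bit build: the 64-bit member holds the value */
    #else
        return v->ival;   /* 32-bit build: reading lval here would be junk on
                           * big-endian machines, so use the 32-bit member */
    #endif
    }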
This commit was SVN r12802.
* Do not add new procs to the global list during modex callback or
when sharing orte names during accept/connect. For modex, we
cache the modex info for later, in case that proc ever does get
added to the global proc list. For accept/connect orte name
exchange between the roots, we only need the orte name, so no
need to add a proc structure anyway. The procs will be added
to the global process list during the proc exchange later in
the wireup process.
* Rename proc_get_namebuf and proc_get_proclist to proc_pack
and proc_unpack and extend them to include all information
needed to build that proc struct on a remote node (which
includes ORTE name, architecture, and hostname). Change
unpack to call pml_add_procs for the entire list of new
procs at once, rather than one at a time.
* Remove ompi_proc_find_and_add from the public proc
interface and make it a private function. This function
would add a half-created proc to the global proc list, so
making it harder to call is a good thing.
This means that there are only two ways to add new procs to the global proc list at this time: during MPI_INIT, via the call to ompi_proc_init where my own job is added to the list, and via ompi_proc_unpack, using a buffer from a packed proc list sent to us by someone else. Currently, this is enough to implement MPI semantics. We can extend the interface more if we like, but that may require HNP communication to get the remote proc information, and I wanted to avoid that if at all possible.
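A rough sketch of the unpack side described above, with simplified stand-in types in place of ompi_proc_t, the orte buffer, and the real PML interface:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t jobid, vpid;  /* stand-in for the packed ORTE process name */
        uint32_t arch;         /* architecture flags for heterogeneous support */
        char    *hostname;
    } proc_t;

    static proc_t *unpack_one(const void *buf)     { (void)buf; return NULL; }
    static int pml_add_procs(proc_t **p, size_t n) { (void)p; (void)n; return 0; }

    /* Shape of proc_unpack: rebuild every proc first (name, arch, hostname),
     * then hand the whole list to the PML in a single add_procs call. */
    static int proc_unpack_sketch(const void *buf, size_t count,
                                  proc_t **out, size_t *nout)
    {
        for (size_t i = 0; i < count; ++i) {
            out[i] = unpack_one(buf);
        }
        *nout = count;
        return pml_add_procs(out, count);
    }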
Refs trac:564
This commit was SVN r12798.
The following Trac tickets were found above:
Ticket 564 --> https://svn.open-mpi.org/trac/ompi/ticket/564
* don't load data into a buffer until we actually have it, since
the data contains some header information needed to
load it properly
This commit was SVN r12792.
Obviously, people like bproc will have to get the app_num via another avenue...but that's a problem for another day. Several options are easily available.
This commit was SVN r12788.
* When using the load/unload interface, stash away the current buffer
type so that it can be properly unpacked on the receiving side if
the buffer type is other than the receiver default
* Include type information for unsized types (bool, int, size_t,
pid_t) so that they can be properly unpacked by the receiver
in the heterogeneous case.
* Restore the NON_DESC type as the default for optimized builds,
since it looks like this fixes the known issues with the
non-described buffers
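A minimal sketch of the idea behind the unsized-type handling; the wire-type names and tag values are invented for illustration (the real code records types through the dss type system, and heterogeneous support also handles byte order, which this sketch ignores):

    #include <stdint.h>
    #include <string.h>

    enum wire_type { WIRE_INT32 = 1, WIRE_INT64 = 2 };

    /* Pack a size_t: write a one-byte tag describing the sender's width first,
     * then the raw bytes, so a receiver with a different size_t (or a
     * different default buffer type) can still decode it correctly. */
    static size_t pack_size_t(uint8_t *out, size_t value)
    {
        out[0] = (sizeof(size_t) == 8) ? WIRE_INT64 : WIRE_INT32;
        memcpy(out + 1, &value, sizeof(value));
        return 1 + sizeof(value);
    }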
Refs trac:587
This commit was SVN r12784.
The following Trac tickets were found above:
Ticket 587 --> https://svn.open-mpi.org/trac/ompi/ticket/587
- we have to be able to attach a string to an error class, not just to an
error code
- according to MPI-2, the attribute MPI_LASTUSEDCODE has to be updated
every time you add a new code or a new class. Thus, you have to have a
single list for both.
Thus, we got rid of the error_class structure. In the error-code structure, we
can distinguish whether we are dealing with an error code or an error class by
looking at the err->code element of the structure: if its value is
MPI_UNDEFINED, the corresponding entry is a class; otherwise it is an error
code. All predefined error codes have the code and the class field set to the
same value.
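A small sketch of that check; the field names and the MPI_UNDEFINED value are placeholders standing in for ompi's real error-code object:

    #define MPI_UNDEFINED (-32766)   /* placeholder value for this sketch */

    typedef struct {
        int code;   /* MPI_UNDEFINED here marks the entry as an error class */
        int cls;    /* the error class this code belongs to */
    } errcode_t;

    static int is_error_class(const errcode_t *err)
    {
        return MPI_UNDEFINED == err->code;
    }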
The test MPI_Add_error_class1 passes now.
Fixes trac:418
This commit was SVN r12764.
The following Trac tickets were found above:
Ticket 418 --> https://svn.open-mpi.org/trac/ompi/ticket/418
r12714) for supporting compilers / architectures with different
padding rules.
This commit was SVN r12749.
The following SVN revisions from the original message are invalid or
inconsistent and therefore were not cross-referenced:
r12491
r12714
make this warning-proof, loop over the uint64_ts as an array of integers
and use %x. The final string is just as random and formatted exactly
the same, so we're all good in that department.
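A minimal sketch of the trick, assuming the value being formatted is an array of uint64_ts; the function name and the notion of a "key" are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Print uint64_ts by walking the same storage as 32-bit words and
     * formatting each with %x, which avoids 64-bit printf-format warnings. */
    static void print_hex_key(const uint64_t *key, size_t nwords)
    {
        const uint32_t *words = (const uint32_t *)key;
        for (size_t i = 0; i < 2 * nwords; ++i) {
            printf("%08x", (unsigned)words[i]);
        }
        printf("\n");
    }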
Refs trac:655
This commit was SVN r12742.
The following Trac tickets were found above:
Ticket 655 --> https://svn.open-mpi.org/trac/ompi/ticket/655
hits the buffer on the other side. For this kind of BTL we need to send the
FIN through the same BTL the PUT was performed with, so the network will
handle ordering for us. If we use another BTL, the receiver can get the FIN
before the data hits the buffer and complete the request prematurely. We mark
such problematic BTLs with the MCA_BTL_FLAGS_FAKE_RDMA flag (this kind of RDMA
is really fake, because the real thing guarantees that the sender sees
completion only after the receiver's NIC has confirmed that all the data was
received).
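A sketch of the FIN-routing rule this describes; only MCA_BTL_FLAGS_FAKE_RDMA comes from the real code, while the flag value and btl_t structure are placeholders:

    #define MCA_BTL_FLAGS_FAKE_RDMA 0x0040   /* placeholder bit for this sketch */

    typedef struct { unsigned flags; } btl_t;

    /* Choose which BTL carries the FIN for a completed PUT. */
    static btl_t *choose_fin_btl(btl_t *put_btl, btl_t *other_btl)
    {
        if (put_btl->flags & MCA_BTL_FLAGS_FAKE_RDMA) {
            /* Local completion does not guarantee the data has reached the
             * peer's buffer, so the FIN must follow the PUT's BTL and rely on
             * that BTL's in-order delivery. */
            return put_btl;
        }
        return other_btl;  /* real RDMA completion already orders FIN vs. data */
    }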
This commit was SVN r12732.
It calls mca_pml_ob1_send_fin_btl(), which may fail, without checking the return
code. This breaks all RDMA transports even when only one BTL is used. Revert
it for now; I am working on a real fix for the problem (I hope).
This commit was SVN r12731.
The following SVN revision numbers were found above:
r12720 --> open-mpi/ompi@3e3689320b
regression from v1.1 was reviewed and put into the v1.2 branch. So revert this
part of r12721.
This commit was SVN r12730.
The following SVN revision numbers were found above:
r12433 --> open-mpi/ompi@82f7c0dd69
r12721 --> open-mpi/ompi@3edd850d2e
Also, change the dss buffer type mca param to something more easily remembered (it is now "dss_buffer_type"). Heck, even I had to keep looking at the darn code to remember it.
This commit was SVN r12728.
1. implement and enable the non-described buffer operations. I will send out a more detailed explanation separately. However, this mode of operation (which is now the default) significantly reduces message size during startup. If you want the described buffers, set the mca param "-mca dss_describe_buffer 1".
2. revise the xcast system to support both linear and binomial tree broadcast methods. Since we are seeing scenarios where the binomial tree can cause problems, I have made the linear method the default. To run with the binomial tree, set the mca param "-mca oob_xcast_mode binomial".
3. add some detailed timing reports to the xcast operation. These are enabled via "-mca oob_xcast_timing 1".
4. add some more unit tests for the dss and gpr (focused on support for the non-described buffer)
This commit was SVN r12722.
protocol when multiple NICs are available between 2 peers. The fix forces
the FIN message to take exactly the same path as the fragment it describes
(i.e., same path means same BTL). Otherwise, the FIN can be received by
the peer before the RDMA completes and the request will get freed
too early.
This commit was SVN r12720.
- consistent error messages when something fails (via the BTL_ERROR macro)
- decrease the number of jumps.
- clean up some parts of the code.
This commit was SVN r12719.
* Always invoke the error handler on MPI_COMM_WORLD for
invalid windows (except in win_create, which should
instead be on the given communicator).
* Allow get_errhandler in addition to set_errhandler
on MPI_WIN_NULL
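A small user-level check of the second point, exercising just the calls this commit says are now accepted (standard MPI one-sided API; assumes a build with window support):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Errhandler eh;

        MPI_Init(&argc, &argv);

        MPI_Win_set_errhandler(MPI_WIN_NULL, MPI_ERRORS_RETURN);  /* already allowed */
        MPI_Win_get_errhandler(MPI_WIN_NULL, &eh);                /* now allowed too */

        MPI_Finalize();
        return 0;
    }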
Refs trac:647
This commit was SVN r12718.
The following Trac tickets were found above:
Ticket 647 --> https://svn.open-mpi.org/trac/ompi/ticket/647
The temporary solution is to switch into EV_NONBLOCK mode earlier (right after the mx_connect loop) so that there isn't a giant slowdown when processes enter the stage gate 2 barrier before other processes. They will now not block in the event library for any period of time, which appears to give a 50% speedup when running at > 64 procs.
Refs trac:645
This commit was SVN r12713.
The following Trac tickets were found above:
Ticket 645 --> https://svn.open-mpi.org/trac/ompi/ticket/645
OMPI_ARRAY_INT_2_LOGICAL had an array bounds error - fixed this and the
analogous error in OMPI_ARRAY_LOGICAL_2_INT.
This commit was SVN r12712.
The following Trac tickets were found above:
Ticket 482 --> https://svn.open-mpi.org/trac/ompi/ticket/482
case where sizeof(INTEGER) > sizeof(int).
This commit was SVN r12707.
The following SVN revision numbers were found above:
r12684 --> open-mpi/ompi@e2c605f32a