a200e4f865
*** THIS RFC INCLUDES A MINOR CHANGE TO THE MPI-RTE INTERFACE ***

Note: during the course of this work, it was necessary to completely separate the MPI and RTE progress engines. There were multiple places in the MPI layer where ORTE_WAIT_FOR_COMPLETION was being used. A new OMPI_WAIT_FOR_COMPLETION macro was created (defined in ompi/mca/rte/rte.h) that simply cycles across opal_progress until the provided flag becomes false. Places where the MPI layer blocked waiting for the RTE to complete an event have been modified to use this macro (a sketch of the macro appears at the end of this message).

***************************************************************************************

I am reissuing this RFC because of the time that has passed since its original release. Since its initial release and review, I have debugged it further to ensure it fully supports tests like loop_spawn. It therefore seems ready to merge back to the trunk. Given its prior review, I have set the timeout for one week.

The code is in https://bitbucket.org/rhc/ompi-oob2

WHAT: Rewrite of ORTE OOB

WHY: Support asynchronous progress and a host of other features

WHEN: Wed, August 21

SYNOPSIS:

The current OOB has served us well, but a number of limitations have been identified over the years. Specifically:

* it is only progressed when called via opal_progress, which can lead to hangs or recursive calls into libevent (which is not supported by that code)

* we've had issues when multiple NICs are available, as the code doesn't "shift" messages between transports - thus, all nodes had to be reachable via the same TCP interface

* the OOB "unloads" incoming opal_buffer_t objects during transmission, thus preventing use of OBJ_RETAIN in the code when repeatedly sending the same message to multiple recipients

* there is no failover mechanism across NICs - if the selected NIC (or its attached switch) fails, we are forced to abort

* only one transport (i.e., component) can be "active"

The revised OOB resolves these problems:

* async progress is used for all application processes, with the progress thread blocking in the event library

* each available TCP NIC is supported by its own TCP module. The ability to asynchronously progress each module independently is provided, but not enabled by default (a runtime MCA parameter turns it "on")

* multi-address TCP NICs (e.g., a NIC with both an IPv4 and IPv6 address, or with virtual interfaces) are supported - reachability is determined by comparing the contact info for a peer against all addresses within the range covered by the address/mask pairs for the NIC

* a message that arrives on one TCP NIC is automatically shifted to whatever NIC is connected to the next "hop" if that peer cannot be reached by the incoming NIC. If no TCP module can reach the peer, the OOB attempts to send the message via all other available components - if none can reach the peer, an "error" is reported back to the RML, which then calls the errmgr for instructions

* opal_buffer_t now conforms to standard object rules re OBJ_RETAIN, as we no longer "unload" the incoming object (see the fan-out sketch at the end of this message)

* NIC failure is reported to the TCP component, which then tries to resend the message across any other available TCP NIC. If that doesn't work, the message is given back to the OOB base to try using other components. If all of that fails, the error is reported to the RML, which reports to the errmgr for instructions

* obviously from the above, multiple OOB components (e.g., TCP and UD) can be active in parallel

* the matching code has been moved to the RML (and out of the OOB/TCP component) so it is independent of transport

* routing is done by the individual OOB modules (as opposed to the RML). Thus, both routed and non-routed transports can be active simultaneously

* all blocking send/recv APIs have been removed. Everything operates asynchronously.

KNOWN LIMITATIONS:

* although provision is made for component failover as described above, the code for doing so has not been fully implemented yet. At the moment, if all connections to a given peer fail, the errmgr is notified of a "lost connection", which by default results in termination of the job if the lost connection was a lifeline

* the IPv6 code is present and compiles, but is not complete. Since the current IPv6 support in the OOB doesn't work anyway, I don't consider this a blocker

* routing is performed at the individual module level, yet the active routed component is selected on a global basis. We should probably update that to reflect that different transports may need or choose to route in different ways

* obviously, not every error path has been tested, nor is every one necessarily covered

* determining abnormal termination is more challenging than in the old code, as we now potentially have multiple ways of connecting to a process. Ideally, we would declare "connection failed" only when *all* transports can no longer reach the process, but that requires some additional (possibly complex) code. For now, the code replicates the old behavior, only somewhat modified - i.e., if a module sees its connection fail, it checks whether that connection is a lifeline. If so, it notifies the errmgr that the lifeline is lost; otherwise, it notifies the errmgr that a non-lifeline connection was lost

* reachability is determined solely on the basis of a shared subnet address/mask (see the subnet sketch at the end of this message) - more sophisticated algorithms (e.g., the one used in the TCP BTL) are required to handle routing via gateways

* the RML needs to assign sequence numbers to each message on a per-peer basis. The receiving RML will then deliver messages in order, thus preventing out-of-order delivery when messages travel across different transports, or when a message is redirected/resent due to failure of a NIC

This commit was SVN r29058.
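The OMPI_WAIT_FOR_COMPLETION macro described above admits a very small implementation. What follows is a minimal sketch, assuming only that opal_progress() drives the MPI-side progress engine; the actual definition in ompi/mca/rte/rte.h may differ in detail:

-----
/* Minimal sketch of the described macro: cycle the progress engine
 * until an event callback clears the caller-supplied flag.  The real
 * definition in ompi/mca/rte/rte.h may differ. */
#define OMPI_WAIT_FOR_COMPLETION(flg)   \
    do {                                \
        while ((flg)) {                 \
            opal_progress();            \
        }                               \
    } while (0)
-----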
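The OBJ_RETAIN change enables the standard OPAL object-lifetime pattern when fanning one buffer out to several peers. Below is a hedged sketch using OPAL's real OBJ_NEW/OBJ_RETAIN/OBJ_RELEASE class macros; send_to(), peers, and npeers are hypothetical stand-ins for the actual non-blocking RML send API and peer list:

-----
#include "opal/dss/dss.h"   /* opal_buffer_t and the OBJ_* class macros */

/* Hypothetical stand-in for the real non-blocking RML send; assumed
 * to OBJ_RELEASE the buffer once its send completes. */
void send_to(orte_process_name_t *peer, opal_buffer_t *buf);

static void fan_out(orte_process_name_t *peers, int npeers)
{
    opal_buffer_t *buf = OBJ_NEW(opal_buffer_t);

    /* ... pack the message payload into buf ... */

    for (int i = 0; i < npeers; ++i) {
        OBJ_RETAIN(buf);        /* one reference per in-flight send */
        send_to(&peers[i], buf);
    }
    OBJ_RELEASE(buf);           /* drop our own reference */
}
-----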
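Finally, the subnet-based reachability noted in the limitations reduces to a mask comparison. A self-contained illustration for the IPv4 case (not the actual OOB code):

-----
#include <stdbool.h>
#include <netinet/in.h>

/* Illustration only: two IPv4 addresses are treated as mutually
 * reachable when they fall in the same subnet under the interface's
 * netmask. */
static bool same_subnet(struct in_addr peer, struct in_addr local,
                        struct in_addr mask)
{
    return (peer.s_addr & mask.s_addr) == (local.s_addr & mask.s_addr);
}
-----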
Project files:

    autogen.sh
    btl_tcp2_addr.h
    btl_tcp2_component.c
    btl_tcp2_endpoint.c
    btl_tcp2_endpoint.h
    btl_tcp2_frag.c
    btl_tcp2_frag.h
    btl_tcp2_ft.c
    btl_tcp2_ft.h
    btl_tcp2_hdr.h
    btl_tcp2_proc.c
    btl_tcp2_proc.h
    btl_tcp2.addr.h
    btl_tcp2.c
    btl_tcp2.h
    configure.ac
    help-mpi-btl-tcp2.txt
    Makefile.am
    README.txt
2 Feb 2011

Description
===========

This sample "tcp2" BTL component is a simple example of how to build an Open MPI MCA component from outside of the Open MPI source tree. This is a valuable technique for 3rd parties who want to provide their own components for Open MPI, but do not want to be in the mainstream distribution (i.e., their code is not part of the main Open MPI code base).

NOTE: We do recommend that 3rd party developers investigate using a DVCS such as Mercurial or Git to keep up with Open MPI development. Using a DVCS allows you to host your component in your own copy of the Open MPI source tree, and yet still keep up with development changes, stable releases, etc.

Previous colloquial knowledge held that building a component from outside of the Open MPI source tree required configuring Open MPI --with-devel-headers, and then building and installing it. This configure switch installs all of OMPI's internal .h files under $prefix/include/openmpi, and therefore allows 3rd party code to be compiled outside of the Open MPI tree. This method definitely works, but is annoying:

* You have to ask users to use this special configure switch.
* Not all users install from source; many get binary packages (e.g., RPMs).

This example package shows two ways to build an Open MPI MCA component from outside the Open MPI source tree:

1. Using the above --with-devel-headers technique
2. Compiling against the Open MPI source tree itself (vs. the installation tree)

The user still has to have a source tree, but at least they are not required to use --with-devel-headers (which most users don't) -- they can likely build off the source tree that they already used.

Example project contents
========================

The "tcp2" component is a direct copy of the TCP BTL as of January 2011 -- it has just been renamed so that it can be built separately and installed alongside the real TCP BTL component.

Most of the mojo for both methods is handled in the example component's configure.ac, but the same techniques are applicable outside of the GNU Auto toolchain. This sample "tcp2" component has an autogen.sh script that requires the normal Autoconf, Automake, and Libtool. It also adds the following two configure switches:

--with-openmpi-install=DIR
    If provided, DIR is an Open MPI installation tree that was installed --with-devel-headers. This switch uses the installed mpicc --showme:<foo> functionality to extract the relevant CPPFLAGS, LDFLAGS, and LIBS.

--with-openmpi-source=DIR
    If provided, DIR is the source of a configured and built Open MPI source tree (corresponding to the version expected by the example component). The source tree is not required to have been configured --with-devel-headers. This switch uses the source tree's config.status script to extract the relevant CPPFLAGS and CFLAGS.

Either one of these two switches must be provided, or appropriate CPPFLAGS, CFLAGS, LDFLAGS, and/or LIBS must be provided such that valid Open MPI header and library files can be found and compiled / linked against, respectively.

Example use
===========

First, download, build, and install Open MPI:

-----
$ cd $HOME
$ wget \
    http://www.open-mpi.org/software/ompi/vX.Y/downloads/openmpi-X.Y.Z.tar.bz2
[lots of output]
$ tar jxf openmpi-X.Y.Z.tar.bz2
$ cd openmpi-X.Y.Z
$ ./configure --prefix=/opt/openmpi ...
[lots of output]
$ make -j 4 install
[lots of output]
$ /opt/openmpi/bin/ompi_info | grep btl
                 MCA btl: self (MCA vA.B, API vM.N, Component vX.Y.Z)
                 MCA btl: sm (MCA vA.B, API vM.N, Component vX.Y.Z)
                 MCA btl: tcp (MCA vA.B, API vM.N, Component vX.Y.Z)
[where X.Y.Z, A.B, and M.N are appropriate for your version of Open MPI]
$
-----

Notice the installed BTLs from ompi_info.

Now cd into this example project and build it, pointing it to the source directory of the Open MPI that you just built. Note that we use the same --prefix as when installing Open MPI (so that the built component will be installed into the right place):

-----
$ cd /path/to/this/sample
$ ./autogen.sh
$ ./configure --prefix=/opt/openmpi --with-openmpi-source=$HOME/openmpi-X.Y.Z
[lots of output]
$ make -j 4 install
[lots of output]
$ /opt/openmpi/bin/ompi_info | grep btl
                 MCA btl: self (MCA vA.B, API vM.N, Component vX.Y.Z)
                 MCA btl: sm (MCA vA.B, API vM.N, Component vX.Y.Z)
                 MCA btl: tcp (MCA vA.B, API vM.N, Component vX.Y.Z)
                 MCA btl: tcp2 (MCA vA.B, API vM.N, Component vX.Y.Z)
[where X.Y.Z, A.B, and M.N are appropriate for your version of Open MPI]
$
-----

Notice that the "tcp2" BTL is now installed.

Random notes
============

The component in this project is just an example; I whipped it up in the span of several hours. Your component may be a bit more complex than this, or have slightly different requirements. So you may need to tweak the configury or build system in each of the components to fit what you need.

Changes required to the component to make it build in standalone mode:

1. Write your own configure script. This component is just a sample. You basically need to build against an OMPI install that was installed --with-devel-headers, or against a built OMPI source tree. See ./configure --help for details.

2. I also provided a bogus btl_tcp2_config.h (generated by configure). This file is not included anywhere, but it does provide protection against re-defined PACKAGE_* macros when running configure, which is quite annoying. (A sketch of the idea follows this list.)

3. Modify Makefile.am to build only DSOs. I.e., you can take the static option out, since the component can *only* build in DSO mode when standalone. That said, it doesn't hurt to leave the static builds in -- this would (hypothetically) allow the component to be built both in-tree and out-of-tree.
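As a hedged illustration of item 2 above (the exact contents of the generated file may differ), such a header mainly needs to neutralize the standard PACKAGE_* macros that autoconf defines, so two configury layers do not fight over them:

-----
/* btl_tcp2_config.h -- sketch only; the real generated file may
 * differ.  The idea is to clear autoconf's PACKAGE_* macros so they
 * cannot clash with the definitions from Open MPI's own configury. */
#undef PACKAGE_BUGREPORT
#undef PACKAGE_NAME
#undef PACKAGE_STRING
#undef PACKAGE_TARNAME
#undef PACKAGE_VERSION
-----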
Ping the Open MPI devel list if you have questions about this project.

Enjoy.

- Jeff Squyres