6b22641669
I know it does not make much sense yet, but one can play around with the performance. Numbers are available at http://www.unixer.de/research/nbcoll/perf/. This is the first step towards collv2; the next step is to add non-blocking functions to the MPI layer and the collv1 interface. The component implements all MPI-1 collective algorithms in a non-blocking manner. However, the collv1 interface does not allow non-blocking collectives, so all collectives are used in a blocking fashion by the ompi glue layer. I wanted to add LibNBC as a separate subdirectory, but I could not convince the build system (and did not have the time), so the component looks pretty messy. It would be great if somebody could explain to me how to move all nbc*{c,h} and {hb,dict}*{c,h} files into a separate subdirectory. The component is .ompi_ignored because I have not tested it exhaustively yet. This commit was SVN r11401.
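The practical consequence of the blocking collv1 interface is that every non-blocking schedule is started and completed back to back. A minimal sketch of that glue, assuming LibNBC's handle-based API (the NBC_Ibcast/NBC_Wait/NBC_OK names, the NBC_Handle type, and the glue_bcast_blocking wrapper are illustrative and may not match the actual headers; error handling just passes return codes through):

#include <mpi.h>
#include "nbc.h"   /* assumed header providing NBC_Handle, NBC_Ibcast, NBC_Wait, NBC_OK */

/* collv1 only exposes blocking collectives, so the component starts the
 * non-blocking schedule and completes it immediately, losing the overlap. */
static int glue_bcast_blocking(void *buf, int count, MPI_Datatype dtype,
                               int root, MPI_Comm comm)
{
  NBC_Handle handle;
  int res;

  res = NBC_Ibcast(buf, count, dtype, root, comm, &handle);
  if (NBC_OK != res) {
    return res;
  }

  /* collv1 offers no way to hand the handle back to the caller,
   * so we have to wait right here. */
  return NBC_Wait(&handle);
}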
* TODO:
 - support MPI-2 collectives
 - support MPI-2 Features (MPI_IN_PLACE, see the example after this list)
 - support MPI-2 Requests (really? -> I don't think so :)
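For reference, MPI_IN_PLACE is the MPI-2 convention where the caller passes MPI_IN_PLACE as the send buffer and the collective reduces directly into the receive buffer, so any schedule has to detect that constant. This is standard MPI usage, nothing LibNBC-specific:

#include <mpi.h>

/* MPI-2 in-place reduction: each rank reduces into its own buffer
 * instead of supplying a separate send buffer. */
void in_place_example(void)
{
  int local[4] = {1, 2, 3, 4};
  MPI_Allreduce(MPI_IN_PLACE, local, 4, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
}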
* Missing for MPI-1:
 - FORTRAN Bindings
 - add user defined operations (coll9, coll10, coll11, longuser)
 -- how do we ensure that we do not collide with Intrinsic Operations if
    we issue NBC_Ops???
 -- we cannot issue NBC_Ops ... we need to issue MPI_Ops :-(.
 -- hmm, we could simply wrap it and save the user defined op in a
    list (hash) and search this every time we get called (see the sketch below)
 --> cool idea, let's do that ...
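A minimal sketch of the wrap-and-lookup idea from the list above: remember the user's function when the op is first seen, then search the list whenever a reduction step needs it. All names here (nbc_op_entry, nbc_op_register, nbc_op_lookup) are hypothetical and only illustrate the data structure; a hash table could replace the linked list.

#include <mpi.h>
#include <stdlib.h>

/* One entry per user-defined MPI_Op that the component has seen. */
typedef struct nbc_op_entry {
  MPI_Op op;                     /* key: the MPI operation handle */
  MPI_User_function *func;       /* the user's reduction function */
  int commute;                   /* whether the operation commutes */
  struct nbc_op_entry *next;     /* simple linked list; a hash would also work */
} nbc_op_entry;

static nbc_op_entry *op_list = NULL;

/* Remember a user-defined op the first time the glue layer sees it. */
static void nbc_op_register(MPI_Op op, MPI_User_function *func, int commute)
{
  nbc_op_entry *e = (nbc_op_entry *) malloc(sizeof(*e));
  e->op = op;
  e->func = func;
  e->commute = commute;
  e->next = op_list;
  op_list = e;
}

/* Search the list every time a reduction step needs the user function;
 * NULL means the op was never registered and is therefore an intrinsic one. */
static MPI_User_function *nbc_op_lookup(MPI_Op op)
{
  nbc_op_entry *e;
  for (e = op_list; e != NULL; e = e->next) {
    if (e->op == op) return e->func;
  }
  return NULL;
}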
* No Idea:
 - what is wrong with nbcoll (does not work with Open MPI blocking colls)