a523dba41d
We have been getting several requests for new collectives that need to be inserted in various places of the MPI layer, all in support of either checkpoint/restart or various research efforts. Until now, this would require that the collective id's be generated at launch, which required modifications to ORTE and other places. We chose not to make collectives reusable, as the race conditions associated with resetting collective counters are daunting.

This commit extends the collective system to allow self-generation of collective id's that the daemons need to support, thereby allowing developers to request any number of collectives for their work. There is one restriction: RTE collectives must occur at the process level - i.e., we don't currently have a way of tagging the collective to a specific thread. From the comment in the code:

 * In order to allow scalable
 * generation of collective id's, they are formed as:
 *
 * top 32-bits are the jobid of the procs involved in
 * the collective. For collectives across multiple jobs
 * (e.g., in a connect_accept), the daemon jobid will
 * be used as the id will be issued by mpirun. This
 * won't cause problems because daemons don't use the
 * collective_id
 *
 * bottom 32-bits are a rolling counter that recycles
 * when the max is hit. The daemon will cleanup each
 * collective upon completion, so this means a job can
 * never have more than 2**32 collectives going on at
 * a time. If someone needs more than that - they've got
 * a problem.
 *
 * Note that this means (for now) that RTE-level collectives
 * cannot be done by individual threads - they must be
 * done at the overall process level. This is required as
 * there is no guaranteed ordering for the collective id's,
 * and all the participants must agree on the id of the
 * collective they are executing. So if thread A on one
 * process asks for a collective id before thread B does,
 * but B asks before A on another process, the collectives will
 * be mixed and not result in the expected behavior. We may
 * find a way to relax this requirement in the future by
 * adding a thread context id to the jobid field (maybe taking the
 * lower 16-bits of that field).

This commit includes a test program (orte/test/mpi/coll_test.c) that cycles 100 times across barrier and modex collectives.

This commit was SVN r32203.
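To make the id layout concrete, here is a minimal standalone sketch of how a 64-bit collective id could be composed and decomposed under the scheme described above (jobid in the upper 32 bits, rolling counter in the lower 32 bits). The names coll_id_t, make_coll_id, coll_id_jobid, and coll_id_counter, as well as the example jobid value, are purely illustrative and are not the actual ORTE API; the real logic lives in the grpcomm framework.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: a 64-bit collective id with the 32-bit jobid in
     * the upper half and a rolling 32-bit counter in the lower half. */
    typedef uint64_t coll_id_t;

    static coll_id_t make_coll_id(uint32_t jobid, uint32_t *counter)
    {
        /* unsigned arithmetic wraps at 2**32, matching the "rolling
         * counter that recycles when the max is hit" description */
        uint32_t seq = (*counter)++;
        return ((coll_id_t)jobid << 32) | (coll_id_t)seq;
    }

    static uint32_t coll_id_jobid(coll_id_t id)   { return (uint32_t)(id >> 32); }
    static uint32_t coll_id_counter(coll_id_t id) { return (uint32_t)(id & 0xffffffffu); }

    int main(void)
    {
        uint32_t counter = 0;          /* per-job rolling counter */
        uint32_t jobid   = 0x00010002; /* made-up jobid for the example */

        coll_id_t id = make_coll_id(jobid, &counter);
        printf("jobid=0x%08x counter=%u\n",
               (unsigned)coll_id_jobid(id), (unsigned)coll_id_counter(id));
        return 0;
    }

Because the counter is kept per job and all participants draw id's in the same order at the process level, every proc in the job ends up agreeing on which id names which collective - which is exactly why the commit forbids per-thread id requests.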
Makefile
PROGS = mpi_no_op mpi_barrier hello hello_nodename abort multi_abort simple_spawn concurrent_spawn spawn_multiple mpi_spin delayed_abort loop_spawn loop_child bad_exit pubsub hello_barrier segv accept connect hello_output hello_show_help crisscross read_write ziatest slave reduce-hang ziaprobe ziatest bcast_loop parallel_w8 parallel_w64 parallel_r8 parallel_r64 sio sendrecv_blaster early_abort debugger singleton_client_server intercomm_create spawn_tree init-exit77 mpi_info info_spawn server client paccept pconnect coll_test

all: $(PROGS)

# These guys need additional -I flags
hello_output: hello_output.c
	$(CC) $(CFLAGS) $(CFLAGS_INTERNAL) $^ -o $@

hello_show_help: hello_show_help.c
	$(CC) $(CFLAGS) $(CFLAGS_INTERNAL) $^ -o $@

CC = mpicc
CFLAGS = -g --openmpi:linkall
CFLAGS_INTERNAL = -I../../.. -I../../../orte/include -I../../../opal/include
CXX = mpic++ --openmpi:linkall
CXXFLAGS = -g
FC = mpifort --openmpi:linkall
FCFLAGS = -g

clean:
	rm -f $(PROGS) *~
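For reference, the new test program can presumably be built and exercised from this directory using the Makefile above (GNU make's implicit rules pick up CC/CFLAGS for the coll_test target), assuming the Open MPI wrapper compilers are on the PATH:

    make coll_test
    mpirun -n 4 ./coll_test

The process count of 4 is arbitrary; per the commit message, coll_test cycles 100 times across barrier and modex collectives regardless of job size.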