6509f60929
Samples are taken after MPI_Init, and then again after MPI_Barrier. This allows the user to see the memory consumption caused by add_procs, as well as any modex contribution from forming connections if pmix_base_async_modex is given. Using the probe simply involves executing it via mpirun, with however many copies you want per node. Example:

$ mpirun -npernode 2 ./mpi_memprobe
Sampling memory usage after MPI_Init
Data for node rhc001
    Daemon: 12.483398
    Client: 6.514648
Data for node rhc002
    Daemon: 11.865234
    Client: 4.643555
Sampling memory usage after MPI_Barrier
Data for node rhc001
    Daemon: 12.520508
    Client: 6.576660
Data for node rhc002
    Daemon: 11.879883
    Client: 4.703125

Note that the client value on node rhc001 is larger - this is where rank=0 is housed, and apparently it gets a larger footprint for some reason.

Signed-off-by: Ralph Castain <rhc@open-mpi.org>
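The probe's source is not shown in this commit view. For illustration, here is a minimal sketch of what an mpi_memprobe-style client could look like; it is an assumption, not the actual implementation. The real tool links against open-pal and open-rte (see the Makefile below) and can report daemon-side memory as well, whereas this sketch only samples the client's own resident set size from /proc/self/status (Linux-specific).

/*
 * Hypothetical sketch of an mpi_memprobe-style client.  Not the actual
 * probe from this commit: the real tool uses open-pal/open-rte to also
 * query the daemon's footprint.  This sketch only reads the client's
 * own VmRSS from /proc/self/status (Linux-specific).
 */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

/* Return this process's resident set size in MB, or -1.0 on failure. */
static double sample_rss_mb(void)
{
    FILE *fp = fopen("/proc/self/status", "r");
    char line[256];
    double kb = -1.0;

    if (NULL == fp) {
        return -1.0;
    }
    while (NULL != fgets(line, sizeof(line), fp)) {
        if (0 == strncmp(line, "VmRSS:", 6)) {
            sscanf(line + 6, "%lf", &kb);
            break;
        }
    }
    fclose(fp);
    return kb / 1024.0;
}

int main(int argc, char **argv)
{
    int rank;
    double mb;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* First sample point: after MPI_Init, capturing the memory
     * consumed by add_procs. */
    mb = sample_rss_mb();
    if (0 == rank) {
        printf("Sampling memory usage after MPI_Init\n");
    }
    printf("Rank %d Client: %f\n", rank, mb);

    /* Second sample point: after MPI_Barrier, capturing any modex
     * contribution from forming connections. */
    MPI_Barrier(MPI_COMM_WORLD);
    mb = sample_rss_mb();
    if (0 == rank) {
        printf("Sampling memory usage after MPI_Barrier\n");
    }
    printf("Rank %d Client: %f\n", rank, mb);

    MPI_Finalize();
    return 0;
}

Sampling at exactly these two points isolates the add_procs cost (first sample) from the modex/connection cost (the delta between the two samples).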
Makefile
PROGS = orte_no_op mpi_no_op mpi_memprobe

all: $(PROGS)

CFLAGS = -O

orte_no_op: orte_no_op.c
	ortecc -o orte_no_op orte_no_op.c

mpi_no_op: mpi_no_op.c
	mpicc -o mpi_no_op mpi_no_op.c

mpi_memprobe: mpi_memprobe.c
	mpicc -o mpi_memprobe mpi_memprobe.c -lopen-pal -lopen-rte

clean:
	rm -f $(PROGS) *~
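Assuming Open MPI's wrapper compilers (mpicc, ortecc) are on the PATH, building and running the probe would look something like:

$ make mpi_memprobe
$ mpirun -npernode 2 ./mpi_memprobe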