openmpi/contrib/scaling/Makefile
Ralph Castain 6509f60929

Complete the memprobe support. This provides a new scaling tool called "mpi_memprobe" that samples the memory footprint of the local daemon and the client procs, and then reports the results. The output contains the footprint of the daemon on each node, plus the average footprint of the client procs on that node.
Samples are taken after MPI_Init, and then again after MPI_Barrier. This allows the user to see memory consumption caused by add_procs, as well as any modex contribution from forming connections if pmix_base_async_modex is given.
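
As a rough sketch of those two sampling points (and not the actual mpi_memprobe implementation, which queries the local daemon for both the daemon and client footprints), a client could measure its own resident set size right after MPI_Init and again after MPI_Barrier, then average across ranks. The getrusage() approach and the sample_rss_mb/report helpers here are illustrative assumptions only:

/*
 * Simplified illustration of the two sampling points described above.
 * NOTE: this is NOT the real mpi_memprobe -- the actual tool asks the
 * local daemon for the daemon and client footprints. Here each rank
 * just reports its own peak RSS and rank 0 prints the average, purely
 * to show the MPI_Init / MPI_Barrier sampling structure.
 */
#include <stdio.h>
#include <sys/resource.h>
#include <mpi.h>

/* Return this process's peak resident set size in megabytes */
static double sample_rss_mb(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_maxrss / 1024.0;   /* ru_maxrss is in kilobytes on Linux */
}

static void report(const char *label, int rank, int size)
{
    double mine = sample_rss_mb(), sum = 0.0;

    MPI_Reduce(&mine, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        printf("Sampling memory usage %s\n", label);
        printf("\tClient (avg): %f MB\n", sum / size);
    }
}

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    report("after MPI_Init", rank, size);      /* cost of add_procs */

    MPI_Barrier(MPI_COMM_WORLD);
    report("after MPI_Barrier", rank, size);   /* cost of forming connections / modex */

    MPI_Finalize();
    return 0;
}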

Using the probe simply involves executing it via mpirun, with however many copies you want per node. Example:

$ mpirun -npernode 2 ./mpi_memprobe
Sampling memory usage after MPI_Init
Data for node rhc001
	Daemon: 12.483398
	Client: 6.514648

Data for node rhc002
	Daemon: 11.865234
	Client: 4.643555

Sampling memory usage after MPI_Barrier
Data for node rhc001
	Daemon: 12.520508
	Client: 6.576660

Data for node rhc002
	Daemon: 11.879883
	Client: 4.703125

Note that the client value on node rhc001 is larger - this is where rank=0 is housed, and apparently it gets a larger footprint for some reason.

Signed-off-by: Ralph Castain <rhc@open-mpi.org>
2017-01-05 10:32:17 -08:00

Makefile

# Scaling test programs: ORTE-only no-op, MPI no-op, and the memory probe
PROGS = orte_no_op mpi_no_op mpi_memprobe

all: $(PROGS)

CFLAGS = -O

orte_no_op: orte_no_op.c
	ortecc -o orte_no_op orte_no_op.c

mpi_no_op: mpi_no_op.c
	mpicc -o mpi_no_op mpi_no_op.c

# mpi_memprobe talks to the runtime, so it links against the OPAL and ORTE libraries
mpi_memprobe: mpi_memprobe.c
	mpicc -o mpi_memprobe mpi_memprobe.c -lopen-pal -lopen-rte

clean:
	rm -f $(PROGS) *~