This directory contains a few example programs.

Each program takes a filename as a command-line argument of the form
"-fname filename".

If you are using "mpirun" to run an MPI program, you can run the
program "simple" with two processes as follows:

    mpirun -np 2 simple -fname test

simple.c: Each process creates its own file, writes to it, reads it
back, and checks the data read.
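
The core pattern of simple.c looks roughly like the following (a minimal
sketch, not the actual source; the file name, buffer size, and error
handling are simplified):

    #include <stdio.h>
    #include <string.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        MPI_File fh;
        MPI_Status status;
        char filename[128], buf[100], rbuf[100];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* each process opens its own file */
        sprintf(filename, "testfile.%d", rank);
        MPI_File_open(MPI_COMM_SELF, filename,
                      MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

        /* write some data, then read it back from the beginning */
        memset(buf, 0, sizeof(buf));
        sprintf(buf, "data from process %d", rank);
        MPI_File_write(fh, buf, 100, MPI_CHAR, &status);
        MPI_File_seek(fh, 0, MPI_SEEK_SET);
        MPI_File_read(fh, rbuf, 100, MPI_CHAR, &status);

        /* check what came back */
        if (memcmp(buf, rbuf, 100) != 0)
            printf("Process %d: data mismatch\n", rank);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }
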
psimple.c: Same as simple.c, but uses the PMPI versions of all MPI
routines.

error.c: Tests whether error messages are printed correctly.

status.c: Tests whether the status object is filled in correctly by the
I/O functions.
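
For reference, the number of items actually transferred by an I/O call
can be extracted from the status object with MPI_Get_count; a fragment,
assuming an open file handle fh and an int buffer buf:

    MPI_Status status;
    int count;

    MPI_File_read(fh, buf, 100, MPI_INT, &status);
    MPI_Get_count(&status, MPI_INT, &count);  /* how many ints were read? */
    printf("read %d ints\n", count);
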
perf.c: A simple read and write performance test. Each process writes
4 Mbytes to a file at a location determined by its rank and reads it
back. For a different access size, change the value of SIZE in the
code. The bandwidth is reported for two cases: (1) without including
MPI_File_sync and (2) including MPI_File_sync.
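
The access pattern amounts to one large explicit-offset transfer per
process, something like the fragment below (a sketch, not the actual
source; SIZE, rank, nprocs, buf, status, and an open handle fh on
MPI_COMM_WORLD are assumed):

    double t;

    MPI_Barrier(MPI_COMM_WORLD);
    t = MPI_Wtime();
    /* each process writes SIZE bytes at offset rank*SIZE */
    MPI_File_write_at(fh, (MPI_Offset) rank * SIZE, buf, SIZE,
                      MPI_BYTE, &status);
    MPI_File_sync(fh);      /* included only in the second case */
    MPI_Barrier(MPI_COMM_WORLD);
    t = MPI_Wtime() - t;
    if (rank == 0)
        printf("write bandwidth: %f Mbytes/sec\n",
               (double) SIZE * nprocs / (t * 1048576.0));
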
async.c: This program is the same as simple.c, except that it uses
asynchronous I/O.
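
The nonblocking calls follow the usual start/wait pattern; a fragment
(MPI-2 bindings; older versions of ROMIO used MPIO_Request and
MPIO_Wait instead of MPI_Request and MPI_Wait):

    MPI_Request request;
    MPI_Status status;

    /* start the write, overlap other work, then wait for completion */
    MPI_File_iwrite(fh, buf, 100, MPI_CHAR, &request);
    /* ... computation can proceed here ... */
    MPI_Wait(&request, &status);
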
coll_test.c: This program tests the use of collective I/O. It writes
a 3D block-distributed array to a file corresponding to the
global array in row-major (C) order, reads it back, and checks
that the data read is correct. The global array size has been
set to 32^3. If you are running it on NFS, which is very slow,
you may want to reduce that size to 16^3.
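
One common way to express such an access is to build a filetype with
the darray datatype constructor, set it as the file view, and write
collectively; a sketch, assuming nprocs, rank, and a buffer local_array
of local_size ints holding this process's block of the global array:

    int gsizes[3] = {32, 32, 32};
    int distribs[3] = {MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK,
                       MPI_DISTRIBUTE_BLOCK};
    int dargs[3] = {MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG,
                    MPI_DISTRIBUTE_DFLT_DARG};
    int psizes[3] = {0, 0, 0};
    MPI_Datatype filetype;
    MPI_File fh;
    MPI_Status status;

    /* factor nprocs into a 3D process grid, then describe this
       process's block of the global array */
    MPI_Dims_create(nprocs, 3, psizes);
    MPI_Type_create_darray(nprocs, rank, 3, gsizes, distribs, dargs,
                           psizes, MPI_ORDER_C, MPI_INT, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(MPI_COMM_WORLD, "testfile",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_INT, filetype, "native", MPI_INFO_NULL);

    /* all processes participate in the collective write */
    MPI_File_write_all(fh, local_array, local_size, MPI_INT, &status);
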
coll_perf.c: Measures the I/O bandwidth for writing/reading a 3D
block-distributed array to a file corresponding to the global array
in row-major (C) order. The global array size has been set to 128^3.
If you are running it on NFS, which is very slow, you may want to
reduce that size to 16^3.

misc.c: Tests various miscellaneous MPI-IO functions.

atomicity.c: Tests whether atomicity semantics are satisfied for
overlapping accesses in atomic mode. The probability of detecting
errors is higher if you run it on 8 or more processes.
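
Atomic mode is switched on per file handle; a fragment, assuming an
open handle fh:

    /* request that concurrent, overlapping accesses from different
       processes behave as if serialized */
    MPI_File_set_atomicity(fh, 1);
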
large_file.c: Tests access to large files. Writes a 4-Gbyte file and
reads it back. Run it only on one process and on a file system on
which ROMIO supports large files, e.g., PIOFS, XFS, SFS, and HFS.

large_array.c: Tests writing and reading a 4-Gbyte distributed array
using the distributed array datatype constructor. Works only on file
systems that support 64-bit file sizes and MPI implementations that
support a 64-bit MPI_Aint.

file_info.c: Tests the setting and retrieval of hints via
MPI_File_set_info and MPI_File_get_info.
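
Hints travel in an MPI_Info object as (key, value) string pairs, and an
implementation is free to ignore any of them. A fragment, assuming an
open handle fh (cb_buffer_size and striping_factor are hints ROMIO
understands; the values here are arbitrary):

    MPI_Info info, info_used;
    int i, nkeys, flag;
    char key[MPI_MAX_INFO_KEY + 1], value[MPI_MAX_INFO_VAL + 1];

    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_buffer_size", "8388608");
    MPI_Info_set(info, "striping_factor", "4");
    MPI_File_set_info(fh, info);

    /* read back the hints the implementation is actually using */
    MPI_File_get_info(fh, &info_used);
    MPI_Info_get_nkeys(info_used, &nkeys);
    for (i = 0; i < nkeys; i++) {
        MPI_Info_get_nthkey(info_used, i, key);
        MPI_Info_get(info_used, key, MPI_MAX_INFO_VAL, value, &flag);
        printf("%s = %s\n", key, value);
    }

    MPI_Info_free(&info);
    MPI_Info_free(&info_used);
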
excl.c: Tests MPI_File_open with MPI_MODE_EXCL.
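
With MPI_MODE_EXCL, the open fails if the file already exists; a
fragment, assuming an int err and an MPI_File fh:

    /* create the file only if it does not already exist */
    err = MPI_File_open(MPI_COMM_WORLD, "testfile",
                        MPI_MODE_CREATE | MPI_MODE_EXCL | MPI_MODE_RDWR,
                        MPI_INFO_NULL, &fh);
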
noncontig.c: Tests noncontiguous accesses in memory and file using
independent I/O. Run it on two processes only.
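
Noncontiguity in the file is typically expressed through the filetype
in the file view; the fragment below (a sketch, not the test's exact
pattern; fh, buf, and status are assumed) makes the write touch every
other int in the file:

    MPI_Datatype filetype;

    /* blocks of 1 int separated by a stride of 2 ints */
    MPI_Type_vector(100, 1, 2, MPI_INT, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_set_view(fh, 0, MPI_INT, filetype, "native", MPI_INFO_NULL);
    MPI_File_write(fh, buf, 100, MPI_INT, &status);
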
noncontig_coll.c: Same as noncontig.c, but uses collective I/O.

noncontig_coll2.c: Same as noncontig_coll.c, but additionally
exercises the cb_config_list hint and the aggregation handling.

i_noncontig.c: Same as noncontig.c, but uses nonblocking I/O.

shared_fp.c: Tests the shared file pointer functions.
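
All processes that opened the file advance one common file pointer; a
fragment, assuming fh, buf, and status:

    /* noncollective: data from different processes lands in some
       nondeterministic order at the shared pointer */
    MPI_File_write_shared(fh, buf, 100, MPI_CHAR, &status);

    /* collective variant: data appears in rank order */
    MPI_File_write_ordered(fh, buf, 100, MPI_CHAR, &status);
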
split_coll.c: Tests the split collective I/O functions.
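
A split collective separates one collective I/O call into a begin/end
pair so that other work can overlap it; a fragment, assuming fh, buf,
and status:

    MPI_File_write_all_begin(fh, buf, 100, MPI_INT);
    /* ... computation can proceed here ... */
    MPI_File_write_all_end(fh, buf, &status);
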
fperf.f: Fortran version of perf.c.

fcoll_test.f: Fortran version of coll_test.c.

pfcoll_test.f: Same as fcoll_test.f, but uses the PMPI versions of all
MPI routines.

fmisc.f: Fortran version of misc.c.