23ab9e0277
at the top-level MPI API function. This allows two kinds of scenarios:

1. MPI_Ireduce(..., op, ...);
   MPI_Op_free(op);
   MPI_Wait(...);

   For the non-blocking collectives that we're someday planning -- to
   make them analogous to non-blocking point-to-point stuff.

2. Thread 1: MPI_Reduce(..., op, ...);
   Thread 2: MPI_Op_free(op);

Granted, for #2 to occur would tread a fine line between a correct and
an erroneous MPI program, but it is possible (as long as the Op_free
was *after* MPI_Reduce() had started to execute). It's more realistic
with case #1, where the Op_free() could be executed in the same thread
or a different thread.

This commit was SVN r7870.
/*
 * Copyright (c) 2004-2005 The Trustees of Indiana University.
 *                         All rights reserved.
 * Copyright (c) 2004-2005 The Trustees of the University of Tennessee.
 *                         All rights reserved.
 * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
 *                         University of Stuttgart.  All rights reserved.
 * Copyright (c) 2004-2005 The Regents of the University of California.
 *                         All rights reserved.
 * $COPYRIGHT$
 *
 * Additional copyrights may follow
 *
 * $HEADER$
 */

/**
  @mainpage

  @section mainpage_introduction Introduction

  This is the introduction.  This is the introduction.  This is the
  introduction.  This is the introduction.  This is the introduction.
  This is the introduction.  This is the introduction.

  @section main_install Installation

  This is the installation section.  This is the installation section.
  This is the installation section.  This is the installation section.
  This is the installation section.  This is the installation section.
  This is the installation section.
*/