MPI is a popular library for distributed-memory parallel programming. It offers both point-to-point message passing and group communication operations (broadcast, scatter/gather, etc.).
Open MPI is an implementation of the MPI standard that combines technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) in order to build the best MPI library available.
The Chicken MPI egg provides a Scheme interface to a large subset of the MPI 1.2 procedures for communication. It is based on the OCaml MPI library by Xavier Leroy. Below is a list of the procedures included in this egg, along with brief descriptions. This egg has been tested with Open MPI version 1.2.4.
Initializes the MPI execution environment. This routine must be called before any other MPI routine. MPI can be initialized at most once.
Spawns MAXPROCS identical copies of the MPI program specified by COMMAND and returns an intercommunicator and a vector of status values. ARGUMENTS is a list of command-line arguments. LOCATIONS is a list of string pairs (HOST * WDIR) that tell MPI the host and working directory in which to start each process.
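A hypothetical invocation might look like the following sketch. Note that the return convention (assumed here to be multiple values) and the concrete representation of the (HOST * WDIR) pairs are assumptions inferred from the description above, not taken from the egg's documentation:

```scheme
;; Sketch only: argument order follows the description above; the
;; multiple-value return and the use of dotted pairs for LOCATIONS
;; are assumptions.
(use mpi)
(MPI:init)
(define-values (intercomm statuses)
  (MPI:spawn "./worker"                        ; COMMAND
             '("--verbose")                    ; ARGUMENTS
             4                                 ; MAXPROCS
             (list (cons "node1" "/home/mpi")  ; LOCATIONS: (HOST * WDIR)
                   (cons "node2" "/home/mpi"))))
```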
Terminates the MPI execution environment.
Returns true if OBJ is an MPI communicator object, false otherwise.
Returns the default communicator created by MPI_Init; the group associated with this communicator contains all processes.
Creates a new communicator whose communication group spans all processes in GROUP, with a new context. See the procedures in subsection Handling of communication groups for information on how to create process group objects.
Creates a new communicator with Cartesian topology information. Argument DIMS is an SRFI-4 s32vector specifying the number of processes in each dimension of the Cartesian grid; its length is the number of dimensions. Argument PERIODS is an SRFI-4 s32vector of the same length as DIMS that indicates whether the grid is periodic (1) or not (0) in each dimension. Argument REORDER is a boolean value that indicates whether process ranking may be reordered.
Creates a division of processes in a Cartesian grid. Argument NNODES is the number of nodes in the grid. Argument NDIMS is the number of Cartesian dimensions. The return value is an SRFI-4 s32vector.
Determines process coordinates in Cartesian topology, given a rank in the group. The return value is an SRFI-4 s32vector of length NDIMS (the number of dimensions in the Cartesian topology).
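Putting the topology procedures together, a balanced two-dimensional grid might be set up as in this sketch. The argument orders (in particular whether the communicator is the first argument to MPI:make-cart) are inferred from the descriptions above and should be checked against the egg's documentation:

```scheme
(use mpi srfi-4)
(MPI:init)
(define comm-world (MPI:get-comm-world))
(define size       (MPI:comm-size comm-world))

;; Balanced division of SIZE processes over 2 dimensions,
;; e.g. 6 processes could yield a 3 x 2 grid
(define dims   (MPI:dims-create size 2))

;; Non-periodic in both dimensions; no rank reordering
(define cart   (MPI:make-cart comm-world dims (s32vector 0 0) #f))

;; This process's coordinates in the grid
(define coords (MPI:cart-coords cart (MPI:comm-rank cart)))
```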
Returns true if OBJ is an MPI group object, false otherwise.
Translates the ranks of processes in one group to those in another group. The return value is an SRFI-4 s32vector.
Produces a group by reordering an existing group and taking only members with the given ranks. Argument RANKS is an SRFI-4 s32vector.
Produces a group by reordering an existing group and taking only members that do not have the given ranks. Argument RANKS is an SRFI-4 s32vector.
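For illustration, the group procedures might be combined as in the following sketch. MPI:comm-group, used here to obtain a communicator's group, is not described above and is an assumption borrowed from the usual MPI naming:

```scheme
(use mpi srfi-4)
(MPI:init)
(define comm-world  (MPI:get-comm-world))
(define world-group (MPI:comm-group comm-world))  ; assumed accessor

;; Sub-group containing only ranks 0 and 1
(define pair-group  (MPI:group-incl world-group (s32vector 0 1)))

;; Sub-group containing everyone except rank 0
(define rest-group  (MPI:group-excl world-group (s32vector 0)))

;; Communicator spanning PAIR-GROUP (see MPI:comm-create above)
(define pair-comm   (MPI:comm-create comm-world pair-group))
```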
Most communication procedures in this library come in several flavors, for fixnums, integers, floating point numbers, bytevectors, and for each of the SRFI-4 homogeneous vector types.
Performs a standard-mode blocking send. Argument DEST is the rank of the destination process. Argument TAG is an integer message tag. TYPE is one of the following: fixnum, int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
Performs a standard-mode blocking receive. Argument SOURCE is the rank of the source process. Argument TAG is an integer message tag. Argument LENGTH is present only in the vector procedures and gives the number of elements to receive. TYPE is one of the following: fixnum, int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
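Based on the TYPE-flavor naming described above, a typed point-to-point exchange might be sketched as follows. The -s32vector procedure names and the exact argument orders are assumptions derived from the descriptions, not confirmed signatures:

```scheme
(use mpi srfi-4)
(MPI:init)
(define comm-world (MPI:get-comm-world))
(define myrank     (MPI:comm-rank comm-world))

(cond ((zero? myrank)
       ;; rank 0 sends three integers to rank 1 with tag 0
       (MPI:send-s32vector (s32vector 1 2 3) 1 0 comm-world))
      ((= myrank 1)
       ;; the vector flavors take an extra LENGTH argument on receive
       (let ((v (MPI:receive-s32vector 3 MPI:any-source MPI:any-tag
                                       comm-world)))
         (print "received " v))))
```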
Checks for an incoming message. This is a blocking call that returns only after a matching message is found. Argument SOURCE can be MPI:any-source. Argument TAG can be MPI:any-tag.
Broadcasts a message from the process with rank root to all other processes of the group. TYPE is one of the following: fixnum, int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
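A broadcast might look like this sketch; the name MPI:broadcast-s32vector follows the TYPE-suffix convention above, and the (data root comm) argument order is an assumption:

```scheme
(use mpi srfi-4)
(MPI:init)
(define comm-world (MPI:get-comm-world))
(define myrank     (MPI:comm-rank comm-world))

;; The root (rank 0) supplies the payload; the return value at every
;; process is the root's data
(define data   (if (zero? myrank) (s32vector 10 20 30) (s32vector 0 0 0)))
(define result (MPI:broadcast-s32vector data 0 comm-world))
```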
Sends data from the root process to all processes in a group, and returns the data received by the calling process. Argument SENDCOUNT is the number of elements sent to each process. Argument DATA is only required at the root process; all other processes can invoke this procedure with (void) as DATA. TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
Sends variable-length data from the root process to all processes in a group, and returns the data received by the calling process. Argument DATA is only required at the root process, and is a list of values of type TYPE, where each element of the list is sent to the process of corresponding rank. All other processes can invoke this procedure with (void) as DATA. TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
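A fixed-length scatter might be sketched as follows; the name MPI:scatter-s32vector and the (data sendcount root comm) argument order are assumptions based on the conventions above:

```scheme
(use mpi srfi-1 srfi-4)
(MPI:init)
(define comm-world (MPI:get-comm-world))
(define size       (MPI:comm-size comm-world))
(define myrank     (MPI:comm-rank comm-world))

;; Only the root needs real data; everyone else passes (void)
(define data
  (if (zero? myrank)
      (list->s32vector (iota (* 2 size)))   ; 0, 1, ..., 2*SIZE-1
      (void)))

;; Each process receives its own two-element slice (SENDCOUNT = 2)
(define my-slice (MPI:scatter-s32vector data 2 0 comm-world))
```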
Gathers data from a group of processes, where each process sends data of the same length. Argument SENDCOUNT is the number of data elements being sent by each process. TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
Gathers data from a group of processes, where each process can send data of variable length. TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
Gathers data of variable length from all processes and distributes it to all processes. TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
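Gather and allgather might be combined as in the following sketch. The -s32vector names follow the TYPE-suffix convention above, but the exact arities (in particular whether allgather takes a root argument) are assumptions to be checked against the egg's documentation:

```scheme
(use mpi srfi-4)
(MPI:init)
(define comm-world (MPI:get-comm-world))
(define myrank     (MPI:comm-rank comm-world))

;; Root 0 collects one element from each process (SENDCOUNT = 1);
;; only the root sees the combined vector
(define at-root    (MPI:gather-s32vector (s32vector myrank) 1 0 comm-world))

;; Every process receives the combined vector of all ranks
(define everywhere (MPI:allgather-s32vector (s32vector myrank) comm-world))
```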
Reduces values on all processes within a group, using a global reduce operation, and returns the result at the root process. OP is one of the following: MPI:i_max, MPI:i_min, MPI:i_sum, MPI:i_prod, MPI:i_land, MPI:i_lor, MPI:i_xor (integer operations); and MPI:f_max, MPI:f_min, MPI:f_sum, MPI:f_prod (floating point operations). TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
Reduces values on all processes within a group, using a global reduce operation, and returns the result at each process. OP is one of the following: MPI:i_max, MPI:i_min, MPI:i_sum, MPI:i_prod, MPI:i_land, MPI:i_lor, MPI:i_xor (integer operations); and MPI:f_max, MPI:f_min, MPI:f_sum, MPI:f_prod (floating point operations). TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
Computes a partial reduction across the processes in a group. OP is one of the following: MPI:i_max, MPI:i_min, MPI:i_sum, MPI:i_prod, MPI:i_land, MPI:i_lor, MPI:i_xor (integer operations); and MPI:f_max, MPI:f_min, MPI:f_sum, MPI:f_prod (floating point operations). TYPE is one of the following: int, flonum, bytevector, s8vector, u8vector, s16vector, u16vector, s32vector, u32vector, f32vector, f64vector
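The three reduction procedures might be used as in this sketch; the names MPI:reduce-int, MPI:allreduce-int, and MPI:scan-int are assumed from the TYPE-suffix convention above:

```scheme
(use mpi)
(MPI:init)
(define comm-world (MPI:get-comm-world))
(define myrank     (MPI:comm-rank comm-world))

;; Sum of all ranks, delivered at root 0 only
(define total     (MPI:reduce-int myrank MPI:i_sum 0 comm-world))

;; Same sum, delivered at every process
(define total-all (MPI:allreduce-int myrank MPI:i_sum comm-world))

;; Partial sums: process k obtains 0 + 1 + ... + k
(define prefix    (MPI:scan-int myrank MPI:i_sum comm-world))
```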
(MPI:init)

(define comm-world (MPI:get-comm-world))
(define size       (MPI:comm-size comm-world))
(define myrank     (MPI:comm-rank comm-world))

;; Barrier
(MPI:barrier comm-world)

(if (zero? myrank)
    (let ((data "aa"))
      (print myrank ": sending " data)
      (MPI:send (string->blob data) 1 0 comm-world)
      (let ((n (MPI:receive MPI:any-source MPI:any-tag comm-world)))
        (print myrank ": received " (blob->string n))))
    (let* ((n  (blob->string (MPI:receive MPI:any-source MPI:any-tag comm-world)))
           (n1 (string-append n "a")))
      (print myrank ": received " n ", resending " n1)
      (MPI:send (string->blob n1) (modulo (+ myrank 1) size) 0 comm-world)))
Copyright Ivan Raikov and the Okinawa Institute of Science and Technology. Based on the OCaml MPI library by Xavier Leroy. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. A full copy of the GPL license can be found at <http://www.gnu.org/licenses/>.