!      if( pe.EQ.0 )call mpp_sync()
!
! Here only PE 0 reaches the barrier, where it will wait
! indefinitely. While this is a particularly egregious example to
! illustrate the coding flaw, more subtle versions of the same are
! among the most common errors in parallel code.
!
! It is therefore important to be conscious of the context of a
! subroutine or function call, and the implied synchronization. There
! are certain calls here (e.g. mpp_declare_pelist, mpp_init,
! mpp_malloc, mpp_set_stack_size) which must be called by all
! PEs. There are others which must be called by a subset of PEs (here
! called a pelist), and then by all the PEs in the
! pelist (e.g. mpp_max, mpp_sum, mpp_sync). Still
! others imply no synchronization at all. I will make every effort to
! highlight the context of each call in the MPP modules, so that the
! implicit synchronization is spelt out.
!
! For performance it is necessary to keep synchronization as limited
! as the algorithm being implemented will allow. For instance, a single
! message between two PEs should only imply synchronization across the
! PEs in question. A global synchronization (or barrier)
! is likely to be slow, and is best avoided. But codes first
! parallelized on a Cray T3E tend to have many global syncs, as very
! fast barriers were implemented there in hardware.
!
! Another reason to use pelists is to run a single program in MPMD
! mode, where different PE subsets work on different portions of the
! code. A typical example is to assign an ocean model and atmosphere
! model to different PE subsets, and couple them concurrently instead of
! running them serially. The MPP module provides the notion of a
! current pelist, which is set when a group of PEs branch off
! into a subset. Subsequent calls that omit the pelist optional
! argument (seen below in many of the individual calls) assume that the
! implied synchronization is across the current pelist. The calls
! mpp_root_pe and mpp_npes also return the values
! appropriate to the current pelist. The mpp_set_current_pelist
! call is provided to set the current pelist.
!
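! As a minimal sketch of this MPMD usage (the pelist construction and
! the run_ocean/run_atmos drivers below are hypothetical, while
! mpp_declare_pelist, mpp_set_current_pelist, mpp_pe and mpp_npes are
! the actual MPP calls):
!
!      integer, allocatable :: ocean_pelist(:)
!      integer :: p
!      allocate( ocean_pelist(mpp_npes()/2) )
!      ocean_pelist = (/ ( p, p = 0,mpp_npes()/2-1 ) /) !first half of the PEs
!      call mpp_declare_pelist( ocean_pelist )          !called by all PEs
!      if( ANY(ocean_pelist.EQ.mpp_pe()) )then
!          call mpp_set_current_pelist( ocean_pelist )  !ocean PEs branch off
!          call run_ocean()                             !hypothetical ocean driver
!      else
!          call run_atmos()                             !hypothetical atmosphere driver
!      end if
!
! The atmosphere PEs would similarly declare and set a pelist of their
! own; thereafter, calls such as mpp_sum made with no pelist argument
! synchronize only across the caller's current pelist.
!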
!      call mpp_error
!      call mpp_error(FATAL)
!
! are equivalent.
!
! The argument order
!
!      call mpp_error( routine, errormsg, errortype )
!
! is also provided to support legacy code. In this version of the
! call, none of the arguments may be omitted.
!
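! For comparison, a typical call in the standard argument order (the
! routine name and message text below are made up for illustration):
!
!      call mpp_error( FATAL, 'MY_ROUTINE: array size mismatch.' )
!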
! The behaviour of mpp_error for a WARNING can be
! controlled with an additional call, mpp_set_warn_level.
!
!      call mpp_set_warn_level(ERROR)
!
! causes mpp_error to treat WARNING
! exactly like FATAL.
!
!      call mpp_set_warn_level(WARNING)
!
! resets to the default behaviour described above.
!
! mpp_error also has an internal error state which
! maintains knowledge of whether a warning has been issued. This can be
! used at startup in a subroutine that checks if the model has been
! properly configured. You can generate a series of warnings using
! mpp_error, and then check at the end if any warnings have been
! issued using the function mpp_error_state(). If the value of
! this is WARNING, at least one warning has been issued, and
! the user can take appropriate action:
!
!      if( ... )call mpp_error( WARNING, '...' )
!      if( ... )call mpp_error( WARNING, '...' )
!      if( ... )call mpp_error( WARNING, '...' )
!      ...
!      if( mpp_error_state().EQ.WARNING )call mpp_error( FATAL, '...' )
!
!      real, dimension(n) :: a
!      if( pe.EQ.0 )then
!          do p = 1,npes-1
!              call mpp_transmit( a, n, p, a, n, NULL_PE )
!          end do
!      else
!          call mpp_transmit( a, n, NULL_PE, a, n, 0 )
!      end if
!
!      call mpp_transmit( a, n, ALL_PES, a, n, 0 )
!
! The do loop and the broadcast operation above are equivalent.
!
! Two overloaded calls mpp_send and mpp_recv have also been
! provided. mpp_send calls mpp_transmit
! with get_pe=NULL_PE. mpp_recv calls
! mpp_transmit with put_pe=NULL_PE. Thus
! the do loop above could be written more succinctly:
!
!      if( pe.EQ.0 )then
!          do p = 1,npes-1
!              call mpp_send( a, n, p )
!          end do
!      else
!          call mpp_recv( a, n, 0 )
!      end if
!
!      use mpp_mod
!      integer :: pe, chksum
!      real :: a(:)
!      pe = mpp_pe()
!      chksum = mpp_chksum( a, (/pe/) )
!
! The additional functionality of mpp_chksum over
! serial checksums is to compute the checksum across the PEs in
! pelist. The answer is guaranteed to be the same for
! the same distributed array irrespective of how it has been
! partitioned.
!
! If pelist is omitted, the context is assumed to be the
! current pelist. This call implies synchronization across the PEs in
! pelist, or the current pelist if pelist is absent.
!
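! For example, a checksum taken across the current pelist, with the
! pelist argument omitted (a is assumed to hold this PE's portion of
! a distributed array):
!
!      chksum = mpp_chksum( a )
!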