FMS  2024.03
Flexible Modeling System
mpp_domains_mod

Domain decomposition and domain update for message-passing codes. More...

Data Types

interface  check_data_size
 Private interface for internal usage, compares two sizes. More...
 
type  contact_type
 Type used to represent the contact between tiles. More...
 
type  domain1d
 One dimensional domain used to manage shared data access between pes. More...
 
type  domain1d_spec
 A private type used to specify index limits for a domain decomposition. More...
 
type  domain2d
 The domain2D type contains all the necessary information to define the global, compute and data domains of each task, as well as the PE associated with the task. The PEs from which remote data may be acquired to update the data domain are also contained in a linked list of neighbours. More...
 
type  domain2d_spec
 Private type to specify multiple index limits and pe information for a 2D domain. More...
 
type  domain_axis_spec
 Used to specify index limits along an axis of a domain. More...
 
type  domaincommunicator2d
 Used for sending domain data between PEs. More...
 
type  domainug
 Domain information for managing data on unstructured grids. More...
 
type  index_type
 index bounds for use in nestSpec More...
 
interface  mpp_broadcast_domain
 Broadcasts domain to every pe. Only useful outside the context of its own pelist. More...
 
interface  mpp_check_field
 Parallel checking between two ensembles which run concurrently on different sets of PEs.
There are two forms of the mpp_check_field call. The 2D version is generally to be used; the 3D version is built by repeated calls to the 2D version.

Example usage: More...
 
interface  mpp_complete_do_update
 Private interface used for non blocking updates. More...
 
interface  mpp_complete_group_update
 Completes a pending non-blocking group update. Must follow a call to mpp_start_group_update. More...
 
interface  mpp_complete_update_domains
 Must be used after a call to mpp_start_update_domains in order to complete a nonblocking domain update. See mpp_start_update_domains for more info. More...
 
interface  mpp_copy_domain
 Copy 1D or 2D domain. More...
 
interface  mpp_create_group_update
 Constructor for the mpp_group_update_type which is then used with mpp_start_group_update. More...
 
interface  mpp_deallocate_domain
 Deallocate given 1D or 2D domain. More...
 
interface  mpp_define_domains
 Set up a domain decomposition. More...
 
interface  mpp_define_layout
 Retrieve the layout associated with a domain decomposition. Given a global 2D domain and the number of divisions in the decomposition ndivs (usually the PE count unless some domains are masked), this call returns a 2D domain layout. By default, mpp_define_layout will attempt to divide the 2D index space into domains that maintain the aspect ratio of the global domain. If this cannot be done, the algorithm favours domains that are longer in x than y, a preference that could improve vector performance.
Example usage: More...
 
interface  mpp_define_null_domain
 Defines a nullified 1D or 2D domain. More...
 
interface  mpp_do_check
 Private interface to update the data domain of a 3D field whose computational domains have been computed. More...
 
interface  mpp_do_get_boundary
 
interface  mpp_do_get_boundary_ad
 
interface  mpp_do_global_field
 Private helper interface used by mpp_global_field. More...
 
interface  mpp_do_global_field_ad
 
interface  mpp_do_group_update
 
interface  mpp_do_redistribute
 
interface  mpp_do_update
 Private interface used for mpp_update_domains. More...
 
interface  mpp_do_update_ad
 Passes a data field from an unstructured grid to a structured grid
Example usage: More...
 
interface  mpp_do_update_nest_coarse
 Used by mpp_update_nest_coarse to perform domain updates. More...
 
interface  mpp_do_update_nest_fine
 
interface  mpp_get_boundary
 Get the boundary data for a symmetric domain when the data is at the C, E, or N-cell center.
mpp_get_boundary is used to get the boundary data for a symmetric domain when the data is at the C, E, or N-cell center. For the cubic grid, the data should always be at the C-cell center.
Example usage: More...
 
interface  mpp_get_boundary_ad
 
interface  mpp_get_compute_domain
 These routines retrieve the axis specifications associated with the compute domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of 1D.
Example usage: More...
 
interface  mpp_get_compute_domains
 Retrieve the entire array of compute domain extents associated with a decomposition. More...
 
interface  mpp_get_data_domain
 These routines retrieve the axis specifications associated with the data domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of 1D.
Example usage: More...
 
interface  mpp_get_domain_extents
 
interface  mpp_get_f2c_index
 Get the index of the data passed from fine grid to coarse grid.
Example usage: More...
 
interface  mpp_get_global_domain
 These routines retrieve the axis specifications associated with the global domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of 1D.
Example usage: More...
 
interface  mpp_get_global_domains
 
interface  mpp_get_layout
 Retrieve the layout associated with a domain decomposition. The 1D version of this call returns the number of divisions that was assigned to this decomposition axis. The 2D version of this call returns an array of dimension 2 holding the results on two axes.
Example usage: More...
 
interface  mpp_get_memory_domain
 These routines retrieve the axis specifications associated with the memory domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of 1D.
Example usage: More...
 
interface  mpp_get_neighbor_pe
 Retrieve PE number of a neighboring domain. More...
 
interface  mpp_get_pelist
 Retrieve the list of PEs associated with a domain decomposition. The 1D version of this call returns an array of the PEs assigned to this 1D domain decomposition. In addition, the optional argument pos may be used to retrieve the 0-based position of the domain local to the calling PE, i.e., domain%list(pos)%pe is the local PE, as returned by mpp_pe(). The 2D version of this call is identical to the 1D version. More...
 
interface  mpp_global_field
 Fill in a global array from domain-decomposed arrays.
More...
 
interface  mpp_global_field_ad
 
interface  mpp_global_field_ug
 Same functionality as mpp_global_field but for unstructured domains. More...
 
interface  mpp_global_max
 Global max of domain-decomposed arrays.
mpp_global_max is used to get the maximum value of a domain-decomposed array on each PE. MPP_TYPE_ can be of type integer or real; of 4-byte or 8-byte kind; of rank up to 5. The dimension of locus must equal the rank of field.

All PEs in a domain decomposition must call mpp_global_max, and each will have the result upon exit. The function mpp_global_min, with identical syntax, is also available. More...
 
interface  mpp_global_min
 Global min of domain-decomposed arrays.
mpp_global_min is used to get the minimum value of a domain-decomposed array on each PE. MPP_TYPE_ can be of type integer or real; of 4-byte or 8-byte kind; of rank up to 5. The dimension of locus must equal the rank of field.

All PEs in a domain decomposition must call mpp_global_min, and each will have the result upon exit. The function mpp_global_max, with identical syntax, is also available. More...
 
interface  mpp_global_sum
 Global sum of domain-decomposed arrays.
mpp_global_sum is used to get the sum of a domain-decomposed array on each PE. MPP_TYPE_ can be of type integer, complex, or real; of 4-byte or 8-byte kind; of rank up to 5. More...
 
interface  mpp_global_sum_ad
 
interface  mpp_global_sum_tl
 
type  mpp_group_update_type
 Used for updates on a group. More...
 
interface  mpp_modify_domain
 Modifies the extents (compute, data and global) of a given domain. More...
 
interface  mpp_nullify_domain_list
 Nullify domain list. This interface is needed in mpp_domains_test. The 1D case can be added if needed.
Example usage: More...
 
interface  mpp_pass_sg_to_ug
 Passes data from a structured grid to an unstructured grid
Example usage: More...
 
interface  mpp_pass_ug_to_sg
 Passes a data field from an unstructured grid to a structured grid
Example usage: More...
 
interface  mpp_redistribute
 Reorganization of distributed global arrays.
mpp_redistribute is used to reorganize a distributed array. MPP_TYPE_ can be of type integer, complex, or real; of 4-byte or 8-byte kind; of rank up to 5.
Example usage: call mpp_redistribute( domain_in, field_in, domain_out, field_out ) More...
 
interface  mpp_reset_group_update_field
 
interface  mpp_set_compute_domain
 These routines set the axis specifications associated with the compute domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of 1D.
Example usage: More...
 
interface  mpp_set_data_domain
 These routines set the axis specifications associated with the data domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of 1D.
Example usage: More...
 
interface  mpp_set_global_domain
 These routines set the axis specifications associated with the global domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of 1D.
Example usage: More...
 
interface  mpp_start_do_update
 Private interface used for non blocking updates. More...
 
interface  mpp_start_group_update
 Starts a non-blocking group update. Must be followed by a call to mpp_complete_group_update. An mpp_group_update_type can be created with mpp_create_group_update. More...
 
interface  mpp_start_update_domains
 Interface to start halo updates. mpp_start_update_domains is used to start a halo update of a domain-decomposed array on each PE. MPP_TYPE_ can be of type complex, integer, logical or real; of 4-byte or 8-byte kind; of rank up to 5. The vector version (with two input data fields) is only present for real types.

mpp_start_update_domains must be paired with mpp_complete_update_domains. In mpp_start_update_domains, receive buffers are pre-posted (non-blocking receives) and the data on the computational domain is packed and sent (non-blocking sends) to the other processors. In mpp_complete_update_domains, the buffers are unpacked to fill the halo and mpp_sync_self is called to ensure communication has completed by the last call of mpp_complete_update_domains.

Each mpp_update_domains call can be replaced by the combination of mpp_start_update_domains and mpp_complete_update_domains. The arguments in mpp_start_update_domains and mpp_complete_update_domains should be exactly the same as in the mpp_update_domains call being replaced, except that there is no optional argument "complete". The following are examples of how to replace mpp_update_domains with mpp_start_update_domains/mpp_complete_update_domains. More...
 
interface  mpp_update_domains
 Performs halo updates for a given domain.
More...
 
interface  mpp_update_domains_ad
 Similar to mpp_update_domains, but updates adjoint domains. More...
 
interface  mpp_update_nest_coarse
 Pass the data from the fine grid to fill the buffer, ready to be interpolated onto the coarse grid.
Example usage: More...
 
interface  mpp_update_nest_fine
 Pass the data from the coarse grid to fill the buffer, ready to be interpolated onto the fine grid.
Example usage: More...
 
type  nest_domain_type
 Domain with nested fine and coarse tiles. More...
 
type  nest_level_type
 Private type to hold data for each level of nesting. More...
 
type  nestspec
 Used to specify bounds and index information for nested tiles as a linked list. More...
 
type  nonblock_type
 Used for nonblocking data transfer. More...
 
interface  operator(.eq.)
 Equality/inequality operators for domain types.
More...
 
interface  operator(.ne.)
 
type  overlap_type
 Type for overlapping data. More...
 
type  overlapspec
 Private type for overlap specifications. More...
 
type  tile_type
 Upper and lower x and y bounds for a tile. More...
 
type  unstruct_axis_spec
 Private type for axis specification data for an unstructured grid. More...
 
type  unstruct_domain_spec
 Private type for axis specification data for an unstructured domain. More...
 
type  unstruct_overlap_type
 Private type. More...
 
type  unstruct_pass_type
 Private type. More...
 

Functions/Subroutines

subroutine add_check_overlap (overlap_out, overlap_in)
 this routine adds the overlap_in into overlap_out
 
subroutine add_update_overlap (overlap_out, overlap_in)
 
subroutine allocate_check_overlap (overlap, count)
 
subroutine allocate_nest_overlap (overlap, count)
 
subroutine allocate_update_overlap (overlap, count)
 
subroutine apply_cyclic_offset (lstart, lend, offset, gstart, gend, gsize)
 add offset to the index
 
subroutine check_alignment (is, ie, js, je, isg, ieg, jsg, jeg, alignment)
 
subroutine check_data_size_1d (module, str1, size1, str2, size2)
 
subroutine check_data_size_2d (module, str1, isize1, jsize1, str2, isize2, jsize2)
 
subroutine check_message_size (domain, update, send, recv, position)
 
subroutine check_overlap_pe_order (domain, overlap, name)
 
subroutine compute_overlap_coarse_to_fine (nest_domain, overlap, extra_halo, position, name)
 
subroutine compute_overlap_fine_to_coarse (nest_domain, overlap, position, name)
 This routine will compute the send and recv information between the overlapped nesting regions. The data is assumed to be on the T-cell center.
 
subroutine compute_overlap_sg2ug (UG_domain, SG_domain)
 
subroutine compute_overlap_ug2sg (UG_domain)
 
subroutine compute_overlaps (domain, position, update, check, ishift, jshift, x_cyclic_offset, y_cyclic_offset, whalo, ehalo, shalo, nhalo)
 Computes remote domain overlaps. More...
 
subroutine compute_overlaps_fold_east (domain, position, ishift, jshift)
 Computes remote domain overlaps (assumes only one in each direction); calculates the overlapping for T, E, C, and N-cells separately. Assumes folded-east and y-cyclic boundary conditions.
 
subroutine compute_overlaps_fold_south (domain, position, ishift, jshift)
 Computes remote domain overlaps (assumes only one in each direction); calculates the overlapping for T, E, C, and N-cells separately.
 
subroutine compute_overlaps_fold_west (domain, position, ishift, jshift)
 Computes remote domain overlaps (assumes only one in each direction); calculates the overlapping for T, E, C, and N-cells separately.
 
subroutine convert_index_back (domain, ishift, jshift, rotate, is_in, ie_in, js_in, je_in, is_out, ie_out, js_out, je_out)
 
integer function convert_index_to_coarse (domain, ishift, jshift, tile_coarse, istart_coarse, iend_coarse, jstart_coarse, jend_coarse, ntiles_coarse, tile_in, is_in, ie_in, js_in, je_in, is_out, ie_out, js_out, je_out, rotate_out)
 
integer function convert_index_to_nest (domain, ishift, jshift, tile_coarse, istart_coarse, iend_coarse, jstart_coarse, jend_coarse, ntiles_coarse, tile_in, is_in, ie_in, js_in, je_in, is_out, ie_out, js_out, je_out, rotate_out)
 This routine will convert the global coarse grid index to nest grid index.
 
subroutine copy_nest_overlap (overlap_out, overlap_in)
 
subroutine deallocate_comm (d_comm)
 
subroutine deallocate_domain2d_local (domain)
 
subroutine deallocate_nest_overlap (overlap)
 
subroutine deallocate_overlap_type (overlap)
 
subroutine deallocate_overlapspec (overlap)
 
subroutine deallocate_unstruct_overlap_type (overlap)
 
subroutine deallocate_unstruct_pass_type (domain)
 
subroutine debug_message_size (overlap, name)
 
subroutine define_contact_point (domain, position, num_contact, tile1, tile2, align1, align2, refine1, refine2, istart1, iend1, jstart1, jend1, istart2, iend2, jstart2, jend2, isgList, iegList, jsgList, jegList)
 compute the overlapping between tiles for the T-cell. More...
 
subroutine define_nest_level_type (nest_domain, x_refine, y_refine, extra_halo)
 
subroutine expand_check_overlap_list (overlaplist, npes)
 
subroutine expand_update_overlap_list (overlapList, npes)
 
subroutine fill_contact (Contact, tile, is1, ie1, js1, je1, is2, ie2, js2, je2, align1, align2, refine1, refine2)
 always fill the contact according to index order.
 
subroutine fill_corner_contact (eCont, sCont, wCont, nCont, isg, ieg, jsg, jeg, numR, numS, tileRecv, tileSend, is1Recv, ie1Recv, js1Recv, je1Recv, is2Recv, ie2Recv, js2Recv, je2Recv, is1Send, ie1Send, js1Send, je1Send, is2Send, ie2Send, js2Send, je2Send, align1Recv, align2Recv, align1Send, align2Send, whalo, ehalo, shalo, nhalo, tileMe)
 
subroutine fill_overlap (overlap, domain, m, is, ie, js, je, isc, iec, jsc, jec, isg, ieg, jsg, jeg, dir, reverse, symmetry)
 
subroutine fill_overlap_recv_fold (overlap, domain, m, is, ie, js, je, isd, ied, jsd, jed, isg, ieg, dir, ishift, position, ioff, middle, symmetry)
 
subroutine fill_overlap_recv_nofold (overlap, domain, m, is, ie, js, je, isd, ied, jsd, jed, isg, ieg, dir, ioff, is_cyclic, folded, symmetry)
 
subroutine fill_overlap_send_fold (overlap, domain, m, is, ie, js, je, isc, iec, jsc, jec, isg, ieg, dir, ishift, position, ioff, middle, symmetry)
 
subroutine fill_overlap_send_nofold (overlap, domain, m, is, ie, js, je, isc, iec, jsc, jec, isg, ieg, dir, ioff, is_cyclic, folded, symmetry)
 
integer function find_index (array, index_data, start_pos)
 
integer function find_key (key, sorted, insert)
 
subroutine free_comm (domain_id, l_addr, l_addr2)
 
subroutine get_coarse_index (rotate, is, ie, js, je, iadd, jadd, is_c, ie_c, js_c, je_c)
 
type(domaincommunicator2d) function, pointer get_comm (domain_id, l_addr, l_addr2)
 
subroutine get_fold_index_east (jsg, jeg, ieg, jshift, position, is, ie, js, je)
 
subroutine get_fold_index_north (isg, ieg, jeg, ishift, position, is, ie, js, je)
 
subroutine get_fold_index_south (isg, ieg, jsg, ishift, position, is, ie, js, je)
 
subroutine get_fold_index_west (jsg, jeg, isg, jshift, position, is, ie, js, je)
 
integer function get_nest_vector_recv (nest_domain, update_x, update_y, ind_x, ind_y, start_pos, pelist)
 
integer function get_nest_vector_send (nest_domain, update_x, update_y, ind_x, ind_y, start_pos, pelist)
 
subroutine get_nnest (domain, num_nest, tile_coarse, istart_coarse, iend_coarse, jstart_coarse, jend_coarse, x_refine, y_refine, nnest, t_coarse, ncross_coarse, rotate_coarse, is_coarse, ie_coarse, js_coarse, je_coarse, is_fine, ie_fine, js_fine, je_fine)
 
subroutine init_index_type (indexData)
 
subroutine init_overlap_type (overlap)
 
subroutine insert_check_overlap (overlap, pe, tileMe, dir, rotation, is, ie, js, je)
 
subroutine insert_nest_overlap (overlap, pe, is, ie, js, je, dir, rotation)
 
subroutine insert_overlap_type (overlap, pe, tileMe, tileNbr, is, ie, js, je, dir, rotation, from_contact)
 
subroutine insert_update_overlap (overlap, pe, is1, ie1, js1, je1, is2, ie2, js2, je2, dir, reverse, symmetry)
 
subroutine mpp_broadcast_domain_ug (domain)
 Broadcast domain (useful only outside the context of its own pelist)
 
subroutine mpp_compute_block_extent (isg, ieg, ndivs, ibegin, iend)
 Computes the extents of a grid block. More...
 
subroutine mpp_compute_extent (isg, ieg, ndivs, ibegin, iend, extent)
 Computes extents for a grid decomposition with the given indices and divisions.
 
subroutine mpp_deallocate_domain1d (domain)
 
subroutine mpp_deallocate_domain2d (domain)
 
subroutine mpp_deallocate_domainug (domain)
 
subroutine mpp_define_domains1d (global_indices, ndivs, domain, pelist, flags, halo, extent, maskmap, memory_size, begin_halo, end_halo)
 Define data and computational domains on a 1D set of data (isg:ieg) and assign them to PEs. More...
 
subroutine mpp_define_domains2d (global_indices, layout, domain, pelist, xflags, yflags, xhalo, yhalo, xextent, yextent, maskmap, name, symmetry, memory_size, whalo, ehalo, shalo, nhalo, is_mosaic, tile_count, tile_id, complete, x_cyclic_offset, y_cyclic_offset)
 Define 2D data and computational domain on global rectilinear cartesian domain (isg:ieg,jsg:jeg) and assign them to PEs. More...
 
subroutine mpp_define_io_domain (domain, io_layout)
 Define the layout for IO PEs for the given domain. More...
 
subroutine mpp_define_layout2d (global_indices, ndivs, layout)
 
subroutine mpp_define_mosaic (global_indices, layout, domain, num_tile, num_contact, tile1, tile2, istart1, iend1, jstart1, jend1, istart2, iend2, jstart2, jend2, pe_start, pe_end, pelist, whalo, ehalo, shalo, nhalo, xextent, yextent, maskmap, name, memory_size, symmetry, xflags, yflags, tile_id)
 Defines a domain for mosaic tile grids. More...
 
subroutine mpp_define_mosaic_pelist (sizes, pe_start, pe_end, pelist, costpertile)
 Defines a pelist for use with mosaic tiles. More...
 
subroutine mpp_define_nest_domains (nest_domain, domain, num_nest, nest_level, tile_fine, tile_coarse, istart_coarse, icount_coarse, jstart_coarse, jcount_coarse, npes_nest_tile, x_refine, y_refine, extra_halo, name)
 Set up a domain to pass data between aligned coarse and fine grid of nested model. More...
 
subroutine mpp_define_null_domain1d (domain)
 
subroutine mpp_define_null_domain2d (domain)
 
subroutine mpp_define_null_ug_domain (domain)
 
subroutine mpp_define_unstruct_domain (UG_domain, SG_domain, npts_tile, grid_nlev, ndivs, npes_io_group, grid_index, name)
 
logical(l8_kind) function mpp_domain_ug_is_tile_root_pe (domain)
 
logical function mpp_domainug_eq (a, b)
 Overload the .eq. for UG.
 
logical function mpp_domainug_ne (a, b)
 Overload the .ne. for UG.
 
subroutine mpp_get_c2f_index (nest_domain, is_fine, ie_fine, js_fine, je_fine, is_coarse, ie_coarse, js_coarse, je_coarse, dir, nest_level, position)
 Get the index of the data passed from coarse grid to fine grid. More...
 
subroutine mpp_get_f2c_index_coarse (nest_domain, is_coarse, ie_coarse, js_coarse, je_coarse, nest_level, position)
 
subroutine mpp_get_f2c_index_fine (nest_domain, is_coarse, ie_coarse, js_coarse, je_coarse, is_fine, ie_fine, js_fine, je_fine, nest_level, position)
 
integer(i4_kind) function mpp_get_io_domain_ug_layout (domain)
 
type(domain2d) function, pointer mpp_get_nest_coarse_domain (nest_domain, nest_level)
 
type(domain2d) function, pointer mpp_get_nest_fine_domain (nest_domain, nest_level)
 
integer function mpp_get_nest_fine_npes (nest_domain, nest_level)
 
subroutine mpp_get_nest_fine_pelist (nest_domain, nest_level, pelist)
 
integer function mpp_get_nest_npes (nest_domain, nest_level)
 
subroutine mpp_get_nest_pelist (nest_domain, nest_level, pelist)
 
subroutine mpp_get_ug_compute_domain (domain, begin, end, size)
 
subroutine mpp_get_ug_compute_domains (domain, begin, end, size)
 
subroutine mpp_get_ug_domain_grid_index (domain, grid_index)
 
integer function mpp_get_ug_domain_npes (domain)
 
integer function mpp_get_ug_domain_ntiles (domain)
 
subroutine mpp_get_ug_domain_pelist (domain, pelist)
 
integer function mpp_get_ug_domain_tile_id (domain)
 
subroutine mpp_get_ug_domain_tile_list (domain, tiles)
 
subroutine mpp_get_ug_domain_tile_pe_inf (domain, root_pe, npes, pelist)
 
subroutine mpp_get_ug_domains_index (domain, begin, end)
 
subroutine mpp_get_ug_global_domain (domain, begin, end, size)
 
type(domainug) function, pointer mpp_get_ug_io_domain (domain)
 
subroutine mpp_get_ug_sg_domain (UG_domain, SG_domain)
 
subroutine mpp_global_field_free_comm (domain, l_addr, ksize, l_addr2, flags)
 
type(domaincommunicator2d) function, pointer mpp_global_field_init_comm (domain, l_addr, isize_g, jsize_g, isize_l, jsize_l, ksize, l_addr2, flags, position)
 initializes a DomainCommunicator2D type for use in mpp_global_field
 
logical function mpp_is_nest_coarse (nest_domain, nest_level)
 
logical function mpp_is_nest_fine (nest_domain, nest_level)
 
subroutine mpp_modify_domain1d (domain_in, domain_out, cbegin, cend, gbegin, gend, hbegin, hend)
 Modifies the extents of a domain. More...
 
subroutine mpp_modify_domain2d (domain_in, domain_out, isc, iec, jsc, jec, isg, ieg, jsg, jeg, whalo, ehalo, shalo, nhalo)
 
logical function mpp_mosaic_defined ()
 Accessor function for value of mosaic_defined.
 
subroutine mpp_redistribute_free_comm (domain_in, l_addr, domain_out, l_addr2, ksize, lsize)
 
type(domaincommunicator2d) function, pointer mpp_redistribute_init_comm (domain_in, l_addrs_in, domain_out, l_addrs_out, isize_in, jsize_in, ksize_in, isize_out, jsize_out, ksize_out)
 
subroutine mpp_shift_nest_domains (nest_domain, domain, delta_i_coarse, delta_j_coarse, extra_halo)
 Based on mpp_define_nest_domains, but just resets the positioning of the nest. Modifies the parent/coarse start and end indices of the nest location and computes new overlaps of nest PEs on parent PEs. Ramstrom/HRD Moving Nest. More...
 
subroutine pop_key (sorted, idx, n_idx, key_idx)
 
subroutine print_nest_overlap (overlap, msg)
 
integer function push_key (sorted, idx, n_idx, insert, key, ival)
 
type(nestspec) function, pointer search_c2f_nest_overlap (nest_domain, nest_level, extra_halo, position)
 
type(nestspec) function, pointer search_f2c_nest_overlap (nest_domain, nest_level, position)
 
subroutine set_bound_overlap (domain, position)
 Sets up the overlapping for the boundary if the domain is symmetric.
 
subroutine set_check_overlap (domain, position)
 Sets up the overlapping for the boundary check if the domain is symmetric. The check will be done on the current PE for the east boundary for E-cells, the north boundary for N-cells, and the east and north boundaries for C-cells.
 
subroutine set_contact_point (domain, position)
 This routine sets the overlapping between tiles for E, C, and N-cells based on the T-cell overlapping.
 
subroutine set_domain_comm_inf (update)
 
integer(i8_kind) function set_domain_id (d_id, ksize, flags, gtype, position, whalo, ehalo, shalo, nhalo)
 
subroutine set_overlaps (domain, overlap_in, overlap_out, whalo_out, ehalo_out, shalo_out, nhalo_out)
 This routine sets up the overlapping for mpp_update_domains for an arbitrary halo update; the halo sizes used should be those defined in mpp_define_domains, and xhalo_out, yhalo_out need not be exactly the same as xhalo_in, yhalo_in. The tripolar grid situation is not currently handled, because in the folded north region the overlapping is specified through a list of points, not through rectangles; this will be addressed in the future.
 
subroutine set_single_overlap (overlap_in, overlap_out, isoff, ieoff, jsoff, jeoff, index, dir, rotation)
 

Variables

integer, save a2_sort_len =0
 length of the sorted memory list
 
integer, save a_sort_len =0
 length of the sorted memory list
 
integer(i8_kind), parameter addr2_base = 65536_i8_kind
 = 0x0000000000010000
 
integer, dimension(-1:max_addrs2), save addrs2_idx =-9999
 index of addr2 associated with d_comm
 
integer(i8_kind), dimension(max_addrs2), save addrs2_sorted =-9999
 list of sorted local addresses
 
integer, dimension(-1:max_addrs), save addrs_idx =-9999
 index of address associated with d_comm
 
integer(i8_kind), dimension(max_addrs), save addrs_sorted =-9999
 list of sorted local addresses
 
logical complete_group_update_on = .false.
 
logical complete_update = .false.
 
integer current_id_update = 0
 
type(domaincommunicator2d), dimension(:), allocatable, target, save d_comm
 domain communicators
 
integer, dimension(-1:max_fields), save d_comm_idx =-9999
 index of d_comm associated with sorted addresses
 
integer, save dc_sort_len =0
 length of the sorted comm keys list (= number of active communicators)
 
integer(i8_kind), dimension(max_fields), save dckey_sorted =-9999
 list of sorted local addresses
 
logical debug = .FALSE.
 
logical debug_message_passing = .false.
 Will check the consistency on the boundary between processors/tiles when updating the domain for a symmetric domain, and check the consistency on the north folded edge.
 
character(len=32) debug_update_domain = "none"
 namelist interface More...
 
integer debug_update_level = NO_CHECK
 
logical domain_clocks_on =.FALSE.
 
integer(i8_kind) domain_cnt =0
 
logical efp_sum_overflow_check = .false.
 If .true., always do overflow_check when doing EFP bitwise mpp_global_sum.
 
integer, parameter field_s = 0
 
integer, parameter field_x = 1
 
integer, parameter field_y = 2
 
integer group_pack_clock =0
 
integer group_recv_clock =0
 
integer group_send_clock =0
 
integer group_unpk_clock =0
 
integer group_update_buffer_pos = 0
 
integer group_wait_clock =0
 
integer(i8_kind), parameter gt_base = 256_i8_kind
 
integer, save i_sort_len =0
 length of the sorted domain ids list
 
integer, dimension(-1:max_dom_ids), save ids_idx =-9999
 index of d_comm associated with sorted addresses
 
integer(i8_kind), dimension(max_dom_ids), save ids_sorted =-9999
 list of sorted domain identifiers
 
integer(i8_kind), parameter ke_base = 281474976710656_i8_kind
 
integer, parameter max_addrs =512
 
integer, parameter max_addrs2 =128
 
integer, parameter max_dom_ids =128
 
integer, parameter max_fields =1024
 
integer, parameter max_nonblock_update = 100
 
integer, parameter maxlist = 100
 
integer, parameter maxoverlap = 200
 
logical module_is_initialized = .false.
 
logical mosaic_defined = .false.
 
integer mpp_domains_stack_hwm =0
 
integer mpp_domains_stack_size =0
 
integer, save n_addrs =0
 number of memory addresses used
 
integer, save n_addrs2 =0
 number of memory addresses used
 
integer, save n_comm =0
 number of communicators used
 
integer, save n_ids =0
 number of domain ids used (=i_sort_len; domain ids are never removed)
 
integer nest_pack_clock =0
 
integer nest_recv_clock =0
 
integer nest_send_clock =0
 
integer nest_unpk_clock =0
 
integer nest_wait_clock =0
 
integer, parameter no_check = -1
 
integer nonblock_buffer_pos = 0
 
type(nonblock_type), dimension(:), allocatable nonblock_data
 
integer nonblock_group_buffer_pos = 0
 
integer nonblock_group_pack_clock =0
 
integer nonblock_group_recv_clock =0
 
integer nonblock_group_send_clock =0
 
integer nonblock_group_unpk_clock =0
 
integer nonblock_group_wait_clock =0
 
integer nthread_control_loop = 8
 Determines the loop order for packing and unpacking. When the number of threads is greater than nthread_control_loop, the k-loop is moved outside and combined with the number of packs and unpacks. When the number of threads is less than or equal to nthread_control_loop, the k-loop is moved inside the pack/unpack loop but remains outside of the j,i loops.
 
type(domain1d), save, public null_domain1d
 
type(domain2d), save, public null_domain2d
 
type(domainug), save, public null_domainug
 
integer num_nonblock_group_update = 0
 
integer num_update = 0
 
integer pack_clock =0
 
integer pe
 
integer recv_clock =0
 
integer recv_clock_nonblock =0
 
integer send_clock =0
 
integer send_pack_clock_nonblock =0
 
logical start_update = .true.
 
integer unpk_clock =0
 
integer unpk_clock_nonblock =0
 
logical use_alltoallw = .false.
 
logical verbose =.FALSE.
 
integer wait_clock =0
 
integer wait_clock_nonblock =0
 
subroutine mpp_domains_init (flags)
 Initialize domain decomp package. More...
 
subroutine init_nonblock_type (nonblock_obj)
 Initializes a nonblock_type object. More...
 
subroutine mpp_domains_exit ()
 Exit mpp_domains_mod. Serves no particular purpose, but is provided should you require to re-initialize mpp_domains_mod, for some odd reason.
 
subroutine mpp_check_field_3d (field_in, pelist1, pelist2, domain, mesg, w_halo, s_halo, e_halo, n_halo, force_abort, position)
 This routine is used to do parallel checking of 3D data between n and m PEs. The comparison is done on pelist2. When the size of pelist2 is 1, the halo can be checked; otherwise, the halo cannot be checked. More...
 
subroutine mpp_check_field_2d (field_in, pelist1, pelist2, domain, mesg, w_halo, s_halo, e_halo, n_halo, force_abort, position)
 This routine is used to do parallel checking of 2D data between n and m PEs. The comparison is done on pelist2. When the size of pelist2 is 1, the halo can be checked; otherwise, the halo cannot be checked. More...
 
subroutine mpp_check_field_2d_type1 (field_in, pelist1, pelist2, domain, mesg, w_halo, s_halo, e_halo, n_halo, force_abort)
 This routine is used to check a field between a run on 1 PE (pelist2) and a run on n PEs (pelist1). The data to be checked is sent to pelist2 and all comparisons are done on pelist2. More...
 
subroutine mpp_check_field_2d_type2 (field_in, pelist1, pelist2, domain, mesg, force_abort)
 This routine is used to check a field between a run on m PEs (root PE) and a run on n PEs. This routine cannot check the halo. More...
 
subroutine mpp_broadcast_domain_1 (domain)
 broadcast domain (useful only outside the context of its own pelist)
 
subroutine mpp_broadcast_domain_nest_coarse (domain, tile_coarse)
 Broadcast nested domain (useful only outside the context of its own pelist)
 
subroutine mpp_domains_set_stack_size (n)
 Set user stack size. More...
 
logical function mpp_domain1d_eq (a, b)
 Equality comparison of two domain1D objects. More...
 
logical function mpp_domain1d_ne (a, b)
 Inequality comparison of two domain1D objects. More...
 
logical function mpp_domain2d_eq (a, b)
 Equality comparison of two domain2D objects. More...
 
logical function mpp_domain2d_ne (a, b)
 Inequality comparison of two domain2D objects. More...
 
subroutine mpp_get_compute_domain1d (domain, begin, end, size, max_size, is_global)
 Retrieve the compute domain index limits of a 1D domain. More...
 
subroutine mpp_get_data_domain1d (domain, begin, end, size, max_size, is_global)
 Retrieve the data domain index limits of a 1D domain. More...
 
subroutine mpp_get_global_domain1d (domain, begin, end, size, max_size)
 Retrieve the global domain index limits of a 1D domain. More...
 
subroutine mpp_get_memory_domain1d (domain, begin, end, size, max_size, is_global)
 Retrieve the memory domain index limits of a 1D domain. More...
 
subroutine mpp_get_compute_domain2d (domain, xbegin, xend, ybegin, yend, xsize, xmax_size, ysize, ymax_size, x_is_global, y_is_global, tile_count, position)
 Retrieve the compute domain index limits of a 2D domain. More...
 
subroutine mpp_get_data_domain2d (domain, xbegin, xend, ybegin, yend, xsize, xmax_size, ysize, ymax_size, x_is_global, y_is_global, tile_count, position)
 Retrieve the data domain index limits of a 2D domain. More...
 
subroutine mpp_get_global_domain2d (domain, xbegin, xend, ybegin, yend, xsize, xmax_size, ysize, ymax_size, tile_count, position)
 Retrieve the global domain index limits of a 2D domain. More...
 
subroutine mpp_get_memory_domain2d (domain, xbegin, xend, ybegin, yend, xsize, xmax_size, ysize, ymax_size, x_is_global, y_is_global, position)
 Retrieve the memory domain index limits of a 2D domain. More...
 
subroutine mpp_set_super_grid_indices (grid)
 Modifies the indices in the domain_axis_spec type to those of the supergrid. More...
 
subroutine mpp_create_super_grid_domain (domain)
 Modifies the indices of the input domain to create the supergrid domain. More...
 
subroutine mpp_set_compute_domain1d (domain, begin, end, size, is_global)
 Set the compute domain index limits of a 1D domain. More...
 
subroutine mpp_set_compute_domain2d (domain, xbegin, xend, ybegin, yend, xsize, ysize, x_is_global, y_is_global, tile_count)
 Set the compute domain index limits of a 2D domain. More...
 
subroutine mpp_set_data_domain1d (domain, begin, end, size, is_global)
 Set the data domain index limits of a 1D domain. More...
 
subroutine mpp_set_data_domain2d (domain, xbegin, xend, ybegin, yend, xsize, ysize, x_is_global, y_is_global, tile_count)
 Set the data domain index limits of a 2D domain. More...
 
subroutine mpp_set_global_domain1d (domain, begin, end, size)
 Set the global domain index limits of a 1D domain. More...
 
subroutine mpp_set_global_domain2d (domain, xbegin, xend, ybegin, yend, xsize, ysize, tile_count)
 Set the global domain index limits of a 2D domain. More...
 
subroutine mpp_get_domain_components (domain, x, y, tile_count)
 Retrieve 1D components of 2D decomposition. More...
 
subroutine mpp_get_compute_domains1d (domain, begin, end, size)
 Retrieve the compute domain extents of all domains in a 1D decomposition. More...
 
subroutine mpp_get_compute_domains2d (domain, xbegin, xend, xsize, ybegin, yend, ysize, position)
 Retrieve the compute domain extents of all domains in a 2D decomposition. More...
 
subroutine mpp_get_global_domains1d (domain, begin, end, size)
 Retrieve the global domain extents associated with a 1D decomposition. More...
 
subroutine mpp_get_global_domains2d (domain, xbegin, xend, xsize, ybegin, yend, ysize, position)
 Retrieve the global domain extents associated with a 2D decomposition. More...
 
subroutine mpp_get_domain_extents1d (domain, xextent, yextent)
 Retrieve the x and y extents of a domain decomposition. More...
 
subroutine mpp_get_domain_extents2d (domain, xextent, yextent)
 This will return xextent and yextent for each tile.
 
integer function mpp_get_domain_pe (domain)
 Return the PE to which the domain is assigned. More...
 
integer function mpp_get_domain_tile_root_pe (domain)
 Return the root PE of the domain's current tile. More...
 
integer function mpp_get_domain_tile_commid (domain)
 Return the MPI communicator id for the current tile of the domain. More...
 
integer function mpp_get_domain_commid (domain)
 Return the MPI communicator id for the domain. More...
 
type(domain2d) function, pointer mpp_get_io_domain (domain)
 Return a pointer to the I/O domain of the given domain. More...
 
subroutine mpp_get_pelist1d (domain, pelist, pos)
 Retrieve the list of PEs associated with a 1D domain decomposition. More...
 
subroutine mpp_get_pelist2d (domain, pelist, pos)
 Retrieve the list of PEs associated with a 2D domain decomposition. More...
 
subroutine mpp_get_layout1d (domain, layout)
 Retrieve the layout associated with a 1D domain decomposition. More...
 
subroutine mpp_get_layout2d (domain, layout)
 Retrieve the layout associated with a 2D domain decomposition. More...
 
subroutine mpp_get_domain_shift (domain, ishift, jshift, position)
 Returns the shift value in x and y-direction according to domain position. More...
 
subroutine mpp_get_neighbor_pe_1d (domain, direction, pe)
 Return the PE to the right/left of this PE-domain.
 
subroutine mpp_get_neighbor_pe_2d (domain, direction, pe)
 Return PE North/South/East/West of this PE-domain. direction must be NORTH, SOUTH, EAST or WEST.
 
subroutine nullify_domain2d_list (domain)
 Nullify the domain list of a 2D domain. More...
 
logical function mpp_domain_is_symmetry (domain)
 Returns whether the domain is symmetric. More...
 
logical function mpp_domain_is_initialized (domain)
 Returns whether the domain has been initialized. More...
 
logical function domain_update_is_needed (domain, whalo, ehalo, shalo, nhalo)
 Returns whether a domain update is needed for the given halo sizes. More...
 
type(overlapspec) function, pointer search_update_overlap (domain, whalo, ehalo, shalo, nhalo, position)
 This routine finds the update overlap whose halo sizes match the input whalo, ehalo, shalo, and nhalo.
 
type(overlapspec) function, pointer search_check_overlap (domain, position)
 This routine finds the check at a certain position.
 
type(overlapspec) function, pointer search_bound_overlap (domain, position)
 This routine finds the bound at a certain position.
 
integer function, dimension(size(domain%tile_id(:))) mpp_get_tile_id (domain)
 Returns the tile_id on current pe.
 
subroutine mpp_get_tile_list (domain, tiles)
 Return the tile_id on the current pelist. One tile per PE is assumed.
 
integer function mpp_get_ntile_count (domain)
 Returns number of tiles in mosaic.
 
integer function mpp_get_current_ntile (domain)
 Returns the number of tiles on the current PE.
 
logical function mpp_domain_is_tile_root_pe (domain)
 Returns whether the current PE is the root PE of the tile: if the number of tiles on the current PE is greater than 1, returns true; if isc==isg and jsc==jsg, also returns true; otherwise, returns false.
 
integer function mpp_get_tile_npes (domain)
 Returns number of processors used on current tile.
 
subroutine mpp_get_tile_pelist (domain, pelist)
 Get the processors list used on current tile.
 
subroutine mpp_get_tile_compute_domains (domain, xbegin, xend, ybegin, yend, position)
 Retrieve the compute domain bounds of the domains on the current tile. More...
 
integer function mpp_get_num_overlap (domain, action, p, position)
 Return the number of overlaps for the given action, PE index, and position. More...
 
subroutine mpp_get_update_size (domain, nsend, nrecv, position)
 Return the number of send and recv overlaps for the domain update. More...
 
subroutine mpp_get_update_pelist (domain, action, pelist, position)
 Retrieve the pelist associated with the given update action. More...
 
subroutine mpp_get_overlap (domain, action, p, is, ie, js, je, dir, rot, position)
 Retrieve the overlap index bounds, direction, and rotation for the given action and PE index. More...
 
character(len=name_length) function mpp_get_domain_name (domain)
 Return the name of the domain. More...
 
integer function mpp_get_domain_root_pe (domain)
 Return the root PE of the domain. More...
 
integer function mpp_get_domain_npes (domain)
 Return the number of PEs in the domain. More...
 
subroutine mpp_get_domain_pelist (domain, pelist)
 Retrieve the pelist of the domain. More...
 
integer function, dimension(2) mpp_get_io_domain_layout (domain)
 Return the I/O domain layout of the domain. More...
 
integer function get_rank_send (domain, overlap_x, overlap_y, rank_x, rank_y, ind_x, ind_y)
 Set user stack size. More...
 
integer function get_rank_recv (domain, overlap_x, overlap_y, rank_x, rank_y, ind_x, ind_y)
 Set user stack size. More...
 
integer function get_vector_recv (domain, update_x, update_y, ind_x, ind_y, start_pos, pelist)
 Set user stack size. More...
 
integer function get_vector_send (domain, update_x, update_y, ind_x, ind_y, start_pos, pelist)
 Set user stack size. More...
 
integer function get_rank_unpack (domain, overlap_x, overlap_y, rank_x, rank_y, ind_x, ind_y)
 Set user stack size. More...
 
integer function get_mesgsize (overlap, do_dir)
 Set user stack size. More...
 
subroutine mpp_set_domain_symmetry (domain, symmetry)
 Set the symmetry attribute of the domain. More...
 
recursive subroutine mpp_copy_domain1d (domain_in, domain_out)
 Copies input 1d domain to the output 1d domain. More...
 
subroutine mpp_copy_domain2d (domain_in, domain_out)
 Copies input 2d domain to the output 2d domain. More...
 
subroutine mpp_copy_domain2d_spec (domain2D_spec_in, domain2d_spec_out)
 Copies input 2d domain spec to the output 2d domain spec. More...
 
subroutine mpp_copy_domain1d_spec (domain1D_spec_in, domain1D_spec_out)
 Copies input 1d domain spec to the output 1d domain spec. More...
 
subroutine mpp_copy_domain_axis_spec (domain_axis_spec_in, domain_axis_spec_out)
 Copies input domain_axis_spec to the output domain_axis_spec. More...
 
subroutine set_group_update (group, domain)
 Set up a group update for the given domain. More...
 
subroutine mpp_clear_group_update (group)
 Clear a group update object. More...
 
logical function mpp_group_update_initialized (group)
 Returns whether the group update has been initialized. More...
 
logical function mpp_group_update_is_set (group)
 Returns whether the group update has been set. More...
 

Detailed Description

Domain decomposition and domain update for message-passing codes.

Instantiates a layout with the given indices and divisions.

Author
V. Balaji SGI/GFDL Princeton University

A set of simple calls for domain decomposition and domain updates on rectilinear grids. It requires the module mpp.F90, upon which it is built.
Scalable implementations of finite-difference codes are generally based on decomposing the model domain into subdomains that are distributed among processors. These domains will then be obliged to exchange data at their boundaries if data dependencies are merely nearest-neighbour, or may need to acquire information from the global domain if there are extended data dependencies, as in the spectral transform. The domain decomposition is a key operation in the development of parallel codes.

mpp_domains_mod provides a domain decomposition and domain update API for rectilinear grids, built on top of the mpp_mod API for message passing. Features of mpp_domains_mod include:

Simple, minimal API, with free access to underlying API for more complicated stuff.

Design toward typical use in climate/weather CFD codes.

[Domains]
It is assumed that domain decomposition will mainly be in 2 horizontal dimensions, which will in general be the two fastest-varying indices. There is a separate implementation of 1D decomposition on the fastest-varying index, and 1D decomposition on the second index, treated as a special case of 2D decomposition, is also possible. We define domain as the grid associated with a task. We define the compute domain as the set of gridpoints that are computed by a task, and the data domain as the set of points that are required by the task for the calculation. There can in general be more than 1 task per PE, though often the number of domains is the same as the processor count. We define the global domain as the global computational domain of the entire model (i.e, the same as the computational domain if run on a single processor). 2D domains are defined using a derived type domain2D, constructed as follows (see comments in code for more details).
 type, public :: domain_axis_spec
   private
   integer :: begin, end, size, max_size
   logical :: is_global
 end type domain_axis_spec

 type, public :: domain1D
   private
   type(domain_axis_spec) :: compute, data, global, active
   logical :: mustputb, mustgetb, mustputf, mustgetf, folded
   type(domain1D), pointer, dimension(:) :: list
   integer :: pe  ! pe to which the domain is assigned
   integer :: pos
 end type domain1D

 type, public :: domain2D
   private
   type(domain1D) :: x
   type(domain1D) :: y
   type(domain2D), pointer, dimension(:) :: list
   integer :: pe ! PE to which this domain is assigned
   integer :: pos
 end type domain2D

 type(domain1D), public :: NULL_DOMAIN1D
 type(domain2D), public :: NULL_DOMAIN2D
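
The following is a minimal usage sketch, not taken from the source: the grid size ni x nj, the halo width, and the program name are illustrative. It shows how a 2D decomposition is typically defined and its halo updated with the interfaces documented on this page.

  ! Minimal sketch: decompose a global ni x nj grid over mpp_npes() PEs,
  ! allocate a field on the data domain, and update its halo.
  program decomp_example
    use mpp_mod,         only : mpp_init, mpp_exit, mpp_npes
    use mpp_domains_mod, only : mpp_domains_init, mpp_domains_exit, domain2D, &
                                mpp_define_layout, mpp_define_domains,        &
                                mpp_get_compute_domain, mpp_update_domains
    implicit none
    integer, parameter :: ni = 360, nj = 180, halo = 2
    integer            :: layout(2), is, ie, js, je
    type(domain2D)     :: domain
    real, allocatable  :: field(:,:)

    call mpp_init()
    call mpp_domains_init()

    ! choose a PE layout and define the decomposition with halos
    call mpp_define_layout( (/1,ni,1,nj/), mpp_npes(), layout )
    call mpp_define_domains( (/1,ni,1,nj/), layout, domain, &
                             xhalo=halo, yhalo=halo, name='decomp_example' )

    ! allocate on the data domain (compute domain plus halo) and fill the interior
    call mpp_get_compute_domain( domain, is, ie, js, je )
    allocate( field(is-halo:ie+halo, js-halo:je+halo) )
    field = 0.0
    ! ... compute on field(is:ie,js:je) ...

    ! exchange halo points with neighbouring domains
    call mpp_update_domains( field, domain )

    call mpp_domains_exit()
    call mpp_exit()
  end program decomp_example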

Data Type Documentation

◆ mpp_domains_mod::check_data_size

interface mpp_domains_mod::check_data_size

Private interface for internal usage, compares two sizes.

Definition at line 2335 of file mpp_domains.F90.

Private Member Functions

 check_data_size_1d
 
 check_data_size_2d
 

◆ mpp_domains_mod::contact_type

type mpp_domains_mod::contact_type

Type used to represent the contact between tiles.

Note
This type will only be used in mpp_domains_define.inc

Definition at line 417 of file mpp_domains.F90.


Private Attributes

integer, dimension(:), pointer align1 =>NULL()
 
integer, dimension(:), pointer align2 =>NULL()
 alignment of me and neighbor
 
integer, dimension(:), pointer ie1 =>NULL()
 i-index of current tile representing contact
 
integer, dimension(:), pointer ie2 =>NULL()
 i-index of neighbor tile representing contact
 
integer, dimension(:), pointer is1 =>NULL()
 
integer, dimension(:), pointer is2 =>NULL()
 
integer, dimension(:), pointer je1 =>NULL()
 j-index of current tile representing contact
 
integer, dimension(:), pointer je2 =>NULL()
 j-index of neighbor tile representing contact
 
integer, dimension(:), pointer js1 =>NULL()
 
integer, dimension(:), pointer js2 =>NULL()
 
integer ncontact
 number of neighbor tiles.
 
real, dimension(:), pointer refine1 =>NULL()
 
real, dimension(:), pointer refine2 =>NULL()
 
integer, dimension(:), pointer tile =>NULL()
 neighbor tile
 

◆ mpp_domains_mod::domain1d

type mpp_domains_mod::domain1d

One dimensional domain used to manage shared data access between pes.

Definition at line 631 of file mpp_domains.F90.


Private Attributes

type(domain_axis_spec) compute
 index limits for compute domain
 
logical cyclic
 true if domain is cyclic
 
type(domain_axis_spec) domain_data
 index limits for data domain
 
type(domain_axis_spec) global
 index limits for global domain
 
integer goffset
 needed for global sum
 
type(domain1d), dimension(:), pointer list =>NULL()
 list of each pe's domains
 
integer loffset
 needed for global sum
 
type(domain_axis_spec) memory
 index limits for memory domain
 
integer pe
 PE to which this domain is assigned.
 
integer pos
 position of this PE within link list, i.e., domain%list(pos)%pe = pe
 

◆ mpp_domains_mod::domain1d_spec

type mpp_domains_mod::domain1d_spec

A private type used to specify index limits for a domain decomposition.

Definition at line 298 of file mpp_domains.F90.


Private Attributes

type(domain_axis_spec) compute
 
type(domain_axis_spec) global
 
integer pos
 

◆ mpp_domains_mod::domain2d

type mpp_domains_mod::domain2d

The domain2D type contains all the necessary information to define the global, compute and data domains of each task, as well as the PE associated with the task. The PEs from which remote data may be acquired to update the data domain are also contained in a linked list of neighbours.

Domain types of higher rank can be constructed from type domain1D. Typically we only need 1D and 2D, but could need higher (e.g., 3D LES). Some elements are repeated below if they are needed once per domain, not once per axis.

Definition at line 367 of file mpp_domains.F90.


Private Attributes

type(overlapspec), pointer bound_c => NULL()
 send information for getting the boundary value for a symmetric domain.
 
type(overlapspec), pointer bound_e => NULL()
 send information for getting the boundary value for a symmetric domain.
 
type(overlapspec), pointer bound_n => NULL()
 send information for getting the boundary value for a symmetric domain.
 
type(overlapspec), pointer check_c => NULL()
 send and recv information for boundary consistency check of C-cell
 
type(overlapspec), pointer check_e => NULL()
 send and recv information for boundary consistency check of E-cell
 
type(overlapspec), pointer check_n => NULL()
 send and recv information for boundary consistency check of N-cell
 
integer comm_id
 MPI communicator for the mosaic.
 
integer ehalo
 halo size in x-direction
 
integer fold
 
integer(i8_kind) id
 
logical initialized =.FALSE.
 indicate if the overlapping is computed or not.
 
type(domain2d), pointer io_domain => NULL()
 domain for IO, will be set through calling mpp_set_io_domain ( this will be changed).
 
integer, dimension(2) io_layout
 io_layout, will be set through mpp_define_io_domain default = domain layout
 
type(domain2d_spec), dimension(:), pointer list => NULL()
 domain decomposition on pe list
 
integer max_ntile_pe
 maximum value in the pelist of number of tiles on each pe.
 
character(len=name_length) name ='unnamed'
 name of the domain, default is "unspecified"
 
integer ncontacts
 number of contact region within mosaic.
 
integer nhalo
 halo size in y-direction
 
integer ntiles
 number of tiles within mosaic
 
integer pe
 PE to which this domain is assigned.
 
integer, dimension(:,:), pointer pearray => NULL()
 pe of each layout position
 
integer pos
 position of this PE within link list
 
logical rotated_ninety
 indicate if any contact rotate NINETY or MINUS_NINETY
 
integer shalo
 
logical symmetry
 indicate the domain is symmetric or non-symmetric.
 
integer tile_comm_id
 MPI communicator for this tile of domain.
 
integer, dimension(:), pointer tile_id => NULL()
 tile id of each tile on current processor
 
integer, dimension(:), pointer tile_id_all => NULL()
 tile id of all the tiles of domain
 
integer tile_root_pe
 root pe of current tile.
 
type(tile_type), dimension(:), pointer tilelist => NULL()
 store tile information
 
type(overlapspec), pointer update_c => NULL()
 send and recv information for halo update of C-cell.
 
type(overlapspec), pointer update_e => NULL()
 send and recv information for halo update of E-cell.
 
type(overlapspec), pointer update_n => NULL()
 send and recv information for halo update of N-cell.
 
type(overlapspec), pointer update_t => NULL()
 send and recv information for halo update of T-cell.
 
integer whalo
 
type(domain1d), dimension(:), pointer x => NULL()
 x-direction domain decomposition
 
type(domain1d), dimension(:), pointer y => NULL()
 y-direction domain decomposition
 

◆ mpp_domains_mod::domain2d_spec

type mpp_domains_mod::domain2d_spec

Private type to specify multiple index limits and pe information for a 2D domain.

Definition at line 307 of file mpp_domains.F90.


Private Attributes

integer pe
 PE to which this domain is assigned.
 
integer pos
 position of this PE within link list
 
integer, dimension(:), pointer tile_id => NULL()
 tile id of each tile
 
integer tile_root_pe
 root pe of tile.
 
type(domain1d_spec), dimension(:), pointer x => NULL()
 x-direction domain decomposition
 
type(domain1d_spec), dimension(:), pointer y => NULL()
 y-direction domain decomposition
 

◆ mpp_domains_mod::domain_axis_spec

type mpp_domains_mod::domain_axis_spec

Used to specify index limits along an axis of a domain.

Definition at line 287 of file mpp_domains.F90.


Private Attributes

integer begin
 start of domain axis
 
integer end
 end of domain axis
 
logical is_global
 .true. if domain axis extent covers global domain
 
integer max_size
 max size in set
 
integer size
 size of domain axis
 

◆ mpp_domains_mod::domaincommunicator2d

type mpp_domains_mod::domaincommunicator2d

Used for sending domain data between PEs.

Definition at line 498 of file mpp_domains.F90.


Private Attributes

integer, dimension(:), allocatable cfrom_pe
 
integer, dimension(:), allocatable cto_pe
 
type(domain2d), pointer domain =>NULL()
 
type(domain2d), pointer domain_in =>NULL()
 
type(domain2d), pointer domain_out =>NULL()
 
integer gf_ioff =0
 
integer gf_joff =0
 
integer(i8_kind) id =-9999
 
logical initialized =.false.
 
integer isize =0
 
integer isize_in =0
 
integer isize_max =0
 
integer isize_out =0
 
integer, dimension(:), allocatable isizer
 
integer jsize =0
 
integer jsize_in =0
 
integer jsize_max =0
 
integer jsize_out =0
 
integer, dimension(:), allocatable jsizer
 
integer ke =0
 
integer(i8_kind) l_addr =-9999
 
integer(i8_kind) l_addrx =-9999
 
integer(i8_kind) l_addry =-9999
 
integer position
 data location. T, E, C, or N.
 
logical, dimension(:), allocatable r_do_buf
 
integer, dimension(:), allocatable r_msize
 
type(overlapspec), dimension(:,:,:,:), pointer recv => NULL()
 
integer, dimension(:,:), allocatable recvie
 
integer, dimension(:,:), allocatable recvis
 
integer, dimension(:,:), allocatable recvje
 
integer, dimension(:,:), allocatable recvjs
 
integer(i8_kind), dimension(:), allocatable rem_addr
 
integer(i8_kind), dimension(:,:), allocatable rem_addrl
 
integer(i8_kind), dimension(:,:), allocatable rem_addrlx
 
integer(i8_kind), dimension(:,:), allocatable rem_addrly
 
integer(i8_kind), dimension(:), allocatable rem_addrx
 
integer(i8_kind), dimension(:), allocatable rem_addry
 
integer rlist_size =0
 
logical, dimension(:), allocatable s_do_buf
 
integer, dimension(:), allocatable s_msize
 
type(overlapspec), dimension(:,:,:,:), pointer send => NULL()
 
integer, dimension(:,:), allocatable sendie
 
integer, dimension(:,:), allocatable sendis
 
integer, dimension(:,:), allocatable sendisr
 
integer, dimension(:,:), allocatable sendje
 
integer, dimension(:,:), allocatable sendjs
 
integer, dimension(:,:), allocatable sendjsr
 
integer slist_size =0
 

◆ mpp_domains_mod::domainug

type mpp_domains_mod::domainug

Domain information for managing data on unstructured grids.

Definition at line 266 of file mpp_domains.F90.


Private Attributes

type(unstruct_axis_spec) compute
 
type(unstruct_axis_spec) global
 axis specifications
 
integer, dimension(:), pointer grid_index => NULL()
 index of grid on current pe
 
type(domainug), pointer io_domain =>NULL()
 
integer(i4_kind) io_layout
 
type(unstruct_domain_spec), dimension(:), pointer list =>NULL()
 
integer npes_io_group
 
integer ntiles
 
integer pe
 
integer pos
 
type(unstruct_pass_type) sg2ug
 
type(domain2d), pointer sg_domain => NULL()
 
integer tile_id
 
integer tile_npes
 
integer tile_root_pe
 
type(unstruct_pass_type) ug2sg
 

◆ mpp_domains_mod::index_type

type mpp_domains_mod::index_type

index bounds for use in nestSpec

Definition at line 431 of file mpp_domains.F90.


Private Attributes

integer ie_me
 
integer ie_you
 
integer is_me
 
integer is_you
 
integer je_me
 
integer je_you
 
integer js_me
 
integer js_you
 

◆ mpp_domains_mod::mpp_broadcast_domain

interface mpp_domains_mod::mpp_broadcast_domain

Broadcasts domain to every pe. Only useful outside the context of its own pelist.


Example usage:
 
  call mpp_broadcast_domain(domain)
  call mpp_broadcast_domain(domain_in, domain_out)
  call mpp_broadcast_domain(domain, tile_coarse)  ! nested domains

Definition at line 1505 of file mpp_domains.F90.

Private Member Functions

 mpp_broadcast_domain_1
 
 mpp_broadcast_domain_2
 
 mpp_broadcast_domain_nest_coarse
 
 mpp_broadcast_domain_nest_fine
 
 mpp_broadcast_domain_ug
 

◆ mpp_domains_mod::mpp_check_field

interface mpp_domains_mod::mpp_check_field

Parallel checking between two ensembles which run concurrently on different sets of PEs.
There are two forms of the mpp_check_field call. The 2D version is generally to be used; the 3D version is built by repeated calls to the 2D version.

Example usage:

call mpp_check_field(field_in, pelist1, pelist2, domain, mesg, &
w_halo, s_halo, e_halo, n_halo, force_abort )
Parameters
field_in : Field to be checked
domain : Domain of current pe
mesg : Message to be printed out
w_halo : Halo size to be checked, default is 0
s_halo : Halo size to be checked, default is 0
e_halo : Halo size to be checked, default is 0
n_halo : Halo size to be checked, default is 0
force_abort : When true, abort program when any difference found. Default is false.

Definition at line 1751 of file mpp_domains.F90.

Private Member Functions

 mpp_check_field_2d
 
 mpp_check_field_3d
 

◆ mpp_domains_mod::mpp_complete_do_update

interface mpp_domains_mod::mpp_complete_do_update

Private interface used for non-blocking updates.

Definition at line 1294 of file mpp_domains.F90.

Private Member Functions

 mpp_complete_do_update_i4_3d
 
 mpp_complete_do_update_i8_3d
 
 mpp_complete_do_update_r4_3d
 
 mpp_complete_do_update_r4_3dv
 
 mpp_complete_do_update_r8_3d
 
 mpp_complete_do_update_r8_3dv
 

◆ mpp_domains_mod::mpp_complete_group_update

interface mpp_domains_mod::mpp_complete_group_update

Completes a pending non-blocking group update. Must follow a call to mpp_start_group_update.

Parameters
    [in,out]  group   type(mpp_group_update_type)
    [in,out]  domain  type(domain2D)
    [in]      d_type  data type

Definition at line 1354 of file mpp_domains.F90.

Private Member Functions

 mpp_complete_group_update_r4
 
 mpp_complete_group_update_r8
 

◆ mpp_domains_mod::mpp_complete_update_domains

interface mpp_domains_mod::mpp_complete_update_domains

Must be used after a call to mpp_start_update_domains in order to complete a nonblocking domain update. See mpp_start_update_domains for more info.

Definition at line 1236 of file mpp_domains.F90.

Private Member Functions

 mpp_complete_update_domain2d_i4_2d
 
 mpp_complete_update_domain2d_i4_3d
 
 mpp_complete_update_domain2d_i4_4d
 
 mpp_complete_update_domain2d_i4_5d
 
 mpp_complete_update_domain2d_i8_2d
 
 mpp_complete_update_domain2d_i8_3d
 
 mpp_complete_update_domain2d_i8_4d
 
 mpp_complete_update_domain2d_i8_5d
 
 mpp_complete_update_domain2d_r4_2d
 
 mpp_complete_update_domain2d_r4_2dv
 
 mpp_complete_update_domain2d_r4_3d
 
 mpp_complete_update_domain2d_r4_3dv
 
 mpp_complete_update_domain2d_r4_4d
 
 mpp_complete_update_domain2d_r4_4dv
 
 mpp_complete_update_domain2d_r4_5d
 
 mpp_complete_update_domain2d_r4_5dv
 
 mpp_complete_update_domain2d_r8_2d
 
 mpp_complete_update_domain2d_r8_2dv
 
 mpp_complete_update_domain2d_r8_3d
 
 mpp_complete_update_domain2d_r8_3dv
 
 mpp_complete_update_domain2d_r8_4d
 
 mpp_complete_update_domain2d_r8_4dv
 
 mpp_complete_update_domain2d_r8_5d
 
 mpp_complete_update_domain2d_r8_5dv
 

◆ mpp_domains_mod::mpp_copy_domain

interface mpp_domains_mod::mpp_copy_domain

Copy 1D or 2D domain.

Parameters
    domain_in    Input domain to be read from
    domain_out   Output domain to be written to

Definition at line 912 of file mpp_domains.F90.

Private Member Functions

 mpp_copy_domain1d
 
 mpp_copy_domain2d
 

◆ mpp_domains_mod::mpp_create_group_update

interface mpp_domains_mod::mpp_create_group_update

Constructor for the mpp_group_update_type which is then used with mpp_start_group_update.

Definition at line 1314 of file mpp_domains.F90.

Private Member Functions

 mpp_create_group_update_r4_2d
 
 mpp_create_group_update_r4_2dv
 
 mpp_create_group_update_r4_3d
 
 mpp_create_group_update_r4_3dv
 
 mpp_create_group_update_r4_4d
 
 mpp_create_group_update_r4_4dv
 
 mpp_create_group_update_r8_2d
 
 mpp_create_group_update_r8_2dv
 
 mpp_create_group_update_r8_3d
 
 mpp_create_group_update_r8_3dv
 
 mpp_create_group_update_r8_4d
 
 mpp_create_group_update_r8_4dv
 

◆ mpp_domains_mod::mpp_deallocate_domain

interface mpp_domains_mod::mpp_deallocate_domain

Deallocate given 1D or 2D domain.

Parameters
    domain   an allocated domain1D or domain2D

Definition at line 919 of file mpp_domains.F90.

Private Member Functions

 mpp_deallocate_domain1d
 
 mpp_deallocate_domain2d
 

◆ mpp_domains_mod::mpp_define_domains

interface mpp_domains_mod::mpp_define_domains

Set up a domain decomposition.

There are two forms for the mpp_define_domains call. The 2D version is generally to be used but is built by repeated calls to the 1D version, also provided.


Example usage:

               call mpp_define_domains( global_indices, ndivs, domain, &
                              pelist, flags, halo, extent, maskmap )
               call mpp_define_domains( global_indices, layout, domain, pelist, &
                              xflags, yflags, xhalo, yhalo,           &
                              xextent, yextent, maskmap, name )
Parameters
    global_indices   Defines the global domain.
    ndivs            The number of domain divisions required.
    [in,out] domain  Holds the resulting domain decomposition.
    pelist           List of PEs to which the domains are to be assigned.
    flags            An optional flag to pass additional information about the desired domain topology. Useful flags in a 1D decomposition include GLOBAL_DATA_DOMAIN and CYCLIC_GLOBAL_DOMAIN. Flags are integers: multiple flags may be added together. The flag values are public parameters available by use association.
    halo             Width of the halo.
    extent           Normally mpp_define_domains attempts an even division of the global domain across ndivs domains. The extent array can be used by the user to pass a custom domain division. The extent array has ndivs elements and holds the compute domain widths, which should add up to cover the global domain exactly.
    maskmap          Some divisions may be masked (maskmap=.FALSE.) to exclude them from the computation (e.g for ocean model domains that are all land). The maskmap array is dimensioned ndivs and contains .TRUE. values for any domain that must be included in the computation (default all). The pelist array length should match the number of domains included in the computation.


Example usage:

call mpp_define_domains( (/1,100/), 10, domain, &
flags=global_data_domain+cyclic_global_domain, halo=2 )

defines 10 compute domains spanning the range [1,100] of the global domain. The compute domains are non-overlapping blocks of 10. All the data domains are global, and with a halo of 2 span the range [-1:102]. And since the global domain has been declared to be cyclic, domain(9)%next => domain(0) and domain(0)%prev => domain(9). A field is allocated on the data domain, and computations proceed on the compute domain. A call to mpp_update_domains would fill in the values in the halo region:

call mpp_get_data_domain( domain, isd, ied ) !returns -1 and 102
call mpp_get_compute_domain( domain, is, ie ) !returns (1,10) on PE 0 ...
allocate( a(isd:ied) )
do i = is,ie
a(i) = <perform computations>
end do
call mpp_update_domains( a, domain )


The call to mpp_update_domains fills in the regions outside the compute domain. Since the global domain is cyclic, the values at i=(-1,0) are the same as at i=(99,100); and i=(101,102) are the same as i=(1,2).

The 2D version is just an extension of this syntax to two dimensions.

The 2D version of the above should generally be used in codes, including 1D-decomposed ones, if there is a possibility of future evolution toward 2D decomposition. The arguments are similar to the 1D case, except that now we have optional arguments flags, halo, extent and maskmap along two axes.

flags can now take an additional possible value to fold one or more edges. This is done by using flags FOLD_WEST_EDGE, FOLD_EAST_EDGE, FOLD_SOUTH_EDGE or FOLD_NORTH_EDGE. When a fold exists (e.g cylindrical domain), vector fields reverse sign upon crossing the fold. This parity reversal is performed only in the vector version of mpp_update_domains. In addition, shift operations may need to be applied to vector fields on staggered grids, also described in the vector interface to mpp_update_domains.

name is the name associated with the decomposition, e.g 'Ocean model'. If this argument is present, mpp_define_domains will print the domain decomposition generated to stdlog.


Examples: call mpp_define_domains( (/1,100,1,100/), (/2,2/), domain, xhalo=1 ) will create the following domain layout:

    domain          domain(1)     domain(2)     domain(3)     domain(4)
    Compute domain  1,50,1,50     51,100,1,50   1,50,51,100   51,100,51,100
    Data domain     0,51,1,50     50,101,1,50   0,51,51,100   50,101,51,100

Again, we allocate arrays on the data domain, perform computations on the compute domain, and call mpp_update_domains to update the halo region.

If we wished to perform a 1D decomposition along Y on the same global domain, we could use:

               call mpp_define_domains( (/1,100,1,100/), layout=(/4,1/), domain, xhalo=1 )

This will create the following domain layout:

    domain          domain(1)     domain(2)     domain(3)     domain(4)
    Compute domain  1,100,1,25    1,100,26,50   1,100,51,75   1,100,76,100
    Data domain     0,101,1,25    0,101,26,50   0,101,51,75   0,101,76,100
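
By analogy with the 1D example above, a typical 2D usage pattern is sketched below (the array name a and the index variables is, ie, js, je, isd, ied, jsd, jed are illustrative; their values come from the compute and data domains):

       call mpp_get_compute_domain( domain, is, ie, js, je )
       call mpp_get_data_domain( domain, isd, ied, jsd, jed )
       allocate( a(isd:ied,jsd:jed) )         ! field lives on the data domain
       do j = js,je
          do i = is,ie
             a(i,j) = <perform computations>  ! compute only on the compute domain
          end do
       end do
       call mpp_update_domains( a, domain )   ! fill the halo points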

Definition at line 891 of file mpp_domains.F90.

Private Member Functions

 mpp_define_domains1d
 
 mpp_define_domains2d
 

◆ mpp_domains_mod::mpp_define_layout

interface mpp_domains_mod::mpp_define_layout

Retrieve layout associated with a domain decomposition. Given a global 2D domain and the number of divisions in the decomposition ndivs (usually the PE count unless some domains are masked) this call returns a 2D domain layout. By default, mpp_define_layout will attempt to divide the 2D index space into domains that maintain the aspect ratio of the global domain. If this cannot be done, the algorithm favours domains that are longer in x than y, a preference that could improve vector performance.
Example usage:

call mpp_define_layout( global_indices, ndivs, layout )
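
A common pattern, sketched here, is to let mpp_define_layout choose the layout from the PE count (mpp_npes, from mpp_mod) and pass it directly to mpp_define_domains; the 100x100 global grid and the halo widths are illustrative:

       integer        :: layout(2)
       type(domain2D) :: domain
       ! divide the global index space among all available PEs
       call mpp_define_layout( (/1,100,1,100/), mpp_npes(), layout )
       call mpp_define_domains( (/1,100,1,100/), layout, domain, xhalo=1, yhalo=1 )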

Definition at line 773 of file mpp_domains.F90.

Private Member Functions

 mpp_define_layout2d
 

◆ mpp_domains_mod::mpp_define_null_domain

interface mpp_domains_mod::mpp_define_null_domain

Defines a nullified 1D or 2D domain.


Example usage:

call mpp_define_null_domain(domain)

Definition at line 903 of file mpp_domains.F90.

Private Member Functions

 mpp_define_null_domain1d
 
 mpp_define_null_domain2d
 

◆ mpp_domains_mod::mpp_do_check

interface mpp_domains_mod::mpp_do_check

Private interface to update the data domain of a 3D field whose computational domains have been computed.

Definition at line 1555 of file mpp_domains.F90.

Private Member Functions

 mpp_do_check_c4_3d
 
 mpp_do_check_c8_3d
 
 mpp_do_check_i4_3d
 
 mpp_do_check_i8_3d
 
 mpp_do_check_r4_3d
 
 mpp_do_check_r4_3dv
 
 mpp_do_check_r8_3d
 
 mpp_do_check_r8_3dv
 

◆ mpp_domains_mod::mpp_do_get_boundary

interface mpp_domains_mod::mpp_do_get_boundary

Definition at line 1656 of file mpp_domains.F90.

Private Member Functions

 mpp_do_get_boundary_r4_3d
 
 mpp_do_get_boundary_r4_3dv
 
 mpp_do_get_boundary_r8_3d
 
 mpp_do_get_boundary_r8_3dv
 

◆ mpp_domains_mod::mpp_do_get_boundary_ad

interface mpp_domains_mod::mpp_do_get_boundary_ad

Definition at line 1664 of file mpp_domains.F90.

Private Member Functions

 mpp_do_get_boundary_ad_r4_3d
 
 mpp_do_get_boundary_ad_r4_3dv
 
 mpp_do_get_boundary_ad_r8_3d
 
 mpp_do_get_boundary_ad_r8_3dv
 

◆ mpp_domains_mod::mpp_do_global_field

interface mpp_domains_mod::mpp_do_global_field

Private helper interface used by mpp_global_field.

Definition at line 1865 of file mpp_domains.F90.

Private Member Functions

 mpp_do_global_field2d_c4_3d
 
 mpp_do_global_field2d_c8_3d
 
 mpp_do_global_field2d_i4_3d
 
 mpp_do_global_field2d_i8_3d
 
 mpp_do_global_field2d_l4_3d
 
 mpp_do_global_field2d_l8_3d
 
 mpp_do_global_field2d_r4_3d
 
 mpp_do_global_field2d_r8_3d
 

◆ mpp_domains_mod::mpp_do_global_field_ad

interface mpp_domains_mod::mpp_do_global_field_ad

Definition at line 1917 of file mpp_domains.F90.

Private Member Functions

 mpp_do_global_field2d_c4_3d_ad
 
 mpp_do_global_field2d_c8_3d_ad
 
 mpp_do_global_field2d_i4_3d_ad
 
 mpp_do_global_field2d_i8_3d_ad
 
 mpp_do_global_field2d_l4_3d_ad
 
 mpp_do_global_field2d_l8_3d_ad
 
 mpp_do_global_field2d_r4_3d_ad
 
 mpp_do_global_field2d_r8_3d_ad
 

◆ mpp_domains_mod::mpp_do_group_update

interface mpp_domains_mod::mpp_do_group_update

Definition at line 1330 of file mpp_domains.F90.

Private Member Functions

 mpp_do_group_update_r4
 
 mpp_do_group_update_r8
 

◆ mpp_domains_mod::mpp_do_redistribute

interface mpp_domains_mod::mpp_do_redistribute

Definition at line 1718 of file mpp_domains.F90.

Private Member Functions

 mpp_do_redistribute_c4_3d
 
 mpp_do_redistribute_c8_3d
 
 mpp_do_redistribute_i4_3d
 
 mpp_do_redistribute_i8_3d
 
 mpp_do_redistribute_l4_3d
 
 mpp_do_redistribute_l8_3d
 
 mpp_do_redistribute_r4_3d
 
 mpp_do_redistribute_r8_3d
 

◆ mpp_domains_mod::mpp_do_update

interface mpp_domains_mod::mpp_do_update

Private interface used for mpp_update_domains.

Definition at line 1539 of file mpp_domains.F90.

Private Member Functions

 mpp_do_update_c4_3d
 
 mpp_do_update_c8_3d
 
 mpp_do_update_i4_3d
 
 mpp_do_update_i8_3d
 
 mpp_do_update_r4_3d
 
 mpp_do_update_r4_3dv
 
 mpp_do_update_r8_3d
 
 mpp_do_update_r8_3dv
 

◆ mpp_domains_mod::mpp_do_update_ad

interface mpp_domains_mod::mpp_do_update_ad

Private interface used by mpp_update_domains_ad to perform adjoint halo updates.

Definition at line 1608 of file mpp_domains.F90.

Private Member Functions

 mpp_do_update_ad_r4_3d
 
 mpp_do_update_ad_r4_3dv
 
 mpp_do_update_ad_r8_3d
 
 mpp_do_update_ad_r8_3dv
 

◆ mpp_domains_mod::mpp_do_update_nest_coarse

interface mpp_domains_mod::mpp_do_update_nest_coarse

Used by mpp_update_nest_coarse to perform domain updates.

Definition at line 1471 of file mpp_domains.F90.

Private Member Functions

 mpp_do_update_nest_coarse_i4_3d
 
 mpp_do_update_nest_coarse_i8_3d
 
 mpp_do_update_nest_coarse_r4_3d
 
 mpp_do_update_nest_coarse_r4_3dv
 
 mpp_do_update_nest_coarse_r8_3d
 
 mpp_do_update_nest_coarse_r8_3dv
 

◆ mpp_domains_mod::mpp_do_update_nest_fine

interface mpp_domains_mod::mpp_do_update_nest_fine

Definition at line 1415 of file mpp_domains.F90.

Private Member Functions

 mpp_do_update_nest_fine_i4_3d
 
 mpp_do_update_nest_fine_i8_3d
 
 mpp_do_update_nest_fine_r4_3d
 
 mpp_do_update_nest_fine_r4_3dv
 
 mpp_do_update_nest_fine_r8_3d
 
 mpp_do_update_nest_fine_r8_3dv
 

◆ mpp_domains_mod::mpp_get_boundary

interface mpp_domains_mod::mpp_get_boundary

Get the boundary data for a symmetric domain when the data is at the C, E, or N-cell center.
mpp_get_boundary is used to get the boundary data for a symmetric domain when the data is at the C, E, or N-cell center. For the cubic grid, the data should always be at the C-cell center.
Example usage:

               call mpp_get_boundary(domain, field, ebuffer, sbuffer, wbuffer, nbuffer)

Get boundary information from domain and field and store in buffers

Definition at line 1624 of file mpp_domains.F90.

Private Member Functions

 mpp_get_boundary_r4_2d
 
 mpp_get_boundary_r4_2dv
 
 mpp_get_boundary_r4_3d
 
 mpp_get_boundary_r4_3dv
 
 mpp_get_boundary_r8_2d
 
 mpp_get_boundary_r8_2dv
 
 mpp_get_boundary_r8_3d
 
 mpp_get_boundary_r8_3dv
 

◆ mpp_domains_mod::mpp_get_boundary_ad

interface mpp_domains_mod::mpp_get_boundary_ad

Definition at line 1644 of file mpp_domains.F90.

Private Member Functions

 mpp_get_boundary_ad_r4_2d
 
 mpp_get_boundary_ad_r4_2dv
 
 mpp_get_boundary_ad_r4_3d
 
 mpp_get_boundary_ad_r4_3dv
 
 mpp_get_boundary_ad_r8_2d
 
 mpp_get_boundary_ad_r8_2dv
 
 mpp_get_boundary_ad_r8_3d
 
 mpp_get_boundary_ad_r8_3dv
 

◆ mpp_domains_mod::mpp_get_compute_domain

interface mpp_domains_mod::mpp_get_compute_domain

These routines retrieve the axis specifications associated with the compute domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of the 1D version.
Example usage:

       call mpp_get_compute_domain(domain_1D, is, ie)
       call mpp_get_compute_domain(domain_2D, is, ie, js, je)

Definition at line 2192 of file mpp_domains.F90.

Private Member Functions

 mpp_get_compute_domain1d
 
 mpp_get_compute_domain2d
 

◆ mpp_domains_mod::mpp_get_compute_domains

interface mpp_domains_mod::mpp_get_compute_domains

Retrieve the entire array of compute domain extents associated with a decomposition.

Parameters
    domain               2D domain
    [out] xbegin,ybegin  x and y domain starting indices
    [out] xsize,ysize    x and y domain sizes
Example usage:
       call mpp_get_compute_domains( domain, xbegin, xend, xsize, &
                                           ybegin, yend, ysize )

Definition at line 2207 of file mpp_domains.F90.

Private Member Functions

 mpp_get_compute_domains1d
 
 mpp_get_compute_domains2d
 

◆ mpp_domains_mod::mpp_get_data_domain

interface mpp_domains_mod::mpp_get_data_domain

These routines retrieve the axis specifications associated with the data domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of the 1D version.
Example usage:

               call mpp_get_data_domain(domain_1d, isd, ied)
               call mpp_get_data_domain(domain_2d, isd, ied, jsd, jed)

Definition at line 2227 of file mpp_domains.F90.

Private Member Functions

 mpp_get_data_domain1d
 
 mpp_get_data_domain2d
 

◆ mpp_domains_mod::mpp_get_domain_extents

interface mpp_domains_mod::mpp_get_domain_extents

Definition at line 2261 of file mpp_domains.F90.

Private Member Functions

 mpp_get_domain_extents1d
 
 mpp_get_domain_extents2d
 

◆ mpp_domains_mod::mpp_get_f2c_index

interface mpp_domains_mod::mpp_get_f2c_index

Get the index of the data passed from fine grid to coarse grid.
Example usage:

       call mpp_get_F2C_index(nest_domain, is_coarse, ie_coarse, js_coarse, je_coarse,
                       is_fine, ie_fine, js_fine, je_fine, nest_level, position)

Definition at line 1492 of file mpp_domains.F90.

Private Member Functions

 mpp_get_f2c_index_coarse
 
 mpp_get_f2c_index_fine
 

◆ mpp_domains_mod::mpp_get_global_domain

interface mpp_domains_mod::mpp_get_global_domain

These routines retrieve the axis specifications associated with the global domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of the 1D version.
Example usage:

               call mpp_get_global_domain(domain_1d, isg, ieg)
               call mpp_get_global_domain(domain_2d, isg, ieg, jsg, jeg)

Definition at line 2241 of file mpp_domains.F90.

Private Member Functions

 mpp_get_global_domain1d
 
 mpp_get_global_domain2d
 

◆ mpp_domains_mod::mpp_get_global_domains

interface mpp_domains_mod::mpp_get_global_domains

Definition at line 2213 of file mpp_domains.F90.

Private Member Functions

 mpp_get_global_domains1d
 
 mpp_get_global_domains2d
 

◆ mpp_domains_mod::mpp_get_layout

interface mpp_domains_mod::mpp_get_layout

Retrieve layout associated with a domain decomposition The 1D version of this call returns the number of divisions that was assigned to this decomposition axis. The 2D version of this call returns an array of dimension 2 holding the results on two axes.
Example usage:

               call mpp_get_layout( domain, layout )

Definition at line 2329 of file mpp_domains.F90.

Private Member Functions

 mpp_get_layout1d
 
 mpp_get_layout2d
 

◆ mpp_domains_mod::mpp_get_memory_domain

interface mpp_domains_mod::mpp_get_memory_domain

These routines retrieve the axis specifications associated with the memory domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of the 1D version.
Example usage:

               call mpp_get_memory_domain(domain_1d, ism, iem)
               call mpp_get_memory_domain(domain_2d, ism, iem, jsm, jem)

Definition at line 2255 of file mpp_domains.F90.

Private Member Functions

 mpp_get_memory_domain1d
 
 mpp_get_memory_domain2d
 

◆ mpp_domains_mod::mpp_get_neighbor_pe

interface mpp_domains_mod::mpp_get_neighbor_pe

Retrieve PE number of a neighboring domain.

Given a 1-D or 2-D domain decomposition, this call allows users to retrieve the PE number of an adjacent PE-domain while taking into account that the domain may have holes (masked) and/or have cyclic boundary conditions and/or a folded edge. Which PE-domain will be retrieved will depend on "direction": +1 (right) or -1 (left) for a 1-D domain decomposition and either NORTH, SOUTH, EAST, WEST, NORTH_EAST, SOUTH_EAST, SOUTH_WEST, or NORTH_WEST for a 2-D decomposition. If no neighboring domain exists (masked domain), then the returned "pe" value will be set to NULL_PE.

Example usage:

               call mpp_get_neighbor_pe( domain1d, direction=+1   , pe)

Set pe to the neighbor pe number that is to the right of the current pe

               call mpp_get_neighbor_pe( domain2d, direction=NORTH, pe)

Get neighbor pe number that's above/north of the current pe
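
A sketch of guarding against a missing (masked) neighbour; pe_east is an illustrative variable name:

       call mpp_get_neighbor_pe( domain2d, direction=EAST, pe=pe_east )
       if( pe_east .NE. NULL_PE ) then
          ...  ! exchange data with pe_east
       end if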

Definition at line 2147 of file mpp_domains.F90.

Private Member Functions

 mpp_get_neighbor_pe_1d
 
 mpp_get_neighbor_pe_2d
 

◆ mpp_domains_mod::mpp_get_pelist

interface mpp_domains_mod::mpp_get_pelist

Retrieve list of PEs associated with a domain decomposition. The 1D version of this call returns an array of the PEs assigned to this 1D domain decomposition. In addition the optional argument pos may be used to retrieve the 0-based position of the domain local to the calling PE, i.e., domain%list(pos)%pe is the local PE, as returned by mpp_pe(). The 2D version of this call is identical to 1D version.
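
For example (a sketch; it assumes the decomposition spans all PEs in the current pelist, so the list length equals mpp_npes()):

       integer, allocatable :: pes(:)
       integer              :: pos
       allocate( pes(mpp_npes()) )
       call mpp_get_pelist( domain, pes, pos )  ! pos: 0-based position of the local domain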

Definition at line 2316 of file mpp_domains.F90.

Private Member Functions

 mpp_get_pelist1d
 
 mpp_get_pelist2d
 

◆ mpp_domains_mod::mpp_global_field

interface mpp_domains_mod::mpp_global_field

Fill in a global array from domain-decomposed arrays.

mpp_global_field is used to get an entire domain-decomposed array on each PE. MPP_TYPE_ can be of type complex, integer, logical or real; of 4-byte or 8-byte kind; of rank up to 5.

All PEs in a domain decomposition must call mpp_global_field, and each will have a complete global field at the end. Please note that a global array of rank 3 or higher could occupy a lot of memory.

Parameters
    domain        2D domain
    local         Data dimensioned on either the compute or data domains of 'domain'
    [out] global  Output data dimensioned on the corresponding global domain
    flags         Can be either XONLY or YONLY parameters to specify a globalization on one axis only


Example usage:

call mpp_global_field( domain, local, global, flags )
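
A slightly fuller sketch (the index variables isg, ieg, jsg, jeg and the array names are illustrative; the global array must be dimensioned on the global domain, while local is dimensioned on the compute or data domain):

       call mpp_get_global_domain( domain, isg, ieg, jsg, jeg )
       allocate( global(isg:ieg,jsg:jeg) )
       call mpp_global_field( domain, local, global )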

Definition at line 1784 of file mpp_domains.F90.

Private Member Functions

 mpp_global_field2d_c4_2d
 
 mpp_global_field2d_c4_3d
 
 mpp_global_field2d_c4_4d
 
 mpp_global_field2d_c4_5d
 
 mpp_global_field2d_c8_2d
 
 mpp_global_field2d_c8_3d
 
 mpp_global_field2d_c8_4d
 
 mpp_global_field2d_c8_5d
 
 mpp_global_field2d_i4_2d
 
 mpp_global_field2d_i4_3d
 
 mpp_global_field2d_i4_4d
 
 mpp_global_field2d_i4_5d
 
 mpp_global_field2d_i8_2d
 
 mpp_global_field2d_i8_3d
 
 mpp_global_field2d_i8_4d
 
 mpp_global_field2d_i8_5d
 
 mpp_global_field2d_l4_2d
 
 mpp_global_field2d_l4_3d
 
 mpp_global_field2d_l4_4d
 
 mpp_global_field2d_l4_5d
 
 mpp_global_field2d_l8_2d
 
 mpp_global_field2d_l8_3d
 
 mpp_global_field2d_l8_4d
 
 mpp_global_field2d_l8_5d
 
 mpp_global_field2d_r4_2d
 
 mpp_global_field2d_r4_3d
 
 mpp_global_field2d_r4_4d
 
 mpp_global_field2d_r4_5d
 
 mpp_global_field2d_r8_2d
 
 mpp_global_field2d_r8_3d
 
 mpp_global_field2d_r8_4d
 
 mpp_global_field2d_r8_5d
 

◆ mpp_domains_mod::mpp_global_field_ad

interface mpp_domains_mod::mpp_global_field_ad

Definition at line 1824 of file mpp_domains.F90.

Private Member Functions

 mpp_global_field2d_c4_2d_ad
 
 mpp_global_field2d_c4_3d_ad
 
 mpp_global_field2d_c4_4d_ad
 
 mpp_global_field2d_c4_5d_ad
 
 mpp_global_field2d_c8_2d_ad
 
 mpp_global_field2d_c8_3d_ad
 
 mpp_global_field2d_c8_4d_ad
 
 mpp_global_field2d_c8_5d_ad
 
 mpp_global_field2d_i4_2d_ad
 
 mpp_global_field2d_i4_3d_ad
 
 mpp_global_field2d_i4_4d_ad
 
 mpp_global_field2d_i4_5d_ad
 
 mpp_global_field2d_i8_2d_ad
 
 mpp_global_field2d_i8_3d_ad
 
 mpp_global_field2d_i8_4d_ad
 
 mpp_global_field2d_i8_5d_ad
 
 mpp_global_field2d_l4_2d_ad
 
 mpp_global_field2d_l4_3d_ad
 
 mpp_global_field2d_l4_4d_ad
 
 mpp_global_field2d_l4_5d_ad
 
 mpp_global_field2d_l8_2d_ad
 
 mpp_global_field2d_l8_3d_ad
 
 mpp_global_field2d_l8_4d_ad
 
 mpp_global_field2d_l8_5d_ad
 
 mpp_global_field2d_r4_2d_ad
 
 mpp_global_field2d_r4_3d_ad
 
 mpp_global_field2d_r4_4d_ad
 
 mpp_global_field2d_r4_5d_ad
 
 mpp_global_field2d_r8_2d_ad
 
 mpp_global_field2d_r8_3d_ad
 
 mpp_global_field2d_r8_4d_ad
 
 mpp_global_field2d_r8_5d_ad
 

◆ mpp_domains_mod::mpp_global_field_ug

interface mpp_domains_mod::mpp_global_field_ug

Same functionality as mpp_global_field but for unstructured domains.

Definition at line 1897 of file mpp_domains.F90.

Private Member Functions

 mpp_global_field2d_ug_i4_2d
 
 mpp_global_field2d_ug_i4_3d
 
 mpp_global_field2d_ug_i4_4d
 
 mpp_global_field2d_ug_i4_5d
 
 mpp_global_field2d_ug_i8_2d
 
 mpp_global_field2d_ug_i8_3d
 
 mpp_global_field2d_ug_i8_4d
 
 mpp_global_field2d_ug_i8_5d
 
 mpp_global_field2d_ug_r4_2d
 
 mpp_global_field2d_ug_r4_3d
 
 mpp_global_field2d_ug_r4_4d
 
 mpp_global_field2d_ug_r4_5d
 
 mpp_global_field2d_ug_r8_2d
 
 mpp_global_field2d_ug_r8_3d
 
 mpp_global_field2d_ug_r8_4d
 
 mpp_global_field2d_ug_r8_5d
 

◆ mpp_domains_mod::mpp_global_max

interface mpp_domains_mod::mpp_global_max

Global max of domain-decomposed arrays.
mpp_global_max is used to get the maximum value of a domain-decomposed array on each PE. MPP_TYPE_ can be of type integer or real; of 4-byte or 8-byte kind; of rank up to 5. The dimension of locus must equal the rank of field.

All PEs in a domain decomposition must call mpp_global_max, and each will have the result upon exit. The function mpp_global_min, with an identical syntax, is also available.

Parameters
    domain  2D domain
    field   Field data dimensioned on either the compute or data domains of 'domain'
    locus   If present, can be used to retrieve the location of the maximum


Example usage: mpp_global_max( domain, field, locus )
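
A sketch using the optional locus argument; note that locus needs one element per rank of field, and the result kind follows the field kind (default real is assumed here):

       real    :: fmax
       integer :: locus(3)   ! field is assumed to be rank 3
       fmax = mpp_global_max( domain, field, locus )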

Definition at line 1949 of file mpp_domains.F90.

Private Member Functions

 mpp_global_max_i4_2d
 
 mpp_global_max_i4_3d
 
 mpp_global_max_i4_4d
 
 mpp_global_max_i4_5d
 
 mpp_global_max_i8_2d
 
 mpp_global_max_i8_3d
 
 mpp_global_max_i8_4d
 
 mpp_global_max_i8_5d
 
 mpp_global_max_r4_2d
 
 mpp_global_max_r4_3d
 
 mpp_global_max_r4_4d
 
 mpp_global_max_r4_5d
 
 mpp_global_max_r8_2d
 
 mpp_global_max_r8_3d
 
 mpp_global_max_r8_4d
 
 mpp_global_max_r8_5d
 

◆ mpp_domains_mod::mpp_global_min

interface mpp_domains_mod::mpp_global_min

Global min of domain-decomposed arrays.
mpp_global_min is used to get the minimum value of a domain-decomposed array on each PE. MPP_TYPE_ can be of type integer or real; of 4-byte or 8-byte kind; of rank up to 5. The dimension of locus must equal the rank of field.

All PEs in a domain decomposition must call mpp_global_min, and each will have the result upon exit. The function mpp_global_max, with an identical syntax, is also available.

Parameters
    domain  2D domain
    field   Field data dimensioned on either the compute or data domains of 'domain'
    locus   If present, can be used to retrieve the location of the minimum


Example usage: mpp_global_min( domain, field, locus )

Definition at line 1985 of file mpp_domains.F90.

Private Member Functions

 mpp_global_min_i4_2d
 
 mpp_global_min_i4_3d
 
 mpp_global_min_i4_4d
 
 mpp_global_min_i4_5d
 
 mpp_global_min_i8_2d
 
 mpp_global_min_i8_3d
 
 mpp_global_min_i8_4d
 
 mpp_global_min_i8_5d
 
 mpp_global_min_r4_2d
 
 mpp_global_min_r4_3d
 
 mpp_global_min_r4_4d
 
 mpp_global_min_r4_5d
 
 mpp_global_min_r8_2d
 
 mpp_global_min_r8_3d
 
 mpp_global_min_r8_4d
 
 mpp_global_min_r8_5d
 

◆ mpp_domains_mod::mpp_global_sum

interface mpp_domains_mod::mpp_global_sum

Global sum of domain-decomposed arrays.
mpp_global_sum is used to get the sum of a domain-decomposed array on each PE. MPP_TYPE_ can be of type integer, complex, or real; of 4-byte or 8-byte kind; of rank up to 5.

Parameters
    domain  2D domain
    field   Field data dimensioned on either the compute or data domain of 'domain'
    flags   If present must have the value BITWISE_EXACT_SUM. This produces a sum that is guaranteed to produce the identical result irrespective of how the domain is decomposed. This method does the sum first along the ranks beyond 2, and then calls mpp_global_field to produce a global 2D array which is then summed. The default method, which is considerably faster, does a local sum followed by mpp_sum across the domain decomposition.


Example usage: gsum = mpp_global_sum( domain, field, flags )

Note
All PEs in a domain decomposition must call mpp_global_sum, and each will have the result upon exit.
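
A sketch contrasting the default (fast, decomposition-dependent) sum with the reproducible one; the result kind follows the field kind (default real is assumed here):

       real :: s_fast, s_repro
       s_fast  = mpp_global_sum( domain, field )
       s_repro = mpp_global_sum( domain, field, flags=BITWISE_EXACT_SUM )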

Definition at line 2023 of file mpp_domains.F90.

Private Member Functions

 mpp_global_sum_c4_2d
 
 mpp_global_sum_c4_3d
 
 mpp_global_sum_c4_4d
 
 mpp_global_sum_c4_5d
 
 mpp_global_sum_c8_2d
 
 mpp_global_sum_c8_3d
 
 mpp_global_sum_c8_4d
 
 mpp_global_sum_c8_5d
 
 mpp_global_sum_i4_2d
 
 mpp_global_sum_i4_3d
 
 mpp_global_sum_i4_4d
 
 mpp_global_sum_i4_5d
 
 mpp_global_sum_i8_2d
 
 mpp_global_sum_i8_3d
 
 mpp_global_sum_i8_4d
 
 mpp_global_sum_i8_5d
 
 mpp_global_sum_r4_2d
 
 mpp_global_sum_r4_3d
 
 mpp_global_sum_r4_4d
 
 mpp_global_sum_r4_5d
 
 mpp_global_sum_r8_2d
 
 mpp_global_sum_r8_3d
 
 mpp_global_sum_r8_4d
 
 mpp_global_sum_r8_5d
 

◆ mpp_domains_mod::mpp_global_sum_ad

interface mpp_domains_mod::mpp_global_sum_ad

Definition at line 2090 of file mpp_domains.F90.

Private Member Functions

 mpp_global_sum_ad_c4_2d
 
 mpp_global_sum_ad_c4_3d
 
 mpp_global_sum_ad_c4_4d
 
 mpp_global_sum_ad_c4_5d
 
 mpp_global_sum_ad_c8_2d
 
 mpp_global_sum_ad_c8_3d
 
 mpp_global_sum_ad_c8_4d
 
 mpp_global_sum_ad_c8_5d
 
 mpp_global_sum_ad_i4_2d
 
 mpp_global_sum_ad_i4_3d
 
 mpp_global_sum_ad_i4_4d
 
 mpp_global_sum_ad_i4_5d
 
 mpp_global_sum_ad_i8_2d
 
 mpp_global_sum_ad_i8_3d
 
 mpp_global_sum_ad_i8_4d
 
 mpp_global_sum_ad_i8_5d
 
 mpp_global_sum_ad_r4_2d
 
 mpp_global_sum_ad_r4_3d
 
 mpp_global_sum_ad_r4_4d
 
 mpp_global_sum_ad_r4_5d
 
 mpp_global_sum_ad_r8_2d
 
 mpp_global_sum_ad_r8_3d
 
 mpp_global_sum_ad_r8_4d
 
 mpp_global_sum_ad_r8_5d
 

◆ mpp_domains_mod::mpp_global_sum_tl

interface mpp_domains_mod::mpp_global_sum_tl

Definition at line 2056 of file mpp_domains.F90.

Private Member Functions

 mpp_global_sum_tl_c4_2d
 
 mpp_global_sum_tl_c4_3d
 
 mpp_global_sum_tl_c4_4d
 
 mpp_global_sum_tl_c4_5d
 
 mpp_global_sum_tl_c8_2d
 
 mpp_global_sum_tl_c8_3d
 
 mpp_global_sum_tl_c8_4d
 
 mpp_global_sum_tl_c8_5d
 
 mpp_global_sum_tl_i4_2d
 
 mpp_global_sum_tl_i4_3d
 
 mpp_global_sum_tl_i4_4d
 
 mpp_global_sum_tl_i4_5d
 
 mpp_global_sum_tl_i8_2d
 
 mpp_global_sum_tl_i8_3d
 
 mpp_global_sum_tl_i8_4d
 
 mpp_global_sum_tl_i8_5d
 
 mpp_global_sum_tl_r4_2d
 
 mpp_global_sum_tl_r4_3d
 
 mpp_global_sum_tl_r4_4d
 
 mpp_global_sum_tl_r4_5d
 
 mpp_global_sum_tl_r8_2d
 
 mpp_global_sum_tl_r8_3d
 
 mpp_global_sum_tl_r8_4d
 
 mpp_global_sum_tl_r8_5d
 

◆ mpp_domains_mod::mpp_group_update_type

type mpp_domains_mod::mpp_group_update_type

Used for updates on a group.

Definition at line 575 of file mpp_domains.F90.


Private Attributes

integer(i8_kind), dimension(max_domain_fields) addrs_s
 
integer(i8_kind), dimension(max_domain_fields) addrs_x
 
integer(i8_kind), dimension(max_domain_fields) addrs_y
 
integer, dimension(maxoverlap) buffer_pos_recv
 
integer, dimension(maxoverlap) buffer_pos_send
 
integer buffer_start_pos = -1
 
integer ehalo_s =0
 
integer ehalo_v =0
 
integer flags_s =0
 
integer flags_v =0
 
integer, dimension(maxoverlap) from_pe
 
integer gridtype =0
 
integer ie_s =0
 
integer ie_x =0
 
integer ie_y =0
 
logical initialized = .FALSE.
 
integer is_s =0
 
integer is_x =0
 
integer is_y =0
 
integer isize_s =0
 
integer isize_x =0
 
integer isize_y =0
 
integer je_s =0
 
integer je_x =0
 
integer je_y =0
 
integer js_s =0
 
integer js_x =0
 
integer js_y =0
 
integer jsize_s =0
 
integer jsize_x =0
 
integer jsize_y =0
 
logical k_loop_inside = .TRUE.
 
integer ksize_s =1
 
integer ksize_v =1
 
integer nhalo_s =0
 
integer nhalo_v =0
 
logical nonsym_edge = .FALSE.
 
integer npack =0
 
integer nrecv =0
 
integer nscalar = 0
 
integer nsend =0
 
integer nunpack =0
 
integer nvector = 0
 
integer, dimension(maxoverlap) pack_buffer_pos
 
integer, dimension(maxoverlap) pack_ie
 
integer, dimension(maxoverlap) pack_is
 
integer, dimension(maxoverlap) pack_je
 
integer, dimension(maxoverlap) pack_js
 
integer, dimension(maxoverlap) pack_rotation
 
integer, dimension(maxoverlap) pack_size
 
integer, dimension(maxoverlap) pack_type
 
integer position =0
 
logical, dimension(8) recv_s
 
integer, dimension(maxoverlap) recv_size
 
logical, dimension(8) recv_x
 
logical, dimension(8) recv_y
 
integer, dimension(max_request) request_recv
 
integer, dimension(max_request) request_send
 
integer reset_index_s = 0
 
integer reset_index_v = 0
 
integer, dimension(maxoverlap) send_size
 
integer shalo_s =0
 
integer shalo_v =0
 
integer, dimension(maxoverlap) to_pe
 
integer tot_msgsize = 0
 
integer, dimension(max_request) type_recv
 
integer, dimension(maxoverlap) unpack_buffer_pos
 
integer, dimension(maxoverlap) unpack_ie
 
integer, dimension(maxoverlap) unpack_is
 
integer, dimension(maxoverlap) unpack_je
 
integer, dimension(maxoverlap) unpack_js
 
integer, dimension(maxoverlap) unpack_rotation
 
integer, dimension(maxoverlap) unpack_size
 
integer, dimension(maxoverlap) unpack_type
 
integer whalo_s =0
 
integer whalo_v =0
 

◆ mpp_domains_mod::mpp_modify_domain

interface mpp_domains_mod::mpp_modify_domain

Modifies the extents (compute, data and global) of a given domain.

Definition at line 926 of file mpp_domains.F90.

Private Member Functions

 mpp_modify_domain1d
 
 mpp_modify_domain2d
 

◆ mpp_domains_mod::mpp_nullify_domain_list

interface mpp_domains_mod::mpp_nullify_domain_list

Nullify domain list. This interface is needed in mpp_domains_test. 1-D case can be added in if needed.
Example usage:

               call mpp_nullify_domain_list(domain)

Definition at line 2346 of file mpp_domains.F90.

Private Member Functions

 nullify_domain2d_list
 

◆ mpp_domains_mod::mpp_pass_sg_to_ug

interface mpp_domains_mod::mpp_pass_sg_to_ug

Passes data from a structured grid to an unstructured grid
Example usage:

       call mpp_pass_SG_to_UG(domain, sg_data, ug_data)

Definition at line 1575 of file mpp_domains.F90.

Private Member Functions

 mpp_pass_sg_to_ug_i4_2d
 
 mpp_pass_sg_to_ug_i4_3d
 
 mpp_pass_sg_to_ug_l4_2d
 
 mpp_pass_sg_to_ug_l4_3d
 
 mpp_pass_sg_to_ug_r4_2d
 
 mpp_pass_sg_to_ug_r4_3d
 
 mpp_pass_sg_to_ug_r8_2d
 
 mpp_pass_sg_to_ug_r8_3d
 

◆ mpp_domains_mod::mpp_pass_ug_to_sg

interface mpp_domains_mod::mpp_pass_ug_to_sg

Passes a data field from an unstructured grid to a structured grid
Example usage:

       call mpp_pass_UG_to_SG(UG_domain, field_UG, field_SG)

Definition at line 1591 of file mpp_domains.F90.

Private Member Functions

 mpp_pass_ug_to_sg_i4_2d
 
 mpp_pass_ug_to_sg_i4_3d
 
 mpp_pass_ug_to_sg_l4_2d
 
 mpp_pass_ug_to_sg_l4_3d
 
 mpp_pass_ug_to_sg_r4_2d
 
 mpp_pass_ug_to_sg_r4_3d
 
 mpp_pass_ug_to_sg_r8_2d
 
 mpp_pass_ug_to_sg_r8_3d
 

◆ mpp_domains_mod::mpp_redistribute

interface mpp_domains_mod::mpp_redistribute

Reorganization of distributed global arrays.
mpp_redistribute is used to reorganize a distributed array. MPP_TYPE_ can be of type integer, complex, or real; of 4-byte or 8-byte kind; of rank up to 5.
Example usage: call mpp_redistribute( domain_in, field_in, domain_out, field_out )
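
For example, a sketch moving a field between two decompositions of the same 100x100 global grid (the domain and field names are illustrative; each field is dimensioned on its own domain):

       call mpp_define_domains( (/1,100,1,100/), (/4,1/), domain_x )
       call mpp_define_domains( (/1,100,1,100/), (/1,4/), domain_y )
       ...
       call mpp_redistribute( domain_x, field_x, domain_y, field_y )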

Definition at line 1678 of file mpp_domains.F90.

Private Member Functions

 mpp_redistribute_c4_2d
 
 mpp_redistribute_c4_3d
 
 mpp_redistribute_c4_4d
 
 mpp_redistribute_c4_5d
 
 mpp_redistribute_c8_2d
 
 mpp_redistribute_c8_3d
 
 mpp_redistribute_c8_4d
 
 mpp_redistribute_c8_5d
 
 mpp_redistribute_i4_2d
 
 mpp_redistribute_i4_3d
 
 mpp_redistribute_i4_4d
 
 mpp_redistribute_i4_5d
 
 mpp_redistribute_i8_2d
 
 mpp_redistribute_i8_3d
 
 mpp_redistribute_i8_4d
 
 mpp_redistribute_i8_5d
 
 mpp_redistribute_r4_2d
 
 mpp_redistribute_r4_3d
 
 mpp_redistribute_r4_4d
 
 mpp_redistribute_r4_5d
 
 mpp_redistribute_r8_2d
 
 mpp_redistribute_r8_3d
 
 mpp_redistribute_r8_4d
 
 mpp_redistribute_r8_5d
 

◆ mpp_domains_mod::mpp_reset_group_update_field

interface mpp_domains_mod::mpp_reset_group_update_field

Definition at line 1360 of file mpp_domains.F90.

Private Member Functions

 mpp_reset_group_update_field_r4_2d
 
 mpp_reset_group_update_field_r4_2dv
 
 mpp_reset_group_update_field_r4_3d
 
 mpp_reset_group_update_field_r4_3dv
 
 mpp_reset_group_update_field_r4_4d
 
 mpp_reset_group_update_field_r4_4dv
 
 mpp_reset_group_update_field_r8_2d
 
 mpp_reset_group_update_field_r8_2dv
 
 mpp_reset_group_update_field_r8_3d
 
 mpp_reset_group_update_field_r8_3dv
 
 mpp_reset_group_update_field_r8_4d
 
 mpp_reset_group_update_field_r8_4dv
 

◆ mpp_domains_mod::mpp_set_compute_domain

interface mpp_domains_mod::mpp_set_compute_domain

These routines set the axis specifications associated with the compute domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of the 1D version.
Example usage:

               call mpp_set_compute_domain(domain_1d, is, ie)
               call mpp_set_compute_domain(domain_2d, is, ie, js, je)

Definition at line 2275 of file mpp_domains.F90.

Private Member Functions

 mpp_set_compute_domain1d
 
 mpp_set_compute_domain2d
 

◆ mpp_domains_mod::mpp_set_data_domain

interface mpp_domains_mod::mpp_set_data_domain

These routines set the axis specifications associated with the data domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of the 1D version.
Example usage:

               call mpp_set_data_domain(domain_1d, isd, ied)
               call mpp_set_data_domain(domain_2d, isd, ied, jsd, jed)

Definition at line 2289 of file mpp_domains.F90.

Private Member Functions

 mpp_set_data_domain1d
 
 mpp_set_data_domain2d
 

◆ mpp_domains_mod::mpp_set_global_domain

interface mpp_domains_mod::mpp_set_global_domain

These routines set the axis specifications associated with the global domains. The domain is a derived type with private elements. The 2D version of these is a simple extension of the 1D version.
Example usage:

               call mpp_set_global_domain(domain_1d, isg, ieg)
               call mpp_set_global_domain(domain_2d, isg, ieg, jsg, jeg)

Definition at line 2303 of file mpp_domains.F90.

Private Member Functions

 mpp_set_global_domain1d
 
 mpp_set_global_domain2d
 

◆ mpp_domains_mod::mpp_start_do_update

interface mpp_domains_mod::mpp_start_do_update

Private interface used for non-blocking updates.

Definition at line 1277 of file mpp_domains.F90.

Private Member Functions

 mpp_start_do_update_i4_3d
 
 mpp_start_do_update_i8_3d
 
 mpp_start_do_update_r4_3d
 
 mpp_start_do_update_r4_3dv
 
 mpp_start_do_update_r8_3d
 
 mpp_start_do_update_r8_3dv
 

◆ mpp_domains_mod::mpp_start_group_update

interface mpp_domains_mod::mpp_start_group_update

Starts a non-blocking group update. Must be followed by a call to mpp_complete_group_update. The mpp_group_update_type can be created with mpp_create_group_update.

Parameters
    [in,out]  group   type(mpp_group_update_type) created for the group update
    [in,out]  domain  type(domain2D) to update
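
A hedged sketch of the full group-update life cycle (argument lists abbreviated; d_type stands for the dummy data-type argument mentioned in the parameter lists of these interfaces):

       type(mpp_group_update_type) :: group
       call mpp_create_group_update( group, field, domain )     ! register the field once
       ...
       call mpp_start_group_update( group, domain, d_type )     ! post non-blocking sends/receives
       ...  ! computation that does not touch the halos
       call mpp_complete_group_update( group, domain, d_type )  ! wait and unpack the halos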

Definition at line 1342 of file mpp_domains.F90.

Private Member Functions

 mpp_start_group_update_r4
 
 mpp_start_group_update_r8
 

◆ mpp_domains_mod::mpp_start_update_domains

interface mpp_domains_mod::mpp_start_update_domains

Interface to start halo updates. mpp_start_update_domains is used to start a halo update of a domain-decomposed array on each PE. MPP_TYPE_ can be of type complex, integer, logical or real; of 4-byte or 8-byte kind; of rank up to 5. The vector version (with two input data fields) is only present for real types.

mpp_start_update_domains must be paired with mpp_complete_update_domains. In mpp_start_update_domains, a buffer is pre-posted to receive the data (non-blocking receive), and the data on the computational domain are packed and sent (non-blocking send) to the other processors. In mpp_complete_update_domains, the buffer is unpacked to fill the halo, and mpp_sync_self is called to ensure communication is safe at the last call of mpp_complete_update_domains.

Each mpp_update_domains can be replaced by the combination of mpp_start_update_domains and mpp_complete_update_domains. The arguments in mpp_start_update_domains and mpp_complete_update_domains should be exactly the same as in the mpp_update_domains call being replaced, except that there is no optional argument "complete". The following are examples of how to replace mpp_update_domains with mpp_start_update_domains/mpp_complete_update_domains.

Example 1: Replace one scalar mpp_update_domains.

Replace

call mpp_update_domains(data, domain, flags=update_flags)

with

id_update = mpp_start_update_domains(data, domain, flags=update_flags)
...( doing some computation )
call mpp_complete_update_domains(id_update, data, domain, flags=update_flags)

Example 2: Replace group scalar mpp_update_domains

Replace

call mpp_update_domains(data_1, domain, flags=update_flags, complete=.false.)
.... ( other n-2 call mpp_update_domains with complete = .false. )
call mpp_update_domains(data_n, domain, flags=update_flags, complete=.true. )

With

id_up_1 = mpp_start_update_domains(data_1, domain, flags=update_flags)
.... ( other n-2 call mpp_start_update_domains )
id_up_n = mpp_start_update_domains(data_n, domain, flags=update_flags)

..... ( doing some computation )

call mpp_complete_update_domains(id_up_1, data_1, domain, flags=update_flags)
.... ( other n-2 call mpp_complete_update_domains )
call mpp_complete_update_domains(id_up_n, data_n, domain, flags=update_flags)
Example 3: Replace group CGRID_NE vector, mpp_update_domains

Replace

call mpp_update_domains(u_1, v_1, domain, flags=update_flags, gridtype=CGRID_NE, complete=.false.)
.... ( other n-2 call mpp_update_domains with complete = .false. )
call mpp_update_domains(u_n, v_n, domain, flags=update_flags, gridtype=CGRID_NE, complete=.true. )

with

id_up_1 = mpp_start_update_domains(u_1, v_1, domain, flags=update_flags, gridtype=CGRID_NE)
.... ( other n-2 call mpp_start_update_domains )
id_up_n = mpp_start_update_domains(u_n, v_n, domain, flags=update_flags, gridtype=CGRID_NE)

..... ( doing some computation )

call mpp_complete_update_domains(id_up_1, u_1, v_1, domain, flags=update_flags, gridtype=CGRID_NE)
.... ( other n-2 call mpp_complete_update_domains )
call mpp_complete_update_domains(id_up_n, u_n, v_n, domain, flags=update_flags, gridtype=CGRID_NE)

For 2D domain updates, if there are halos present along both x and y, we can choose to update one only, by specifying flags=XUPDATE or flags=YUPDATE. In addition, one-sided updates can be performed by setting flags to any combination of WUPDATE, EUPDATE, SUPDATE and NUPDATE, to update the west, east, north and south halos respectively. Any combination of halos may be used by adding the requisite flags, e.g: flags=XUPDATE+SUPDATE or flags=EUPDATE+WUPDATE+SUPDATE will update the east, west and south halos.

If a call to mpp_start_update_domains/mpp_complete_update_domains involves at least one E-W halo and one N-S halo, the corners involved will also be updated, i.e., in the example above, the SE and SW corners will be updated.

If flags is not supplied, that is equivalent to flags=XUPDATE+YUPDATE.
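
For example, a sketch refreshing only the south halo non-blockingly (the flags value must match in the start and complete calls; a and id_update are illustrative names):

       id_update = mpp_start_update_domains( a, domain, flags=SUPDATE )
       ...  ! computation that does not touch the south halo
       call mpp_complete_update_domains( id_update, a, domain, flags=SUPDATE )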

The vector version is passed the x and y components of a vector field in tandem, and both are updated upon return. They are passed together to treat parity issues on various grids. For example, on a cubic sphere projection, the x and y components may be interchanged when passing from an equatorial cube face to a polar face. For grids with folds, vector components change sign on crossing the fold. Paired scalar quantities can also be passed with the vector version if flags=SCALAR_PAIR, in which case components are appropriately interchanged, but signs are not.

Special treatment at boundaries such as folds is also required for staggered grids. The following types of staggered grids are recognized:
1) AGRID: values are at grid centers.
2) BGRID_NE: vector fields are at the NE vertex of a grid cell, i.e: the array elements u(i,j) and v(i,j) are actually at (i+½,j+½) with respect to the grid centers.
3) BGRID_SW: vector fields are at the SW vertex of a grid cell, i.e., the array elements u(i,j) and v(i,j) are actually at (i-½,j-½) with respect to the grid centers.
4) CGRID_NE: vector fields are at the N and E faces of a grid cell, i.e: the array elements u(i,j) and v(i,j) are actually at (i+½,j) and (i,j+½) with respect to the grid centers.
5) CGRID_SW: vector fields are at the S and W faces of a grid cell, i.e: the array elements u(i,j) and v(i,j) are actually at (i-½,j) and (i,j-½) with respect to the grid centers.

The gridtypes listed above are all available by use association as integer parameters. If vector fields are at staggered locations, the optional argument gridtype must be appropriately set for correct treatment at boundaries.
It is safe to apply vector field updates to the appropriate arrays irrespective of the domain topology: if the topology requires no special treatment of vector fields, specifying gridtype will do no harm.

mpp_start_update_domains/mpp_complete_update_domains internally buffers the data being sent and received into single messages for efficiency. A tunable internal buffer area in memory is provided for this purpose by mpp_domains_mod. The size of this buffer area can be set by the user by calling mpp_domains_set_stack_size.
Example usage:
            id_update = mpp_start_update_domains( field, domain, flags )
            call mpp_complete_update_domains( id_update, field, domain, flags )

Definition at line 1193 of file mpp_domains.F90.

Private Member Functions

 mpp_start_update_domain2d_i4_2d
 
 mpp_start_update_domain2d_i4_3d
 
 mpp_start_update_domain2d_i4_4d
 
 mpp_start_update_domain2d_i4_5d
 
 mpp_start_update_domain2d_i8_2d
 
 mpp_start_update_domain2d_i8_3d
 
 mpp_start_update_domain2d_i8_4d
 
 mpp_start_update_domain2d_i8_5d
 
 mpp_start_update_domain2d_r4_2d
 
 mpp_start_update_domain2d_r4_2dv
 
 mpp_start_update_domain2d_r4_3d
 
 mpp_start_update_domain2d_r4_3dv
 
 mpp_start_update_domain2d_r4_4d
 
 mpp_start_update_domain2d_r4_4dv
 
 mpp_start_update_domain2d_r4_5d
 
 mpp_start_update_domain2d_r4_5dv
 
 mpp_start_update_domain2d_r8_2d
 
 mpp_start_update_domain2d_r8_2dv
 
 mpp_start_update_domain2d_r8_3d
 
 mpp_start_update_domain2d_r8_3dv
 
 mpp_start_update_domain2d_r8_4d
 
 mpp_start_update_domain2d_r8_4dv
 
 mpp_start_update_domain2d_r8_5d
 
 mpp_start_update_domain2d_r8_5dv
 

◆ mpp_domains_mod::mpp_update_domains

interface mpp_domains_mod::mpp_update_domains

Performs halo updates for a given domain.

Used to perform a halo update of a domain-decomposed array on each PE. MPP_TYPE_ can be of type complex, integer, logical or real; of 4-byte or 8-byte kind; of rank up to 5. The vector version (with two input data fields) is only present for real types. For 2D domain updates, if there are halos present along both x and y, we can choose to update one only, by specifying flags=XUPDATE or flags=YUPDATE. In addition, one-sided updates can be performed by setting flags to any combination of WUPDATE, EUPDATE, SUPDATE and NUPDATE to update the west, east, north and south halos respectively. Any combination of halos may be used by adding the requisite flags, e.g: flags=XUPDATE+SUPDATE or flags=EUPDATE+WUPDATE+SUPDATE will update the east, west and south halos.

If a call to mpp_update_domains involves at least one E-W halo and one N-S halo, the corners involved will also be updated, i.e., in the example above, the SE and SW corners will be updated.
If flags is not supplied, that is equivalent to flags=XUPDATE+YUPDATE.

The vector version is passed the x and y components of a vector field in tandem, and both are updated upon return. They are passed together to treat parity issues on various grids. For example, on a cubic sphere projection, the x and y components may be interchanged when passing from an equatorial cube face to a polar face. For grids with folds, vector components change sign on crossing the fold. Paired scalar quantities can also be passed with the vector version if flags=SCALAR_PAIR, in which case components are appropriately interchanged, but signs are not.

Special treatment at boundaries such as folds is also required for staggered grids. The following types of staggered grids are recognized:

1) AGRID: values are at grid centers.
2) BGRID_NE: vector fields are at the NE vertex of a grid cell, i.e: the array elements u(i,j) and v(i,j) are actually at (i+½,j+½) with respect to the grid centers.
3) BGRID_SW: vector fields are at the SW vertex of a grid cell, i.e: the array elements u(i,j) and v(i,j) are actually at (i-½,j-½) with respect to the grid centers.
4) CGRID_NE: vector fields are at the N and E faces of a grid cell, i.e: the array elements u(i,j) and v(i,j) are actually at (i+½,j) and (i,j+½) with respect to the grid centers.
5) CGRID_SW: vector fields are at the S and W faces of a grid cell, i.e: the array elements u(i,j) and v(i,j) are actually at (i-½,j) and (i,j-½) with respect to the grid centers.

The gridtypes listed above are all available by use association as integer parameters. The scalar version of mpp_update_domains assumes that the values of a scalar field are always at AGRID locations, and no special boundary treatment is required. If vector fields are at staggered locations, the optional argument gridtype must be appropriately set for correct treatment at boundaries.
It is safe to apply vector field updates to the appropriate arrays irrespective of the domain topology: if the topology requires no special treatment of vector fields, specifying gridtype will do no harm.

mpp_update_domains internally buffers the data being sent and received into single messages for efficiency. A tunable internal buffer area in memory is provided for this purpose by mpp_domains_mod. The size of this buffer area can be set by the user by calling mpp_domains_set_stack_size.

Example usage:

       call mpp_update_domains( field, domain, flags )                     ! update the halo of the given field
       call mpp_update_domains( fieldx, fieldy, domain, flags, gridtype )  ! update the halos of the given vector field components

Definition at line 1012 of file mpp_domains.F90.

Private Member Functions

 mpp_update_domain2d_i4_2d
 
 mpp_update_domain2d_i4_3d
 
 mpp_update_domain2d_i4_4d
 
 mpp_update_domain2d_i4_5d
 
 mpp_update_domain2d_i8_2d
 
 mpp_update_domain2d_i8_3d
 
 mpp_update_domain2d_i8_4d
 
 mpp_update_domain2d_i8_5d
 
 mpp_update_domain2d_r4_2d
 
 mpp_update_domain2d_r4_2dv
 
 mpp_update_domain2d_r4_3d
 
 mpp_update_domain2d_r4_3dv
 
 mpp_update_domain2d_r4_4d
 
 mpp_update_domain2d_r4_4dv
 
 mpp_update_domain2d_r4_5d
 
 mpp_update_domain2d_r4_5dv
 
 mpp_update_domain2d_r8_2d
 
 mpp_update_domain2d_r8_2dv
 
 mpp_update_domain2d_r8_3d
 
 mpp_update_domain2d_r8_3dv
 
 mpp_update_domain2d_r8_4d
 
 mpp_update_domain2d_r8_4dv
 
 mpp_update_domain2d_r8_5d
 
 mpp_update_domain2d_r8_5dv
 

◆ mpp_domains_mod::mpp_update_domains_ad

interface mpp_domains_mod::mpp_update_domains_ad

Similar to mpp_update_domains, but updates adjoint domains.

Definition at line 1518 of file mpp_domains.F90.

Private Member Functions

 mpp_update_domains_ad_2d_r4_2d
 
 mpp_update_domains_ad_2d_r4_2dv
 
 mpp_update_domains_ad_2d_r4_3d
 
 mpp_update_domains_ad_2d_r4_3dv
 
 mpp_update_domains_ad_2d_r4_4d
 
 mpp_update_domains_ad_2d_r4_4dv
 
 mpp_update_domains_ad_2d_r4_5d
 
 mpp_update_domains_ad_2d_r4_5dv
 
 mpp_update_domains_ad_2d_r8_2d
 
 mpp_update_domains_ad_2d_r8_2dv
 
 mpp_update_domains_ad_2d_r8_3d
 
 mpp_update_domains_ad_2d_r8_3dv
 
 mpp_update_domains_ad_2d_r8_4d
 
 mpp_update_domains_ad_2d_r8_4dv
 
 mpp_update_domains_ad_2d_r8_5d
 
 mpp_update_domains_ad_2d_r8_5dv
 

◆ mpp_domains_mod::mpp_update_nest_coarse

interface mpp_domains_mod::mpp_update_nest_coarse

Pass the data from fine grid to fill the buffer to be ready to be interpolated onto coarse grid.
Example usage:

          call mpp_update_nest_coarse(field, nest_domain, field_out, nest_level, complete,
                            position, name, tile_count)

Definition at line 1437 of file mpp_domains.F90.

Private Member Functions

 mpp_update_nest_coarse_i4_2d
 
 mpp_update_nest_coarse_i4_3d
 
 mpp_update_nest_coarse_i4_4d
 
 mpp_update_nest_coarse_i8_2d
 
 mpp_update_nest_coarse_i8_3d
 
 mpp_update_nest_coarse_i8_4d
 
 mpp_update_nest_coarse_r4_2d
 
 mpp_update_nest_coarse_r4_2dv
 
 mpp_update_nest_coarse_r4_3d
 
 mpp_update_nest_coarse_r4_3dv
 
 mpp_update_nest_coarse_r4_4d
 
 mpp_update_nest_coarse_r4_4dv
 
 mpp_update_nest_coarse_r8_2d
 
 mpp_update_nest_coarse_r8_2dv
 
 mpp_update_nest_coarse_r8_3d
 
 mpp_update_nest_coarse_r8_3dv
 
 mpp_update_nest_coarse_r8_4d
 
 mpp_update_nest_coarse_r8_4dv
 

◆ mpp_domains_mod::mpp_update_nest_fine

interface mpp_domains_mod::mpp_update_nest_fine

Pass the data from coarse grid to fill the buffer to be ready to be interpolated onto fine grid.
Example usage:

           call mpp_update_nest_fine(field, nest_domain, wbuffer, ebuffer, sbuffer,
                       nbuffer, nest_level, flags, complete, position, extra_halo, name,
                       tile_count)

Definition at line 1383 of file mpp_domains.F90.

Private Member Functions

 mpp_update_nest_fine_i4_2d
 
 mpp_update_nest_fine_i4_3d
 
 mpp_update_nest_fine_i4_4d
 
 mpp_update_nest_fine_i8_2d
 
 mpp_update_nest_fine_i8_3d
 
 mpp_update_nest_fine_i8_4d
 
 mpp_update_nest_fine_r4_2d
 
 mpp_update_nest_fine_r4_2dv
 
 mpp_update_nest_fine_r4_3d
 
 mpp_update_nest_fine_r4_3dv
 
 mpp_update_nest_fine_r4_4d
 
 mpp_update_nest_fine_r4_4dv
 
 mpp_update_nest_fine_r8_2d
 
 mpp_update_nest_fine_r8_2dv
 
 mpp_update_nest_fine_r8_3d
 
 mpp_update_nest_fine_r8_3dv
 
 mpp_update_nest_fine_r8_4d
 
 mpp_update_nest_fine_r8_4dv
 

◆ mpp_domains_mod::nest_domain_type

type mpp_domains_mod::nest_domain_type

Domain with nested fine and coarse tiles.

Definition at line 455 of file mpp_domains.F90.


Private Attributes

integer, dimension(:), pointer iend_coarse
 
integer, dimension(:), pointer iend_fine
 
integer, dimension(:), pointer istart_coarse
 
integer, dimension(:), pointer istart_fine
 
integer, dimension(:), pointer jend_coarse
 
integer, dimension(:), pointer jend_fine
 
integer, dimension(:), pointer jstart_coarse
 
integer, dimension(:), pointer jstart_fine
 
character(len=name_length) name
 
type(nest_level_type), dimension(:), pointer nest => NULL()
 
integer, dimension(:), pointer nest_level => NULL()
 Added for moving nest functionality.
 
integer num_level
 
integer num_nest
 
integer, dimension(:), pointer tile_coarse
 
integer, dimension(:), pointer tile_fine
 

◆ mpp_domains_mod::nest_level_type

type mpp_domains_mod::nest_level_type

Private type to hold data for each level of nesting.

Definition at line 468 of file mpp_domains.F90.


Private Attributes

type(nestspec), pointer c2f_c => NULL()
 
type(nestspec), pointer c2f_e => NULL()
 
type(nestspec), pointer c2f_n => NULL()
 
type(nestspec), pointer c2f_t => NULL()
 
type(domain2d), pointer domain_coarse => NULL()
 
type(domain2d), pointer domain_fine => NULL()
 
type(nestspec), pointer f2c_c => NULL()
 
type(nestspec), pointer f2c_e => NULL()
 
type(nestspec), pointer f2c_n => NULL()
 
type(nestspec), pointer f2c_t => NULL()
 
integer, dimension(:), pointer iend_coarse
 
integer, dimension(:), pointer iend_fine
 
logical is_coarse
 
logical is_coarse_pe
 
logical is_fine
 
logical is_fine_pe
 
integer, dimension(:), pointer istart_coarse
 
integer, dimension(:), pointer istart_fine
 
integer, dimension(:), pointer jend_coarse
 
integer, dimension(:), pointer jend_fine
 
integer, dimension(:), pointer jstart_coarse
 
integer, dimension(:), pointer jstart_fine
 
integer, dimension(:), pointer my_nest_id
 
integer my_num_nest
 
integer num_nest
 
logical on_level
 
integer, dimension(:), pointer pelist => NULL()
 
integer, dimension(:), pointer pelist_coarse => NULL()
 
integer, dimension(:), pointer pelist_fine => NULL()
 
integer, dimension(:), pointer tile_coarse
 
integer, dimension(:), pointer tile_fine
 
integer x_refine
 
integer y_refine
 

◆ mpp_domains_mod::nestspec

type mpp_domains_mod::nestspec

Used to specify bounds and index information for nested tiles as a linked list.

Definition at line 438 of file mpp_domains.F90.


Private Attributes

type(index_type) center
 
type(index_type) east
 
integer extra_halo
 
type(nestspec), pointer next => NULL()
 
type(index_type) north
 
integer nrecv
 
integer nsend
 
type(overlap_type), dimension(:), pointer recv => NULL()
 
type(overlap_type), dimension(:), pointer send => NULL()
 
type(index_type) south
 
type(index_type) west
 
integer xbegin
 
integer xbegin_c
 
integer xbegin_f
 
integer xend
 
integer xend_c
 
integer xend_f
 
integer xsize_c
 
integer ybegin
 
integer ybegin_c
 
integer ybegin_f
 
integer yend
 
integer yend_c
 
integer yend_f
 
integer ysize_c
 

◆ mpp_domains_mod::nonblock_type

type mpp_domains_mod::nonblock_type

Used for nonblocking data transfer.

Definition at line 548 of file mpp_domains.F90.


Private Attributes

integer, dimension(max_request) buffer_pos_recv
 
integer, dimension(max_request) buffer_pos_send
 
integer(i8_kind), dimension(max_domain_fields) field_addrs
 
integer(i8_kind), dimension(max_domain_fields) field_addrs2
 
integer nfields
 
integer recv_msgsize
 
integer recv_pos
 
integer, dimension(max_request) request_recv
 
integer request_recv_count
 
integer, dimension(max_request) request_send
 
integer request_send_count
 
integer send_msgsize
 
integer send_pos
 
integer, dimension(max_request) size_recv
 
integer, dimension(max_request) type_recv
 
integer update_ehalo
 
integer update_flags
 
integer update_gridtype
 
integer update_nhalo
 
integer update_position
 
integer update_shalo
 
integer update_whalo
 

◆ mpp_domains_mod::operator(.eq.)

interface mpp_domains_mod::operator(.eq.)

Equality/inequality operators for domain types.


The module provides public operators to check for equality/inequality of domain types, e.g.:

       type(domain1D) :: a, b
       type(domain2D) :: c, d
       ...
       if( a.NE.b )then
       ...
       end if
       if( c==d )then
       ...
       end if


Domains are considered equal if and only if the start and end indices of each of their component global, data and compute domains are equal.

Definition at line 2170 of file mpp_domains.F90.

Private Member Functions

 mpp_domain1d_eq
 
 mpp_domain2d_eq
 
 mpp_domainug_eq
 

◆ mpp_domains_mod::operator(.ne.)

interface mpp_domains_mod::operator(.ne.)

Definition at line 2177 of file mpp_domains.F90.

Private Member Functions

 mpp_domain1d_ne
 
 mpp_domain2d_ne
 
 mpp_domainug_ne
 

◆ mpp_domains_mod::overlap_type

type mpp_domains_mod::overlap_type

Type for overlapping data.

Definition at line 319 of file mpp_domains.F90.


Private Attributes

integer count = 0
 number of overlapping regions
 
integer, dimension(:), pointer dir => NULL()
 direction ( value 1,2,3,4 = E,S,W,N)
 
logical, dimension(:), pointer from_contact => NULL()
 indicate if the overlap is computed from define_contact_overlap
 
integer, dimension(:), pointer ie => NULL()
 ending i-index
 
integer, dimension(:), pointer index => NULL()
 for refinement
 
integer, dimension(:), pointer is => NULL()
 starting i-index
 
integer, dimension(:), pointer je => NULL()
 ending j-index
 
integer, dimension(:), pointer js => NULL()
 starting j-index
 
integer, dimension(:), pointer msgsize => NULL()
 overlapping msgsize to be sent or received
 
integer pe
 
integer, dimension(:), pointer rotation => NULL()
 rotation angle.
 
integer start_pos
 start position in the buffer
 
integer, dimension(:), pointer tileme => NULL()
 my tile id for this overlap
 
integer, dimension(:), pointer tilenbr => NULL()
 neighbor tile id for this overlap
 
integer totsize
 total message size
 

◆ mpp_domains_mod::overlapspec

type mpp_domains_mod::overlapspec

Private type for overlap specifications.

Definition at line 341 of file mpp_domains.F90.


Private Attributes

integer ehalo
 
type(overlapspec), pointer next => NULL()
 
integer nhalo
 halo size
 
integer nrecv
 
integer nsend
 
type(overlap_type), dimension(:), pointer recv => NULL()
 
integer recvsize
 
type(overlap_type), dimension(:), pointer send => NULL()
 
integer sendsize
 
integer shalo
 
integer whalo
 
integer xbegin
 
integer xend
 
integer ybegin
 
integer yend
 

◆ mpp_domains_mod::tile_type

type mpp_domains_mod::tile_type

Upper and lower x and y bounds for a tile.

Definition at line 354 of file mpp_domains.F90.


Private Attributes

integer xbegin
 
integer xend
 
integer ybegin
 
integer yend
 

◆ mpp_domains_mod::unstruct_axis_spec

type mpp_domains_mod::unstruct_axis_spec

Private type for axis specification data for an unstructured grid.

Definition at line 229 of file mpp_domains.F90.


Private Attributes

integer begin
 
integer begin_index
 
integer end
 
integer end_index
 
integer max_size
 
integer size
 

◆ mpp_domains_mod::unstruct_domain_spec

type mpp_domains_mod::unstruct_domain_spec

Private type for axis specification data for an unstructured domain.

Definition at line 237 of file mpp_domains.F90.


Private Attributes

type(unstruct_axis_spec) compute
 
integer pe
 
integer pos
 
integer tile_id
 

◆ mpp_domains_mod::unstruct_overlap_type

type mpp_domains_mod::unstruct_overlap_type

Private type.

Definition at line 247 of file mpp_domains.F90.


Private Attributes

integer count = 0
 
integer, dimension(:), pointer i =>NULL()
 
integer, dimension(:), pointer j =>NULL()
 
integer pe
 

◆ mpp_domains_mod::unstruct_pass_type

type mpp_domains_mod::unstruct_pass_type

Private type.

Definition at line 257 of file mpp_domains.F90.


Private Attributes

integer nrecv
 
integer nsend
 
type(unstruct_overlap_type), dimension(:), pointer recv =>NULL()
 
type(unstruct_overlap_type), dimension(:), pointer send =>NULL()
 

Function/Subroutine Documentation

◆ compute_overlaps()

subroutine compute_overlaps ( type(domain2d), intent(inout)  domain,
integer, intent(in)  position,
type(overlapspec), intent(inout), pointer  update,
type(overlapspec), intent(inout), pointer  check,
integer, intent(in)  ishift,
integer, intent(in)  jshift,
integer, intent(in)  x_cyclic_offset,
integer, intent(in)  y_cyclic_offset,
integer, intent(in)  whalo,
integer, intent(in)  ehalo,
integer, intent(in)  shalo,
integer, intent(in)  nhalo 
)

Computes remote domain overlaps.

Assumes only one overlap in each direction; the overlap is calculated separately for T-, E-, C-, and N-cells.

Definition at line 1594 of file mpp_domains_define.inc.

◆ define_contact_point()

subroutine define_contact_point ( type(domain2d), intent(inout)  domain,
integer, intent(in)  position,
integer, intent(in)  num_contact,
integer, dimension(:), intent(in)  tile1,
integer, dimension(:), intent(in)  tile2,
integer, dimension(:), intent(in)  align1,
integer, dimension(:), intent(in)  align2,
real, dimension(:), intent(in)  refine1,
real, dimension(:), intent(in)  refine2,
integer, dimension(:), intent(in)  istart1,
integer, dimension(:), intent(in)  iend1,
integer, dimension(:), intent(in)  jstart1,
integer, dimension(:), intent(in)  jend1,
integer, dimension(:), intent(in)  istart2,
integer, dimension(:), intent(in)  iend2,
integer, dimension(:), intent(in)  jstart2,
integer, dimension(:), intent(in)  jend2,
integer, dimension(:), intent(in)  isgList,
integer, dimension(:), intent(in)  iegList,
integer, dimension(:), intent(in)  jsgList,
integer, dimension(:), intent(in)  jegList 
)

compute the overlapping between tiles for the T-cell.

Parameters
[in]  num_contact  number of contact regions
[in]  tile2        tile number
[in]  align2       align direction of contact region
[in]  refine2      refinement between tiles
[in]  iend1        i-index in tile_1 of contact region
[in]  jend1        j-index in tile_1 of contact region
[in]  iend2        i-index in tile_2 of contact region
[in]  jend2        j-index in tile_2 of contact region
[in]  ieglist      i-global domain of each tile
[in]  jeglist      j-global domain of each tile

Definition at line 5291 of file mpp_domains_define.inc.

◆ define_nest_level_type()

subroutine define_nest_level_type ( type(nest_level_type), intent(inout)  nest_domain,
integer, intent(in)  x_refine,
integer, intent(in)  y_refine,
integer, intent(in)  extra_halo 
)
Parameters
[in,out]  nest_domain  nest domain to be defined
[in]      extra_halo   halo value
[in]      y_refine     x and y refinements

Definition at line 465 of file mpp_define_nest_domains.inc.

◆ domain_update_is_needed()

logical function domain_update_is_needed ( type(domain2d), intent(in)  domain,
integer, intent(in)  whalo,
integer, intent(in)  ehalo,
integer, intent(in)  shalo,
integer, intent(in)  nhalo 
)

Returns .TRUE. if a halo update is needed for the given west/east/south/north halo widths.

Definition at line 1002 of file mpp_domains_util.inc.

◆ get_mesgsize()

integer function get_mesgsize ( type(overlap_type), intent(in)  overlap,
logical, dimension(:), intent(in)  do_dir 
)


Definition at line 1715 of file mpp_domains_util.inc.

◆ get_rank_recv()

integer function get_rank_recv ( type(domain2d), intent(in)  domain,
type(overlapspec), intent(in)  overlap_x,
type(overlapspec), intent(in)  overlap_y,
integer, intent(out)  rank_x,
integer, intent(out)  rank_y,
integer, intent(out)  ind_x,
integer, intent(out)  ind_y 
)


Definition at line 1523 of file mpp_domains_util.inc.

◆ get_rank_send()

integer function get_rank_send ( type(domain2d), intent(in)  domain,
type(overlapspec), intent(in)  overlap_x,
type(overlapspec), intent(in)  overlap_y,
integer, intent(out)  rank_x,
integer, intent(out)  rank_y,
integer, intent(out)  ind_x,
integer, intent(out)  ind_y 
)


Definition at line 1496 of file mpp_domains_util.inc.

◆ get_rank_unpack()

integer function get_rank_unpack ( type(domain2d), intent(in)  domain,
type(overlapspec), intent(in)  overlap_x,
type(overlapspec), intent(in)  overlap_y,
integer, intent(out)  rank_x,
integer, intent(out)  rank_y,
integer, intent(out)  ind_x,
integer, intent(out)  ind_y 
)


Definition at line 1687 of file mpp_domains_util.inc.

◆ get_vector_recv()

integer function get_vector_recv ( type(domain2d), intent(in)  domain,
type(overlapspec), intent(in)  update_x,
type(overlapspec), intent(in)  update_y,
integer, dimension(:), intent(out)  ind_x,
integer, dimension(:), intent(out)  ind_y,
integer, dimension(:), intent(out)  start_pos,
integer, dimension(:), intent(out)  pelist 
)


Definition at line 1553 of file mpp_domains_util.inc.

◆ get_vector_send()

integer function get_vector_send ( type(domain2d), intent(in)  domain,
type(overlapspec), intent(in)  update_x,
type(overlapspec), intent(in)  update_y,
integer, dimension(:), intent(out)  ind_x,
integer, dimension(:), intent(out)  ind_y,
integer, dimension(:), intent(out)  start_pos,
integer, dimension(:), intent(out)  pelist 
)


Definition at line 1619 of file mpp_domains_util.inc.

◆ init_nonblock_type()

subroutine init_nonblock_type ( type(nonblock_type), intent(inout)  nonblock_obj)

Initializes the given nonblock_type object to its default state.

Definition at line 125 of file mpp_domains_misc.inc.

◆ mpp_check_field_2d()

subroutine mpp_check_field_2d ( real, dimension(:,:), intent(in)  field_in,
integer, dimension(:), intent(in)  pelist1,
integer, dimension(:), intent(in)  pelist2,
type(domain2d), intent(in)  domain,
character(len=*), intent(in)  mesg,
integer, intent(in), optional  w_halo,
integer, intent(in), optional  s_halo,
integer, intent(in), optional  e_halo,
integer, intent(in), optional  n_halo,
logical, intent(in), optional  force_abort,
integer, intent(in), optional  position 
)

This routine is used to do parallel checking of 2D data between n and m PEs. The comparison is done on pelist2. When the size of pelist2 is 1 the halo can be checked; otherwise the halo cannot be checked.

Parameters
[in]  field_in     field to be checked
[in]  pelist2      pe list for the two groups
[in]  domain       domain for each pe
[in]  mesg         message to be printed out if differences are found
[in]  n_halo       halo size for west, south, east and north
[in]  force_abort  when true, call mpp_error if any difference is found; default is false
[in]  position     when the domain is symmetric, only value = CENTER is implemented

Definition at line 218 of file mpp_domains_misc.inc.
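
A minimal usage sketch through the generic mpp_check_field interface (the field name, pelists and message string below are illustrative, not part of the interface):

     real    :: a(isd:ied, jsd:jed)        ! field on the data domain
     integer :: pelist1(n1), pelist2(n2)   ! pelists of the two ensembles
     ! compare the two runs; abort if any difference is found
     call mpp_check_field(a, pelist1, pelist2, domain, 'u-component', force_abort=.true.)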

◆ mpp_check_field_2d_type1()

subroutine mpp_check_field_2d_type1 ( real, dimension(:,:), intent(in)  field_in,
integer, dimension(:), intent(in)  pelist1,
integer, dimension(:), intent(in)  pelist2,
type(domain2d), intent(in)  domain,
character(len=*), intent(in)  mesg,
integer, intent(in), optional  w_halo,
integer, intent(in), optional  s_halo,
integer, intent(in), optional  e_halo,
integer, intent(in), optional  n_halo,
logical, intent(in), optional  force_abort 
)

This routine is used to check a field between a run on 1 PE (pelist2) and a run on n PEs (pelist1). The data to be checked is sent to pelist2 and all comparison is done on pelist2.

Parameters
[in]  field_in     field to be checked
[in]  pelist2      pe list for the two groups
[in]  domain       domain for each pe
[in]  mesg         message to be printed out if differences are found
[in]  n_halo       halo size for west, south, east and north
[in]  force_abort  when true, call mpp_error if any difference is found; default is false

Definition at line 257 of file mpp_domains_misc.inc.

◆ mpp_check_field_2d_type2()

subroutine mpp_check_field_2d_type2 ( real, dimension(:,:), intent(in)  field_in,
integer, dimension(:), intent(in)  pelist1,
integer, dimension(:), intent(in)  pelist2,
type(domain2d), intent(in)  domain,
character(len=*), intent(in)  mesg,
logical, intent(in), optional  force_abort 
)

This routine is used to check a field between a run on m PEs (root PE) and a run on n PEs. This routine cannot check the halo.

Parameters
[in]  force_abort  when true, call mpp_error if any difference is found; default is false

Definition at line 388 of file mpp_domains_misc.inc.

◆ mpp_check_field_3d()

subroutine mpp_check_field_3d ( real, dimension(:,:,:), intent(in)  field_in,
integer, dimension(:), intent(in)  pelist1,
integer, dimension(:), intent(in)  pelist2,
type(domain2d), intent(in)  domain,
character(len=*), intent(in)  mesg,
integer, intent(in), optional  w_halo,
integer, intent(in), optional  s_halo,
integer, intent(in), optional  e_halo,
integer, intent(in), optional  n_halo,
logical, intent(in), optional  force_abort,
integer, intent(in), optional  position 
)

This routine is used to do parallel checking of 3D data between n and m PEs. The comparison is done on pelist2. When the size of pelist2 is 1 the halo can be checked; otherwise the halo cannot be checked.

Parameters
[in]  field_in     field to be checked
[in]  pelist2      pe list for the two groups
[in]  domain       domain for each pe
[in]  mesg         message to be printed out if differences are found
[in]  n_halo       halo size for west, south, east and north
[in]  force_abort  when true, call mpp_error if any difference is found; default is false
[in]  position     when the domain is symmetric, only value = CENTER is implemented

Definition at line 185 of file mpp_domains_misc.inc.

◆ mpp_clear_group_update()

subroutine mpp_clear_group_update ( type(mpp_group_update_type), intent(inout)  group)

Clears the given mpp_group_update_type so that it can be reused.

Definition at line 2374 of file mpp_domains_util.inc.

◆ mpp_compute_block_extent()

subroutine mpp_compute_block_extent ( integer, intent(in)  isg,
integer, intent(in)  ieg,
integer, intent(in)  ndivs,
integer, dimension(:), intent(out)  ibegin,
integer, dimension(:), intent(out)  iend 
)

Computes the extents of a grid block.

This implementation is different from mpp_compute_extents; the last block may hold the most points.

Definition at line 161 of file mpp_domains_define.inc.
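
For illustration, a sketch that splits the global i-range 1..100 into 4 blocks (the range and block count are arbitrary):

     integer :: ibegin(4), iend(4)
     ! ibegin(k):iend(k) is the index range of block k; the last block may carry the extra points
     call mpp_compute_block_extent(1, 100, 4, ibegin, iend)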

◆ mpp_copy_domain1d()

recursive subroutine mpp_copy_domain1d ( type(domain1d), intent(in)  domain_in,
type(domain1d), intent(inout)  domain_out 
)

Copies input 1d domain to the output 1d domain.

Parameters
[in]      domain_in   Input domain
[in,out]  domain_out  Output domain

Definition at line 1741 of file mpp_domains_util.inc.

◆ mpp_copy_domain1d_spec()

subroutine mpp_copy_domain1d_spec ( type(domain1d_spec), intent(in)  domain1D_spec_in,
type(domain1d_spec), intent(out)  domain1D_spec_out 
)

Copies input 1d domain spec to the output 1d domain spec.

Parameters
[in]   domain1d_spec_in   Input
[out]  domain1d_spec_out  Output

Definition at line 1896 of file mpp_domains_util.inc.

◆ mpp_copy_domain2d()

subroutine mpp_copy_domain2d ( type(domain2d), intent(in)  domain_in,
type(domain2d), intent(inout)  domain_out 
)

Copies input 2d domain to the output 2d domain.

Parameters
[in]      domain_in   Input domain
[in,out]  domain_out  Output domain

Definition at line 1773 of file mpp_domains_util.inc.

◆ mpp_copy_domain2d_spec()

subroutine mpp_copy_domain2d_spec ( type(domain2d_spec), intent(in)  domain2D_spec_in,
type(domain2d_spec), intent(out)  domain2d_spec_out 
)

Copies input 2d domain spec to the output 2d domain spec.

Parameters
[in]   domain2d_spec_in   Input
[out]  domain2d_spec_out  Output

Definition at line 1850 of file mpp_domains_util.inc.

◆ mpp_copy_domain_axis_spec()

subroutine mpp_copy_domain_axis_spec ( type(domain_axis_spec), intent(in)  domain_axis_spec_in,
type(domain_axis_spec), intent(out)  domain_axis_spec_out 
)

Copies input domain_axis_spec to the output domain_axis_spec.

Parameters
[in]   domain_axis_spec_in   Input
[out]  domain_axis_spec_out  Output

Definition at line 1907 of file mpp_domains_util.inc.

◆ mpp_create_super_grid_domain()

subroutine mpp_create_super_grid_domain ( type(domain2d), intent(inout)  domain)

Modifies the indices of the input domain to create the supergrid domain.

This is an example of how to use mpp_create_super_grid_domain

call mpp_copy_domain(domain_in, domain_out)
call mpp_create_super_grid_domain(domain_out)

domain_in is the original domain, domain_out is the domain with the supergrid indices.

Parameters
[in,out]  domain  Input domain

Definition at line 293 of file mpp_domains_util.inc.

◆ mpp_define_domains1d()

subroutine mpp_define_domains1d ( integer, dimension(:), intent(in)  global_indices,
integer, intent(in)  ndivs,
type(domain1d), intent(inout)  domain,
integer, dimension(0:), intent(in), optional  pelist,
integer, intent(in), optional  flags,
integer, intent(in), optional  halo,
integer, dimension(0:), intent(in), optional  extent,
logical, dimension(0:), intent(in), optional  maskmap,
integer, intent(in), optional  memory_size,
integer, intent(in), optional  begin_halo,
integer, intent(in), optional  end_halo 
)

Define data and computational domains on a 1D set of data (isg:ieg) and assign them to PEs.

Parameters
[in]      global_indices  (/ isg, ieg /) gives the extent of the global domain
[in]      ndivs           number of divisions of the domain: even divisions unless extent is present
[in,out]  domain          the returned domain1D; declared inout so that existing links, if any, can be nullified
[in]      pelist          list of PEs to which the domains are to be assigned (default 0...npes-1); the size of pelist must correspond to the number of mask=.TRUE. divisions
[in]      halo            flags define whether compute and data domains are global (undecomposed) and whether the global domain has periodic boundaries; halo defines the halo width (currently the same on both sides)
[in]      extent          array extent; defines the width of each division (used for non-uniform domain decomposition, e.g. load-balancing)
[in]      maskmap         a division whose maskmap=.FALSE. is not assigned to any domain. By default we assume decomposition of compute and data domains, non-periodic boundaries, no halo, and extents as close to uniform as the input parameters permit

Definition at line 281 of file mpp_domains_define.inc.
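
A minimal sketch, assuming a global index range of 1..100 decomposed over all PEs with a halo of width 2 (the values are illustrative):

     use mpp_mod,         only : mpp_npes
     use mpp_domains_mod, only : domain1D, mpp_define_domains
     type(domain1D) :: domain
     call mpp_define_domains( (/1,100/), mpp_npes(), domain, halo=2 )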

◆ mpp_define_domains2d()

subroutine mpp_define_domains2d ( integer, dimension(:), intent(in)  global_indices,
integer, dimension(:), intent(in)  layout,
type(domain2d), intent(inout)  domain,
integer, dimension(0:), intent(in), optional  pelist,
integer, intent(in), optional  xflags,
integer, intent(in), optional  yflags,
integer, intent(in), optional  xhalo,
integer, intent(in), optional  yhalo,
integer, dimension(0:), intent(in), optional  xextent,
integer, dimension(0:), intent(in), optional  yextent,
logical, dimension(0:,0:), intent(in), optional  maskmap,
character(len=*), intent(in), optional  name,
logical, intent(in), optional  symmetry,
integer, dimension(:), intent(in), optional  memory_size,
integer, intent(in), optional  whalo,
integer, intent(in), optional  ehalo,
integer, intent(in), optional  shalo,
integer, intent(in), optional  nhalo,
logical, intent(in), optional  is_mosaic,
integer, intent(in), optional  tile_count,
integer, intent(in), optional  tile_id,
logical, intent(in), optional  complete,
integer, intent(in), optional  x_cyclic_offset,
integer, intent(in), optional  y_cyclic_offset 
)

Define 2D data and computational domain on global rectilinear cartesian domain (isg:ieg,jsg:jeg) and assign them to PEs.

Parameters
[in]      global_indices   (/ isg, ieg, jsg, jeg /)
[in]      layout           PE layout
[in,out]  domain           2D domain decomposition to define
[in]      pelist           current pelist to run on
[in]      yflags           directional flag
[in]      yhalo            halo sizes for x and y indices
[in]      is_mosaic        indicates whether mpp_define_domains is being called from mpp_define_mosaic
[in]      nhalo            halo size for the west, east, south and north directions; if whalo and ehalo are not present they take the value of xhalo, and if shalo and nhalo are not present they take the value of yhalo
[in]      tile_count       tile number on the current PE, default value 1; used when multiple tiles reside on one processor
[in]      tile_id          tile id
[in]      complete         true indicates mpp_define_domains is completed for the mosaic definition
[in]      x_cyclic_offset  offset for the x-cyclic boundary condition: (0,j) = (ni, mod(j+x_cyclic_offset,nj)), (ni+1,j) = (1, mod(j+nj-x_cyclic_offset,nj))
[in]      y_cyclic_offset  offset for the y-cyclic boundary condition: (i,0) = (mod(i+y_cyclic_offset,ni), nj), (i,nj+1) = (mod(i+ni-y_cyclic_offset,ni), 1)

Definition at line 608 of file mpp_domains_define.inc.
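
A minimal sketch that decomposes a 100x100 global grid with 2-point halos, using a layout chosen by mpp_define_layout (the grid size and halo widths are illustrative):

     use mpp_mod,         only : mpp_npes
     use mpp_domains_mod, only : domain2D, mpp_define_layout, mpp_define_domains
     type(domain2D) :: domain
     integer        :: layout(2)
     call mpp_define_layout( (/1,100,1,100/), mpp_npes(), layout )
     call mpp_define_domains( (/1,100,1,100/), layout, domain, xhalo=2, yhalo=2 )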

◆ mpp_define_io_domain()

subroutine mpp_define_io_domain ( type(domain2d), intent(inout)  domain,
integer, dimension(2), intent(in)  io_layout 
)

Define the layout for IO pe's for the given domain.

Parameters
[in,out]  domain     Input 2D domain
[in]      io_layout  2-element I/O PE layout to define

Definition at line 457 of file mpp_domains_define.inc.
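
A sketch of grouping compute PEs into I/O domains once the decomposition exists; here domain and layout are assumed to come from earlier mpp_define_domains/mpp_define_layout calls, and the grouping factor of 4 is arbitrary (io_layout is normally chosen to divide the domain layout evenly):

     integer :: io_layout(2)
     io_layout = (/ layout(1)/4, layout(2) /)   ! one I/O group per 4 ranks along x
     call mpp_define_io_domain(domain, io_layout)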

◆ mpp_define_layout2d()

subroutine mpp_define_layout2d ( integer, dimension(:), intent(in)  global_indices,
integer, intent(in)  ndivs,
integer, dimension(:), intent(out)  layout 
)
Parameters
[in]  global_indices  (/ isg, ieg, jsg, jeg /); defines the global domain
[in]  ndivs           number of divisions to divide the global domain

Definition at line 27 of file mpp_domains_define.inc.
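
For example, through the generic mpp_define_layout interface (the 360x180 global grid is illustrative):

     integer :: layout(2)
     ! returns a 2D PE layout dividing the 360x180 index space across all PEs
     call mpp_define_layout( (/1,360,1,180/), mpp_npes(), layout )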

◆ mpp_define_mosaic()

subroutine mpp_define_mosaic ( integer, dimension(:,:), intent(in)  global_indices,
integer, dimension(:,:), intent(in)  layout,
type(domain2d), intent(inout)  domain,
integer, intent(in)  num_tile,
integer, intent(in)  num_contact,
integer, dimension(:), intent(in)  tile1,
integer, dimension(:), intent(in)  tile2,
integer, dimension(:), intent(in)  istart1,
integer, dimension(:), intent(in)  iend1,
integer, dimension(:), intent(in)  jstart1,
integer, dimension(:), intent(in)  jend1,
integer, dimension(:), intent(in)  istart2,
integer, dimension(:), intent(in)  iend2,
integer, dimension(:), intent(in)  jstart2,
integer, dimension(:), intent(in)  jend2,
integer, dimension(:), intent(in)  pe_start,
integer, dimension(:), intent(in)  pe_end,
integer, dimension(:), intent(in), optional  pelist,
integer, intent(in), optional  whalo,
integer, intent(in), optional  ehalo,
integer, intent(in), optional  shalo,
integer, intent(in), optional  nhalo,
integer, dimension(:,:), intent(in), optional  xextent,
integer, dimension(:,:), intent(in), optional  yextent,
logical, dimension(:,:,:), intent(in), optional  maskmap,
character(len=*), intent(in), optional  name,
integer, dimension(2), intent(in), optional  memory_size,
logical, intent(in), optional  symmetry,
integer, intent(in), optional  xflags,
integer, intent(in), optional  yflags,
integer, dimension(:), intent(in), optional  tile_id 
)

Defines a domain for mosaic tile grids.

Parameters
[in]  num_tile     number of tiles in the mosaic
[in]  num_contact  number of contact regions between tiles
[in]  tile2        tile number
[in]  iend1        i-index in tile_1 of contact region
[in]  jend1        j-index in tile_1 of contact region
[in]  iend2        i-index in tile_2 of contact region
[in]  jend2        j-index in tile_2 of contact region
[in]  pe_start     start PE of the pelist used in each tile
[in]  pe_end       end PE of the pelist used in each tile
[in]  pelist       list of processors used in the mosaic
[in]  tile_id      tile_id of each tile in the mosaic

Definition at line 1201 of file mpp_domains_define.inc.

◆ mpp_define_mosaic_pelist()

subroutine mpp_define_mosaic_pelist ( integer, dimension(:), intent(in)  sizes,
integer, dimension(:), intent(inout)  pe_start,
integer, dimension(:), intent(inout)  pe_end,
integer, dimension(:), intent(in), optional  pelist,
integer, dimension(:), intent(in), optional  costpertile 
)

Defines a pelist for use with mosaic tiles.

Note
The following routine may need to be revised to improve its capability; it is very hard to achieve a good balance in every situation.

Definition at line 62 of file mpp_domains_define.inc.

◆ mpp_define_nest_domains()

subroutine mpp_define_nest_domains ( type(nest_domain_type), intent(inout)  nest_domain,
type(domain2d), intent(in), target  domain,
integer, intent(in)  num_nest,
integer, dimension(:), intent(in)  nest_level,
integer, dimension(:), intent(in)  tile_fine,
integer, dimension(:), intent(in)  tile_coarse,
integer, dimension(:), intent(in)  istart_coarse,
integer, dimension(:), intent(in)  icount_coarse,
integer, dimension(:), intent(in)  jstart_coarse,
integer, dimension(:), intent(in)  jcount_coarse,
integer, dimension(:), intent(in)  npes_nest_tile,
integer, dimension(:), intent(in)  x_refine,
integer, dimension(:), intent(in)  y_refine,
integer, intent(in), optional  extra_halo,
character(len=*), intent(in), optional  name 
)

Set up a domain to pass data between aligned coarse and fine grid of nested model.

Set up a domain to pass data between the aligned coarse and fine grids of a nested model. Supports multiple and telescoping nests; a telescoping nest is a nest within a nest. Nest domains may span multiple tiles but cannot contain a coarse-grid cube corner. Concurrent nesting is the only supported mechanism, i.e. the coarse and fine grids run on separate, non-overlapping processor lists. The coarse and fine grid domains must be defined before calling mpp_define_nest_domains, and an mpp_broadcast is needed to broadcast both the fine and coarse grid domains onto all processors.

mpp_update_nest_coarse is used to pass data from the fine grid to the coarse grid computing domain. mpp_update_nest_fine is used to pass data from the coarse grid to the fine grid halo. You may call mpp_get_C2F_index before calling mpp_update_nest_fine to get the indices for passing data from coarse to fine, and mpp_get_F2C_index before calling mpp_update_nest_coarse to get the indices for passing data from fine to coarse.

Note
The following tests for nesting of regular lat-lon grids upon a cubed-sphere grid are done in test_mpp_domains:
a) a first-level nest spanning multiple cubed-sphere faces (tiles 1, 2, & 4)
b) a first-level nest wholly contained within tile 3
c) a second-level nest contained within the nest mentioned in a)
Tests are done for data at T, E, C, N-cell center.

Below is an example of passing data between the fine and coarse grids (more details on how to use the nesting domain update are available in routine test_update_nest_domain of test_fms/mpp/test_mpp_domains.F90).

if( concurrent ) then
call mpp_broadcast_domain(domain_fine)
call mpp_broadcast_domain(domain_coarse)
endif
call mpp_define_nest_domains(nest_domain,domain,num_nest,nest_level(1:num_nest), &
tile_fine(1:num_nest), tile_coarse(1:num_nest), &
istart_coarse(1:num_nest), icount_coarse(1:num_nest), &
jstart_coarse(1:num_nest), jcount_coarse(1:num_nest), &
npes_nest_tile, x_refine(1:num_nest), y_refine(1:num_nest), &
extra_halo=extra_halo, name="nest_domain")
call mpp_get_c2f_index(nest_domain, isw_f, iew_f, jsw_f, jew_f, isw_c, iew_c, jsw_c, jew_c, west, level)
call mpp_get_c2f_index(nest_domain, ise_f, iee_f, jse_f, jee_f, ise_c, iee_c, jse_c, jee_c, east, level)
call mpp_get_c2f_index(nest_domain, iss_f, ies_f, jss_f, jes_f, iss_c, ies_c, jss_c, jes_c, south, level)
call mpp_get_c2f_index(nest_domain, isn_f, ien_f, jsn_f, jen_f, isn_c, ien_c, jsn_c, jen_c, north, level)
allocate(wbuffer(isw_c:iew_c, jsw_c:jew_c,nz))
allocate(ebuffer(ise_c:iee_c, jse_c:jee_c,nz))
allocate(sbuffer(iss_c:ies_c, jss_c:jes_c,nz))
allocate(nbuffer(isn_c:ien_c, jsn_c:jen_c,nz))
call mpp_update_nest_fine(x, nest_domain, wbuffer, ebuffer, sbuffer, nbuffer)
call mpp_get_f2c_index(nest_domain, is_c, ie_c, js_c, je_c, is_f, ie_f, js_f, je_f, nest_level=level)
allocate(buffer(is_f:ie_f, js_f:je_f,nz))
call mpp_update_nest_coarse(x, nest_domain, buffer)
Note
currently the contact will be limited to overlap contact.

Parameters
[in,out]  nest_domain     holds the information to pass data between nest and parent grids
[in]      domain          domain for the grid defined in the current pelist
[in]      num_nest        number of nests
[in]      nest_level      array containing the nest level for each nest (>1 implies a telescoping nest)
[in]      tile_coarse     array containing the tile number of the nest grid (monotonically increasing, starting with 7); array containing the tile number of the parent grid corresponding to the lower-left corner of a given nest
[in]      jcount_coarse   start: array containing the index in the parent grid of the lower-left corner of a given nest; count: array containing the span of the nest on the parent grid
[in]      npes_nest_tile  array containing the number of PEs allocated to each defined tile
[in]      y_refine        array containing the refinement ratio for each nest
[in]      extra_halo      extra halo for passing data from coarse grid to fine grid; default is 0 and currently only extra_halo = 0 is supported
[in]      name            name of the nest domain

Definition at line 95 of file mpp_define_nest_domains.inc.

◆ mpp_define_unstruct_domain()

subroutine mpp_define_unstruct_domain ( type(domainug), intent(inout)  UG_domain,
type(domain2d), intent(in), target  SG_domain,
integer, dimension(:), intent(in)  npts_tile,
integer, dimension(:), intent(in)  grid_nlev,
integer, intent(in)  ndivs,
integer, intent(in)  npes_io_group,
integer, dimension(:), intent(in)  grid_index,
character(len=*), intent(in), optional  name 
)
Parameters
[in]  npts_tile      number of unstructured points on each tile
[in]  grid_nlev      number of levels in each unstructured grid
[in]  npes_io_group  number of processors in an I/O group; only PEs with the same tile_id are in the same group

Definition at line 25 of file mpp_unstruct_domain.inc.

◆ mpp_domain1d_eq()

logical function mpp_domain1d_eq ( type(domain1d), intent(in)  a,
type(domain1d), intent(in)  b 
)

Returns .TRUE. if the two 1D domains are equal, i.e. the start and end indices of their global, data and compute domains all match.

Definition at line 60 of file mpp_domains_util.inc.

◆ mpp_domain1d_ne()

logical function mpp_domain1d_ne ( type(domain1d), intent(in)  a,
type(domain1d), intent(in)  b 
)

Returns .TRUE. if the two 1D domains are not equal.

Definition at line 78 of file mpp_domains_util.inc.

◆ mpp_domain2d_eq()

logical function mpp_domain2d_eq ( type(domain2d), intent(in)  a,
type(domain2d), intent(in)  b 
)

Returns .TRUE. if the two 2D domains are equal, i.e. the start and end indices of their global, data and compute domains all match.

Definition at line 86 of file mpp_domains_util.inc.

◆ mpp_domain2d_ne()

logical function mpp_domain2d_ne ( type(domain2d), intent(in)  a,
type(domain2d), intent(in)  b 
)

Returns .TRUE. if the two 2D domains are not equal.

Definition at line 110 of file mpp_domains_util.inc.

◆ mpp_domain_is_initialized()

logical function mpp_domain_is_initialized ( type(domain2d), intent(in)  domain)

Returns .TRUE. if the given 2D domain has been initialized.

Definition at line 988 of file mpp_domains_util.inc.

◆ mpp_domain_is_symmetry()

logical function mpp_domain_is_symmetry ( type(domain2d), intent(in)  domain)

Returns .TRUE. if the given 2D domain is symmetric.

Definition at line 978 of file mpp_domains_util.inc.

◆ mpp_domains_init()

subroutine mpp_domains_init ( integer, intent(in), optional  flags)

Initialize domain decomp package.

Called to initialize the mpp_domains_mod package. flags can be set to MPP_VERBOSE to have mpp_domains_mod keep you informed of what it's up to. MPP_DEBUG returns even more information for debugging.

mpp_domains_init will call mpp_init, to make sure mpp_mod is initialized. (Repeated calls to mpp_init do no harm, so don't worry if you already called it).

Definition at line 44 of file mpp_domains_misc.inc.
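
A typical initialization sequence; MPP_VERBOSE is assumed to be taken from mpp_mod, and the flags argument may be omitted entirely:

     use mpp_mod,         only : mpp_init, MPP_VERBOSE
     use mpp_domains_mod, only : mpp_domains_init
     call mpp_init()
     call mpp_domains_init(MPP_VERBOSE)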

◆ mpp_domains_set_stack_size()

subroutine mpp_domains_set_stack_size ( integer, intent(in)  n)

Set user stack size.

This sets the size of an array that is used for internal storage by mpp_domains. This array is used, for instance, to buffer the data sent and received in halo updates.
This call has implied global synchronization. It should be placed somewhere where all PEs can call it.

Definition at line 35 of file mpp_domains_util.inc.
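
For example (the stack size shown is arbitrary); because of the implied global synchronization, the call is normally made on all PEs right after initialization:

     call mpp_domains_init()
     call mpp_domains_set_stack_size(1000000)   ! enlarge the internal halo-update buffer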

◆ mpp_get_c2f_index()

subroutine mpp_get_c2f_index ( type(nest_domain_type), intent(in)  nest_domain,
integer, intent(out)  is_fine,
integer, intent(out)  ie_fine,
integer, intent(out)  js_fine,
integer, intent(out)  je_fine,
integer, intent(out)  is_coarse,
integer, intent(out)  ie_coarse,
integer, intent(out)  js_coarse,
integer, intent(out)  je_coarse,
integer, intent(in)  dir,
integer, intent(in)  nest_level,
integer, intent(in), optional  position 
)

Get the index of the data passed from coarse grid to fine grid.


Example usage:

call mpp_get_c2f_index(nest_domain, is_fine, ie_fine, js_fine, je_fine, &
                       is_coarse, ie_coarse, js_coarse, je_coarse, dir, &
                       nest_level, position)
Parameters
[in]   nest_domain  holds the information to pass data between fine and coarse grids
[out]  je_fine      index in the fine grid of the nested region
[out]  je_coarse    index in the coarse grid of the nested region
[in]   nest_level   direction of the halo update (its value should be WEST, EAST, SOUTH or NORTH); level of the nest (> 1 implies a telescoping nest)
[in]   position     Cell position; its value should be CENTER, EAST, CORNER, or NORTH

Definition at line 1638 of file mpp_define_nest_domains.inc.

◆ mpp_get_compute_domain1d()

subroutine mpp_get_compute_domain1d ( type(domain1d), intent(in)  domain,
integer, intent(out), optional  begin,
integer, intent(out), optional  end,
integer, intent(out), optional  size,
integer, intent(out), optional  max_size,
logical, intent(out), optional  is_global 
)

Retrieves the compute domain index limits of a 1D domain.

Definition at line 124 of file mpp_domains_util.inc.

◆ mpp_get_compute_domain2d()

subroutine mpp_get_compute_domain2d ( type(domain2d), intent(in)  domain,
integer, intent(out), optional  xbegin,
integer, intent(out), optional  xend,
integer, intent(out), optional  ybegin,
integer, intent(out), optional  yend,
integer, intent(out), optional  xsize,
integer, intent(out), optional  xmax_size,
integer, intent(out), optional  ysize,
integer, intent(out), optional  ymax_size,
logical, intent(out), optional  x_is_global,
logical, intent(out), optional  y_is_global,
integer, intent(in), optional  tile_count,
integer, intent(in), optional  position 
)

Retrieves the compute domain index limits and sizes of a 2D domain.

Definition at line 178 of file mpp_domains_util.inc.
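
A common usage sketch through the generic mpp_get_compute_domain interface (the bound names isc, iec, jsc, jec are illustrative):

     integer :: isc, iec, jsc, jec
     ! compute-domain bounds of the calling PE
     call mpp_get_compute_domain(domain, isc, iec, jsc, jec)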

◆ mpp_get_compute_domains1d()

subroutine mpp_get_compute_domains1d ( type(domain1d), intent(in)  domain,
integer, dimension(:), intent(out), optional  begin,
integer, dimension(:), intent(out), optional  end,
integer, dimension(:), intent(out), optional  size 
)

Retrieves the compute domain index limits of every division of a 1D domain.

Definition at line 445 of file mpp_domains_util.inc.

◆ mpp_get_compute_domains2d()

subroutine mpp_get_compute_domains2d ( type(domain2d), intent(in)  domain,
integer, dimension(:), intent(out), optional  xbegin,
integer, dimension(:), intent(out), optional  xend,
integer, dimension(:), intent(out), optional  xsize,
integer, dimension(:), intent(out), optional  ybegin,
integer, dimension(:), intent(out), optional  yend,
integer, dimension(:), intent(out), optional  ysize,
integer, intent(in), optional  position 
)

Retrieves the compute domain index limits of every PE of a 2D domain.

Definition at line 471 of file mpp_domains_util.inc.

◆ mpp_get_data_domain1d()

subroutine mpp_get_data_domain1d ( type(domain1d), intent(in)  domain,
integer, intent(out), optional  begin,
integer, intent(out), optional  end,
integer, intent(out), optional  size,
integer, intent(out), optional  max_size,
logical, intent(out), optional  is_global 
)

Retrieves the data domain index limits of a 1D domain.

Definition at line 138 of file mpp_domains_util.inc.

◆ mpp_get_data_domain2d()

subroutine mpp_get_data_domain2d ( type(domain2d), intent(in)  domain,
integer, intent(out), optional  xbegin,
integer, intent(out), optional  xend,
integer, intent(out), optional  ybegin,
integer, intent(out), optional  yend,
integer, intent(out), optional  xsize,
integer, intent(out), optional  xmax_size,
integer, intent(out), optional  ysize,
integer, intent(out), optional  ymax_size,
logical, intent(out), optional  x_is_global,
logical, intent(out), optional  y_is_global,
integer, intent(in), optional  tile_count,
integer, intent(in), optional  position 
)

Retrieves the data domain index limits and sizes of a 2D domain.

Definition at line 203 of file mpp_domains_util.inc.
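
A typical pattern is to size local arrays on the data domain (compute domain plus halos); a sketch through the generic mpp_get_data_domain interface, with illustrative names:

     integer :: isd, ied, jsd, jed
     real, allocatable :: field(:,:)
     call mpp_get_data_domain(domain, isd, ied, jsd, jed)
     allocate( field(isd:ied, jsd:jed) )   ! includes halo points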

◆ mpp_get_domain_commid()

integer function mpp_get_domain_commid ( integer, intent(in)  domain)

Returns the communicator id associated with the domain.

Definition at line 702 of file mpp_domains_util.inc.

◆ mpp_get_domain_components()

subroutine mpp_get_domain_components ( type(domain2d), intent(in)  domain,
type(domain1d), intent(inout), optional  x,
type(domain1d), intent(inout), optional  y,
integer, intent(in), optional  tile_count 
)

Retrieve 1D components of 2D decomposition.

It is sometimes necessary to have direct recourse to the domain1D types that compose a domain2D object. This call retrieves them.

call mpp_get_domain_components( domain, x, y )

Definition at line 431 of file mpp_domains_util.inc.

◆ mpp_get_domain_extents1d()

subroutine mpp_get_domain_extents1d ( type(domain2d), intent(in)  domain,
integer, dimension(0:), intent(inout)  xextent,
integer, dimension(0:), intent(inout)  yextent 
)

Retrieves the x and y extents of each division of a 2D domain.

Definition at line 617 of file mpp_domains_util.inc.

◆ mpp_get_domain_name()

character(len=name_length) function mpp_get_domain_name ( type(domain2d), intent(in)  domain)

Returns the name of the given domain.

Definition at line 1441 of file mpp_domains_util.inc.

◆ mpp_get_domain_npes()

integer function mpp_get_domain_npes ( type(domain2d), intent(in)  domain)

Returns the number of PEs assigned to the given domain.

Definition at line 1458 of file mpp_domains_util.inc.

◆ mpp_get_domain_pe()

integer function mpp_get_domain_pe ( type(domain2d), intent(in)  domain)

Returns the PE associated with the given domain.

Definition at line 674 of file mpp_domains_util.inc.

◆ mpp_get_domain_pelist()

subroutine mpp_get_domain_pelist ( type(domain2d), intent(in)  domain,
integer, dimension(:), intent(out)  pelist 
)

Retrieves the pelist associated with the given domain.

Definition at line 1469 of file mpp_domains_util.inc.

◆ mpp_get_domain_root_pe()

integer function mpp_get_domain_root_pe ( type(domain2d), intent(in)  domain)

Returns the root PE of the given domain.

Definition at line 1450 of file mpp_domains_util.inc.

◆ mpp_get_domain_shift()

subroutine mpp_get_domain_shift ( type(domain2d), intent(in)  domain,
integer, intent(out)  ishift,
integer, intent(out)  jshift,
integer, intent(in), optional  position 
)

Returns the shift value in the x- and y-directions according to the domain position.

When the domain is symmetric, one extra point may be needed in the x- and/or y-direction. This routine returns the shift value based on the position.

call mpp_get_domain_shift( domain, ishift, jshift, position )
Parameters
[out]  jshift    return value will be 0 or 1
[in]   position  position of data; its value can be CENTER, EAST, NORTH or CORNER

Definition at line 811 of file mpp_domains_util.inc.

◆ mpp_get_domain_tile_commid()

integer function mpp_get_domain_tile_commid ( type(domain2d), intent(in)  domain)

Returns the communicator id of the tile on which the given domain resides.

Definition at line 693 of file mpp_domains_util.inc.

◆ mpp_get_domain_tile_root_pe()

integer function mpp_get_domain_tile_root_pe ( type(domain2d), intent(in)  domain)

Returns the root PE of the tile on which the given domain resides.

Definition at line 684 of file mpp_domains_util.inc.

◆ mpp_get_f2c_index_coarse()

subroutine mpp_get_f2c_index_coarse ( type(nest_domain_type), intent(in)  nest_domain,
integer, intent(out)  is_coarse,
integer, intent(out)  ie_coarse,
integer, intent(out)  js_coarse,
integer, intent(out)  je_coarse,
integer, intent(in)  nest_level,
integer, intent(in), optional  position 
)
Parameters
[in]   nest_domain  holds the information to pass data between fine and coarse grids
[out]  je_coarse    index in the coarse grid of the nested region
[in]   nest_level   level of the nest (> 1 implies a telescoping nest)
[in]   position     Cell position; its value should be CENTER, EAST, CORNER, or NORTH

Definition at line 1768 of file mpp_define_nest_domains.inc.

◆ mpp_get_f2c_index_fine()

subroutine mpp_get_f2c_index_fine ( type(nest_domain_type), intent(in)  nest_domain,
integer, intent(out)  is_coarse,
integer, intent(out)  ie_coarse,
integer, intent(out)  js_coarse,
integer, intent(out)  je_coarse,
integer, intent(out)  is_fine,
integer, intent(out)  ie_fine,
integer, intent(out)  js_fine,
integer, intent(out)  je_fine,
integer, intent(in)  nest_level,
integer, intent(in), optional  position 
)
Parameters
[in]   nest_domain  holds the information to pass data between fine and coarse grids
[out]  je_fine      index in the fine grid of the nested region
[out]  je_coarse    index in the coarse grid of the nested region
[in]   nest_level   level of the nest (> 1 implies a telescoping nest)
[in]   position     Cell position; its value should be CENTER, EAST, CORNER, or NORTH

Definition at line 1719 of file mpp_define_nest_domains.inc.

◆ mpp_get_global_domain1d()

subroutine mpp_get_global_domain1d ( type(domain1d), intent(in)  domain,
integer, intent(out), optional  begin,
integer, intent(out), optional  end,
integer, intent(out), optional  size,
integer, intent(out), optional  max_size 
)

Retrieves the global domain index limits of a 1D domain.

Definition at line 152 of file mpp_domains_util.inc.

◆ mpp_get_global_domain2d()

subroutine mpp_get_global_domain2d ( type(domain2d), intent(in)  domain,
integer, intent(out), optional  xbegin,
integer, intent(out), optional  xend,
integer, intent(out), optional  ybegin,
integer, intent(out), optional  yend,
integer, intent(out), optional  xsize,
integer, intent(out), optional  xmax_size,
integer, intent(out), optional  ysize,
integer, intent(out), optional  ymax_size,
integer, intent(in), optional  tile_count,
integer, intent(in), optional  position 
)

Retrieves the global domain index limits and sizes of a 2D domain.

Definition at line 228 of file mpp_domains_util.inc.

◆ mpp_get_global_domains1d()

subroutine mpp_get_global_domains1d ( type(domain1d), intent(in)  domain,
integer, dimension(:), intent(out), optional  begin,
integer, dimension(:), intent(out), optional  end,
integer, dimension(:), intent(out), optional  size 
)

Retrieves the global domain index limits of every division of a 1D domain.

Definition at line 530 of file mpp_domains_util.inc.

◆ mpp_get_global_domains2d()

subroutine mpp_get_global_domains2d ( type(domain2d), intent(in)  domain,
integer, dimension(:), intent(out), optional  xbegin,
integer, dimension(:), intent(out), optional  xend,
integer, dimension(:), intent(out), optional  xsize,
integer, dimension(:), intent(out), optional  ybegin,
integer, dimension(:), intent(out), optional  yend,
integer, dimension(:), intent(out), optional  ysize,
integer, intent(in), optional  position 
)

Retrieves the global domain index limits of every PE of a 2D domain.

Definition at line 557 of file mpp_domains_util.inc.

◆ mpp_get_io_domain()

type(domain2d) function, pointer mpp_get_io_domain ( type(domain2d), intent(in)  domain)

Returns a pointer to the I/O domain of the given 2D domain.

Definition at line 711 of file mpp_domains_util.inc.

◆ mpp_get_io_domain_layout()

integer function, dimension(2) mpp_get_io_domain_layout ( type(domain2d), intent(in)  domain)

Returns the I/O domain layout of the given 2D domain.

Definition at line 1487 of file mpp_domains_util.inc.

◆ mpp_get_layout1d()

subroutine mpp_get_layout1d ( type(domain1d), intent(in)  domain,
integer, intent(out)  layout 
)

Retrieves the layout (number of divisions) of a 1D domain.

Definition at line 773 of file mpp_domains_util.inc.

◆ mpp_get_layout2d()

subroutine mpp_get_layout2d ( type(domain2d), intent(in)  domain,
integer, dimension(2), intent(out)  layout 
)

Retrieves the PE layout of a 2D domain.

Definition at line 789 of file mpp_domains_util.inc.

◆ mpp_get_memory_domain1d()

subroutine mpp_get_memory_domain1d ( type(domain1d), intent(in)  domain,
integer, intent(out), optional  begin,
integer, intent(out), optional  end,
integer, intent(out), optional  size,
integer, intent(out), optional  max_size,
logical, intent(out), optional  is_global 
)

Retrieves the memory domain index limits of a 1D domain.

Definition at line 164 of file mpp_domains_util.inc.

◆ mpp_get_memory_domain2d()

subroutine mpp_get_memory_domain2d ( type(domain2d), intent(in)  domain,
integer, intent(out), optional  xbegin,
integer, intent(out), optional  xend,
integer, intent(out), optional  ybegin,
integer, intent(out), optional  yend,
integer, intent(out), optional  xsize,
integer, intent(out), optional  xmax_size,
integer, intent(out), optional  ysize,
integer, intent(out), optional  ymax_size,
logical, intent(out), optional  x_is_global,
logical, intent(out), optional  y_is_global,
integer, intent(in), optional  position 
)

Retrieves the memory domain index limits and sizes of a 2D domain.

Definition at line 252 of file mpp_domains_util.inc.
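
Example usage (a sketch; it assumes the generic interface mpp_get_memory_domain resolves to this routine, and that domain is an existing domain2D):

    integer :: ism, iem, jsm, jem

    ! Index limits of the memory (allocation) domain on the local PE.
    call mpp_get_memory_domain(domain, xbegin=ism, xend=iem, ybegin=jsm, yend=jem)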

◆ mpp_get_num_overlap()

integer function mpp_get_num_overlap ( type(domain2d), intent(in)  domain,
integer, intent(in)  action,
integer, intent(in)  p,
integer, intent(in), optional  position 
)

Returns the number of overlap regions in the p-th entry of the domain's send or receive update list, as selected by action.

Definition at line 1279 of file mpp_domains_util.inc.

◆ mpp_get_overlap()

subroutine mpp_get_overlap ( type(domain2d), intent(in)  domain,
integer, intent(in)  action,
integer, intent(in)  p,
integer, dimension(:), intent(out)  is,
integer, dimension(:), intent(out)  ie,
integer, dimension(:), intent(out)  js,
integer, dimension(:), intent(out)  je,
integer, dimension(:), intent(out)  dir,
integer, dimension(:), intent(out)  rot,
integer, intent(in), optional  position 
)

Retrieves the index bounds (is, ie, js, je), direction (dir), and rotation (rot) of the overlap regions in the p-th entry of the domain's send or receive update list, as selected by action.

Definition at line 1388 of file mpp_domains_util.inc.
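
Example usage (a sketch; domain is assumed to be an existing domain2D, and action and p must be set by the caller: action to the send/receive flag defined by the mpp parameter definitions, p to the update-list entry being inspected):

    integer :: action, p, n
    integer, allocatable :: is(:), ie(:), js(:), je(:), dir(:), rot(:)

    ! Size the output arrays from the overlap count, then retrieve the overlaps.
    n = mpp_get_num_overlap(domain, action, p)
    allocate(is(n), ie(n), js(n), je(n), dir(n), rot(n))
    call mpp_get_overlap(domain, action, p, is, ie, js, je, dir, rot)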

◆ mpp_get_pelist1d()

subroutine mpp_get_pelist1d ( type(domain1d), intent(in)  domain,
integer, dimension(:), intent(out)  pelist,
integer, intent(out), optional  pos 
)

Returns the list of PEs associated with the given 1D domain; pos optionally returns the calling PE's position within that list.

Definition at line 729 of file mpp_domains_util.inc.

◆ mpp_get_pelist2d()

subroutine mpp_get_pelist2d ( type(domain2d), intent(in)  domain,
integer, dimension(:), intent(out)  pelist,
integer, intent(out), optional  pos 
)

Returns the list of PEs associated with the given 2D domain; pos optionally returns the calling PE's position within that list.

Definition at line 753 of file mpp_domains_util.inc.
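
Example usage (a sketch; it assumes the generic interface mpp_get_pelist resolves to this routine, and that npes holds the number of PEs spanned by the domain):

    integer, allocatable :: pelist(:)
    integer :: pos

    allocate(pelist(npes))
    ! List of PEs over which the domain is distributed.
    call mpp_get_pelist(domain, pelist, pos=pos)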

◆ mpp_get_tile_compute_domains()

subroutine mpp_get_tile_compute_domains ( type(domain2d), intent(in)  domain,
integer, dimension(:), intent(out)  xbegin,
integer, dimension(:), intent(out)  xend,
integer, dimension(:), intent(out)  ybegin,
integer, dimension(:), intent(out)  yend,
integer, intent(in), optional  position 
)

Retrieves the compute-domain index bounds of every task in the given 2D domain.

Definition at line 1236 of file mpp_domains_util.inc.
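
Example usage (a sketch; npes is assumed to hold the number of tasks in the domain):

    integer, allocatable :: xb(:), xe(:), yb(:), ye(:)

    allocate(xb(npes), xe(npes), yb(npes), ye(npes))
    ! Compute-domain bounds of every task in the decomposition.
    call mpp_get_tile_compute_domains(domain, xb, xe, yb, ye)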

◆ mpp_get_update_pelist()

subroutine mpp_get_update_pelist ( type(domain2d), intent(in)  domain,
integer, intent(in)  action,
integer, dimension(:), intent(inout)  pelist,
integer, intent(in), optional  position 
)

Retrieves the list of PEs involved in the domain's send or receive halo update, as selected by action.

Definition at line 1346 of file mpp_domains_util.inc.

◆ mpp_get_update_size()

subroutine mpp_get_update_size ( type(domain2d), intent(in)  domain,
integer, intent(out)  nsend,
integer, intent(out)  nrecv,
integer, intent(in), optional  position 
)

Returns the number of send (nsend) and receive (nrecv) operations involved in the domain's halo update.

Definition at line 1318 of file mpp_domains_util.inc.
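
Example usage (a sketch; domain is assumed to be an existing domain2D):

    integer :: nsend, nrecv

    ! Number of send and receive operations in the domain's halo update.
    call mpp_get_update_size(domain, nsend, nrecv)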

◆ mpp_group_update_initialized()

logical function mpp_group_update_initialized ( type(mpp_group_update_type), intent(in)  group)

Returns the initialization status of the given group update object.

Definition at line 2388 of file mpp_domains_util.inc.

◆ mpp_group_update_is_set()

logical function mpp_group_update_is_set ( type(mpp_group_update_type), intent(in)  group)

Returns whether the given group update object has been set, i.e. whether it contains any update information.

Definition at line 2397 of file mpp_domains_util.inc.

◆ mpp_modify_domain1d()

subroutine mpp_modify_domain1d ( type(domain1d), intent(in)  domain_in,
type(domain1d), intent(inout)  domain_out,
integer, intent(in), optional  cbegin,
integer, intent(in), optional  cend,
integer, intent(in), optional  gbegin,
integer, intent(in), optional  gend,
integer, intent(in), optional  hbegin,
integer, intent(in), optional  hend 
)

Modifies the extents of a 1D domain.

Parameters
    [in]      domain_in    The source domain.
    [in,out]  domain_out   The returned domain.
    [in]      cend         Axis specifications associated with the compute domain of the returned 1D domain.
    [in]      gend         Axis specifications associated with the global domain of the returned 1D domain.
    [in]      hend         Halo size.

Definition at line 7545 of file mpp_domains_define.inc.

◆ mpp_modify_domain2d()

subroutine mpp_modify_domain2d ( type(domain2d), intent(in)  domain_in,
type(domain2d), intent(inout)  domain_out,
integer, intent(in), optional  isc,
integer, intent(in), optional  iec,
integer, intent(in), optional  jsc,
integer, intent(in), optional  jec,
integer, intent(in), optional  isg,
integer, intent(in), optional  ieg,
integer, intent(in), optional  jsg,
integer, intent(in), optional  jeg,
integer, intent(in), optional  whalo,
integer, intent(in), optional  ehalo,
integer, intent(in), optional  shalo,
integer, intent(in), optional  nhalo 
)

Modifies the extents of a 2D domain.

Parameters
    [in]      domain_in    The source domain.
    [in,out]  domain_out   The returned domain.
    [in]      jec          Zonal and meridional axis specifications associated with the compute domain of the returned 2D domain.
    [in]      jeg          Zonal and meridional axis specifications associated with the global domain of the returned 2D domain.
    [in]      nhalo        Halo size in the x and y directions.

Definition at line 7581 of file mpp_domains_define.inc.
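
Example usage (a sketch; it assumes the generic interface mpp_modify_domain resolves to this routine, and that domain is an existing domain2D):

    type(domain2D) :: domain_out

    ! Derive a new domain from an existing one, overriding only the halo widths.
    call mpp_modify_domain(domain, domain_out, whalo=1, ehalo=1, shalo=1, nhalo=1)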

◆ mpp_set_compute_domain1d()

subroutine mpp_set_compute_domain1d ( type(domain1d), intent(inout)  domain,
integer, intent(in), optional  begin,
integer, intent(in), optional  end,
integer, intent(in), optional  size,
logical, intent(in), optional  is_global 
)

Sets the compute-domain index limits of the given 1D domain.

Definition at line 338 of file mpp_domains_util.inc.

◆ mpp_set_compute_domain2d()

subroutine mpp_set_compute_domain2d ( type(domain2d), intent(inout)  domain,
integer, intent(in), optional  xbegin,
integer, intent(in), optional  xend,
integer, intent(in), optional  ybegin,
integer, intent(in), optional  yend,
integer, intent(in), optional  xsize,
integer, intent(in), optional  ysize,
logical, intent(in), optional  x_is_global,
logical, intent(in), optional  y_is_global,
integer, intent(in), optional  tile_count 
)

Sets the compute-domain index limits of the given 2D domain.

Definition at line 351 of file mpp_domains_util.inc.
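
Example usage (a sketch; it assumes the generic interface mpp_set_compute_domain resolves to this routine, and that isc, iec, jsc, jec hold the desired compute-domain bounds):

    ! Overwrite the compute-domain bounds stored in an existing domain.
    call mpp_set_compute_domain(domain, xbegin=isc, xend=iec, ybegin=jsc, yend=jec, &
                                xsize=iec-isc+1, ysize=jec-jsc+1)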

◆ mpp_set_data_domain1d()

subroutine mpp_set_data_domain1d ( type(domain1d), intent(inout)  domain,
integer, intent(in), optional  begin,
integer, intent(in), optional  end,
integer, intent(in), optional  size,
logical, intent(in), optional  is_global 
)

Sets the data-domain index limits of the given 1D domain.

Definition at line 368 of file mpp_domains_util.inc.

◆ mpp_set_data_domain2d()

subroutine mpp_set_data_domain2d ( type(domain2d), intent(inout)  domain,
integer, intent(in), optional  xbegin,
integer, intent(in), optional  xend,
integer, intent(in), optional  ybegin,
integer, intent(in), optional  yend,
integer, intent(in), optional  xsize,
integer, intent(in), optional  ysize,
logical, intent(in), optional  x_is_global,
logical, intent(in), optional  y_is_global,
integer, intent(in), optional  tile_count 
)

Sets the data-domain index limits of the given 2D domain.

Definition at line 381 of file mpp_domains_util.inc.

◆ mpp_set_domain_symmetry()

subroutine mpp_set_domain_symmetry ( type(domain2d), intent(inout)  domain,
logical, intent(in)  symmetry 
)

Sets the symmetry attribute of the given 2D domain.

Definition at line 1732 of file mpp_domains_util.inc.
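
Example usage (a sketch; domain is assumed to be an existing domain2D):

    ! Mark the decomposition as symmetric (e.g. for staggered-grid fields).
    call mpp_set_domain_symmetry(domain, .true.)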

◆ mpp_set_global_domain1d()

subroutine mpp_set_global_domain1d ( type(domain1d), intent(inout)  domain,
integer, intent(in), optional  begin,
integer, intent(in), optional  end,
integer, intent(in), optional  size 
)

Sets the global-domain index limits of the given 1D domain.

Definition at line 398 of file mpp_domains_util.inc.

◆ mpp_set_global_domain2d()

subroutine mpp_set_global_domain2d ( type(domain2d), intent(inout)  domain,
integer, intent(in), optional  xbegin,
integer, intent(in), optional  xend,
integer, intent(in), optional  ybegin,
integer, intent(in), optional  yend,
integer, intent(in), optional  xsize,
integer, intent(in), optional  ysize,
integer, intent(in), optional  tile_count 
)

Sets the global-domain index limits of the given 2D domain.

Definition at line 409 of file mpp_domains_util.inc.

◆ mpp_set_super_grid_indices()

subroutine mpp_set_super_grid_indices ( type(domain_axis_spec), intent(inout)  grid)

Modifies the indices in the domain_axis_spec type to those of the supergrid.

Parameters
    [in,out]  grid    domain_axis_spec type

Definition at line 276 of file mpp_domains_util.inc.

◆ mpp_shift_nest_domains()

subroutine mpp_shift_nest_domains ( type(nest_domain_type), intent(inout)  nest_domain,
type(domain2d), intent(in), target  domain,
integer, dimension(:), intent(in)  delta_i_coarse,
integer, dimension(:), intent(in)  delta_j_coarse,
integer, intent(in), optional  extra_halo 
)

Based on mpp_define_nest_domains, but only resets the positioning of the nest: it modifies the parent/coarse start and end indices of the nest location and computes new overlaps of nest PEs on parent PEs. (Ramstrom/HRD moving nest.)

Parameters
    [in,out]  nest_domain     Holds the information needed to pass data between nest and parent grids.
    [in]      domain          Domain for the grid defined in the current pelist.
    [in]      delta_i_coarse  Array of deltas of the coarse grid in the x direction.
    [in]      delta_j_coarse  Array of deltas of the coarse grid in the y direction.
    [in]      extra_halo      Extra halo size.

Definition at line 387 of file mpp_define_nest_domains.inc.
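
Example usage (a sketch; nest_domain is assumed to come from a prior mpp_define_nest_domains call, domain is the domain2D of the current pelist, and a single nest is assumed):

    integer, dimension(1) :: delta_i, delta_j

    ! Move the nest two coarse-grid cells in x and none in y.
    delta_i = 2
    delta_j = 0
    call mpp_shift_nest_domains(nest_domain, domain, delta_i, delta_j)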

◆ nullify_domain2d_list()

subroutine nullify_domain2d_list ( type(domain2d), intent(inout)  domain)

Nullifies the pointer to the list of domains contained in the given 2D domain.

Definition at line 970 of file mpp_domains_util.inc.

◆ set_group_update()

subroutine set_group_update ( type(mpp_group_update_type), intent(inout)  group,
type(domain2d), intent(inout)  domain 
)

Populates the given group update object with the send and receive information required to update its fields on the given domain.

Definition at line 1919 of file mpp_domains_util.inc.

Variable Documentation

◆ debug_update_domain

character(len=32) debug_update_domain = "none"
private

Namelist interface variable controlling consistency checking of domain updates.

When debug_update_domain = "none", no debugging is done. When it is set to "fatal", the run exits with a fatal error message; when set to "warning", the run prints a warning message; when set to "note", the run prints a note message.

Definition at line 730 of file mpp_domains.F90.
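
For example, to have the domain-update checks emit warnings rather than stay silent, the variable can be set through the module's namelist in the model's input file (shown here as mpp_domains_nml, the namelist defined in mpp_domains.F90):

    &mpp_domains_nml
        debug_update_domain = "warning"
    /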