!***********************************************************************
!* Apache License 2.0
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* Licensed under the Apache License, Version 2.0 (the "License");
!* you may not use this file except in compliance with the License.
!* You may obtain a copy of the License at
!*
!*     http://www.apache.org/licenses/LICENSE-2.0
!*
!* Unless required by applicable law or agreed to in writing, FMS is
!* distributed under the License on an "AS IS" BASIS, WITHOUT
!* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
!* See the License for the specific language governing permissions
!* and limitations under the License.
!***********************************************************************
!> @defgroup xgrid_mod xgrid_mod
!> @ingroup exchange
!> @brief Implements exchange grids for coupled models running on multiple processors
!> @author Michael Winton, Zhi Liang
!!
!! An exchange grid is formed from the union of
!! the bounding lines of the two (logically rectangular) participating
!! grids. The exchange grid is therefore the coarsest grid that is a
!! refinement of both participating grids. Exchange grids are used for
!! two purposes by coupled models:
!! 1. conservative interpolation of fields
!!    between models uses the exchange grid cell areas as weights and
!! 2. the surface flux calculation takes place on the exchange grid, thereby
!!    using the finest scale data available.
!! <TT>xgrid_mod</TT> uses a NetCDF grid
!! specification file containing the grid cell overlaps in combination with
!! the @link ftp://ftp.gfdl.gov/pub/vb/mpp/mpp_domains.F90 @endlink domain
!! decomposition information to determine
!! the grid and processor connectivities.
!!
!!
!! xgrid_mod - implements exchange grids. An exchange grid is the grid whose
!!             boundary set is the union of the boundaries of the participating
!!             grids. The exchange grid is the coarsest grid that is a
!!             refinement of each of the participating grids. Every exchange
!!             grid cell is a subarea of one and only one cell in each of the
!!             participating grids. The exchange grid has two purposes:
!!
!!             (1) The exchange cell areas are used as weights for
!!                 conservative interpolation between model grids.
!!
!!             (2) Computation of surface fluxes takes place on it,
!!                 thereby using the finest scale data obtainable.
!!
!!             The exchange cells are the 2D intersections between cells of the
!!             participating grids. They are computed elsewhere and are
!!             read here from a NetCDF grid file as a sequence of quintuples
!!             (i and j on each of two grids and the cell area).
!!
!!             Each processing element (PE) computes a subdomain of each of the
!!             participating grids as well as a subset of the exchange cells.
!!             The geographic regions corresponding to these subdomains will,
!!             in general, not be the same so communication must occur between
!!             the PEs. The scheme for doing this is as follows. A distinction
!!             is drawn between the participating grids. There is a single
!!             "side 1" grid and it does not have partitions (sub-grid surface
!!             types). There are one or more "side 2" grids and they may have
!!             more than 1 partition. In standard usage, the atmosphere grid is
!!             on side 1 and the land and sea ice grids are on side 2. The set
!!             of exchange cells computed on a PE corresponds to its side 2
!!             geographic region(s). Communication between the PEs takes place
!!             on the side 1 grid. Note: this scheme does not generally allow
!!             reproduction of answers across varying PE counts. This is
!!             because, in the side 1 "get", exchange cells are first summed
!!             locally onto a side 1 grid, then these side 1 contributions are
!!             further summed after they have been communicated to their target
!!             PE. For the make_exchange_reproduce option, a special side 1 get
!!             is used. This get communicates individual exchange cells. The
!!             cells are summed in the order they appear in the grid spec. file.
!!
!! <TT>xgrid_mod</TT> reads a NetCDF grid specification file to determine the
!! grid and processor connectivities. The exchange grids are defined
!! by a sequence of quintuples: the <TT>i/j</TT> indices of the intersecting
!! cells of the two participating grids and their areal overlap.
!! The names of the five fields are generated automatically from the
!! three character ids of the participating grids. For example, if
!! the side one grid id is "ATM" and the side two grid id is "OCN",
!! <TT>xgrid_mod</TT> expects to find the following five fields in the grid
!! specification file: <TT>I_ATM_ATMxOCN, J_ATM_ATMxOCN, I_OCN_ATMxOCN,
!! J_OCN_ATMxOCN, and AREA_ATMxOCN</TT>. These fields may be generated
!! by the <TT>make_xgrids</TT> utility.
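!!
!! An illustrative round trip through the exchange grid (a sketch, not code
!! from this module: the domains, the file name, and the xgrid_count
!! buffer-size query are placeholders for typical coupler usage):
!! @code{.F90}
!! type(xmap_type) :: xmap
!! real(r8_kind), allocatable :: x(:)           ! exchange grid buffer
!! call setup_xmap(xmap, (/ 'ATM', 'OCN' /), (/ atm_domain, ocn_domain /), &
!!                 'INPUT/grid_spec.nc')
!! allocate(x(xgrid_count(xmap)))
!! call put_to_xgrid(a_atm, 'ATM', x, xmap)     ! scatter side 1 field onto xgrid
!! call get_from_xgrid(a_atm, 'ATM', x, xmap)   ! sum xgrid values back to side 1
!! @endcode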

!> @addtogroup xgrid_mod
!> @{
module xgrid_mod


use fms_mod, only: check_nml_error, &
                   error_mesg, fatal, note, stdlog, &
                   write_version_number, lowercase, string
use mpp_mod, only: mpp_npes, mpp_pe, mpp_root_pe, mpp_send, mpp_recv, &
                   mpp_sync_self, stdout, mpp_max, event_recv, &
                   mpp_get_current_pelist, mpp_clock_id, mpp_min, &
                   mpp_alltoall, &
                   mpp_clock_begin, mpp_clock_end, mpp_clock_sync, &
                   comm_tag_1, comm_tag_2, comm_tag_3, comm_tag_4, &
                   comm_tag_5, comm_tag_6, comm_tag_7, comm_tag_8, &
                   comm_tag_9, comm_tag_10
use mpp_mod, only: input_nml_file, mpp_set_current_pelist, mpp_sum, mpp_sync
use mpp_domains_mod, only: mpp_get_compute_domain, mpp_get_compute_domains, &
                           domain2d, mpp_update_domains, mpp_compute_extent, &
                           domainug, mpp_get_ug_compute_domains, &
                           mpp_get_ug_domains_index, mpp_get_ug_domain_grid_index, &
                           mpp_get_ug_domain_tile_list, mpp_pass_sg_to_ug
use constants_mod, only: pi, radius
use mosaic2_mod, only: get_mosaic_xgrid, get_mosaic_xgrid_size
use stock_constants_mod, only: istock_top, istock_bottom, istock_side, stock_names, &
                               stock_units, nelems, stocks_file, stock_type
use gradient_mod, only: gradient_cubic
use fms2_io_mod, only: fmsnetcdffile_t, open_file, variable_exists, close_file
use fms2_io_mod, only: fmsnetcdfdomainfile_t, read_data, get_dimension_size
use fms2_io_mod, only: get_variable_units, dimension_exists
use platform_mod, only: r8_kind, i8_kind, fms_file_len

implicit none
private

public area_atm_sphere, area_ocn_sphere, &
       area_atm_model, area_ocn_model

!--- parameters that determine the remapping method
integer, parameter :: FIRST_ORDER = 1
integer, parameter :: SECOND_ORDER = 2
integer, parameter :: version1 = 1 !< grid spec file
integer, parameter :: version2 = 2 !< mosaic grid file
integer, parameter :: max_fields = 100

logical :: make_exchange_reproduce = .false. !< Set to .true. to make <TT>xgrid_mod</TT> reproduce answers
                                             !! exactly on different numbers of PEs. This option has a
                                             !! considerable performance impact.
character(len=64) :: interp_method = 'first_order' !< Exchange grid interpolation method.
                                                   !! It has two options: "first_order", "second_order".
logical :: debug_stocks = .false.
logical :: xgrid_clocks_on = .false.
logical :: monotonic_exchange = .false.
integer :: nsubset = 0 !< Number of processors that read the exchange grid information. The processors
                       !! that read the exchange grid information send the data to the other processors
                       !! to prepare for the flux exchange. The default value is 0; in that case each
                       !! processor reads part of the exchange grid information. The purpose of this
                       !! namelist is to improve the performance of setup_xmap at high processor counts
                       !! and to solve a receive-size mismatch issue at high processor counts.
                       !! Try setting nsubset = mpp_npes/MPI_ranks_per_node.
logical :: do_alltoall = .true.
logical :: do_alltoallv = .false.
logical :: use_mpp_io = .false. !< Default = .false. When .true., uses mpp_io for IO (no longer
                                !! supported); when .false., uses fms2_io for IO.
!> @brief xgrid nml
namelist /xgrid_nml/ make_exchange_reproduce, interp_method, debug_stocks, xgrid_clocks_on, &
                     monotonic_exchange, nsubset, do_alltoall, do_alltoallv, &
                     use_mpp_io
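
! An illustrative namelist entry (a sketch; the values shown are examples,
! not recommendations):
!
!   &xgrid_nml
!     make_exchange_reproduce = .false.
!     interp_method           = 'second_order'
!     nsubset                 = 0
!     do_alltoall             = .true.
!   /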

integer :: remapping_method !< remapping method determined from interp_method; set in xgrid_init

!> Area elements used inside each model
real(r8_kind), allocatable, dimension(:,:) :: area_atm_model, area_lnd_model, area_ocn_model
!> Area elements based on the spherical model used by the ICE layer
real(r8_kind), allocatable, dimension(:,:) :: area_atm_sphere, area_lnd_sphere, area_ocn_sphere

!> @}

!> @brief Scatters data from model grid onto exchange grid.
!!
!> Example usage:
!! @code{.F90}
!! call put_to_xgrid(d, grid_id, x, xmap, remap_order)
!! @endcode
!!
!> @ingroup xgrid_mod
interface put_to_xgrid
  module procedure put_side1_to_xgrid
  module procedure put_side2_to_xgrid
end interface

!> @brief Sums data from exchange grid to model grid.
!!
!> <br>Example usage:
!! @code{.F90}
!! call get_from_xgrid(d, grid_id, x, xmap)
!! @endcode
!> @ingroup xgrid_mod
interface get_from_xgrid
  module procedure get_side1_from_xgrid
  module procedure get_side2_from_xgrid
end interface

!> @brief @ref put_to_xgrid for unstructured grids.
!!
!> Scatters data from unstructured grid onto exchange grid.
!> @ingroup xgrid_mod
interface put_to_xgrid_ug
  module procedure put_side1_to_xgrid_ug
  module procedure put_side2_to_xgrid_ug
end interface

!> @brief @ref get_from_xgrid for unstructured grids.
!!
!> Sums data from exchange grid to model grid.
!> @ingroup xgrid_mod
interface get_from_xgrid_ug
  module procedure get_side2_from_xgrid_ug
  module procedure get_side1_from_xgrid_ug
end interface
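
! A minimal usage sketch for the unstructured-grid variants, mirroring the
! structured examples above (d_ug is a field on the land unstructured domain;
! all names are placeholders):
!
!   call put_to_xgrid_ug(d_ug, 'LND', x, xmap)
!   call get_from_xgrid_ug(d_ug, 'LND', x, xmap)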

!> @brief Sets sub-grid area and numbering in the given exchange grid.
!> @ingroup xgrid_mod
interface set_frac_area
  module procedure set_frac_area_sg
  module procedure set_frac_area_ug
end interface
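
! Sketch: set_frac_area is how a side 2 component with partitions (e.g. land
! tiles or sea-ice categories) supplies its fraction field f(:,:,k) before
! exchanges; the argument names here are illustrative:
!
!   call set_frac_area(f, 'LND', xmap)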

!> @brief Returns three numbers which are the global sum of a variable.
!! @details Returns three numbers which are the global sum of a
!! variable (1) on its home model grid, (2) after interpolation to the other
!! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
!! Conservation_check must be called by all PEs to work properly.
!!
!! @param d real(r8_kind) data
!! @param grid_id 3 character grid ID
!! @param[inout] xmap exchange grid
!! @param[out] global sum of a variable on its home model grid, after side grid interpolation and after
!! reinterpolation
!!
!! <br>Example usage:
!! @code{.F90}
!! call conservation_check(d, grid_id, xmap, remap_order)
!! @endcode
!> @ingroup xgrid_mod
interface conservation_check
  module procedure conservation_check_side1
  module procedure conservation_check_side2
end interface

!> For an unstructured grid, returns three numbers which are the global sum of a
!! variable (1) on its home model grid, (2) after interpolation to the other
!! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
!> @ingroup xgrid_mod
interface conservation_check_ug
  module procedure conservation_check_ug_side1
  module procedure conservation_check_ug_side2
end interface
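
! A sketch assuming, as the dimension(3) result usage in setup_xmap suggests,
! that the interface is invoked as a function returning the three global sums
! (home grid, interpolated, re-interpolated); names are illustrative:
!
!   real(r8_kind), dimension(3) :: sums
!   sums = conservation_check_ug(d_ug, 'LND', xmap)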


!> Private type for cell indices and data in the exchange grid
!> @ingroup xgrid_mod
type xcell_type
  integer :: i1 !< indices of cell in model arrays on both sides
  integer :: j1 !< indices of cell in model arrays on both sides
  integer :: i2 !< indices of cell in model arrays on both sides
  integer :: j2 !< indices of cell in model arrays on both sides
  integer :: l1, l2
  integer :: recv_pos !< position in the receive buffer.
  integer :: pe !< other side pe that has this cell
  integer :: tile !< tile index of side 1 mosaic.
  real(r8_kind) :: area !< geographic area of exchange cell
! real(r8_kind) :: area1_ratio !(= x_area/grid1_area), will be added in the future to improve efficiency
! real(r8_kind) :: area2_ratio !(= x_area/grid2_area), will be added in the future to improve efficiency
  real(r8_kind) :: di !< Weight for the gradient of flux
  real(r8_kind) :: dj !< Weight for the gradient of flux
  real(r8_kind) :: scale
end type xcell_type

!> Type to hold pointers for grid boxes
!> @ingroup xgrid_mod
type grid_box_type
  real(r8_kind), dimension(:,:),   pointer :: dx     => null()
  real(r8_kind), dimension(:,:),   pointer :: dy     => null()
  real(r8_kind), dimension(:,:),   pointer :: area   => null()
  real(r8_kind), dimension(:),     pointer :: edge_w => null()
  real(r8_kind), dimension(:),     pointer :: edge_e => null()
  real(r8_kind), dimension(:),     pointer :: edge_s => null()
  real(r8_kind), dimension(:),     pointer :: edge_n => null()
  real(r8_kind), dimension(:,:,:), pointer :: en1    => null()
  real(r8_kind), dimension(:,:,:), pointer :: en2    => null()
  real(r8_kind), dimension(:,:,:), pointer :: vlon   => null()
  real(r8_kind), dimension(:,:,:), pointer :: vlat   => null()
end type grid_box_type

!> Private type to hold all data needed from given grid for an exchange grid
!> @ingroup xgrid_mod
type grid_type
  character(len=3) :: id !< grid identifier
  integer :: npes !< number of processors on this grid.
  logical :: on_this_pe !< indicate if the domain is defined on this pe
  integer :: root_pe !< indicate the root pe of the domain
  integer, pointer, dimension(:) :: pelist !< pelist of the domain
  integer :: ntile !< number of tiles in mosaic
  integer :: ni !< max of global size of all the tiles
  integer :: nj !< max of global size of all the tiles
  integer, pointer, dimension(:) :: tile =>null() !< tile id ( pe index )
  integer, pointer, dimension(:) :: is =>null() !< domain - i-range (pe index)
  integer, pointer, dimension(:) :: ie =>null() !< domain - i-range (pe index)
  integer, pointer, dimension(:) :: js =>null() !< domain - j-range (pe index)
  integer, pointer, dimension(:) :: je =>null() !< domain - j-range (pe index)
  integer, pointer :: is_me =>null() !< my domain - i-range
  integer, pointer :: ie_me =>null() !< my domain - i-range
  integer, pointer :: js_me =>null() !< my domain - j-range
  integer, pointer :: je_me =>null() !< my domain - j-range
  integer :: isd_me !< my data domain - i-range
  integer :: ied_me !< my data domain - i-range
  integer :: jsd_me !< my data domain - j-range
  integer :: jed_me !< my data domain - j-range
  integer :: nxd_me !< data domain size
  integer :: nyd_me !< data domain size
  integer :: nxc_me !< compute domain size
  integer :: nyc_me !< compute domain size
  integer, pointer :: tile_me !< my tile id
  integer :: im !< global domain range
  integer :: jm !< global domain range
  integer :: km !< global domain range
  real(r8_kind), pointer, dimension(:) :: lon =>null() !< center of global grids
  real(r8_kind), pointer, dimension(:) :: lat =>null() !< center of global grids
  real(r8_kind), pointer, dimension(:,:) :: geolon=>null() !< geographical grid center
  real(r8_kind), pointer, dimension(:,:) :: geolat=>null() !< geographical grid center
  real(r8_kind), pointer, dimension(:,:,:) :: frac_area =>null() !< partition fractions
  real(r8_kind), pointer, dimension(:,:) :: area =>null() !< cell area
  real(r8_kind), pointer, dimension(:,:) :: area_inv =>null() !< 1 / area for normalization
  integer :: first !< xgrid index range
  integer :: last !< xgrid index range
  integer :: first_get !< xgrid index range for get_2_from_xgrid
  integer :: last_get !< xgrid index range for get_2_from_xgrid
  integer :: size !< # xcell patterns
  type(xcell_type), pointer :: x(:) =>null() !< xcell patterns
  integer :: size_repro !< # side 1 patterns for repro
  type(xcell_type), pointer :: x_repro(:) =>null() !< side 1 patterns for repro
  type(domain2d) :: domain !< used for conservation checks
  type(domain2d) :: domain_with_halo !< used for second order remapping
  logical :: is_latlon !< indicate if the grid is a lat-lon grid or not.
  type(grid_box_type) :: box !< used for second order remapping.
  !--- The following is for the land unstructured domain
  logical :: is_ug
  integer :: nxl_me
  integer, pointer :: ls_me =>null() !< unstruct domain
  integer, pointer :: le_me =>null() !< unstruct domain
  integer, pointer, dimension(:) :: ls =>null(), le =>null()
  integer, pointer :: gs_me =>null(), ge_me =>null()
  integer, pointer, dimension(:) :: gs =>null(), ge =>null()
  integer, pointer, dimension(:) :: l_index =>null()
  type(domainug) :: ug_domain

end type grid_type

!> Private type for exchange grid data
!> @ingroup xgrid_mod
type x1_type
  integer :: i, j
  real(r8_kind) :: area !< (= geographic area * frac_area)
! real(r8_kind) :: area_ratio !(= x1_area/grid1_area) ! will be added in the future to improve efficiency
  real(r8_kind) :: di !< weight for the gradient of flux
  real(r8_kind) :: dj !< weight for the gradient of flux
  integer :: tile !< tile index of side 1 mosaic.
  integer :: pos
end type x1_type

!> Private type for exchange grid data
!> @ingroup xgrid_mod
type x2_type
  integer :: i, j, l, k, pos
  real(r8_kind) :: area !< geographic area of exchange cell
! real(r8_kind) :: area_ratio !(= x2_area/grid2_area) ! will be added in the future to improve efficiency
end type x2_type

!> Private type for overlap exchange grid data
!> @ingroup xgrid_mod
type overlap_type
  integer :: count
  integer :: pe
  integer :: buffer_pos
  integer, allocatable :: i(:)
  integer, allocatable :: j(:)
  integer, allocatable :: g(:)
  integer, allocatable :: xLoc(:)
  integer, allocatable :: tile(:)
  real(r8_kind), allocatable :: di(:)
  real(r8_kind), allocatable :: dj(:)
end type overlap_type

!> Private type used for exchange grid communication
!> @ingroup xgrid_mod
type comm_type
  integer :: nsend, nrecv
  integer :: sendsize, recvsize
  integer, pointer, dimension(:) :: unpack_ind=>null()
  type(overlap_type), pointer, dimension(:) :: send=>null()
  type(overlap_type), pointer, dimension(:) :: recv=>null()
end type comm_type

!> @brief Type for an exchange grid, holds pointers to included grids and any necessary data.
!> @ingroup xgrid_mod
type xmap_type
  private
  integer :: size !< # of exchange grid cells with area > 0 on this pe
  integer :: size_put1 !< # of exchange grid cells for put_1_to_xgrid
  integer :: size_get2 !< # of exchange grid cells for get_2_from_xgrid
  integer :: me, npes, root_pe
  logical, pointer, dimension(:) :: your1my2 =>null() !< true if side 1 domain on
                                                      !! indexed pe overlaps side 2
                                                      !! domain on this pe
  logical, pointer, dimension(:) :: your2my1 =>null() !< true if a side 2 domain on
                                                      !! indexed pe overlaps side 1
                                                      !! domain on this pe
  integer, pointer, dimension(:) :: your2my1_size=>null() !< number of exchange grid cells of
                                                          !! a side 2 domain on the indexed pe
                                                          !! that overlap the side 1
                                                          !! domain on this pe

  type (grid_type), pointer, dimension(:) :: grids =>null() !< 1st grid is side 1;
                                                            !! rest are on side 2
  !
  ! Description of the individual exchange grid cells (index is cell #)
  !
  type(x1_type), pointer, dimension(:) :: x1 =>null() !< side 1 info
  type(x1_type), pointer, dimension(:) :: x1_put =>null() !< side 1 info
  type(x2_type), pointer, dimension(:) :: x2 =>null() !< side 2 info
  type(x2_type), pointer, dimension(:) :: x2_get =>null() !< side 2 info

  integer, pointer, dimension(:) :: send_count_repro =>null()
  integer, pointer, dimension(:) :: recv_count_repro =>null()
  integer :: send_count_repro_tot !< sum(send_count_repro)
  integer :: recv_count_repro_tot !< sum(recv_count_repro)
  integer :: version !< version of xgrids. version=VERSION1 is for the grid_spec file
                     !! and version=VERSION2 is for the mosaic grid file.
  integer, pointer, dimension(:) :: ind_get1 =>null() !< index for side 1 get and side 2 put.
  integer, pointer, dimension(:) :: ind_put1 =>null() !< index for side 1 put and side 2 get.
  type(comm_type), pointer :: put1 =>null() !< for put_1_to_xgrid
  type(comm_type), pointer :: get1 =>null() !< for get_1_from_xgrid
  type(comm_type), pointer :: get1_repro =>null() !< for get_1_from_xgrid_repro
end type xmap_type

!> @addtogroup stock_constants_mod
!> @{
!-----------------------------------------------------------------------
! Include variable "version" to be written to log file.
#include<file_version.h>

real(r8_kind), parameter :: eps = 1.0e-10_r8_kind
real(r8_kind), parameter :: large_number = 1.e20_r8_kind
logical :: module_is_initialized = .false.
integer :: id_put_1_to_xgrid_order_1 = 0
integer :: id_put_1_to_xgrid_order_2 = 0
integer :: id_get_1_from_xgrid = 0
integer :: id_get_1_from_xgrid_repro = 0
integer :: id_get_2_from_xgrid = 0
integer :: id_put_2_to_xgrid = 0
integer :: id_setup_xmap = 0
integer :: id_load_xgrid1, id_load_xgrid2, id_load_xgrid3
integer :: id_load_xgrid4, id_load_xgrid5
integer :: id_load_xgrid, id_set_comm, id_regen, id_conservation_check


! The following is for nested model
integer :: nnest=0, tile_nest, tile_parent
integer :: is_nest=0, ie_nest=0, js_nest=0, je_nest=0
integer :: is_parent=0, ie_parent=0, js_parent=0, je_parent=0

!> @}
! The following is required to compute stocks of water, heat, ...

!> @ingroup xgrid_mod
interface stock_move
  module procedure stock_move_3d, stock_move_2d
end interface

!> @ingroup xgrid_mod
interface stock_move_ug
  module procedure stock_move_ug_3d
end interface

public stock_move, stock_type, stock_print, get_index_range, stock_integrate_2d
public first_order, second_order, stock_move_ug

!> @ingroup xgrid_mod
interface get_area_elements
  module procedure get_area_elements_fms2_io
end interface
!> @ingroup xgrid_mod
interface get_nest_contact
  module procedure get_nest_contact_fms2_io
end interface

contains

!> @addtogroup xgrid_mod
!> @{

!#######################################################################
!> @return logical in_box
logical function in_box(i, j, is, ie, js, je)
  integer, intent(in) :: i, j, is, ie, js, je

  in_box = (i>=is) .and. (i<=ie) .and. (j>=js) .and. (j<=je)
end function in_box

!#######################################################################

!> @brief Initialize the xgrid_mod.
!! @details Initialization routine for the xgrid module. It reads the xgrid_nml,
!! writes the version information and xgrid_nml to the log file.
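!!
!! A minimal call sketch (the returned remap_method is FIRST_ORDER or
!! SECOND_ORDER, depending on the interp_method namelist):
!! @code{.F90}
!! integer :: remap_method
!! call xgrid_init(remap_method)
!! @endcode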
subroutine xgrid_init(remap_method)
  integer, intent(out) :: remap_method !< exchange grid interpolation method. It has two possible values:
                                       !! FIRST_ORDER (=1), SECOND_ORDER (=2).

  integer :: iunit, ierr, io, out_unit

  if (module_is_initialized) return
  module_is_initialized = .true.

  read (input_nml_file, xgrid_nml, iostat=io)
  ierr = check_nml_error( io, 'xgrid_nml' )

!--------- write version number and namelist ------------------
  call write_version_number("XGRID_MOD", version)

  iunit = stdlog( )
  out_unit = stdout()
  if ( mpp_pe() == mpp_root_pe() ) write (iunit,nml=xgrid_nml)

  if (use_mpp_io) then
    ! FATAL error if trying to use mpp_io
    call error_mesg('xgrid_init', &
      'MPP_IO is no longer supported. Please remove use_mpp_io from namelists', &
      fatal)
  endif
!--------- check that interp_method has a suitable value
!--- when monotonic_exchange is true, interp_method must be second order.

  select case(trim(interp_method))
  case('first_order')
    remap_method = first_order
    if( monotonic_exchange ) call error_mesg('xgrid_mod', &
      'xgrid_nml monotonic_exchange must be .false. when interp_method = first_order', fatal)
    write(out_unit,*)"NOTE from xgrid_mod: use first_order conservative exchange"
  case('second_order')
    if(monotonic_exchange) then
      write(out_unit,*)"NOTE from xgrid_mod: use monotonic second_order conservative exchange"
    else
      write(out_unit,*)"NOTE from xgrid_mod: use second_order conservative exchange"
    endif
    remap_method = second_order
  case default
    call error_mesg('xgrid_mod', ' nml interp_method = ' //trim(interp_method)// &
      ' is not a valid namelist option', fatal)
  end select

  if(xgrid_clocks_on) then
    id_put_1_to_xgrid_order_1 = mpp_clock_id("put_1_to_xgrid_order_1", flags=mpp_clock_sync)
    id_put_1_to_xgrid_order_2 = mpp_clock_id("put_1_to_xgrid_order_2", flags=mpp_clock_sync)
    id_get_1_from_xgrid = mpp_clock_id("get_1_from_xgrid", flags=mpp_clock_sync)
    id_get_1_from_xgrid_repro = mpp_clock_id("get_1_from_xgrid_repro", flags=mpp_clock_sync)
    id_get_2_from_xgrid = mpp_clock_id("get_2_from_xgrid", flags=mpp_clock_sync)
    id_put_2_to_xgrid = mpp_clock_id("put_2_to_xgrid", flags=mpp_clock_sync)
    id_setup_xmap = mpp_clock_id("setup_xmap", flags=mpp_clock_sync)
    id_set_comm = mpp_clock_id("set_comm")
    id_regen = mpp_clock_id("regen")
    id_conservation_check = mpp_clock_id("conservation_check")
    id_load_xgrid = mpp_clock_id("load_xgrid")
    id_load_xgrid1 = mpp_clock_id("load_xgrid1")
    id_load_xgrid2 = mpp_clock_id("load_xgrid2")
    id_load_xgrid3 = mpp_clock_id("load_xgrid3")
    id_load_xgrid4 = mpp_clock_id("load_xgrid4")
    id_load_xgrid5 = mpp_clock_id("load_xgrid5")
  endif

  remapping_method = remap_method

end subroutine xgrid_init

!#######################################################################

subroutine load_xgrid (xmap, grid, grid_file, grid1_id, grid_id, tile1, tile2, use_higher_order)
  type(xmap_type),  intent(inout) :: xmap
  type(grid_type),  intent(inout) :: grid
  character(len=*), intent(in)    :: grid_file
  character(len=3), intent(in)    :: grid1_id, grid_id
  integer,          intent(in)    :: tile1, tile2
  logical,          intent(in)    :: use_higher_order

  integer, pointer, dimension(:) :: i1=>null(), j1=>null()
  integer, pointer, dimension(:) :: i2=>null(), j2=>null()
  real(r8_kind), pointer, dimension(:) :: di=>null(), dj=>null()
  real(r8_kind), pointer, dimension(:) :: area =>null()
  integer, pointer, dimension(:) :: i1_tmp=>null(), j1_tmp=>null()
  integer, pointer, dimension(:) :: i2_tmp=>null(), j2_tmp=>null()
  real(r8_kind), pointer, dimension(:) :: di_tmp=>null(), dj_tmp=>null()
  real(r8_kind), pointer, dimension(:) :: area_tmp =>null()
  integer, pointer, dimension(:) :: i1_side1=>null(), j1_side1=>null()
  integer, pointer, dimension(:) :: i2_side1=>null(), j2_side1=>null()
  real(r8_kind), pointer, dimension(:) :: di_side1=>null(), dj_side1=>null()
  real(r8_kind), pointer, dimension(:) :: area_side1 =>null()

  real(r8_kind), allocatable, dimension(:,:) :: tmp
  real(r8_kind), allocatable, dimension(:) :: send_buffer, recv_buffer
  type (grid_type), pointer, save :: grid1 =>null()
  integer :: l, ll, ll_repro, p, nxgrid, size_prev
  type(xcell_type), allocatable :: x_local(:)
  integer :: size_repro, out_unit
  logical :: scale_exist = .false.
  logical :: is_distribute = .false.
  real(r8_kind), allocatable, dimension(:) :: scale
  real(r8_kind) :: garea
  integer :: npes, isc, iec, nxgrid_local, pe, nxgrid_local_orig
  integer :: nxgrid1, nxgrid2, nset1, nset2, ndivs, cur_ind
  integer :: pos, nsend, nrecv, l1, l2, n, mypos
  integer :: start(4), nread(4)
  logical :: found
  character(len=128) :: attvalue
  integer, dimension(0:xmap%npes-1) :: pelist
  logical, dimension(0:xmap%npes-1) :: subset_rootpe
  integer, dimension(0:xmap%npes-1) :: nsend1, nsend2, nrecv1, nrecv2
  integer, dimension(0:xmap%npes-1) :: send_cnt, recv_cnt
  integer, dimension(0:xmap%npes-1) :: send_buffer_pos, recv_buffer_pos
  integer, dimension(0:xmap%npes-1) :: ibegin, iend, pebegin, peend
  integer, dimension(2*xmap%npes)   :: ibuf1, ibuf2
  integer, dimension(0:xmap%npes-1) :: pos_x, y2m1_size
  integer, allocatable, dimension(:) :: y2m1_pe
  integer, pointer, save :: iarray(:), jarray(:)
  integer, allocatable, save :: pos_s(:)
  integer, pointer, dimension(:) :: iarray2(:)=>null(), jarray2(:)=>null()
  logical :: last_grid
  integer :: nxgrid1_old
  integer :: lll
  type(fmsnetcdffile_t) :: fileobj

  if(.not. open_file(fileobj, grid_file, 'read' )) then
    call error_mesg('xgrid_mod(load_xgrid)', 'Error in opening file '//trim(grid_file), fatal)
  endif

  scale_exist = .false.
  grid1 => xmap%grids(1)
  out_unit = stdout()
  npes = xmap%npes
  pe = mpp_pe()
  mypos = mpp_pe()-mpp_root_pe()

  call mpp_get_current_pelist(pelist)
  !--- make sure npes = pelist(npes-1) - pelist(0) + 1
  if( npes .NE. pelist(npes-1) - pelist(0) + 1 ) then
    print*, "npes =", npes, ", pelist(npes-1)=", pelist(npes-1), ", pelist(0)=", pelist(0)
    call error_mesg('xgrid_mod', 'npes .NE. pelist(npes-1) - pelist(0) + 1', fatal)
  endif

  select case(xmap%version)
  case(version1)
    nxgrid = 0
    if (dimension_exists(fileobj, 'i_'//lowercase(grid1_id)//'X'//lowercase(grid_id))) then
      call get_dimension_size(fileobj, 'i_'//lowercase(grid1_id)//'X'//lowercase(grid_id), nxgrid)
    endif
    if(nxgrid .LE. 0) return
  case(version2)
    !--- max_size is the exchange grid size between super grid.
    nxgrid = get_mosaic_xgrid_size(fileobj)
    if(nxgrid .LE. 0) return
  end select

  !--- define a domain to read the exchange grid.
  if(nxgrid > npes) then
    ndivs = npes
    if(nsubset >0 .AND. nsubset < npes) ndivs = nsubset
    call mpp_compute_extent( 1, nxgrid, ndivs, ibegin, iend)
    if(npes == ndivs) then
      p = mpp_pe()-mpp_root_pe()
      isc = ibegin(p)
      iec = iend(p)
      subset_rootpe(:) = .true.
    else
      isc = 0; iec = -1
      call mpp_compute_extent(pelist(0), pelist(npes-1), ndivs, pebegin, peend)
      do n = 0, ndivs-1
        if(pe == pebegin(n)) then
          isc = ibegin(n)
          iec = iend(n)
          exit
        endif
      enddo
      cur_ind = 0
      subset_rootpe(:) = .false.

      do n = 0, npes-1
        if(pelist(n) == pebegin(cur_ind)) then
          subset_rootpe(n) = .true.
          cur_ind = cur_ind+1
          if(cur_ind == ndivs) exit
        endif
      enddo
    endif
    is_distribute = .true.
  else
    is_distribute = .false.
    isc = 1; iec = nxgrid
  endif

  nset1 = 5
  nset2 = 5
  if(use_higher_order) then
    nset1 = nset1 + 2
    nset2 = nset2 + 2
  end if
  if(scale_exist) nset2 = nset1 + 1

  call mpp_clock_begin(id_load_xgrid1)
  if(iec .GE. isc) then
    nxgrid_local = iec - isc + 1
    allocate(i1_tmp(isc:iec), j1_tmp(isc:iec), i2_tmp(isc:iec), j2_tmp(isc:iec), area_tmp(isc:iec) )
    if(use_higher_order) allocate(di_tmp(isc:iec), dj_tmp(isc:iec))

    start = 1; nread = 1

    select case(xmap%version)
    case(version1)
      start(1) = isc; nread(1) = nxgrid_local
      allocate(tmp(nxgrid_local,1))
      call read_data(fileobj, 'I_'//grid1_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
      i1_tmp = int(tmp(:,1))
      call read_data(fileobj, 'J_'//grid1_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
      j1_tmp = int(tmp(:,1))
      call read_data(fileobj, 'I_'//grid_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
      i2_tmp = int(tmp(:,1))
      call read_data(fileobj, 'J_'//grid_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
      j2_tmp = int(tmp(:,1))
      call read_data(fileobj, 'AREA_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
      area_tmp = tmp(:,1)
      if(use_higher_order) then
        call read_data(fileobj, 'DI_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
        di_tmp = tmp(:,1)
        call read_data(fileobj, 'DJ_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
        dj_tmp = tmp(:,1)
      end if
      deallocate(tmp)
    case(version2)
      nread(1) = 2; start(2) = isc; nread(2) = nxgrid_local
      allocate(tmp(2, isc:iec))
      call read_data(fileobj, "tile1_cell", tmp, corner=start, edge_lengths=nread)
      i1_tmp(isc:iec) = int(tmp(1, isc:iec))
      j1_tmp(isc:iec) = int(tmp(2, isc:iec))
      call read_data(fileobj, "tile2_cell", tmp, corner=start, edge_lengths=nread)
      i2_tmp(isc:iec) = int(tmp(1, isc:iec))
      j2_tmp(isc:iec) = int(tmp(2, isc:iec))
      if(use_higher_order) then
        call read_data(fileobj, "tile1_distance", tmp, corner=start, edge_lengths=nread)
        di_tmp(isc:iec) = tmp(1, isc:iec)
        dj_tmp(isc:iec) = tmp(2, isc:iec)
      end if
      start = 1; nread = 1
      start(1) = isc; nread(1) = nxgrid_local
      deallocate(tmp)
      allocate(tmp(isc:iec,1) )

      call read_data(fileobj, "xgrid_area", tmp(:,1:1), corner=start, edge_lengths=nread)
      ! check the units of "xgrid_area"
      call get_variable_units(fileobj, "xgrid_area", attvalue)

      if( trim(attvalue) == 'm2' ) then
        garea = 4.0_r8_kind * pi * radius * radius
        area_tmp = tmp(:,1)/garea
      else if( trim(attvalue) == 'none' ) then
        area_tmp = tmp(:,1)
      else
        call error_mesg('xgrid_mod', 'In file '//trim(grid_file)//', xgrid_area units = '// &
                        trim(attvalue)//' should be "m2" or "none"', fatal)
      endif

      !--- if the field "scale" exists, read it. Normally this
      !--- field only exists in the landXocean exchange grid file.
      if(grid1_id == 'LND' .AND. grid_id == 'OCN') then
        if(variable_exists(fileobj, "scale")) then
          allocate(scale(isc:iec))
          write(out_unit, *)"NOTE from load_xgrid(xgrid_mod): field 'scale' exists in the file "// &
            & trim(grid_file)//", this field will be read and the exchange grid cell area will be"// &
            & " multiplied by scale"
          call read_data(fileobj, "scale", tmp, corner=start, edge_lengths=nread)
          scale = tmp(:,1)
          scale_exist = .true.
        endif
      endif
      deallocate(tmp)
    end select

    !---z1l: The following change is for the situation where some processors are masked out.
    !---loop through all the pes to see if side 1 and side 2 of each exchange grid cell are on some processor
    nxgrid_local_orig = nxgrid_local
    allocate(i1(isc:iec), j1(isc:iec), i2(isc:iec), j2(isc:iec), area(isc:iec) )
    if(use_higher_order) allocate(di(isc:iec), dj(isc:iec))
    pos = isc-1
    do l = isc, iec
      found = .false.
      !--- first check if the exchange grid is on one of the side 1 processors
      do p = 0, npes - 1
        if(grid1%tile(p) == tile1) then
          if(in_box_nbr(i1_tmp(l), j1_tmp(l), grid1, p)) then
            found = .true.
            exit
          endif
        endif
      enddo
      !--- then check if the exchange grid is on one of the side 2 processors
      if( found ) then
        do p = 0, npes - 1
          if(grid%tile(p) == tile2) then
            if (in_box_nbr(i2_tmp(l), j2_tmp(l), grid, p)) then
              pos = pos+1
              i1(pos) = i1_tmp(l)
              j1(pos) = j1_tmp(l)
              i2(pos) = i2_tmp(l)
              j2(pos) = j2_tmp(l)
              area(pos) = area_tmp(l)
              if(use_higher_order) then
                di(pos) = di_tmp(l)
                dj(pos) = dj_tmp(l)
              endif
              exit
            endif
          endif
        enddo
      endif
    enddo

    deallocate(i1_tmp, i2_tmp, j1_tmp, j2_tmp, area_tmp)
    if(use_higher_order) deallocate( di_tmp, dj_tmp)
    iec = pos
    if(iec .GE. isc) then
      nxgrid_local = iec - isc + 1
    else
      nxgrid_local = 0
    endif
  else
    nxgrid_local = 0
    nxgrid_local_orig = 0
  endif

  call close_file(fileobj)

  call mpp_clock_end(id_load_xgrid1)

  if(is_distribute) then
    !--- Since the xgrid is distributed according to the side 2 grid, send all the xgrid to its own side 2.
    !--- Also need to send the xgrid to its own side 1 for reproducibility across processor counts.
    !--- First find out the number of points to send to each pe and fill the send buffer.
    nsend1(:) = 0; nrecv1(:) = 0
    nsend2(:) = 0; nrecv2(:) = 0
    ibuf1(:)= 0;   ibuf2(:)= 0

    call mpp_clock_begin(id_load_xgrid2)
    if(nxgrid_local>0) then
      allocate( send_buffer(nxgrid_local * (nset1+nset2)) )
      pos = 0
      do p = 0, npes - 1
        send_buffer_pos(p) = pos
        if(grid%tile(p) == tile2) then
          do l = isc, iec
            if(in_box_nbr(i2(l), j2(l), grid, p) ) then
              nsend2(p) = nsend2(p) + 1
              send_buffer(pos+1) = real(i1(l), r8_kind)
              send_buffer(pos+2) = real(j1(l), r8_kind)
              send_buffer(pos+3) = real(i2(l), r8_kind)
              send_buffer(pos+4) = real(j2(l), r8_kind)
              send_buffer(pos+5) = area(l)
              if(use_higher_order) then
                send_buffer(pos+6) = di(l)
                send_buffer(pos+7) = dj(l)
              endif
              if(scale_exist) send_buffer(pos+nset2) = scale(l)
              pos = pos + nset2
            endif
          enddo
        endif
        if(grid1%tile(p) == tile1) then
          do l = isc, iec
            if(in_box_nbr(i1(l), j1(l), grid1, p)) then
              nsend1(p) = nsend1(p) + 1
              send_buffer(pos+1) = real(i1(l), r8_kind)
              send_buffer(pos+2) = real(j1(l), r8_kind)
              send_buffer(pos+3) = real(i2(l), r8_kind)
              send_buffer(pos+4) = real(j2(l), r8_kind)
              send_buffer(pos+5) = area(l)
              if(use_higher_order) then
                send_buffer(pos+6) = di(l)
                send_buffer(pos+7) = dj(l)
              endif
              pos = pos + nset1
            endif
          enddo
        endif
      enddo
    endif
    call mpp_clock_end(id_load_xgrid2)

    !--- send the size of the data on side 1 to be sent over.
    call mpp_clock_begin(id_load_xgrid3)

    if (do_alltoall) then
      do p = 0, npes-1
        ibuf1(2*p+1) = nsend1(p)
        ibuf1(2*p+2) = nsend2(p)
      enddo
      call mpp_alltoall(ibuf1, 2, ibuf2, 2)
    else
      do n = 0, npes-1
        p = mod(mypos+npes-n, npes)
        if(.not. subset_rootpe(p)) cycle
        call mpp_recv( ibuf2(2*p+1), glen=2, from_pe=pelist(p), block=.false., tag=comm_tag_1)
      enddo

      if(nxgrid_local_orig>0) then
        do n = 0, npes-1
          p = mod(mypos+n, npes)
          ibuf1(2*p+1) = nsend1(p)
          ibuf1(2*p+2) = nsend2(p)
          call mpp_send( ibuf1(2*p+1), plen=2, to_pe=pelist(p), tag=comm_tag_1)
        enddo
      endif
      call mpp_sync_self(check=event_recv)
    endif
    do p = 0, npes-1
      nrecv1(p) = ibuf2(2*p+1)
      nrecv2(p) = ibuf2(2*p+2)
    enddo

    if(.not. do_alltoall) call mpp_sync_self()
    call mpp_clock_end(id_load_xgrid3)
    call mpp_clock_begin(id_load_xgrid4)
    pos = 0
    do p = 0, npes - 1
      recv_buffer_pos(p) = pos
      pos = pos + nrecv1(p) * nset1 + nrecv2(p) * nset2
    end do

    !--- now get the data
    nxgrid1 = sum(nrecv1)
    nxgrid2 = sum(nrecv2)
    if(nxgrid1>0 .OR. nxgrid2>0) allocate(recv_buffer(nxgrid1*nset1+nxgrid2*nset2))

    if (do_alltoallv) then
      ! Construct the send and receive counters
      send_cnt(:) = nset1 * nsend1(:) + nset2 * nsend2(:)
      recv_cnt(:) = nset1 * nrecv1(:) + nset2 * nrecv2(:)

      call mpp_alltoall(send_buffer, send_cnt, send_buffer_pos, &
                        recv_buffer, recv_cnt, recv_buffer_pos)
    else
      do n = 0, npes-1
        p = mod(mypos+npes-n, npes)
        nrecv = nrecv1(p)*nset1+nrecv2(p)*nset2
        if(nrecv==0) cycle
        pos = recv_buffer_pos(p)
        call mpp_recv(recv_buffer(pos+1), glen=nrecv, from_pe=pelist(p), &
                      block=.false., tag=comm_tag_2)
      end do

      do n = 0, npes-1
        p = mod(mypos+n, npes)
        nsend = nsend1(p)*nset1 + nsend2(p)*nset2
        if(nsend==0) cycle
        pos = send_buffer_pos(p)
        call mpp_send(send_buffer(pos+1), plen=nsend, to_pe=pelist(p), &
                      tag=comm_tag_2)
      end do
      call mpp_sync_self(check=event_recv)
    end if
    call mpp_clock_end(id_load_xgrid4)
    !--- unpack buffer.
    if( nxgrid_local>0) then
      deallocate(i1,j1,i2,j2,area)
    endif

    allocate(i1(nxgrid2), j1(nxgrid2))
    allocate(i2(nxgrid2), j2(nxgrid2))
    allocate(area(nxgrid2))
    allocate(i1_side1(nxgrid1), j1_side1(nxgrid1))
    allocate(i2_side1(nxgrid1), j2_side1(nxgrid1))
    allocate(area_side1(nxgrid1))
    if(use_higher_order) then
      if(nxgrid_local>0) deallocate(di,dj)
      allocate(di(nxgrid2), dj(nxgrid2))
      allocate(di_side1(nxgrid1), dj_side1(nxgrid1))
    endif
    if(scale_exist) then
      if(nxgrid_local>0)deallocate(scale)
      allocate(scale(nxgrid2))
    endif
    pos = 0
    l1 = 0;  l2 = 0
    do p = 0,npes-1
      do n = 1, nrecv2(p)
        l2 = l2+1
        i1(l2) = int(recv_buffer(pos+1))
        j1(l2) = int(recv_buffer(pos+2))
        i2(l2) = int(recv_buffer(pos+3))
        j2(l2) = int(recv_buffer(pos+4))
        area(l2) = recv_buffer(pos+5)
        if(use_higher_order) then
          di(l2) = recv_buffer(pos+6)
          dj(l2) = recv_buffer(pos+7)
        endif
        if(scale_exist)scale(l2) = recv_buffer(pos+nset2)
        pos = pos + nset2
      enddo
      do n = 1, nrecv1(p)
        l1 = l1+1
        i1_side1(l1) = int(recv_buffer(pos+1))
        j1_side1(l1) = int(recv_buffer(pos+2))
        i2_side1(l1) = int(recv_buffer(pos+3))
        j2_side1(l1) = int(recv_buffer(pos+4))
        area_side1(l1) = recv_buffer(pos+5)
        if(use_higher_order) then
          di_side1(l1) = recv_buffer(pos+6)
          dj_side1(l1) = recv_buffer(pos+7)
        endif
        pos = pos + nset1
      enddo
    enddo
    call mpp_sync_self()
    if(allocated(send_buffer)) deallocate(send_buffer)
    if(allocated(recv_buffer)) deallocate(recv_buffer)

  else
    nxgrid1 = nxgrid
    nxgrid2 = nxgrid
    i1_side1 => i1; j1_side1 => j1
    i2_side1 => i2; j2_side1 => j2
    area_side1 => area
    if(use_higher_order) then
      di_side1 => di
      dj_side1 => dj
    endif
  endif

  call mpp_clock_begin(id_load_xgrid5)


  size_prev = grid%size

  if(grid%tile_me == tile2) then
    do l=1,nxgrid2
      if (in_box_me(i2(l), j2(l), grid) ) then
        grid%size = grid%size + 1
        ! exclude the area overlapped with the parent grid
        if( grid1_id .NE. "ATM" .OR. tile1 .NE. tile_parent .OR. &
            .NOT. in_box(i1(l), j1(l), is_parent, ie_parent, js_parent, je_parent) ) then
          if(grid%is_ug) then
            lll = grid%l_index((j2(l)-1)*grid%im+i2(l))
            grid%area(lll,1) = grid%area(lll,1)+area(l)
          else
            grid%area(i2(l),j2(l)) = grid%area(i2(l),j2(l))+area(l)
          endif
        endif
        do p=0,xmap%npes-1
          if(grid1%tile(p) == tile1) then
            if (in_box_nbr(i1(l), j1(l), grid1, p)) then
              xmap%your1my2(p) = .true.
            end if
          end if
        end do
      end if
    end do
  end if

  if(grid%size > size_prev) then
    if(size_prev > 0) then ! need to extend data
      allocate(x_local(size_prev))
      x_local = grid%x
      if(ASSOCIATED(grid%x)) deallocate(grid%x)
      allocate( grid%x( grid%size ) )
      grid%x(1:size_prev) = x_local
      deallocate(x_local)
    else
      if(ASSOCIATED(grid%x)) deallocate(grid%x) !< Check if allocated
      allocate( grid%x( grid%size ) )
      grid%x%di = 0.0_r8_kind; grid%x%dj = 0.0_r8_kind
    end if
  end if

  ll = size_prev
  if( grid%tile_me == tile2 ) then ! me is tile2
    do l=1,nxgrid2
      if (in_box_me(i2(l), j2(l), grid)) then
        ! insert in this grid's cell pattern list and add area to side 2 area
        ll = ll + 1
        grid%x(ll)%i1 = i1(l); grid%x(ll)%i2 = i2(l)
        grid%x(ll)%j1 = j1(l); grid%x(ll)%j2 = j2(l)
        if(grid%is_ug) then
          grid%x(ll)%l2 = grid%l_index((j2(l)-1)*grid%im + i2(l))
        endif
!       if(grid1%is_ug) then
!         grid1%x(ll)%l1 = grid1%l_index((j1(l)-1)*grid1%im + i1(l))
!       endif
        grid%x(ll)%tile = tile1
        grid%x(ll)%area = area(l)
        if(scale_exist) then
          grid%x(ll)%scale = scale(l)
        else
          grid%x(ll)%scale = 1.0_r8_kind
        endif
        if(use_higher_order) then
          grid%x(ll)%di = di(l)
          grid%x(ll)%dj = dj(l)
        end if

        if (make_exchange_reproduce) then
          do p=0,xmap%npes-1
            if(grid1%tile(p) == tile1) then
              if (in_box_nbr(i1(l), j1(l), grid1, p)) then
                grid%x(ll)%pe = p + xmap%root_pe
              end if
            end if
          end do
        end if ! make_exchange_reproduce
      end if
    end do
  end if

  if(grid%id == xmap%grids(size(xmap%grids(:)))%id) then
    last_grid = .true.
  else
    last_grid = .false.
  endif

  size_repro = 0
  if(grid1%tile_me == tile1) then
    if(associated(iarray)) then
      nxgrid1_old = size(iarray(:))
    else
      nxgrid1_old = 0
    endif

    allocate(y2m1_pe(nxgrid1))
    if(.not. last_grid ) allocate(pos_s(0:xmap%npes-1))
    y2m1_pe = -1
    if(nxgrid1_old > 0) then
      do p=0,xmap%npes-1
        y2m1_size(p) = xmap%your2my1_size(p)
      enddo
    else
      y2m1_size = 0
    endif

    do l=1,nxgrid1
      if (in_box_me(i1_side1(l), j1_side1(l), grid1) ) then
        if(grid1%is_ug) then
          lll = grid1%l_index((j1_side1(l)-1)*grid1%im+i1_side1(l))
          grid1%area(lll,1) = grid1%area(lll,1) + area_side1(l)
        else
          grid1%area(i1_side1(l),j1_side1(l)) = grid1%area(i1_side1(l),j1_side1(l))+area_side1(l)
        endif
        do p=0,xmap%npes-1
          if (grid%tile(p) == tile2) then
            if (in_box_nbr(i2_side1(l), j2_side1(l), grid, p)) then
              xmap%your2my1(p) = .true.
              y2m1_pe(l) = p
              y2m1_size(p) = y2m1_size(p) + 1
            endif
          endif
        enddo
        size_repro = size_repro + 1
      endif
    enddo
    pos_x = 0
    do p = 1, npes-1
      pos_x(p) = pos_x(p-1) + y2m1_size(p-1)
    enddo

    if(.not. last_grid) pos_s(:) = pos_x(:)

    if(nxgrid1_old > 0) then
      y2m1_size(:) = xmap%your2my1_size(:)
      iarray2 => iarray
      jarray2 => jarray
      allocate(iarray(nxgrid1+nxgrid1_old), jarray(nxgrid1+nxgrid1_old))
      ! copy the i-j index
      do p=0,xmap%npes-1
        do n = 1, xmap%your2my1_size(p)
          iarray(pos_x(p)+n) = iarray2(pos_s(p)+n)
          jarray(pos_x(p)+n) = jarray2(pos_s(p)+n)
        enddo
      enddo
      deallocate(iarray2, jarray2)
    else
      allocate(iarray(nxgrid1), jarray(nxgrid1))
      iarray(:) = 0
      jarray(:) = 0
      y2m1_size(:) = 0
    endif

    do l=1,nxgrid1
      p = y2m1_pe(l)
      if(p<0) cycle
      found = .false.
      if(y2m1_size(p) > 0) then
        pos = pos_x(p)+y2m1_size(p)
        if( i1_side1(l) == iarray(pos) .AND. j1_side1(l) == jarray(pos) ) then
          found = .true.
        else
          !---may need to replace with a fast search algorithm
          do n = 1, y2m1_size(p)
            pos = pos_x(p)+n
            if(i1_side1(l) == iarray(pos) .AND. j1_side1(l) == jarray(pos)) then
              found = .true.
              exit
            endif
          enddo
        endif
      endif
      if( (.NOT. found) .OR. monotonic_exchange ) then
        y2m1_size(p) = y2m1_size(p)+1
        pos = pos_x(p)+y2m1_size(p)
        iarray(pos) = i1_side1(l)
        jarray(pos) = j1_side1(l)
      endif
    end do
    xmap%your2my1_size(:) = y2m1_size(:)
    deallocate(y2m1_pe)
    if(last_grid) then
      deallocate(iarray, jarray)
      if(allocated(pos_s)) deallocate(pos_s)
    end if
  end if

  if (grid1%tile_me == tile1 .and. size_repro > 0) then
    ll_repro = grid%size_repro
    grid%size_repro = ll_repro + size_repro
    if(ll_repro > 0) then ! extend data
      allocate(x_local(ll_repro))
      x_local = grid%x_repro
      if(ASSOCIATED(grid%x_repro)) deallocate(grid%x_repro)
      allocate( grid%x_repro(grid%size_repro ) )
      grid%x_repro(1:ll_repro) = x_local
      deallocate(x_local)
    else
      if(ASSOCIATED(grid%x_repro)) deallocate(grid%x_repro) !< Check if allocated
      allocate( grid%x_repro( grid%size_repro ) )
      grid%x_repro%di = 0.0_r8_kind; grid%x_repro%dj = 0.0_r8_kind
    end if
    do l=1,nxgrid1
      if (in_box_me(i1_side1(l),j1_side1(l), grid1) ) then
        ll_repro = ll_repro + 1
        grid%x_repro(ll_repro)%i1 = i1_side1(l); grid%x_repro(ll_repro)%i2 = i2_side1(l)
        grid%x_repro(ll_repro)%j1 = j1_side1(l); grid%x_repro(ll_repro)%j2 = j2_side1(l)
        if(grid1%is_ug) then
          grid%x_repro(ll_repro)%l1 = grid1%l_index((j1_side1(l)-1)*grid1%im+i1_side1(l))
        endif
        if(grid%is_ug) then
!         grid%x_repro(ll_repro)%l2 = grid%l_index((j2_side1(l)-1)*grid%im+i2_side1(l))
        endif
        grid%x_repro(ll_repro)%tile = tile1
        grid%x_repro(ll_repro)%area = area_side1(l)
        if(use_higher_order) then
          grid%x_repro(ll_repro)%di = di_side1(l)
          grid%x_repro(ll_repro)%dj = dj_side1(l)
        end if

        do p=0,xmap%npes-1
          if(grid%tile(p) == tile2) then
            if (in_box_nbr(i2_side1(l), j2_side1(l), grid, p)) then
              grid%x_repro(ll_repro)%pe = p + xmap%root_pe
            end if
          end if
        end do
      end if ! in_box_me
    end do
  end if

  deallocate(i1, j1, i2, j2, area)
  if(use_higher_order) deallocate(di, dj)
  if(scale_exist) deallocate(scale)
  if(is_distribute) then
    deallocate(i1_side1, j1_side1, i2_side1, j2_side1, area_side1)
    if(use_higher_order) deallocate(di_side1, dj_side1)
  endif

  i1=>null(); j1=>null(); i2=>null(); j2=>null()
  call mpp_clock_end(id_load_xgrid5)



end subroutine load_xgrid

!#######################################################################

!> @brief Read the center points of the grid from a version 1 grid file.
!! Only the side 1 grid is needed, so we only read the
!! atm and land grids.
subroutine get_grid_version1(grid, grid_id, grid_file)
  type(grid_type),  intent(inout) :: grid
  character(len=3), intent(in)    :: grid_id
  character(len=*), intent(in)    :: grid_file

  real(r8_kind), dimension(grid%im) :: lonb
  real(r8_kind), dimension(grid%jm) :: latb
  real(r8_kind) :: d2r
  integer :: is, ie, js, je
  type(fmsnetcdfdomainfile_t) :: fileobj

  d2r = pi / 180.0_r8_kind

  if(.not. open_file(fileobj, grid_file, 'read', grid%domain) ) then
    call error_mesg('xgrid_mod(get_grid_version1)', 'Error in opening file '//trim(grid_file), fatal)
  endif

  call mpp_get_compute_domain(grid%domain, is, ie, js, je)
  if (associated(grid%lon)) deallocate(grid%lon) !< Check if allocated
  if (associated(grid%lat)) deallocate(grid%lat) !< Check if allocated
  allocate(grid%lon(grid%im), grid%lat(grid%jm))
  if(grid_id == 'ATM') then
    call read_data(fileobj, 'xta', lonb)
    call read_data(fileobj, 'yta', latb)

    if(.not. allocated(area_atm_model)) then
      allocate(area_atm_model(is:ie, js:je))
      call get_area_elements(fileobj, 'AREA_ATM_MODEL', area_atm_model)
    endif
    if(.not. allocated(area_atm_sphere)) then
      allocate(area_atm_sphere(is:ie, js:je))
      call get_area_elements(fileobj, 'AREA_ATM', area_atm_sphere)
    endif
  else if(grid_id == 'LND') then
    call read_data(fileobj, 'xtl', lonb)
    call read_data(fileobj, 'ytl', latb)
    if(.not. allocated(area_lnd_model)) then
      allocate(area_lnd_model(is:ie, js:je))
      call get_area_elements(fileobj, 'AREA_LND_MODEL', area_lnd_model)
    endif
    if(.not. allocated(area_lnd_sphere)) then
      allocate(area_lnd_sphere(is:ie, js:je))
      call get_area_elements(fileobj, 'AREA_LND', area_lnd_sphere)
    endif
  else if(grid_id == 'OCN' ) then
    if(.not. allocated(area_ocn_sphere)) then
      allocate(area_ocn_sphere(is:ie, js:je))
      call get_area_elements(fileobj, 'AREA_OCN', area_ocn_sphere)
    endif
  endif
  !--- convert the grid centers to radians (needed for second order remapping)
  if(grid_id == 'LND' .or. grid_id == 'ATM') then
    grid%lon = lonb * d2r
    grid%lat = latb * d2r
  endif
  grid%is_latlon = .true.

  call close_file(fileobj)

  return

end subroutine get_grid_version1

!#######################################################################

!> @brief Read the center points of the grid from a version 2 (mosaic) grid file.
!! Only the side 1 grid is needed, so we only read the
!! atm and land grids.
subroutine get_grid_version2(grid, grid_id, grid_file)
  type(grid_type),  intent(inout) :: grid
  character(len=3), intent(in)    :: grid_id
  character(len=*), intent(in)    :: grid_file

  real(r8_kind), allocatable :: tmpx(:,:), tmpy(:,:)
  real(r8_kind) :: d2r
  integer :: is, ie, js, je, nlon, nlat, i, j
  integer :: start(4), nread(4), isc2, iec2, jsc2, jec2
  type(fmsnetcdffile_t) :: fileobj

  if(.not. open_file(fileobj, grid_file, 'read') ) then
    call error_mesg('xgrid_mod(get_grid_version2)', 'Error in opening file '//trim(grid_file), fatal)
  endif

  d2r = pi / 180.0_r8_kind

  call mpp_get_compute_domain(grid%domain, is, ie, js, je)

  call get_dimension_size(fileobj, "nx", nlon)
  call get_dimension_size(fileobj, "ny", nlat)
  if( mod(nlon,2) .NE. 0) call error_mesg('xgrid_mod', &
    'flux_exchange_mod: atmos supergrid longitude size can not be divided by 2', fatal)
  if( mod(nlat,2) .NE. 0) call error_mesg('xgrid_mod', &
    'flux_exchange_mod: atmos supergrid latitude size can not be divided by 2', fatal)
  nlon = nlon/2
  nlat = nlat/2
  if(nlon .NE. grid%im .OR. nlat .NE. grid%jm) call error_mesg('xgrid_mod', &
    'grid size in tile_file does not match the global grid size', fatal)

  if( grid_id == 'LND' .or. grid_id == 'ATM' .or. grid_id == 'WAV' ) then
    isc2 = 2*grid%is_me-1; iec2 = 2*grid%ie_me+1
    jsc2 = 2*grid%js_me-1; jec2 = 2*grid%je_me+1
    allocate(tmpx(isc2:iec2, jsc2:jec2) )
    allocate(tmpy(isc2:iec2, jsc2:jec2) )
    start = 1; nread = 1
    start(1) = isc2; nread(1) = iec2 - isc2 + 1
    start(2) = jsc2; nread(2) = jec2 - jsc2 + 1
    call read_data(fileobj, 'x', tmpx, corner=start, edge_lengths=nread)
    call read_data(fileobj, 'y', tmpy, corner=start, edge_lengths=nread)
    if(is_lat_lon(tmpx, tmpy) ) then
      deallocate(tmpx, tmpy)
      start = 1; nread = 1
      start(2) = 2; nread(1) = nlon*2+1
      allocate(tmpx(nlon*2+1, 1), tmpy(1, nlat*2+1))
      call read_data(fileobj, "x", tmpx, corner=start, edge_lengths=nread)
      if (associated(grid%lon)) deallocate(grid%lon) !< Check if allocated
      if (associated(grid%lat)) deallocate(grid%lat) !< Check if allocated
      allocate(grid%lon(grid%im), grid%lat(grid%jm))
      do i = 1, grid%im
        grid%lon(i) = tmpx(2*i,1) * d2r
      end do
      start = 1; nread = 1
      start(1) = 2; nread(2) = nlat*2+1
      call read_data(fileobj, "y", tmpy, corner=start, edge_lengths=nread)
      do j = 1, grid%jm
        grid%lat(j) = tmpy(1, 2*j) * d2r
      end do
      grid%is_latlon = .true.
    else
      if (associated(grid%geolon)) deallocate(grid%geolon) !< Check if allocated
      if (associated(grid%geolat)) deallocate(grid%geolat) !< Check if allocated
      allocate(grid%geolon(grid%isd_me:grid%ied_me, grid%jsd_me:grid%jed_me))
      allocate(grid%geolat(grid%isd_me:grid%ied_me, grid%jsd_me:grid%jed_me))
      grid%geolon = 1.0e10_r8_kind
      grid%geolat = 1.0e10_r8_kind
      !--- area_ocn_sphere, area_lnd_sphere, area_atm_sphere are not defined here.
      do j = grid%js_me,grid%je_me
        do i = grid%is_me,grid%ie_me
          grid%geolon(i, j) = tmpx(i*2,j*2)*d2r
          grid%geolat(i, j) = tmpy(i*2,j*2)*d2r
        end do
      end do
      call mpp_update_domains(grid%geolon, grid%domain)
      call mpp_update_domains(grid%geolat, grid%domain)
      grid%is_latlon = .false.
    end if
    deallocate(tmpx, tmpy)
  end if

  call close_file(fileobj)

  return

end subroutine get_grid_version2
1456 
1457 !#######################################################################
1458 !> @brief Read the area elements from NetCDF file
1459 subroutine get_area_elements_fms2_io(fileobj, name, get_area_data)
1460  type(fmsnetcdfdomainfile_t), intent(in) :: fileobj
1461  character(len=*), intent(in) :: name
1462  real(r8_kind), intent(out) :: get_area_data(:,:)
1463 
1464  if(variable_exists(fileobj, name)) then
1465  call read_data(fileobj, name, get_area_data)
1466  else
1467  call error_mesg('xgrid_mod', 'no field named '//trim(name)//' in grid file '//trim(fileobj%path)// &
1468  '; will set data to negative values...', note)
1469  ! area elements not present in grid_spec file, set to negative values
1470  get_area_data = -1.0_r8_kind
1471  endif
1472 
1473 end subroutine get_area_elements_fms2_io
1474 
1475 !#######################################################################
1476 
1477 !> @brief Read Ocean area element data from netCDF file.
1478 !! @details If available in the NetCDF file, this routine will read the
1479 !! AREA_OCN_MODEL field and load the data into global AREA_OCN_MODEL.
1480 !! If not available, then the array AREA_OCN_MODEL will be left
1481 !! unallocated. Must be called by all PEs.
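!! A hedged usage sketch (the domain variable and grid-spec path are illustrative):
!! @code
!! call get_ocean_model_area_elements(ocean_domain, 'INPUT/grid_spec.nc')
!! if (allocated(area_ocn_model)) then
!!   ! AREA_OCN_MODEL was present in the grid file and has been read
!! endif
!! @endcode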
1482 subroutine get_ocean_model_area_elements(domain, grid_file)
1483 
1484  type(domain2d), intent(in) :: domain
1485  character(len=*), intent(in) :: grid_file
1486  integer :: is, ie, js, je
1487  type(fmsnetcdffile_t) :: fileobj
1488 
1489  if(allocated(area_ocn_model)) return
1490 
1491  call mpp_get_compute_domain(domain, is, ie, js, je)
1492  ! allocate even if ie<is, ... in which case the array will have zero size
1493  ! but will still return .T. for allocated(...)
1494  allocate(area_ocn_model(is:ie, js:je))
1495  if(ie < is .or. je < js ) return
1496 
1497  if(.not. open_file(fileobj, grid_file, 'read') ) then
1498  call error_mesg('xgrid_mod(get_ocean_model_area_elements)', 'Error in opening file '//trim(grid_file), fatal)
1499  endif
1500 
1501  if(variable_exists(fileobj, 'AREA_OCN_MODEL') )then
1502  call read_data(fileobj, 'AREA_OCN_MODEL', area_ocn_model)
1503  else
1504  deallocate(area_ocn_model)
1505  endif
1506  call close_file(fileobj)
1507 
1508 
1509 end subroutine get_ocean_model_area_elements
1510 
1511 !#######################################################################
1512 
1513 !> @brief Sets up exchange grid connectivity using grid specification file and
1514 !! processor domain decomposition.
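!! A typical coupler-style call might look like the following sketch (the grid ids,
!! domain variables and grid-spec path are illustrative, not prescribed by this routine):
!! @code
!! type(xmap_type) :: xmap_sfc
!! call setup_xmap(xmap_sfc, (/ 'ATM', 'OCN', 'LND' /), &
!!                 (/ atm_domain, ocn_domain, lnd_domain /), 'INPUT/grid_spec.nc')
!! @endcode
!! The first grid id names the side 1 grid; the remaining ids are side 2 grids.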
1515 subroutine setup_xmap(xmap, grid_ids, grid_domains, grid_file, atm_grid, lnd_ug_domain)
1516  type(xmap_type), intent(inout) :: xmap
1517  character(len=3), dimension(:), intent(in ) :: grid_ids
1518  type(domain2d), dimension(:), intent(in ) :: grid_domains
1519  character(len=*), intent(in ) :: grid_file
1520  type(grid_box_type), optional, intent(in ) :: atm_grid
1521  type(domainug), optional, intent(in ) :: lnd_ug_domain
1522 
1523  integer :: g, p, i
1524  integer :: nxgrid_file, i1, i2, i3, tile1, tile2, j
1525  integer :: nxc, nyc, out_unit
1526  type(grid_type), pointer :: grid => null()!< pointer to loop through grid_type's in list
1527  type(grid_type), pointer, save :: grid1 => null() !< saved pointer to the first grid in the list
1528  real(r8_kind), dimension(3) :: xxx
1529  real(r8_kind), dimension(:,:), allocatable :: check_data
1530  real(r8_kind), dimension(:,:,:), allocatable :: check_data_3d
1531  real(r8_kind), allocatable :: tmp_2d(:,:), tmp_3d(:,:,:)
1532  character(len=FMS_FILE_LEN) :: xgrid_file, xgrid_name
1533  character(len=FMS_FILE_LEN) :: tile_file, mosaic_file
1534  character(len=256) :: mosaic1, mosaic2, contact, xgrid_dimname
1535  character(len=256) :: tile1_name, tile2_name
1536  character(len=256), allocatable :: tile1_list(:), tile2_list(:)
1537  character(len=FMS_FILE_LEN), allocatable :: xgrid_filelist(:)
1538  integer :: npes, npes2
1539  integer, allocatable :: pelist(:)
1540  type(domain2d), save :: domain2
1541  logical :: use_higher_order = .false.
1542  integer :: lnd_ug_id, l
1543  integer, allocatable :: grid_index(:)
1544  type(fmsnetcdffile_t) :: gridfileobj, mosaicfileobj, fileobj
1545  type(grid_type), allocatable, target :: grids_tmp(:) !< added for nvhpc workaround, stores xmap's
1546  !! grid_type array so we can safely point to it
1547 
1548  call mpp_clock_begin(id_setup_xmap)
1549 
1550  if(interp_method .ne. 'first_order') use_higher_order = .true.
1551 
1552  out_unit = stdout()
1553  xmap%me = mpp_pe()
1554  xmap%npes = mpp_npes()
1555  xmap%root_pe = mpp_root_pe()
1556 
1557  if (associated(xmap%grids)) deallocate(xmap%grids) !< Check if allocated
1558  allocate( xmap%grids(1:size(grid_ids(:))) )
1559 
1560  if (associated(xmap%your1my2)) deallocate(xmap%your1my2) !< Check if allocated
1561  if (associated(xmap%your2my1)) deallocate(xmap%your2my1) !< Check if allocated
1562  if (associated(xmap%your2my1_size)) deallocate(xmap%your2my1_size) !< Check if allocated
1563  allocate ( xmap%your1my2(0:xmap%npes-1), xmap%your2my1(0:xmap%npes-1) )
1564  allocate ( xmap%your2my1_size(0:xmap%npes-1) )
1565 
1566  xmap%your1my2 = .false.; xmap%your2my1 = .false.;
1567  xmap%your2my1_size = 0
1568 
1569  if(.not. open_file(gridfileobj,trim(grid_file), "read")) then
1570  call error_mesg('xgrid_mod', 'Error when opening file '//trim(grid_file), fatal)
1571  endif
1572 
1573 ! determine the exchange grid file version by checking which fields are present in the file
1574  if(variable_exists(gridfileobj, "AREA_ATMxOCN" ) ) then
1575  call close_file(gridfileobj)
1576  xmap%version = version1
1577  else if(variable_exists(gridfileobj, "ocn_mosaic_file" ) ) then
1578  xmap%version = version2
1579  else
1580  call error_mesg('xgrid_mod', 'neither AREA_ATMxOCN nor ocn_mosaic_file exists in '//trim(grid_file), fatal)
1581  end if
1582 
1583 
1584  if(xmap%version==version1) then
1585  call error_mesg('xgrid_mod', 'reading exchange grid information from grid spec file', note)
1586  else
1587  call error_mesg('xgrid_mod', 'reading exchange grid information from mosaic grid file', note)
1588  end if
1589 
1590  ! find the grid index of LND (needed when an unstructured land domain is present)
1591  lnd_ug_id = 0
1592  if(present(lnd_ug_domain)) then
1593  do g=1,size(grid_ids(:))
1594  if(grid_ids(g) == 'LND') lnd_ug_id = g
1595  enddo
1596  endif
1597 
1598  call mpp_clock_begin(id_load_xgrid)
1599 
1600  ! nvhpc compiler workaround
1601  ! save the grids array into an allocatable copy and point into that, to avoid an error from pointing into xmap%grids inside the loop
1602  grids_tmp = xmap%grids
1603 
1604  grid1 => xmap%grids(1)
1605 
1606  do g=1, size(grid_ids(:))
1607 
1608  grid => grids_tmp(g)
1609 
1610  grid%id = grid_ids(g)
1611  grid%domain = grid_domains(g)
1612  grid%on_this_pe = mpp_domain_is_initialized(grid_domains(g))
1613  if (associated(grid%is)) deallocate(grid%is) !< Check if allocated
1614  if (associated(grid%ie)) deallocate(grid%ie) !< Check if allocated
1615  if (associated(grid%js)) deallocate(grid%js) !< Check if allocated
1616  if (associated(grid%je)) deallocate(grid%je) !< Check if allocated
1617  if (associated(grid%tile)) deallocate(grid%tile) !< Check if allocated
1618  allocate ( grid%is(0:xmap%npes-1), grid%ie(0:xmap%npes-1) )
1619  allocate ( grid%js(0:xmap%npes-1), grid%je(0:xmap%npes-1) )
1620  allocate ( grid%tile(0:xmap%npes-1) )
1621  grid%npes = 0
1622  grid%ni = 0
1623  grid%nj = 0
1624  grid%is = 0
1625  grid%ie = -1
1626  grid%js = 0
1627  grid%je = -1
1628  grid%tile = -1
1629 
1630  select case(xmap%version)
1631  case(version1)
1632  grid%ntile = 1
1633  case(version2)
1634  call read_data(gridfileobj, lowercase(grid_ids(g))//'_mosaic_file', mosaic_file)
1635  if(.not. open_file(mosaicfileobj,'INPUT/'//trim(mosaic_file), "read")) then
1636  call error_mesg('xgrid_mod', 'Error when opening solo mosaic file INPUT/'//trim(mosaic_file), fatal)
1637  endif
1638  call get_dimension_size(mosaicfileobj, 'ntiles', grid%ntile)
1639  end select
1640 
1641  if( g == 1 .AND. grid_ids(1) == 'ATM' ) then
1642  if( .NOT. grid%on_this_pe ) call error_mesg('xgrid_mod', 'ATM domain is not defined on some processor' ,fatal)
1643  endif
1644  grid%npes = mpp_get_domain_npes(grid%domain)
1645  if( xmap%npes > grid%npes .AND. g == 1 .AND. grid_ids(1) == 'ATM' ) then
1646  call mpp_broadcast_domain(grid%domain, domain2)
1647  else if(xmap%npes > grid%npes) then
1648  call mpp_broadcast_domain(grid%domain)
1649  grid%npes = mpp_get_domain_npes(grid%domain)
1650  endif
1651 
1652  npes = grid%npes
1653  allocate(grid%pelist(0:npes-1))
1654  call mpp_get_domain_pelist(grid%domain, grid%pelist)
1655  grid%root_pe = mpp_get_domain_root_pe(grid%domain)
1656 
1657  call mpp_get_data_domain(grid%domain, grid%isd_me, grid%ied_me, grid%jsd_me, grid%jed_me, &
1658  xsize=grid%nxd_me, ysize=grid%nyd_me)
1659  call mpp_get_global_domain(grid%domain, xsize=grid%ni, ysize=grid%nj)
1660 
1661  if( grid%root_pe == xmap%root_pe ) then
1662  call mpp_get_compute_domains(grid%domain, xbegin=grid%is(0:npes-1), xend=grid%ie(0:npes-1), &
1663  ybegin=grid%js(0:npes-1), yend=grid%je(0:npes-1) )
1664  call mpp_get_tile_list(grid%domain, grid%tile(0:npes-1))
1665  if( xmap%npes > npes .AND. g == 1 .AND. grid_ids(1) == 'ATM' ) then
1666  call mpp_get_compute_domains(domain2, xbegin=grid%is(npes:xmap%npes-1), xend=grid%ie(npes:xmap%npes-1), &
1667  ybegin=grid%js(npes:xmap%npes-1), yend=grid%je(npes:xmap%npes-1) )
1668  call mpp_get_tile_list(domain2, grid%tile(npes:xmap%npes-1))
1669  endif
1670  else
1671  npes2 = xmap%npes-npes
1672  call mpp_get_compute_domains(domain2, xbegin=grid%is(0:npes2-1), xend=grid%ie(0:npes2-1), &
1673  ybegin=grid%js(0:npes2-1), yend=grid%je(0:npes2-1) )
1674  call mpp_get_compute_domains(grid%domain, xbegin=grid%is(npes2:xmap%npes-1), xend=grid%ie(npes2:xmap%npes-1), &
1675  ybegin=grid%js(npes2:xmap%npes-1), yend=grid%je(npes2:xmap%npes-1) )
1676  call mpp_get_tile_list(domain2, grid%tile(0:npes2-1))
1677  call mpp_get_tile_list(grid%domain, grid%tile(npes2:xmap%npes-1))
1678  endif
1679  if( xmap%npes > grid%npes .AND. g == 1 .AND. grid_ids(1) == 'ATM' ) then
1680  call mpp_deallocate_domain(domain2)
1681  endif
1682  npes = grid%npes
1683  if( g == 1 .AND. grid_ids(1) == 'ATM' ) npes = xmap%npes
1684  do p = 0, npes-1
1685  if(grid%tile(p) > grid%ntile .or. grid%tile(p) < 1) call error_mesg('xgrid_mod', &
1686  'tile id should be between 1 and ntile', fatal)
1687  end do
1688 
1689  grid%im = grid%ni
1690  grid%jm = grid%nj
1691  call mpp_max(grid%ni)
1692  call mpp_max(grid%nj)
1693 
1694  grid%is_me => grid%is(xmap%me-xmap%root_pe); grid%ie_me => grid%ie(xmap%me-xmap%root_pe)
1695  grid%js_me => grid%js(xmap%me-xmap%root_pe); grid%je_me => grid%je(xmap%me-xmap%root_pe)
1696  grid%nxc_me = grid%ie_me - grid%is_me + 1
1697  grid%nyc_me = grid%je_me - grid%js_me + 1
1698  grid%tile_me => grid%tile(xmap%me-xmap%root_pe)
1699 
1700  grid%km = 1
1701  grid%is_ug = .false.
1702  !--- setup for the land unstructured grid
1703  if( g == lnd_ug_id ) then
1704  if(xmap%version == version1) call error_mesg('xgrid_mod', &
1705  'does not support unstructured grid for VERSION1 grid' ,fatal)
1706  grid%is_ug = .true.
1707  grid%ug_domain = lnd_ug_domain
1708  if (associated(grid%ls)) deallocate(grid%ls) !< Check if allocated
1709  if (associated(grid%le)) deallocate(grid%le) !< Check if allocated
1710  if (associated(grid%gs)) deallocate(grid%gs) !< Check if allocated
1711  if (associated(grid%ge)) deallocate(grid%ge) !< Check if allocated
1712  allocate ( grid%ls(0:xmap%npes-1), grid%le(0:xmap%npes-1) )
1713  allocate ( grid%gs(0:xmap%npes-1), grid%ge(0:xmap%npes-1) )
1714  grid%ls = 0
1715  grid%le = -1
1716  grid%gs = 0
1717  grid%ge = -1
1718  if(xmap%npes > grid%npes) then
1719  call mpp_broadcast_domain(grid%ug_domain)
1720  endif
1721  call mpp_get_ug_compute_domains(grid%ug_domain, begin=grid%ls(0:npes-1), end=grid%le(0:npes-1) )
1722  call mpp_get_ug_domains_index(grid%ug_domain, grid%gs(0:npes-1), grid%ge(0:npes-1) )
1723  call mpp_get_ug_domain_tile_list(grid%ug_domain, grid%tile(0:npes-1))
1724  grid%ls_me => grid%ls(xmap%me-xmap%root_pe); grid%le_me => grid%le(xmap%me-xmap%root_pe)
1725  grid%gs_me => grid%gs(xmap%me-xmap%root_pe); grid%ge_me => grid%ge(xmap%me-xmap%root_pe)
1726  grid%tile_me => grid%tile(xmap%me-xmap%root_pe)
1727  grid%nxl_me = grid%le_me - grid%ls_me + 1
1728  if (associated(grid%l_index)) deallocate(grid%l_index) !< Check if allocated
1729  allocate(grid%l_index(grid%gs_me:grid%ge_me))
1730  allocate(grid_index(grid%ls_me:grid%le_me))
1731  call mpp_get_ug_domain_grid_index(grid%ug_domain, grid_index)
1732 
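! build the inverse map: l_index(global structured index) = local unstructured index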
1733  grid%l_index = 0
1734  do l = grid%ls_me,grid%le_me
1735  grid%l_index(grid_index(l)) = l
1736  enddo
1737 
1738  if( grid%on_this_pe ) then
1739  if (associated(grid%area)) deallocate(grid%area) !< Check if allocated
1740  if (associated(grid%area_inv)) deallocate(grid%area_inv) !< Check if allocated
1741  allocate( grid%area (grid%ls_me:grid%le_me,1) )
1742  allocate( grid%area_inv(grid%ls_me:grid%le_me,1) )
1743  grid%area = 0.0_r8_kind
1744  grid%size = 0
1745  grid%size_repro = 0
1746  endif
1747  else if( grid%on_this_pe ) then
1748  if (associated(grid%area)) deallocate(grid%area) !< Check if allocated
1749  if (associated(grid%area_inv)) deallocate(grid%area_inv) !< Check if allocated
1750  allocate( grid%area (grid%is_me:grid%ie_me, grid%js_me:grid%je_me) )
1751  allocate( grid%area_inv(grid%is_me:grid%ie_me, grid%js_me:grid%je_me) )
1752  grid%area = 0.0_r8_kind
1753  grid%size = 0
1754  grid%size_repro = 0
1755  endif
1756 
1757  ! get the center point of the grid box
1758  if(.not. grid%is_ug) then
1759  select case(xmap%version)
1760  case(version1)
1761  if( grid%npes .NE. xmap%npes ) then
1762  call error_mesg('xgrid_mod', 'grid%npes .NE. xmap%npes', fatal)
1763  endif
1764  call get_grid_version1(grid, grid_ids(g), grid_file)
1765  case(version2)
1766  allocate(pelist(0:xmap%npes-1))
1767  call mpp_get_current_pelist(pelist)
1768  if( grid%on_this_pe ) then
1769  call mpp_set_current_pelist(grid%pelist)
1770  call get_mosaic_tile_grid(tile_file, mosaicfileobj, grid%domain)
1771  call get_grid_version2(grid, grid_ids(g), tile_file)
1772  endif
1773  call mpp_set_current_pelist(pelist)
1774  deallocate(pelist)
1775  ! read the contact information from mosaic_file to check whether the atmosphere is a nested model
1776  if( g == 1 .AND. grid_ids(1) == 'ATM' ) then
1777  nnest = get_nest_contact(mosaicfileobj, tile_nest, tile_parent, is_nest, &
1778  ie_nest, js_nest, je_nest, is_parent, ie_parent, js_parent, je_parent)
1779 
1780  endif
1781  end select
1782 
1783  if( use_higher_order .AND. grid%id == 'ATM') then
1784  if( nnest > 0 ) call error_mesg('xgrid_mod', 'second_order is not supported for nested coupler', fatal)
1785  if( grid%is_latlon ) then
1786  call mpp_modify_domain(grid%domain, grid%domain_with_halo, whalo=1, ehalo=1, shalo=1, nhalo=1)
1787  call mpp_get_data_domain(grid%domain_with_halo, grid%isd_me, grid%ied_me, grid%jsd_me, grid%jed_me, &
1788  xsize=grid%nxd_me, ysize=grid%nyd_me)
1789  else
1790  if(.NOT. present(atm_grid)) call error_mesg('xgrid_mod', &
1791  'when first grid is "ATM", atm_grid should be present', fatal)
1792  if(grid%is_me-grid%isd_me .NE. 1 .or. grid%ied_me-grid%ie_me .NE. 1 .or. &
1793  grid%js_me-grid%jsd_me .NE. 1 .or. grid%jed_me-grid%je_me .NE. 1 ) &
1794  & call error_mesg('xgrid_mod', 'for non-latlon grid (cubic grid), '//&
1795  & 'the halo size should be 1 in all four directions', fatal)
1796  if(.NOT.( ASSOCIATED(atm_grid%dx) .AND. ASSOCIATED(atm_grid%dy) .AND. ASSOCIATED(atm_grid%edge_w) .AND. &
1797  ASSOCIATED(atm_grid%edge_e) .AND. ASSOCIATED(atm_grid%edge_s) .AND.ASSOCIATED(atm_grid%edge_n).AND.&
1798  ASSOCIATED(atm_grid%en1) .AND. ASSOCIATED(atm_grid%en2) .AND. ASSOCIATED(atm_grid%vlon) .AND. &
1799  ASSOCIATED(atm_grid%vlat) ) ) call error_mesg( 'xgrid_mod', &
1800  'for non-latlon grid (cubic grid), all the fields in atm_grid data type should be allocated', fatal)
1801  nxc = grid%ie_me - grid%is_me + 1
1802  nyc = grid%je_me - grid%js_me + 1
1803  if(size(atm_grid%dx,1) .NE. nxc .OR. size(atm_grid%dx,2) .NE. nyc+1) &
1804  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%dx', fatal)
1805  if(size(atm_grid%dy,1) .NE. nxc+1 .OR. size(atm_grid%dy,2) .NE. nyc) &
1806  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%dy', fatal)
1807  if(size(atm_grid%area,1) .NE. nxc .OR. size(atm_grid%area,2) .NE. nyc) &
1808  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%area', fatal)
1809  if(size(atm_grid%edge_w(:)) .NE. nyc+1 .OR. size(atm_grid%edge_e(:)) .NE. nyc+1) &
1810  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%edge_w/edge_e', fatal)
1811  if(size(atm_grid%edge_s(:)) .NE. nxc+1 .OR. size(atm_grid%edge_n(:)) .NE. nxc+1) &
1812  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%edge_s/edge_n', fatal)
1813  if(size(atm_grid%en1,1) .NE. 3 .OR. size(atm_grid%en1,2) .NE. nxc .OR. size(atm_grid%en1,3) .NE. nyc+1) &
1814  call error_mesg( 'xgrid_mod', 'incorrect dimension size of atm_grid%en1', fatal)
1815  if(size(atm_grid%en2,1) .NE. 3 .OR. size(atm_grid%en2,2) .NE. nxc+1 .OR. size(atm_grid%en2,3) .NE. nyc) &
1816  call error_mesg( 'xgrid_mod', 'incorrect dimension size of atm_grid%en2', fatal)
1817  if(size(atm_grid%vlon,1) .NE. 3 .OR. size(atm_grid%vlon,2) .NE. nxc .OR. size(atm_grid%vlon,3) .NE. nyc)&
1818  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%vlon', fatal)
1819  if(size(atm_grid%vlat,1) .NE. 3 .OR. size(atm_grid%vlat,2) .NE. nxc .OR. size(atm_grid%vlat,3) .NE. nyc)&
1820  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%vlat', fatal)
1821  if (associated(grid%box%dx)) deallocate(grid%box%dx) !< Check if allocated
1822  if (associated(grid%box%dy)) deallocate(grid%box%dy) !< Check if allocated
1823  if (associated(grid%box%area)) deallocate(grid%box%area) !< Check if allocated
1824  if (associated(grid%box%edge_w)) deallocate(grid%box%edge_w) !< Check if allocated
1825  if (associated(grid%box%edge_e)) deallocate(grid%box%edge_e) !< Check if allocated
1826  if (associated(grid%box%edge_s)) deallocate(grid%box%edge_s) !< Check if allocated
1827  if (associated(grid%box%edge_n)) deallocate(grid%box%edge_n) !< Check if allocated
1828  if (associated(grid%box%en1)) deallocate(grid%box%en1) !< Check if allocated
1829  if (associated(grid%box%en2)) deallocate(grid%box%en2) !< Check if allocated
1830  if (associated(grid%box%vlon)) deallocate(grid%box%vlon) !< Check if allocated
1831  if (associated(grid%box%vlat)) deallocate(grid%box%vlat) !< Check if allocated
1832  allocate(grid%box%dx (grid%is_me:grid%ie_me, grid%js_me:grid%je_me+1 ))
1833  allocate(grid%box%dy (grid%is_me:grid%ie_me+1, grid%js_me:grid%je_me ))
1834  allocate(grid%box%area (grid%is_me:grid%ie_me, grid%js_me:grid%je_me ))
1835  allocate(grid%box%edge_w(grid%js_me:grid%je_me+1))
1836  allocate(grid%box%edge_e(grid%js_me:grid%je_me+1))
1837  allocate(grid%box%edge_s(grid%is_me:grid%ie_me+1))
1838  allocate(grid%box%edge_n(grid%is_me:grid%ie_me+1))
1839  allocate(grid%box%en1 (3, grid%is_me:grid%ie_me, grid%js_me:grid%je_me+1 ))
1840  allocate(grid%box%en2 (3, grid%is_me:grid%ie_me+1, grid%js_me:grid%je_me ))
1841  allocate(grid%box%vlon (3, grid%is_me:grid%ie_me, grid%js_me:grid%je_me ))
1842  allocate(grid%box%vlat (3, grid%is_me:grid%ie_me, grid%js_me:grid%je_me ))
1843  grid%box%dx = atm_grid%dx
1844  grid%box%dy = atm_grid%dy
1845  grid%box%area = atm_grid%area
1846  grid%box%edge_w = atm_grid%edge_w
1847  grid%box%edge_e = atm_grid%edge_e
1848  grid%box%edge_s = atm_grid%edge_s
1849  grid%box%edge_n = atm_grid%edge_n
1850  grid%box%en1 = atm_grid%en1
1851  grid%box%en2 = atm_grid%en2
1852  grid%box%vlon = atm_grid%vlon
1853  grid%box%vlat = atm_grid%vlat
1854  end if
1855  end if
1856  end if
1857  if(xmap%version==version2) call close_file(mosaicfileobj)
1858  if (g>1) then
1859  if(grid%on_this_pe) then
1860  if (associated(grid%frac_area)) deallocate(grid%frac_area) !< Check if allocated
1861  if(grid%is_ug) then
1862  allocate( grid%frac_area(grid%ls_me:grid%le_me, 1, grid%km) )
1863  else
1864  allocate( grid%frac_area(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, grid%km) )
1865  endif
1866  grid%frac_area = 1.0_r8_kind
1867  endif
1868 
1869  ! nvhpc workaround: copy the working grid back into xmap%grids, since grid points into the temporary array
1870  xmap%grids(g) = grid
1871 
1872  ! load exchange cells, sum grid cell areas, set your1my2/your2my1
1873  select case(xmap%version)
1874  case(version1)
1875  call load_xgrid (xmap, grid, grid_file, grid_ids(1), grid_ids(g), 1, 1, use_higher_order)
1876  case(version2)
1877  select case(grid_ids(1))
1878  case( 'ATM' )
1879  xgrid_name = 'a'
1880  case( 'LND' )
1881  xgrid_name = 'l'
1882  case( 'WAV' )
1883  xgrid_name = 'w'
1884  case default
1885  call error_mesg('xgrid_mod', 'grid_ids(1) should be ATM, LND or WAV', fatal)
1886  end select
1887  select case(grid_ids(g))
1888  case( 'LND' )
1889  xgrid_dimname = 'nfile_'//trim(xgrid_name)//'Xl'
1890  xgrid_name = trim(xgrid_name)//'Xl_file'
1891  case( 'OCN' )
1892  xgrid_dimname = 'nfile_'//trim(xgrid_name)//'Xo'
1893  xgrid_name = trim(xgrid_name)//'Xo_file'
1894  case( 'WAV' )
1895  xgrid_dimname = 'nfile_'//trim(xgrid_name)//'Xw'
1896  xgrid_name = trim(xgrid_name)//'Xw_file'
1897  case default
1898  call error_mesg('xgrid_mod', 'grid_ids(g) should be LND, OCN or WAV', fatal)
1899  end select
1900  ! get the tile list for each mosaic
1901 
1902  call read_data(gridfileobj, lowercase(grid_ids(1))//'_mosaic_file', mosaic1)
1903  call read_data(gridfileobj, lowercase(grid_ids(g))//'_mosaic_file', mosaic2)
1904 
1905  mosaic1 = 'INPUT/'//trim(mosaic1)
1906  mosaic2 = 'INPUT/'//trim(mosaic2)
1907 
1908  allocate(tile1_list(grid1%ntile), tile2_list(grid%ntile) )
1909  if(.not. open_file(fileobj,mosaic1, "read")) then
1910  call error_mesg('xgrid_mod(setup_xmap)', 'Error when opening mosaic1 file '//trim(mosaic1), fatal)
1911  endif
1912  call read_data(fileobj, 'gridtiles', tile1_list)
1913  call close_file(fileobj)
1914 
1915  if(.not. open_file(fileobj,mosaic2, "read")) then
1916  call error_mesg('xgrid_mod(setup_xmap)', 'Error when opening mosaic2 file '//trim(mosaic2), fatal)
1917  endif
1918  call read_data(fileobj, 'gridtiles', tile2_list)
1919  call close_file(fileobj)
1920 
1921  if(variable_exists(gridfileobj, xgrid_name)) then
1922  call get_dimension_size(gridfileobj, xgrid_dimname, nxgrid_file)
1923  if(nxgrid_file>0) then
1924  allocate(xgrid_filelist(nxgrid_file))
1925  call read_data(gridfileobj, xgrid_name, xgrid_filelist)
1926  endif
1927  ! loop through all the exchange grid files
1928  do i = 1, nxgrid_file
1929  xgrid_file = 'INPUT/'//trim(xgrid_filelist(i))
1930  if(.not. open_file(fileobj,xgrid_file, "read")) then
1931  call error_mesg('xgrid_mod(setup_xmap)', 'Error when opening xgrid file '// &
1932  & trim(xgrid_file), fatal)
1933  endif
1934 
1935  ! find the tile numbers of the side 1 and side 2 mosaics, contained in the field "contact"
1936  call read_data(fileobj, "contact", contact)
1937  i1 = index(contact, ":")
1938  i2 = index(contact, "::")
1939  i3 = index(contact, ":", back=.true. )
1940  if(i1 == 0 .OR. i2 == 0) call error_mesg('xgrid_mod', &
1941  'field contact in file '//trim(xgrid_file)//' should contain ":" and "::" ', fatal)
1942  if(i1 == i3) call error_mesg('xgrid_mod', &
1943  'field contact in file '//trim(xgrid_file)//' should contain two ":"', fatal)
1944  tile1_name = contact(i1+1:i2-1)
1945  tile2_name = contact(i3+1:len_trim(contact))
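! e.g., for a contact value such as "atm_mosaic:tile1::ocn_mosaic:tile2" (illustrative),
! i1 locates the first ":", i2 the "::" and i3 the last ":", giving
! tile1_name = "tile1" and tile2_name = "tile2"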
1946  tile1 = 0; tile2 = 0
1947  do j = 1, grid1%ntile
1948  if( trim(tile1_name) == trim(tile1_list(j)) ) then
1949  tile1 = j
1950  exit
1951  end if
1952  end do
1953  do j = 1, grid%ntile
1954  if( tile2_name == tile2_list(j) ) then
1955  tile2 = j
1956  exit
1957  end if
1958  end do
1959 
1960  if(tile1 == 0) call error_mesg('xgrid_mod', &
1961  trim(tile1_name)//' is not a tile of mosaic '//trim(mosaic1), fatal)
1962  if(tile2 == 0) call error_mesg('xgrid_mod', &
1963  trim(tile2_name)//' is not a tile of mosaic '//trim(mosaic2), fatal)
1964  call close_file(fileobj)
1965  call load_xgrid (xmap, grid, xgrid_file, grid_ids(1), grid_ids(g), tile1, tile2, &
1966  use_higher_order)
1967  end do
1968  if(allocated(xgrid_filelist)) deallocate(xgrid_filelist)
1969  endif
1970  deallocate(tile1_list, tile2_list)
1971  end select
1972  if(grid%on_this_pe) then
1973  grid%area_inv = 0.0_r8_kind;
1974  where (grid%area>0.0_r8_kind) grid%area_inv = 1.0_r8_kind/grid%area
1975  endif
1976  end if
1977 
1978  ! nvhpc workaround: copy the working grid back into xmap%grids, since grid points into the temporary array
1979  xmap%grids(g) = grid
1980  end do
1981 
1982  if(xmap%version == version2) call close_file(gridfileobj)
1983 
1984  call mpp_clock_end(id_load_xgrid)
1985 
1986  grid1%area_inv = 0.0_r8_kind;
1987  where (grid1%area>0.0_r8_kind)
1988  grid1%area_inv = 1.0_r8_kind/grid1%area
1989  end where
1990 
1991  xmap%your1my2(xmap%me-xmap%root_pe) = .false. ! this is not necessarily true but keeps
1992  xmap%your2my1(xmap%me-xmap%root_pe) = .false. ! a PE from communicating with itself
1993 
1994  if (make_exchange_reproduce) then
1995  if (associated(xmap%send_count_repro)) deallocate(xmap%send_count_repro) !< Check if allocated
1996  if (associated(xmap%recv_count_repro)) deallocate(xmap%recv_count_repro) !< Check if allocated
1997  allocate( xmap%send_count_repro(0:xmap%npes-1) )
1998  allocate( xmap%recv_count_repro(0:xmap%npes-1) )
1999  xmap%send_count_repro = 0
2000  xmap%recv_count_repro = 0
2001  do g=2,size(xmap%grids(:))
2002  do p=0,xmap%npes-1
2003  if(xmap%grids(g)%size >0) &
2004  xmap%send_count_repro(p) = xmap%send_count_repro(p) &
2005  +count(xmap%grids(g)%x (:)%pe==p+xmap%root_pe)
2006  if(xmap%grids(g)%size_repro >0) &
2007  xmap%recv_count_repro(p) = xmap%recv_count_repro(p) &
2008  +count(xmap%grids(g)%x_repro(:)%pe==p+xmap%root_pe)
2009  end do
2010  end do
2011  xmap%send_count_repro_tot = sum(xmap%send_count_repro)
2012  xmap%recv_count_repro_tot = sum(xmap%recv_count_repro)
2013  else
2014  xmap%send_count_repro_tot = 0
2015  xmap%recv_count_repro_tot = 0
2016  end if
2017 
2018  if (associated(xmap%x1)) deallocate(xmap%x1) !< Check if allocated
2019  if (associated(xmap%x2)) deallocate(xmap%x2) !< Check if allocated
2020  if (associated(xmap%x1_put)) deallocate(xmap%x1_put) !< Check if allocated
2021  if (associated(xmap%x2_get)) deallocate(xmap%x2_get) !< Check if allocated
2022  allocate( xmap%x1(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2023  allocate( xmap%x2(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2024  allocate( xmap%x1_put(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2025  allocate( xmap%x2_get(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2026 
2027  !--- The following sets up the indices used in regen
2028  if (associated(xmap%get1)) deallocate(xmap%get1) !< Check if allocated
2029  if (associated(xmap%put1)) deallocate(xmap%put1) !< Check if allocated
2030  allocate(xmap%get1, xmap%put1)
2031  call mpp_clock_begin(id_set_comm)
2032 
2033  call set_comm_get1(xmap)
2034 
2035  call set_comm_put1(xmap)
2036 
2037  if(make_exchange_reproduce) then
2038  if (associated(xmap%get1_repro)) deallocate(xmap%get1_repro) !< Check if allocated
2039  allocate(xmap%get1_repro)
2040  call set_comm_get1_repro(xmap)
2041  endif
2042 
2043  call mpp_clock_end(id_set_comm)
2044 
2045  call mpp_clock_begin(id_regen)
2046  call regen(xmap)
2047  call mpp_clock_end(id_regen)
2048 
2049  call mpp_clock_begin(id_conservation_check)
2050 
2051  if(lnd_ug_id ==0) then
2052  xxx = conservation_check(grid1%area*0.0_r8_kind+1.0_r8_kind, grid1%id, xmap)
2053  else
2054  allocate(tmp_2d(grid1%is_me:grid1%ie_me, grid1%js_me:grid1%je_me))
2055  tmp_2d = 1.0_r8_kind
2056  xxx = conservation_check_ug(tmp_2d, grid1%id, xmap)
2057  deallocate(tmp_2d)
2058  endif
2059  write(out_unit,* )"Checked data is an array of constant 1"
2060  write(out_unit,* )grid1%id,'(',xmap%grids(:)%id,')=', xxx
2061 
2062  if(lnd_ug_id == 0) then
2063  do g=2,size(xmap%grids(:))
2064  xxx = conservation_check(xmap%grids(g)%frac_area*0.0_r8_kind+1.0_r8_kind, xmap%grids(g)%id, xmap )
2065  write( out_unit,* )xmap%grids(g)%id,'(',xmap%grids(:)%id,')=', xxx
2066  enddo
2067  else
2068  do g=2,size(xmap%grids(:))
2069  grid => xmap%grids(g)
2070  allocate(tmp_3d(grid%is_me:grid%ie_me, grid%js_me:grid%je_me,grid%km))
2071  tmp_3d = 1.0_r8_kind
2072  xxx = conservation_check_ug(tmp_3d, xmap%grids(g)%id, xmap )
2073  write( out_unit,* )xmap%grids(g)%id,'(',xmap%grids(:)%id,')=', xxx
2074  deallocate(tmp_3d)
2075  enddo
2076  endif
2077  ! create a random-number 2d array
2078  if(grid1%id == "ATM") then
2079  allocate(check_data(size(grid1%area,1), size(grid1%area,2)))
2080  call random_number(check_data)
2081 
2082  !--- second order along both the zonal and meridional directions
2083  if(lnd_ug_id ==0) then
2084  xxx = conservation_check(check_data, grid1%id, xmap, remap_method = remapping_method )
2085  else
2086  xxx = conservation_check_ug(check_data, grid1%id, xmap, remap_method = remapping_method )
2087  endif
2088  write( out_unit,* ) &
2089  "Checked data is an array of random numbers between 0 and 1 using "//trim(interp_method)
2090  write( out_unit,* )grid1%id,'(',xmap%grids(:)%id,')=', xxx
2091 
2092  deallocate(check_data)
2093  do g=2,size(xmap%grids(:))
2094  allocate(check_data_3d(xmap%grids(g)%is_me:xmap%grids(g)%ie_me, &
2095  xmap%grids(g)%js_me:xmap%grids(g)%je_me, grid1%km))
2096  call random_number(check_data_3d)
2097  if(lnd_ug_id ==0) then
2098  xxx = conservation_check(check_data_3d, xmap%grids(g)%id, xmap, remap_method = remapping_method )
2099  else
2100  xxx = conservation_check_ug(check_data_3d, xmap%grids(g)%id, xmap, remap_method = remapping_method )
2101  endif
2102  write( out_unit,* )xmap%grids(g)%id,'(',xmap%grids(:)%id,')=', xxx
2103  deallocate( check_data_3d)
2104  end do
2105  endif
2106  call mpp_clock_end(id_conservation_check)
2107 
2108  call mpp_clock_end(id_setup_xmap)
2109 
2110 end subroutine setup_xmap
2111 
2112 !----------------------------------------------------------------------------
2113 
2114 !> @brief Currently we assume there is only one nest region
2115 !! @return integer get_nest_contact
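!! A hedged usage sketch (output variable names are illustrative; setup_xmap reaches
!! this routine through the name get_nest_contact):
!! @code
!! nnest = get_nest_contact_fms2_io(mosaicfileobj, tile_nest, tile_parent, &
!!             is_nest, ie_nest, js_nest, je_nest, &
!!             is_parent, ie_parent, js_parent, je_parent)
!! if (nnest > 0) then
!!   ! the mosaic contains a (single) nested region
!! endif
!! @endcode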
2116 function get_nest_contact_fms2_io(fileobj, tile_nest_out, tile_parent_out, is_nest_out, &
2117  ie_nest_out, js_nest_out, je_nest_out, is_parent_out, &
2118  ie_parent_out, js_parent_out, je_parent_out) &
2119  result(get_nest_contact) !< This is needed for use_mpp_io
2120 type(fmsnetcdffile_t), intent(in) :: fileobj
2121 integer, intent(out) :: tile_nest_out, tile_parent_out
2122 integer, intent(out) :: is_nest_out, ie_nest_out
2123 integer, intent(out) :: js_nest_out, je_nest_out
2124 integer, intent(out) :: is_parent_out, ie_parent_out
2125 integer, intent(out) :: js_parent_out, je_parent_out
2126 integer :: get_nest_contact
2127 !--- local variables
2128 integer :: ntiles, ncontacts, n, t1, t2
2129 integer :: nx1_contact, ny1_contact
2130 integer :: nx2_contact, ny2_contact
2131 integer, allocatable, dimension(:) :: nx, ny
2132 integer, allocatable, dimension(:) :: tile1, tile2
2133 integer, allocatable, dimension(:) :: istart1, iend1, jstart1, jend1
2134 integer, allocatable, dimension(:) :: istart2, iend2, jstart2, jend2
2135 
2136  tile_nest_out = 0; tile_parent_out = 0
2137  is_nest_out = 0; ie_nest_out = 0
2138  js_nest_out = 0; je_nest_out = 0
2139  is_parent_out = 0; ie_parent_out = 0
2140  js_parent_out = 0; je_parent_out = 0
2141  get_nest_contact = 0
2142 
2143  ! first read the contact information
2144  ntiles = get_mosaic_ntiles(fileobj)
2145  if( ntiles == 1 ) return
2146 
2147  allocate(nx(ntiles), ny(ntiles))
2148  call get_mosaic_grid_sizes(fileobj, nx, ny)
2149 
2150  ncontacts = get_mosaic_ncontacts(fileobj)
2151  if(ncontacts == 0) return
2152  allocate(tile1(ncontacts), tile2(ncontacts))
2153  allocate(istart1(ncontacts), iend1(ncontacts))
2154  allocate(jstart1(ncontacts), jend1(ncontacts))
2155  allocate(istart2(ncontacts), iend2(ncontacts))
2156  allocate(jstart2(ncontacts), jend2(ncontacts))
2157 
2158  call get_mosaic_contact( fileobj, tile1, tile2, istart1, iend1, jstart1, jend1, &
2159  istart2, iend2, jstart2, jend2)
2160 
2161  do n = 1, ncontacts
2162  if( tile1(n) == tile2(n) ) cycle ! a tile cannot be nested within itself
2163 
2164  nx1_contact = iend1(n)-istart1(n)+1
2165  ny1_contact = jend1(n)-jstart1(n)+1
2166  nx2_contact = iend2(n)-istart2(n)+1
2167  ny2_contact = jend2(n)-jstart2(n)+1
2168  t1 = tile1(n);
2169  t2 = tile2(n);
2170  ! For nesting, the contact index of one tile must match its global domain
2171  if( (nx(t1) .NE. nx1_contact .OR. ny(t1) .NE. ny1_contact ) .AND. &
2172  (nx(t2) .NE. nx2_contact .OR. ny(t2) .NE. ny2_contact ) ) cycle
2173  if(nx1_contact == nx2_contact .AND. ny1_contact == ny2_contact) then
2174  call error_mesg('xgrid_mod', 'There is no refinement for the overlapping region', fatal)
2175  endif
2176 
2177  get_nest_contact = get_nest_contact + 1
2178  if(get_nest_contact>1) then
2179  call error_mesg('xgrid_mod', 'only one nest region is supported, contact developer', fatal)
2180  endif
2181  if(nx2_contact*ny2_contact > nx1_contact*ny1_contact) then
2182  is_nest_out = istart2(n);
2183  ie_nest_out = iend2(n);
2184  js_nest_out = jstart2(n);
2185  je_nest_out = jend2(n);
2186  tile_nest_out = tile2(n);
2187  is_parent_out = istart1(n);
2188  ie_parent_out = iend1(n);
2189  js_parent_out = jstart1(n);
2190  je_parent_out = jend1(n);
2191  tile_parent_out = tile1(n);
2192  else
2193  is_nest_out = istart1(n);
2194  ie_nest_out = iend1(n);
2195  js_nest_out = jstart1(n);
2196  je_nest_out = jend1(n);
2197  tile_nest_out = tile1(n);
2198  is_parent_out = istart2(n);
2199  ie_parent_out = iend2(n);
2200  js_parent_out = jstart2(n);
2201  je_parent_out = jend2(n);
2202  tile_parent_out = tile2(n);
2203  endif
2204  enddo
2205 
2206  deallocate(nx, ny, tile1, tile2)
2207  deallocate(istart1, iend1, jstart1, jend1)
2208  deallocate(istart2, iend2, jstart2, jend2)
2209 
2210 
2211  return
2212 
2213 end function get_nest_contact_fms2_io
2214 
2215 !#######################################################################
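!> @brief Sets up the send/recv communication pattern (PE lists, counts and buffer
!! offsets) in xmap%get1_repro, used by the reproducing version of get_from_xgrid
!! when make_exchange_reproduce is set.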
2216 subroutine set_comm_get1_repro(xmap)
2217  type (xmap_type), intent(inout) :: xmap
2218  integer, dimension(xmap%npes) :: pe_ind, cnt
2219  integer, dimension(0:xmap%npes-1) :: send_ind, pl
2220  integer :: npes, nsend, nrecv, mypos
2221  integer :: m, p, pos, n, g, l, im, i, j
2222  type(comm_type), pointer, save :: comm => null()
2223 
2224  comm => xmap%get1_repro
2225  npes = xmap%npes
2226 
2227  nrecv = 0
2228  mypos = mpp_pe() - mpp_root_pe()
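! visit partner PEs in an order offset by this PE's rank, so traffic is spread
! across PEs rather than all PEs targeting the same partner first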
2229  do m=0,npes-1
2230  p = mod(mypos+npes-m, npes)
2231  if( xmap%recv_count_repro(p) > 0 ) then
2232  nrecv = nrecv + 1
2233  pe_ind(nrecv) = p
2234  endif
2235  enddo
2236 
2237  comm%nrecv = nrecv
2238  if( nrecv > 0 ) then
2239  if (associated(comm%recv)) deallocate(comm%recv) !< Check if allocated
2240  allocate(comm%recv(nrecv))
2241  pos = 0
2242  do n = 1, nrecv
2243  p = pe_ind(n)
2244  comm%recv(n)%count = xmap%recv_count_repro(p)
2245  comm%recv(n)%pe = p + xmap%root_pe
2246  comm%recv(n)%buffer_pos = pos
2247  pos = pos + comm%recv(n)%count
2248  enddo
2249  endif
2250 
2251 
2252  ! send information
2253  nsend = 0
2254  mypos = mpp_pe() - mpp_root_pe()
2255  do m=0,xmap%npes-1
2256  p = mod(mypos+m, npes)
2257  if( xmap%send_count_repro(p) > 0 ) then
2258  nsend = nsend + 1
2259  pe_ind(nsend) = p
2260  send_ind(p) = nsend
2261  endif
2262  enddo
2263 
2264  comm%nsend = nsend
2265  if( nsend > 0 ) then
2266  if (associated(comm%send)) deallocate(comm%send) !< Check if allocated
2267  allocate(comm%send(nsend))
2268  pos = 0
2269  cnt(:) = 0
2270  do n = 1, nsend
2271  p = pe_ind(n)
2272  comm%send(n)%count = xmap%send_count_repro(p)
2273  comm%send(n)%pe = p + xmap%root_pe
2274  comm%send(n)%buffer_pos = pos
2275  pos = pos + comm%send(n)%count
2276  allocate(comm%send(n)%i(comm%send(n)%count))
2277  allocate(comm%send(n)%j(comm%send(n)%count))
2278  allocate(comm%send(n)%g(comm%send(n)%count))
2279  allocate(comm%send(n)%xLoc(comm%send(n)%count))
2280  enddo
2281 
2282  do g=2,size(xmap%grids(:))
2283  im = xmap%grids(g)%im
2284  do l=1,xmap%grids(g)%size ! index into this side 2 grid's patterns
2285  p = xmap%grids(g)%x(l)%pe-xmap%root_pe
2286  n = send_ind(p)
2287  cnt(n) = cnt(n) + 1
2288  pos = cnt(n)
2289  i = xmap%grids(g)%x(l)%i2
2290  j = xmap%grids(g)%x(l)%j2
2291  if(xmap%grids(g)%is_ug) then
2292  comm%send(n)%i(pos) = xmap%grids(g)%l_index((j-1)*im+i)
2293  comm%send(n)%j(pos) = 1
2294  else
2295  comm%send(n)%i(pos) = xmap%grids(g)%x(l)%i2
2296  comm%send(n)%j(pos) = xmap%grids(g)%x(l)%j2
2297  endif
2298  comm%send(n)%g(pos) = g
2299  enddo
2300  enddo
2301  !--- make sure the count is correct
2302  do n = 1, nsend
2303  if( comm%send(n)%count .NE. cnt(n) ) call error_mesg('xgrid_mod', &
2304  'comm%send(n)%count .NE. cnt(n)', fatal)
2305  enddo
2306  endif
2307 
2308  !--- set up recv_pos for unpacking the data.
2309  pl(:) = 1
2310  do g=2,size(xmap%grids(:))
2311  do l=1,xmap%grids(g)%size_repro ! index into side1 grid's patterns
2312  p = xmap%grids(g)%x_repro(l)%pe-xmap%root_pe
2313  xmap%grids(g)%x_repro(l)%recv_pos = pl(p)
2314  pl(p) = pl(p) + 1
2315  end do
2316  end do
2317 
2318 
2319 
2320 end subroutine set_comm_get1_repro
2321 
2322 !#######################################################################
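!> @brief Builds the communication pattern for get_from_xgrid (xmap%get1): for each
!! exchange cell the owning side 1 PE is located, the (i,j,tile) triples to be sent
!! are de-duplicated and exchanged between PEs, and the resulting send/recv lists
!! and unpack indices are stored.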
2323 subroutine set_comm_get1(xmap)
2324  type (xmap_type), intent(inout) :: xmap
2325  type (grid_type), pointer, save :: grid1 =>null()
2326  integer, allocatable :: send_size(:)
2327  integer, allocatable :: recv_size(:)
2328  integer :: max_size, g, npes, l, ll, nset, m
2329  integer :: i1, j1, tile1, p, n, pos, buffer_pos, mypos
2330  integer :: nsend, nrecv, rbuf_size, sbuf_size, msgsize
2331  logical :: found
2332  real(r8_kind), allocatable :: recv_buf(:), send_buf(:)
2333  real(r8_kind), allocatable :: diarray(:), djarray(:)
2334  integer, allocatable :: iarray(:), jarray(:), tarray(:)
2335  integer, allocatable :: pos_x(:), pelist(:), size_pe(:), pe_side1(:)
2336  integer :: recv_buffer_pos(0:xmap%npes)
2337  integer :: send_buffer_pos(0:xmap%npes)
2338  type(comm_type), pointer, save :: comm => null()
2339  integer :: i, j
2340 
2341  max_size = 0
2342  do g=2,size(xmap%grids(:))
2343  max_size = max_size + xmap%grids(g)%size
2344  enddo
2345  comm => xmap%get1
2346  grid1 => xmap%grids(1)
2347  comm%nsend = 0
2348  comm%nrecv = 0
2349  npes = xmap%npes
2350 
2351  allocate(pelist(0:npes-1))
2352  call mpp_get_current_pelist(pelist)
2353  allocate(send_size(0:npes-1))
2354  allocate(recv_size(0:npes-1))
2355  allocate(size_pe(0:npes-1))
2356  allocate(pos_x(0:npes-1))
2357  size_pe = 0
2358  send_size = 0
2359  recv_size = 0
2360 
2361  if(max_size > 0) then
2362  allocate(pe_side1(max_size))
2363  if (associated(xmap%ind_get1)) deallocate(xmap%ind_get1) !< Check if allocated
2364  allocate(xmap%ind_get1(max_size))
2365 
2366  !--- find the recv_indx
2367  ll = 0
2368  do g=2,size(xmap%grids(:))
2369  do l=1,xmap%grids(g)%size
2370  i1 = xmap%grids(g)%x(l)%i1
2371  j1 = xmap%grids(g)%x(l)%j1
2372  tile1 = xmap%grids(g)%x(l)%tile
2373  do p=0,npes-1
2374  if(grid1%tile(p) == tile1) then
2375  if(in_box_nbr(i1, j1, grid1, p)) then
2376  size_pe(p) = size_pe(p) + 1
2377  exit
2378  endif
2379  endif
2380  enddo
2381  if( p == npes ) then
2382  call error_mesg('xgrid_mod', 'tile is not in grid1%tile(:)', fatal)
2383  endif
2384  ll = ll + 1
2385  pe_side1(ll) = p
2386  enddo
2387  enddo
2388 
2389  pos_x = 0
2390  do p = 1, npes-1
2391  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2392  enddo
2393 
2394  !---find the send size for get_1_from_xgrid
2395  allocate(iarray(max_size))
2396  allocate(jarray(max_size))
2397  allocate(tarray(max_size))
2398  if(monotonic_exchange) then
2399  allocate(diarray(max_size))
2400  allocate(djarray(max_size))
2401  endif
2402 
2403  ll = 0
2404 
2405  do g=2,size(xmap%grids(:))
2406  do l=1,xmap%grids(g)%size
2407  i1 = xmap%grids(g)%x(l)%i1
2408  j1 = xmap%grids(g)%x(l)%j1
2409  tile1 = xmap%grids(g)%x(l)%tile
2410  ll = ll + 1
2411  p = pe_side1(ll)
2412 
2413  found = .false.
2414  if(send_size(p) > 0) then
2415  if( i1 == iarray(pos_x(p)+send_size(p)) .AND. j1 == jarray(pos_x(p)+send_size(p)) &
2416  .AND. tile1 == tarray(pos_x(p)+send_size(p))) then
2417  found = .true.
2418  n = send_size(p)
2419  else
2420  !---may need to replace with a fast search algorithm
2421  do n = 1, send_size(p)
2422  if(i1 == iarray(pos_x(p)+n) .AND. j1 == jarray(pos_x(p)+n) .AND. tile1 == tarray(pos_x(p)+n)) then
2423  found = .true.
2424  exit
2425  endif
2426  enddo
2427  endif
2428  endif
2429  if( (.NOT. found) .OR. monotonic_exchange ) then
2430  send_size(p) = send_size(p)+1
2431  pos = pos_x(p)+send_size(p)
2432  iarray(pos) = i1
2433  jarray(pos) = j1
2434  tarray(pos) = tile1
2435  if(monotonic_exchange) then
2436  diarray(pos) = xmap%grids(g)%x(l)%di
2437  djarray(pos) = xmap%grids(g)%x(l)%dj
2438  endif
2439  n = send_size(p)
2440  endif
2441  xmap%ind_get1(ll) = n
2442  enddo
2443  enddo
2444 
2445  pos_x = 0
2446  do p = 1, npes-1
2447  pos_x(p) = pos_x(p-1) + send_size(p-1)
2448  enddo
2449 
2450  ll = 0
2451  do g=2,size(xmap%grids(:))
2452  do l=1,xmap%grids(g)%size
2453  ll = ll + 1
2454  p = pe_side1(ll)
2455  xmap%ind_get1(ll) = pos_x(p) + xmap%ind_get1(ll)
2456  enddo
2457  enddo
2458  endif
2459 
2460  mypos = mpp_pe()-mpp_root_pe()
2461 
2462  ! send/recv for get_1_from_xgrid_recv
2463  recv_size(:) = xmap%your2my1_size(:)
2464  nsend = count( send_size> 0)
2465  comm%nsend = nsend
2466  if(nsend>0) then
2467  if (associated(comm%send)) deallocate(comm%send) !< Check if allocated
2468  allocate(comm%send(nsend))
2469  comm%send(:)%count = 0
2470  endif
2471 
2472  pos = 0
2473  do p = 0, npes-1
2474  send_buffer_pos(p) = pos
2475  pos = pos + send_size(p)
2476  enddo
2477 
2478  pos = 0
2479  comm%sendsize = 0
2480  do n = 0, npes-1
2481  p = mod(mypos+n, npes)
2482  if(send_size(p)>0) then
2483  pos = pos + 1
2484  allocate(comm%send(pos)%i(send_size(p)))
2485  comm%send(pos)%buffer_pos = send_buffer_pos(p)
2486  comm%send(pos)%count = send_size(p)
2487  comm%send(pos)%pe = pelist(p)
2488  comm%sendsize = comm%sendsize + send_size(p)
2489  endif
2490  enddo
2491 
2492  nset = 3
2493  if(monotonic_exchange) nset = 5
2494  rbuf_size = sum(recv_size)*nset
2495  sbuf_size = sum(send_size)*nset
2496  if(rbuf_size>0) allocate(recv_buf(rbuf_size))
2497  if(sbuf_size>0) allocate(send_buf(sbuf_size))
2498 
2499  pos = 0
2500  do n = 0, npes-1
2501  p = mod(mypos+npes-n, npes)
2502  if(recv_size(p) ==0) cycle
2503  msgsize = recv_size(p)*nset
2504  call mpp_recv(recv_buf(pos+1), glen=msgsize, from_pe=pelist(p), block=.false., tag=comm_tag_4)
2505  pos = pos + msgsize
2506  enddo
2507 
2508  pos_x = 0
2509  do p = 1, npes-1
2510  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2511  enddo
2512  ll = 0
2513  pos = 0
2514  do n = 0, npes-1
2515  p = mod(mypos+n, npes)
2516  do l = 1, send_size(p)
2517  send_buf(pos+1) = real(iarray(pos_x(p)+l), r8_kind)
2518  send_buf(pos+2) = real(jarray(pos_x(p)+l), r8_kind)
2519  send_buf(pos+3) = real(tarray(pos_x(p)+l), r8_kind)
2520  if(monotonic_exchange) then
2521  send_buf(pos+4) = diarray(pos_x(p)+l)
2522  send_buf(pos+5) = djarray(pos_x(p)+l)
2523  endif
2524  pos = pos + nset
2525  enddo
2526  enddo
2527 
2528  pos = 0
2529  do n = 0, npes-1
2530  p = mod(mypos+n, npes)
2531  if(send_size(p) ==0) cycle
2532  msgsize = send_size(p)*nset
2533  call mpp_send(send_buf(pos+1), plen=msgsize, to_pe=pelist(p), tag=comm_tag_4 )
2534  pos = pos + msgsize
2535  enddo
2536 
2537  call mpp_sync_self(check=event_recv)
2538  nrecv = count(recv_size>0)
2539  comm%nrecv = nrecv
2540  comm%recvsize = 0
2541 
2542  if(nrecv >0) then
2543  if (associated(comm%recv)) deallocate(comm%recv) !< Check if allocated
2544  allocate(comm%recv(nrecv))
2545  comm%recv(:)%count = 0
2546  !--- set up the buffer pos for each receiving
2547  buffer_pos = 0
2548  do p = 0, npes-1
2549  recv_buffer_pos(p) = buffer_pos
2550  buffer_pos = buffer_pos + recv_size(p)
2551  enddo
2552  pos = 0
2553  buffer_pos = 0
2554  do m=0,npes-1
2555  p = mod(mypos+npes-m, npes)
2556  if(recv_size(p)>0) then
2557  pos = pos + 1
2558  allocate(comm%recv(pos)%i(recv_size(p)))
2559  allocate(comm%recv(pos)%j(recv_size(p)))
2560  allocate(comm%recv(pos)%tile(recv_size(p)))
2561  comm%recv(pos)%buffer_pos = recv_buffer_pos(p)
2562  comm%recv(pos)%pe = pelist(p)
2563  comm%recv(pos)%count = recv_size(p)
2564  comm%recvsize = comm%recvsize + recv_size(p)
2565  if(monotonic_exchange) then
2566  allocate(comm%recv(pos)%di(recv_size(p)))
2567  allocate(comm%recv(pos)%dj(recv_size(p)))
2568  endif
2569  if(grid1%is_ug) then
2570  do n = 1, recv_size(p)
2571  i = int(recv_buf(buffer_pos+1))
2572  j = int(recv_buf(buffer_pos+2))
2573  comm%recv(pos)%i(n) = grid1%l_index((j-1)*grid1%im+i)
2574  comm%recv(pos)%j(n) = 1
2575  comm%recv(pos)%tile(n) = int(recv_buf(buffer_pos+3))
2576  if(monotonic_exchange) then
2577  comm%recv(pos)%di(n) = recv_buf(buffer_pos+4)
2578  comm%recv(pos)%dj(n) = recv_buf(buffer_pos+5)
2579  endif
2580  buffer_pos = buffer_pos + nset
2581  enddo
2582  else
2583  do n = 1, recv_size(p)
2584  comm%recv(pos)%i(n) = int(recv_buf(buffer_pos+1) )- grid1%is_me + 1
2585  comm%recv(pos)%j(n) = int(recv_buf(buffer_pos+2) )- grid1%js_me + 1
2586  comm%recv(pos)%tile(n) = int(recv_buf(buffer_pos+3))
2587  if(monotonic_exchange) then
2588  comm%recv(pos)%di(n) = recv_buf(buffer_pos+4)
2589  comm%recv(pos)%dj(n) = recv_buf(buffer_pos+5)
2590  endif
2591  buffer_pos = buffer_pos + nset
2592  enddo
2593  endif
2594  endif
2595  enddo
2596  if (associated(comm%unpack_ind)) deallocate(comm%unpack_ind) !< Check if allocated
2597  allocate(comm%unpack_ind(nrecv))
2598  pos = 0
2599  do p = 0, npes-1
2600  if(recv_size(p)>0) then
2601  pos = pos + 1
2602  do m = 1, nrecv
2603  if(comm%recv(m)%pe == pelist(p)) then
2604  comm%unpack_ind(pos) = m
2605  exit
2606  endif
2607  enddo
2608  endif
2609  enddo
2610  endif
2611  call mpp_sync_self()
2612 
2613  if(allocated(send_buf) ) deallocate(send_buf)
2614  if(allocated(recv_buf) ) deallocate(recv_buf)
2615  if(allocated(pelist) ) deallocate(pelist)
2616  if(allocated(pos_x) ) deallocate(pos_x)
2618  if(allocated(iarray) ) deallocate(iarray)
2619  if(allocated(jarray) ) deallocate(jarray)
2620  if(allocated(tarray) ) deallocate(tarray)
2621  if(allocated(size_pe) ) deallocate(size_pe)
2622 
2623 end subroutine set_comm_get1
2624 
2625 !###############################################################################
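!> @brief Builds the communication pattern for put_to_xgrid (xmap%put1). Without an
!! atmosphere nest this is simply the reverse of xmap%get1; with a nest the pattern
!! is rebuilt using in_box (compute-domain points only) instead of in_box_nbr.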
2626 subroutine set_comm_put1(xmap)
2627  type (xmap_type), intent(inout) :: xmap
2628  type (grid_type), pointer, save :: grid1 =>null()
2629  integer, allocatable :: send_size(:)
2630  integer, allocatable :: recv_size(:)
2631  integer :: max_size, g, npes, l, ll, m, mypos
2632  integer :: i1, j1, tile1, p, n, pos, buffer_pos
2633  integer :: nsend, nrecv, msgsize, nset, rbuf_size, sbuf_size
2634  logical :: found
2635  real(r8_kind), allocatable :: recv_buf(:), send_buf(:)
2636  real(r8_kind), allocatable :: diarray(:), djarray(:)
2637  integer, allocatable :: iarray(:), jarray(:), tarray(:)
2638  integer, allocatable :: pos_x(:), pelist(:), size_pe(:), pe_put1(:)
2639  integer :: recv_buffer_pos(0:xmap%npes)
2640  type(comm_type), pointer, save :: comm => null()
2641 
2642 
2643  comm => xmap%put1
2644  if(nnest == 0 .OR. xmap%grids(1)%id .NE. 'ATM' ) then
2645  comm%nsend = xmap%get1%nrecv
2646  comm%nrecv = xmap%get1%nsend
2647  comm%sendsize = xmap%get1%recvsize
2648  comm%recvsize = xmap%get1%sendsize
2649  comm%send => xmap%get1%recv
2650  comm%recv => xmap%get1%send
2651  xmap%ind_put1 => xmap%ind_get1
2652  return
2653  endif
2654 
2655  max_size = 0
2656  do g=2,size(xmap%grids(:))
2657  max_size = max_size + xmap%grids(g)%size
2658  enddo
2659  grid1 => xmap%grids(1)
2660  comm%nsend = 0
2661  comm%nrecv = 0
2662  npes = xmap%npes
2663  allocate(pelist(0:npes-1))
2664  call mpp_get_current_pelist(pelist)
2665  allocate(send_size(0:npes-1))
2666  allocate(recv_size(0:npes-1))
2667  allocate(size_pe(0:npes-1))
2668  allocate(pos_x(0:npes-1))
2669  size_pe = 0
2670  send_size = 0
2671  recv_size = 0
2672 
2673  if(max_size > 0) then
2674  allocate(pe_put1(max_size))
2675  if (associated(xmap%ind_put1)) deallocate(xmap%ind_put1) !< Check if allocated
2676  allocate(xmap%ind_put1(max_size))
2677 
2678  !--- find the recv_indx
2679  ll = 0
2680  do g=2,size(xmap%grids(:))
2681  do l=1,xmap%grids(g)%size
2682  i1 = xmap%grids(g)%x(l)%i1
2683  j1 = xmap%grids(g)%x(l)%j1
2684  tile1 = xmap%grids(g)%x(l)%tile
2685  do p=0,npes-1
2686  if(grid1%tile(p) == tile1) then
2687  if(in_box(i1, j1, grid1%is(p), grid1%ie(p), grid1%js(p), grid1%je(p))) then
2688  size_pe(p) = size_pe(p) + 1
2689  exit
2690  endif
2691  endif
2692  enddo
2693  ll = ll + 1
2694  pe_put1(ll) = p
2695  enddo
2696  enddo
2697 
2698  pos_x = 0
2699  do p = 1, npes-1
2700  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2701  enddo
2702 
2703  !---find the send size for get_1_from_xgrid
2704  allocate(iarray(max_size))
2705  allocate(jarray(max_size))
2706  allocate(tarray(max_size))
2707  if(monotonic_exchange) then
2708  allocate(diarray(max_size))
2709  allocate(djarray(max_size))
2710  endif
2711 
2712  ll = 0
2713 
2714  do g=2,size(xmap%grids(:))
2715  do l=1,xmap%grids(g)%size
2716  i1 = xmap%grids(g)%x(l)%i1
2717  j1 = xmap%grids(g)%x(l)%j1
2718  tile1 = xmap%grids(g)%x(l)%tile
2719  ll = ll + 1
2720  p = pe_put1(ll)
2721 
2722  found = .false.
2723  if(send_size(p) > 0) then
2724  if( i1 == iarray(pos_x(p)+send_size(p)) .AND. j1 == jarray(pos_x(p)+send_size(p)) &
2725  .AND. tile1 == tarray(pos_x(p)+send_size(p))) then
2726  found = .true.
2727  n = send_size(p)
2728  else
2729  !---may need to replace with a fast search algorithm
2730  do n = 1, send_size(p)
2731  if(i1 == iarray(pos_x(p)+n) .AND. j1 == jarray(pos_x(p)+n) .AND. tile1 == tarray(pos_x(p)+n)) then
2732  found = .true.
2733  exit
2734  endif
2735  enddo
2736  endif
2737  endif
2738  if( (.NOT. found) .OR. monotonic_exchange ) then
2739  send_size(p) = send_size(p)+1
2740  pos = pos_x(p)+send_size(p)
2741  iarray(pos) = i1
2742  jarray(pos) = j1
2743  tarray(pos) = tile1
2744  if(monotonic_exchange) then
2745  diarray(pos) = xmap%grids(g)%x(l)%di
2746  djarray(pos) = xmap%grids(g)%x(l)%dj
2747  endif
2748  n = send_size(p)
2749  endif
2750  xmap%ind_put1(ll) = n
2751  enddo
2752  enddo
2753 
2754  pos_x = 0
2755  do p = 1, npes-1
2756  pos_x(p) = pos_x(p-1) + send_size(p-1)
2757  enddo
2758 
2759  ll = 0
2760  do g=2,size(xmap%grids(:))
2761  do l=1,xmap%grids(g)%size
2762  i1 = xmap%grids(g)%x(l)%i1
2763  j1 = xmap%grids(g)%x(l)%j1
2764  tile1 = xmap%grids(g)%x(l)%tile
2765  ll = ll + 1
2766  p = pe_put1(ll)
2767  xmap%ind_put1(ll) = pos_x(p) + xmap%ind_put1(ll)
2768  enddo
2769  enddo
2770  endif
2771 
2772 
2773  mypos = mpp_pe()-mpp_root_pe()
2774 
2775  if (do_alltoall) then
2776  call mpp_alltoall(send_size, 1, recv_size, 1)
2777  else
2778  do n = 0, npes-1
2779  p = mod(mypos+npes-n, npes)
2780  call mpp_recv(recv_size(p), glen=1, from_pe=pelist(p), block=.false., tag=comm_tag_5)
2781  enddo
2782 
2783  !--- send data
2784  do n = 0, npes-1
2785  p = mod(mypos+n, npes)
2786  call mpp_send(send_size(p), plen=1, to_pe=pelist(p), tag=comm_tag_5)
2787  enddo
2788 
2789  call mpp_sync_self(check=event_recv)
2790  call mpp_sync_self()
2791  endif
2792  !--- recv for put_1_to_xgrid
2793  nrecv = count( send_size> 0)
2794  comm%nrecv = nrecv
2795  if(nrecv>0) then
2796  if (associated(comm%recv)) deallocate(comm%recv) !< Check if allocated
2797  allocate(comm%recv(nrecv))
2798  comm%recv(:)%count = 0
2799  endif
2800  pos = 0
2801  comm%recvsize = 0
2802  do p = 0, npes-1
2803  recv_buffer_pos(p) = pos
2804  pos = pos + send_size(p)
2805  enddo
2806 
2807  pos = 0
2808  do n = 0, npes-1
2809  p = mod(mypos+npes-n, npes)
2810  if(send_size(p)>0) then
2811  pos = pos + 1
2812  allocate(comm%recv(pos)%i(send_size(p)))
2813  comm%recv(pos)%buffer_pos = recv_buffer_pos(p)
2814  comm%recv(pos)%count = send_size(p)
2815  comm%recv(pos)%pe = pelist(p)
2816  comm%recvsize = comm%recvsize + send_size(p)
2817  endif
2818  enddo
2819 
2820  nset = 3
2821  if(monotonic_exchange) nset = 5
2822  rbuf_size = sum(recv_size)*nset
2823  sbuf_size = sum(send_size)*nset
2824  if(rbuf_size>0) allocate(recv_buf(rbuf_size))
2825  if(sbuf_size>0) allocate(send_buf(sbuf_size))
2826 
2827  pos = 0
2828  do n = 0, npes-1
2829  p = mod(mypos+npes-n, npes)
2830  if(recv_size(p) ==0) cycle
2831  msgsize = recv_size(p)*nset
2832  call mpp_recv(recv_buf(pos+1), glen=msgsize, from_pe=pelist(p), block=.false., tag=comm_tag_6)
2833  pos = pos + msgsize
2834  enddo
2835 
2836  pos_x = 0
2837  do p = 1, npes-1
2838  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2839  enddo
2840  ll = 0
2841  pos = 0
2842  do n = 0, npes-1
2843  p = mod(mypos+n, npes)
2844  do l = 1, send_size(p)
2845  send_buf(pos+1) = real(iarray(pos_x(p)+l), r8_kind)
2846  send_buf(pos+2) = real(jarray(pos_x(p)+l), r8_kind)
2847  send_buf(pos+3) = real(tarray(pos_x(p)+l), r8_kind)
2848  if(monotonic_exchange) then
2849  send_buf(pos+4) = diarray(pos_x(p)+l)
2850  send_buf(pos+5) = djarray(pos_x(p)+l)
2851  endif
2852  pos = pos + nset
2853  enddo
2854  enddo
2855 
2856  pos = 0
2857  do n = 0, npes-1
2858  p = mod(mypos+n, npes)
2859  if(send_size(p) ==0) cycle
2860  msgsize = send_size(p)*nset
2861  call mpp_send(send_buf(pos+1), plen=msgsize, to_pe=pelist(p), tag=comm_tag_6 )
2862  pos = pos + msgsize
2863  enddo
2864 
2865  call mpp_sync_self(check=event_recv)
2866  nsend = count(recv_size>0)
2867  comm%nsend = nsend
2868  comm%sendsize = 0
2869 
2870  if(nsend >0) then
2871  if (associated(comm%send)) deallocate(comm%send) !< Check if allocated
2872  allocate(comm%send(nsend))
2873  comm%send(:)%count = 0
2874  pos = 0
2875  buffer_pos = 0
2876  do m=0,npes-1
2877  p = mod(mypos+npes-m, npes)
2878  if(recv_size(p)>0) then
2879  pos = pos + 1
2880  allocate(comm%send(pos)%i(recv_size(p)))
2881  allocate(comm%send(pos)%j(recv_size(p)))
2882  allocate(comm%send(pos)%tile(recv_size(p)))
2883  comm%send(pos)%pe = pelist(p)
2884  comm%send(pos)%count = recv_size(p)
2885  comm%sendsize = comm%sendsize + recv_size(p)
2886  if(monotonic_exchange) then
2887  allocate(comm%send(pos)%di(recv_size(p)))
2888  allocate(comm%send(pos)%dj(recv_size(p)))
2889  endif
2890  do n = 1, recv_size(p)
2891  comm%send(pos)%i(n) = int(recv_buf(buffer_pos+1) )- grid1%is_me + 1
2892  comm%send(pos)%j(n) = int(recv_buf(buffer_pos+2) )- grid1%js_me + 1
2893  comm%send(pos)%tile(n) = int(recv_buf(buffer_pos+3))
2894  if(monotonic_exchange) then
2895  comm%send(pos)%di(n) = recv_buf(buffer_pos+4)
2896  comm%send(pos)%dj(n) = recv_buf(buffer_pos+5)
2897  endif
2898  buffer_pos = buffer_pos + nset
2899  enddo
2900  endif
2901  enddo
2902  endif
2903 
2904  call mpp_sync_self()
2905  if(allocated(send_buf) ) deallocate(send_buf)
2906  if(allocated(recv_buf) ) deallocate(recv_buf)
2907  if(allocated(pelist) ) deallocate(pelist)
2908  if(allocated(pos_x) ) deallocate(pos_x)
2910  if(allocated(iarray) ) deallocate(iarray)
2911  if(allocated(jarray) ) deallocate(jarray)
2912  if(allocated(tarray) ) deallocate(tarray)
2913  if(allocated(size_pe) ) deallocate(size_pe)
2914 
2915 end subroutine set_comm_put1
2916 
2917 
2918 !###############################################################################
2919 !> @brief Regenerate/Update the xmap
2920 !! @details This subroutine regenerates the exchange grid by updating the xmap.
2921 !! In practice, xmap is the object specifying the exchange grid and holds all the relevant Xgrid information.
2922 !! Note in particular that regenerating the xmap/Xgrid accounts for dynamical changes in the subgrid parametrization
2923 !! of the side 2 components (land and ice-ocean).
2924 !! E.g., when side 2 is the ice, the xgrid is regenerated so that
2925 !! OCN grid cells that are partially or totally open water contribute to (are side 2 parents of) the Xgrid
2926 !! and, conversely,
2927 !! OCN grid cells that are totally ice covered do not contribute to (are kicked out of) the Xgrid.
2928 !! This makes xmap a dynamical object and a powerful tool for flux exchange calculations.
2929 !!
2930 !! Things to keep in mind about xmap/xgrid:
2931 !! xgrid contains two sides:
2932 !! side1: the side where 2d arrays are put to and gotten from the Xgrid.
2933 !! side2: the side where 3d arrays are put to and gotten from the Xgrid.
2934 !! This was designed to enable exchange along sub-grid-scale (3rd dimension) for component models that have
2935 !! subgrid scale parametrization (e.g., seaice categories and land tiles).
2936 !! @param[inout] xmap exchange grid
2937 !!
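!! A hedged usage sketch (the field name is illustrative): after the ice model
!! updates its category fractions, the coupler refreshes the exchange grid via
!! set_frac_area, which stores the new fractions and then calls regen:
!! @code
!! call set_frac_area(ice_frac_area, 'OCN', xmap)
!! @endcode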
2938 subroutine regen(xmap)
2939 type (xmap_type), intent(inout) :: xmap
2940 
2941  integer :: g, l, k, max_size
2942  integer :: i1, j1, i2, j2, p
2943  integer :: tile1
2944  integer :: ll, lll
2945  logical :: overlap_with_nest
2946  integer :: cnt(xmap%get1%nsend)
2947  integer :: i,j,n,xloc,pos,nsend,m,npes, mypos
2948  integer :: send_ind(0:xmap%npes-1)
2949 
2950  max_size = 0
2951 
2952  do g=2,size(xmap%grids(:))
2953  max_size = max_size + xmap%grids(g)%size * xmap%grids(g)%km
2954  end do
2955 
2956  if (max_size>size(xmap%x1(:))) then
2957  if (associated(xmap%x1)) deallocate(xmap%x1) !< Check x1 if allocated
2958  if (associated(xmap%x2)) deallocate(xmap%x2) !< Check x2 if allocated
2959  allocate( xmap%x1(1:max_size) )
2960  allocate( xmap%x2(1:max_size) )
2961  endif
2962 
2963 
2964  do g=2,size(xmap%grids(:))
2965  xmap%grids(g)%first = 1
2966  xmap%grids(g)%last = 0
2967  end do
2968 
2969  xmap%size = 0
2970  ll = 0
2971  do g=2,size(xmap%grids(:))
2972  xmap%grids(g)%first = xmap%size + 1;
2973 
2974  do l=1,xmap%grids(g)%size
2975  i1 = xmap%grids(g)%x(l)%i1
2976  j1 = xmap%grids(g)%x(l)%j1
2977  i2 = xmap%grids(g)%x(l)%i2
2978  j2 = xmap%grids(g)%x(l)%j2
2979  tile1 = xmap%grids(g)%x(l)%tile
2980  ll = ll + 1
2981  if(xmap%grids(g)%is_ug) then
2982  do k=1,xmap%grids(g)%km
2983  lll = xmap%grids(g)%l_index((j2-1)*xmap%grids(g)%im+i2)
2984  if (xmap%grids(g)%frac_area(lll,1,k)/=0.0_r8_kind) then
2985  xmap%size = xmap%size+1
2986  xmap%x1(xmap%size)%pos = xmap%ind_get1(ll)
2987  xmap%x1(xmap%size)%i = xmap%grids(g)%x(l)%i1
2988  xmap%x1(xmap%size)%j = xmap%grids(g)%x(l)%j1
2989  xmap%x1(xmap%size)%tile = xmap%grids(g)%x(l)%tile
2990  xmap%x1(xmap%size)%area = xmap%grids(g)%x(l)%area &
2991  *xmap%grids(g)%frac_area(lll,1,k)
2992  xmap%x1(xmap%size)%di = xmap%grids(g)%x(l)%di
2993  xmap%x1(xmap%size)%dj = xmap%grids(g)%x(l)%dj
2994  xmap%x2(xmap%size)%i = xmap%grids(g)%x(l)%i2
2995  xmap%x2(xmap%size)%j = xmap%grids(g)%x(l)%j2
2996  xmap%x2(xmap%size)%l = lll
2997  xmap%x2(xmap%size)%k = k
2998  xmap%x2(xmap%size)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
2999  endif
3000  enddo
3001  else
3002  do k=1,xmap%grids(g)%km
3003  if (xmap%grids(g)%frac_area(i2,j2,k)/=0.0_r8_kind) then
3004  xmap%size = xmap%size+1
3005  xmap%x1(xmap%size)%pos = xmap%ind_get1(ll)
3006  xmap%x1(xmap%size)%i = xmap%grids(g)%x(l)%i1
3007  xmap%x1(xmap%size)%j = xmap%grids(g)%x(l)%j1
3008  xmap%x1(xmap%size)%tile = xmap%grids(g)%x(l)%tile
3009  xmap%x1(xmap%size)%area = xmap%grids(g)%x(l)%area &
3010  *xmap%grids(g)%frac_area(i2,j2,k)
3011  xmap%x1(xmap%size)%di = xmap%grids(g)%x(l)%di
3012  xmap%x1(xmap%size)%dj = xmap%grids(g)%x(l)%dj
3013  xmap%x2(xmap%size)%i = xmap%grids(g)%x(l)%i2
3014  xmap%x2(xmap%size)%j = xmap%grids(g)%x(l)%j2
3015  xmap%x2(xmap%size)%k = k
3016  xmap%x2(xmap%size)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
3017  end if
3018  enddo
3019  end if
3020  end do
3021  xmap%grids(g)%last = xmap%size
3022  end do
3023 
3024 
3025  if (max_size>size(xmap%x1_put(:))) then
3026  if (associated(xmap%x1_put)) deallocate(xmap%x1_put) !< Check if allocated
3027  allocate( xmap%x1_put(1:max_size) )
3028  endif
3029  if (max_size>size(xmap%x2_get(:))) then
3030  if (associated(xmap%x2_get)) deallocate(xmap%x2_get) !< Check if allocated
3031  allocate( xmap%x2_get(1:max_size) )
3032  endif
3033 
3034  do g=2,size(xmap%grids(:))
3035  xmap%grids(g)%first_get = 1
3036  xmap%grids(g)%last_get = 0
3037  end do
3038 
3039  xmap%size_put1 = 0
3040  xmap%size_get2 = 0
3041  ll = 0
3042  do g=2,size(xmap%grids(:))
3043  xmap%grids(g)%first_get = xmap%size_get2 + 1;
3044 
3045  do l=1,xmap%grids(g)%size
3046  i1 = xmap%grids(g)%x(l)%i1
3047  j1 = xmap%grids(g)%x(l)%j1
3048  i2 = xmap%grids(g)%x(l)%i2
3049  j2 = xmap%grids(g)%x(l)%j2
3050  tile1 = xmap%grids(g)%x(l)%tile
3051  ll = ll + 1
3052  overlap_with_nest = .false.
3053  if( xmap%grids(1)%id == "ATM" .AND. tile1 == tile_parent .AND. &
3054  in_box(i1, j1, is_parent, ie_parent, js_parent, je_parent) ) overlap_with_nest = .true.
3055  if(xmap%grids(g)%is_ug) then
3056  do k=1,xmap%grids(g)%km
3057  lll = xmap%grids(g)%l_index((j2-1)*xmap%grids(g)%im+i2)
3058  if (xmap%grids(g)%frac_area(lll,1,k)/=0.0_r8_kind) then
3059  xmap%size_put1 = xmap%size_put1+1
3060  xmap%x1_put(xmap%size_put1)%pos = xmap%ind_put1(ll)
3061  xmap%x1_put(xmap%size_put1)%i = xmap%grids(g)%x(l)%i1
3062  xmap%x1_put(xmap%size_put1)%j = xmap%grids(g)%x(l)%j1
3063  xmap%x1_put(xmap%size_put1)%tile = xmap%grids(g)%x(l)%tile
3064  xmap%x1_put(xmap%size_put1)%area = xmap%grids(g)%x(l)%area &
3065  *xmap%grids(g)%frac_area(lll,1,k)
3066  xmap%x1_put(xmap%size_put1)%di = xmap%grids(g)%x(l)%di
3067  xmap%x1_put(xmap%size_put1)%dj = xmap%grids(g)%x(l)%dj
3068  if( .not. overlap_with_nest) then
3069  xmap%size_get2 = xmap%size_get2+1
3070  xmap%x2_get(xmap%size_get2)%i = xmap%grids(g)%x(l)%i2
3071  xmap%x2_get(xmap%size_get2)%j = xmap%grids(g)%x(l)%j2
3072  xmap%x2_get(xmap%size_get2)%l = lll
3073  xmap%x2_get(xmap%size_get2)%k = k
3074  xmap%x2_get(xmap%size_get2)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
3075  xmap%x2_get(xmap%size_get2)%pos = xmap%size_put1
3076  endif
3077  end if
3078  end do
3079  else
3080  do k=1,xmap%grids(g)%km
3081  if (xmap%grids(g)%frac_area(i2,j2,k)/=0.0_r8_kind) then
3082  xmap%size_put1 = xmap%size_put1+1
3083  xmap%x1_put(xmap%size_put1)%pos = xmap%ind_put1(ll)
3084  xmap%x1_put(xmap%size_put1)%i = xmap%grids(g)%x(l)%i1
3085  xmap%x1_put(xmap%size_put1)%j = xmap%grids(g)%x(l)%j1
3086  xmap%x1_put(xmap%size_put1)%tile = xmap%grids(g)%x(l)%tile
3087  xmap%x1_put(xmap%size_put1)%area = xmap%grids(g)%x(l)%area &
3088  *xmap%grids(g)%frac_area(i2,j2,k)
3089  xmap%x1_put(xmap%size_put1)%di = xmap%grids(g)%x(l)%di
3090  xmap%x1_put(xmap%size_put1)%dj = xmap%grids(g)%x(l)%dj
3091  if( .not. overlap_with_nest) then
3092  xmap%size_get2 = xmap%size_get2+1
3093  xmap%x2_get(xmap%size_get2)%i = xmap%grids(g)%x(l)%i2
3094  xmap%x2_get(xmap%size_get2)%j = xmap%grids(g)%x(l)%j2
3095  xmap%x2_get(xmap%size_get2)%k = k
3096  xmap%x2_get(xmap%size_get2)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
3097  xmap%x2_get(xmap%size_get2)%pos = xmap%size_put1
3098  endif
3099  end if
3100  end do
3101  endif
3102  end do
3103  xmap%grids(g)%last_get = xmap%size_get2
3104  end do
3105 
3106  !---set up information for get_1_from_xgrid_repro
3107  if (make_exchange_reproduce) then
3108  if (xmap%get1_repro%nsend > 0) then
3109  xloc = 0
3110  nsend = 0
3111  npes = xmap%npes
3112  mypos = mpp_pe() - mpp_root_pe()
3113  cnt(:) = 0
3114  do m=0,npes-1
3115  p = mod(mypos+m, npes)
3116  if( xmap%send_count_repro(p) > 0 ) then
3117  nsend = nsend + 1
3118  send_ind(p) = nsend
3119  endif
3120  enddo
3121  do g=2,size(xmap%grids(:))
3122  do l=1,xmap%grids(g)%size ! index into this side 2 grid's patterns
3123  p = xmap%grids(g)%x(l)%pe-xmap%root_pe
3124  n = send_ind(p)
3125  cnt(n) = cnt(n) + 1
3126  pos = cnt(n)
3127  xmap%get1_repro%send(n)%xLoc(pos) = xloc
3128  if( xmap%grids(g)%is_ug ) then
3129  i = xmap%grids(g)%x(l)%l2
3130  xloc = xloc + count(xmap%grids(g)%frac_area(i,1,:)/=0.0_r8_kind)
3131  else
3132  i = xmap%grids(g)%x(l)%i2
3133  j = xmap%grids(g)%x(l)%j2
3134  xloc = xloc + count(xmap%grids(g)%frac_area(i,j,:)/=0.0_r8_kind)
3135  endif
3136  enddo
3137  enddo
3138  endif
3139  endif
3140 
3141 end subroutine regen
3142 
3143 !#######################################################################
3144 !> @brief Changes sub-grid portion areas and/or number.
3145 !! @details (re)sets the "fraction area" of the side 2 component grid cell.
3146 !! "fraction area" is a dynamic property of the component model (seaice or land)
3147 !! that needs to be updated after each timestep of that component in order for the exchange mechanism to work properly.
3148 !! The input is a 3d array of numbers between 0 and 1. It signifies the
3149 !! fraction of the component grid cell area which has a model-specific property.
3150 !! This property is used for some sub-grid scale parametrization in the component model.
3151 !! E.g., for the seaice component model, the quantity of seaice in each grid cell (i,j)
3152 !! is distributed into N=grid%km partitions (ice categories), each parametrized with a weight (part_size); the weights add to 1.
3153 !! E.g., for the 6+2 thickness (h) categories used in GFDL seaice models,
3154 !! given hlim(1, ..., 8) = [1.0e-10, 0.1, 0.3, 0.7, 1.1, 1.5, 2.0, 2.5] (meters):
3155 !! Category n=1 : h <= hlim(1), essentially no ice
3156 !! Category n=2...7 : hlim(n-1) < h <= hlim(n)
3157 !! Category n=8 : hlim(n-1) < h , unlimited ice thickness
3158 !! E.g., if seaice in grid cell (i,j) is parameterized as
3159 !! 10% open water, 0% category 1, 40% category 2, 50% category 3, then we have
3160 !! f(i,j,1:km) = part_size(i,j,1:8) = [0.1, 0.0, 0.4, 0.5, 0.0, 0.0, 0.0, 0.0]
3161 !!
3162 !! @param[in] f real(r8_kind) 3D array
3163 !! @param[in] grid_id 3 character grid ID
3164 !! @param[inout] xmap exchange grid
3165 !!
3166 !! <br>Example usage:
3167 !! @code{.F90}
3168 !! call fms_xgrid_set_frac_area (Ice%part_size(isc:iec,jsc:jec,:), 'OCN', xmap_sfc)
3169 !! @endcode
3170 !!
3171 subroutine set_frac_area_sg(f, grid_id, xmap)
3172 real(r8_kind), dimension(:,:,:), intent(in) :: f !< fraction area to be set
3173 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3174 type (xmap_type), intent(inout) :: xmap !< exchange grid with given grid ID
3175 
3176  integer :: g
3177  type(grid_type), pointer, save :: grid =>null()
3178 
3179  if (grid_id==xmap%grids(1)%id) call error_mesg ('xgrid_mod', &
3180  'set_frac_area called on side 1 grid', fatal)
3181  do g=2,size(xmap%grids(:))
3182  grid => xmap%grids(g)
3183  if (grid_id==grid%id) then
3184  if (size(f,3)/=size(grid%frac_area,3)) then
3185  if (associated(grid%frac_area)) deallocate (grid%frac_area) !< Check if allocated
3186  grid%km = size(f,3);
3187  allocate( grid%frac_area(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, &
3188  grid%km) )
3189  end if
3190  grid%frac_area = f;
3191  call regen(xmap)
3192  return;
3193  end if
3194  end do
3195 
3196  call error_mesg ('xgrid_mod', 'set_frac_area: could not find grid id', fatal)
3197 
3198 end subroutine set_frac_area_sg
3199 
3200 !#######################################################################
3201 
3202 !> @brief Changes sub-grid portion areas and/or number.
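!! <br>Example usage (a sketch via the set_frac_area interface; the unstructured land
!! array name is illustrative):
!! @code{.F90}
!! call set_frac_area(Land%tile_size(ls:le,:), 'LND', xmap_sfc)
!! @endcode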
3203 subroutine set_frac_area_ug(f, grid_id, xmap)
3204 real(r8_kind), dimension(:,:), intent(in) :: f !< fractional area to set
3205 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3206 type (xmap_type), intent(inout) :: xmap !< exchange grid with given grid ID
3207 
3208  integer :: g
3209  type(grid_type), pointer, save :: grid =>null()
3210 
3211  if (grid_id==xmap%grids(1)%id) call error_mesg ('xgrid_mod', &
3212  'set_frac_area_ug called on side 1 grid', fatal)
3213  if (grid_id .NE. 'LND' ) call error_mesg ('xgrid_mod', &
3214  'set_frac_area_ug called for a grid_id other than LND', fatal)
3215  do g=2,size(xmap%grids(:))
3216  grid => xmap%grids(g)
3217  if (grid_id==grid%id) then
3218  if (size(f,2)/=size(grid%frac_area,3)) then
3219  if (associated(grid%frac_area)) deallocate (grid%frac_area) !< Check if allocated
3220  grid%km = size(f,2);
3221  allocate( grid%frac_area(grid%ls_me:grid%le_me, 1, grid%km) )
3222  end if
3223  grid%frac_area(:,1,:) = f(:,:);
3224  call regen(xmap)
3225  return;
3226  end if
3227  end do
3228 
3229  call error_mesg ('xgrid_mod', 'set_frac_area_ug: could not find grid id', fatal)
3230 
3231 end subroutine set_frac_area_ug
3232 
3233 !#######################################################################
3234 
3235 !> @brief Returns current size of exchange grid variables.
3236 !! @return size of given exchange grid's variable
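!! <br>Example usage (a sketch; x_flux is a hypothetical caller-side buffer):
!! @code{.F90}
!! real(r8_kind), allocatable :: x_flux(:)
!! allocate(x_flux(xgrid_count(xmap_sfc)))
!! @endcode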
3237 integer function xgrid_count(xmap)
3238 type (xmap_type), intent(inout) :: xmap
3239 
3240  xgrid_count = xmap%size
3241 end function xgrid_count
3242 
3243 !#######################################################################
3244 
3245 !> Scatters data to exchange grid
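!! <br>Example usage (a sketch via the put_to_xgrid interface; the optional complete flag
!! groups several side 1 fields into a single communication call, field names are illustrative):
!! @code{.F90}
!! call put_to_xgrid(t_atm, 'ATM', x_t, xmap_sfc, remap_method=remap, complete=.false.)
!! call put_to_xgrid(q_atm, 'ATM', x_q, xmap_sfc, remap_method=remap, complete=.true.)
!! @endcode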
3246 subroutine put_side1_to_xgrid(d, grid_id, x, xmap, remap_method, complete)
3247  real(r8_kind), dimension(:,:), intent(in) :: d !< data to send
3248  character(len=3), intent(in) :: grid_id !< 3 character grid ID
3249  real(r8_kind), dimension(:), intent(inout) :: x !< xgrid data
3250  type (xmap_type), intent(inout) :: xmap !< exchange grid
3251  integer, intent(in), optional :: remap_method !< exchange grid interpolation method can
3252  !! be FIRST_ORDER(=1) or SECOND_ORDER(=2)
3253  logical, intent(in), optional :: complete
3254 
3255  logical :: is_complete, set_mismatch
3256  integer :: g, method
3257  character(len=2) :: text
3258  integer, save :: isize=0
3259  integer, save :: jsize=0
3260  integer, save :: lsize=0
3261  integer, save :: xsize=0
3262  integer, save :: method_saved=0
3263  character(len=3), save :: grid_id_saved=""
3264  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
3265  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind
3266 
3267  if (grid_id==xmap%grids(1)%id) then
3268  method = first_order ! default
3269  if(present(remap_method)) method = remap_method
3270  is_complete = .true.
3271  if(present(complete)) is_complete=complete
3272  lsize = lsize + 1
3273  if( lsize > max_fields ) then
3274  write( text,'(i2)' ) max_fields
3275  call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group put_side1_to_xgrid', fatal)
3276  endif
3277  d_addrs(lsize) = loc(d)
3278  x_addrs(lsize) = loc(x)
3279 
3280  if(lsize == 1) then
3281  isize = size(d,1)
3282  jsize = size(d,2)
3283  xsize = size(x(:))
3284  method_saved = method
3285  grid_id_saved = grid_id
3286  else
3287  set_mismatch = .false.
3288  set_mismatch = set_mismatch .OR. (isize /= size(d,1))
3289  set_mismatch = set_mismatch .OR. (jsize /= size(d,2))
3290  set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
3291  set_mismatch = set_mismatch .OR. (method_saved /= method)
3292  set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
3293  if(set_mismatch)then
3294  write( text,'(i2)' ) lsize
3295  call error_mesg ('xgrid_mod', 'Incompatible field at count '//text//' for group put_side1_to_xgrid', fatal )
3296  endif
3297  endif
3298 
3299  if(is_complete) then
3300  !--- when exchange_monotonic is true and side 1 is ATM, always use monotonic
3301  !--- second order conservative interpolation.
3302  if(monotonic_exchange .AND. grid_id == 'ATM') then
3303  call put_1_to_xgrid_order_2(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3304  else if(method == first_order) then
3305  call put_1_to_xgrid_order_1(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3306  else
3307  if(grid_id .NE. 'ATM') call error_mesg ('xgrid_mod', &
3308  "second order put_to_xgrid should only be applied to 'ATM' model, "//&
3309  "contact developer", fatal)
3310  call put_1_to_xgrid_order_2(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3311  endif
3312 
3313  d_addrs = -9999_i8_kind
3314  x_addrs = -9999_i8_kind
3315  isize = 0
3316  jsize = 0
3317  xsize = 0
3318  lsize = 0
3319  method_saved = 0
3320  grid_id_saved = ""
3321  endif
3322  return
3323  end if
3324 
3325  do g=2,size(xmap%grids(:))
3326  if (grid_id==xmap%grids(g)%id) &
3327  call error_mesg ('xgrid_mod', &
3328  'put_to_xgrid expects a 3D side 2 grid', fatal)
3329  end do
3330 
3331  call error_mesg ('xgrid_mod', 'put_to_xgrid: could not find grid id', fatal)
3332 
3333 end subroutine put_side1_to_xgrid
3334 
3335 !#######################################################################
3336 
3337 !> Scatters data to exchange grid
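!! <br>Example usage (a sketch; side 2 data carry a third, sub-grid dimension, names are illustrative):
!! @code{.F90}
!! call put_to_xgrid(Ice%albedo(isc:iec,jsc:jec,:), 'OCN', x_alb, xmap_sfc)
!! @endcode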
3338 subroutine put_side2_to_xgrid(d, grid_id, x, xmap)
3339 real(r8_kind), dimension(:,:,:), intent(in) :: d !< data to send
3340 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3341 real(r8_kind), dimension(:), intent(inout) :: x !< xgrid data
3342 type (xmap_type), intent(inout) :: xmap !< exchange grid
3343 
3344  integer :: g
3345 
3346  if (grid_id==xmap%grids(1)%id) &
3347  call error_mesg ('xgrid_mod', &
3348  'put_side2_to_xgrid expects a 3D side 2 grid', fatal)
3349 
3350  do g=2,size(xmap%grids(:))
3351  if (grid_id==xmap%grids(g)%id) then
3352  call put_2_to_xgrid(d, xmap%grids(g), x, xmap)
3353  return;
3354  end if
3355  end do
3356 
3357  call error_mesg ('xgrid_mod', 'put_to_xgrid: could not find grid id', fatal)
3358 
3359 end subroutine put_side2_to_xgrid
3360 
3361 !#######################################################################
3362 
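!> Gathers data from the exchange grid onto a side 1 grid; the optional complete flag
!! groups several fields into a single communication call (mirroring put_side1_to_xgrid).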
3363 subroutine get_side1_from_xgrid(d, grid_id, x, xmap, complete)
3364  real(r8_kind), dimension(:,:), intent(out) :: d !< received xgrid data
3365  character(len=3), intent(in) :: grid_id !< 3 character grid ID
3366  real(r8_kind), dimension(:), intent(in) :: x !< xgrid data
3367  type (xmap_type), intent(inout) :: xmap !< exchange grid
3368  logical, intent(in), optional :: complete
3369 
3370  logical :: is_complete, set_mismatch
3371  integer :: g
3372  character(len=2) :: text
3373  integer, save :: isize=0
3374  integer, save :: jsize=0
3375  integer, save :: lsize=0
3376  integer, save :: xsize=0
3377  character(len=3), save :: grid_id_saved=""
3378  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
3379  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind
3380 
3381  d = 0.0_r8_kind
3382  if (grid_id==xmap%grids(1)%id) then
3383  is_complete = .true.
3384  if(present(complete)) is_complete=complete
3385  lsize = lsize + 1
3386  if( lsize > max_fields ) then
3387  write( text,'(i2)' ) max_fields
3388  call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group get_side1_from_xgrid', fatal)
3389  endif
3390  d_addrs(lsize) = loc(d)
3391  x_addrs(lsize) = loc(x)
3392 
3393  if(lsize == 1) then
3394  isize = size(d,1)
3395  jsize = size(d,2)
3396  xsize = size(x(:))
3397  grid_id_saved = grid_id
3398  else
3399  set_mismatch = .false.
3400  set_mismatch = set_mismatch .OR. (isize /= size(d,1))
3401  set_mismatch = set_mismatch .OR. (jsize /= size(d,2))
3402  set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
3403  set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
3404  if(set_mismatch)then
3405  write( text,'(i2)' ) lsize
3406  call error_mesg ('xgrid_mod', 'Incompatible field at count '//text// &
3407  & ' for group get_side1_from_xgrid', fatal )
3408  endif
3409  endif
3410 
3411  if(is_complete) then
3412  if (make_exchange_reproduce) then
3413  call get_1_from_xgrid_repro(d_addrs, x_addrs, xmap, xsize, lsize)
3414  else
3415  call get_1_from_xgrid(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3416  end if
3417  d_addrs(1:lsize) = -9999_i8_kind
3418  x_addrs(1:lsize) = -9999_i8_kind
3419  isize = 0
3420  jsize = 0
3421  xsize = 0
3422  lsize = 0
3423  grid_id_saved = ""
3424  endif
3425  return;
3426  end if
3427 
3428  do g=2,size(xmap%grids(:))
3429  if (grid_id==xmap%grids(g)%id) &
3430  call error_mesg ('xgrid_mod', &
3431  'get_from_xgrid expects a 3D side 2 grid', fatal)
3432  end do
3433 
3434  call error_mesg ('xgrid_mod', 'get_from_xgrid: could not find grid id', fatal)
3435 
3436 end subroutine get_side1_from_xgrid
3437 
3438 !#######################################################################
3439 
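!> Gathers data from the exchange grid onto a side 2 grid (3D, including the sub-grid dimension).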
3440 subroutine get_side2_from_xgrid(d, grid_id, x, xmap)
3441 real(r8_kind), dimension(:,:,:), intent(out) :: d !< received xgrid data
3442 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3443 real(r8_kind), dimension(:), intent(in) :: x !< xgrid data
3444 type (xmap_type), intent(in) :: xmap !< exchange grid
3445 
3446  integer :: g
3447 
3448  if (grid_id==xmap%grids(1)%id) &
3449  call error_mesg ('xgrid_mod', &
3450  'get_from_xgrid expects a 2D side 1 grid', fatal)
3451 
3452  do g=2,size(xmap%grids(:))
3453  if (grid_id==xmap%grids(g)%id) then
3454  call get_2_from_xgrid(d, xmap%grids(g), x, xmap)
3455  return;
3456  end if
3457  end do
3458 
3459  call error_mesg ('xgrid_mod', 'get_from_xgrid: could not find grid id', fatal)
3460 
3461 end subroutine get_side2_from_xgrid
3462 
3463 !#######################################################################
3464 
3465 !> @brief Returns logical associating exchange grid cells with given side two grid.
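!! <br>Example usage (a sketch): flag the exchange cells that overlap the land grid:
!! @code{.F90}
!! logical, allocatable :: lnd_some(:)
!! allocate(lnd_some(xgrid_count(xmap_sfc)))
!! call some(xmap_sfc, lnd_some, 'LND')
!! @endcode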
3466 subroutine some(xmap, some_arr, grid_id)
3467 type (xmap_type), intent(in) :: xmap
3468 character(len=3), optional, intent(in) :: grid_id
3469 logical, dimension(:), intent(out) :: some_arr !< logical associating exchange grid cells with given side 2 grid.
3470 
3471  integer :: g
3472 
3473  if (.not.present(grid_id)) then
3474 
3475  if(xmap%size > 0) then
3476  some_arr = .true.
3477  else
3478  some_arr = .false.
3479  end if
3480  return;
3481  end if
3482 
3483  if (grid_id==xmap%grids(1)%id) &
3484  call error_mesg ('xgrid_mod', 'some expects a side 2 grid id', fatal)
3485 
3486  do g=2,size(xmap%grids(:))
3487  if (grid_id==xmap%grids(g)%id) then
3488  some_arr = .false.
3489  some_arr(xmap%grids(g)%first:xmap%grids(g)%last) = .true.;
3490  return;
3491  end if
3492  end do
3493 
3494  call error_mesg ('xgrid_mod', 'some could not find grid id', fatal)
3495 
3496 end subroutine some
3497 
3498 !#######################################################################
3499 
3500 subroutine put_2_to_xgrid(d, grid, x, xmap)
3501 type (grid_type), intent(in) :: grid
3502 real(r8_kind), dimension(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, grid%km), intent(in) :: d
3503 real(r8_kind), dimension(:), intent(inout) :: x
3504 type (xmap_type), intent(in) :: xmap
3505 
3506  integer :: l
3507  call mpp_clock_begin(id_put_2_to_xgrid)
3508 
3509  do l=grid%first,grid%last
3510  x(l) = d(xmap%x2(l)%i,xmap%x2(l)%j,xmap%x2(l)%k)
3511  end do
3512 
3513  call mpp_clock_end(id_put_2_to_xgrid)
3514 end subroutine put_2_to_xgrid
3515 
3516 !#######################################################################
3517 
3518 subroutine get_2_from_xgrid(d, grid, x, xmap)
3519 type (grid_type), intent(in) :: grid
3520 real(r8_kind), dimension(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, grid%km), intent(out) :: d
3521 real(r8_kind), dimension(:), intent(in) :: x
3522 type (xmap_type), intent(in) :: xmap
3523 
3524  integer :: l, k
3525 
3526  call mpp_clock_begin(id_get_2_from_xgrid)
3527 
3528  d = 0.0_r8_kind
3529  do l=grid%first_get,grid%last_get
3530  d(xmap%x2_get(l)%i,xmap%x2_get(l)%j,xmap%x2_get(l)%k) = &
3531  d(xmap%x2_get(l)%i,xmap%x2_get(l)%j,xmap%x2_get(l)%k) + xmap%x2_get(l)%area*x(xmap%x2_get(l)%pos)
3532  end do
3533  !
3534  ! normalize with side 2 grid cell areas
3535  !
3536  do k=1,size(d,3)
3537  d(:,:,k) = d(:,:,k) * grid%area_inv
3538  end do
3539 
3540  call mpp_clock_end(id_get_2_from_xgrid)
3541 
3542 end subroutine get_2_from_xgrid
3543 
3544 !#######################################################################
3545 
3546 subroutine put_1_to_xgrid_order_1(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3547  integer(i8_kind), dimension(:), intent(in) :: d_addrs
3548  integer(i8_kind), dimension(:), intent(in) :: x_addrs
3549  type (xmap_type), intent(inout) :: xmap
3550  integer, intent(in) :: isize, jsize, xsize, lsize
3551 
3552  integer :: i, j, p, buffer_pos, msgsize
3553  integer :: from_pe, to_pe, pos, n, l, count
3554  integer :: ibegin, istart, iend, start_pos
3555  type (comm_type), pointer, save :: comm =>null()
3556  real(r8_kind) :: recv_buffer(xmap%put1%recvsize*lsize)
3557  real(r8_kind) :: send_buffer(xmap%put1%sendsize*lsize)
3558  real(r8_kind) :: unpack_buffer(xmap%put1%recvsize)
3559 
3560  real(r8_kind), dimension(isize, jsize) :: d
3561  real(r8_kind), dimension(xsize) :: x
3562  pointer(ptr_d, d)
3563  pointer(ptr_x, x)
3564 
3565  call mpp_clock_begin(id_put_1_to_xgrid_order_1)
3566 
3567  !--- pre-post receiving
3568  comm => xmap%put1
3569  do p = 1, comm%nrecv
3570  msgsize = comm%recv(p)%count*lsize
3571  from_pe = comm%recv(p)%pe
3572  buffer_pos = comm%recv(p)%buffer_pos*lsize
3573  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = from_pe, block=.false., tag=comm_tag_7)
3574  enddo
3575 
3576  !--- send the data
3577  buffer_pos = 0
3578  do p = 1, comm%nsend
3579  msgsize = comm%send(p)%count*lsize
3580  to_pe = comm%send(p)%pe
3581  pos = buffer_pos
3582  do l = 1, lsize
3583  ptr_d = d_addrs(l)
3584  do n = 1, comm%send(p)%count
3585  pos = pos + 1
3586  i = comm%send(p)%i(n)
3587  j = comm%send(p)%j(n)
3588  send_buffer(pos) = d(i,j)
3589  enddo
3590  enddo
3591  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_7 )
3592  buffer_pos = buffer_pos + msgsize
3593  enddo
3594 
3595  call mpp_sync_self(check=event_recv)
3596 
3597  !--- unpack the buffer
3598  if( lsize == 1) then
3599  ptr_x = x_addrs(1)
3600  do l=1,xmap%size_put1
3601  x(l) = recv_buffer(xmap%x1_put(l)%pos)
3602  end do
3603  else
3604  start_pos = 0
3605 !$OMP parallel do default(none) shared(lsize,x_addrs,comm,recv_buffer,xmap) &
3606 !$OMP private(ptr_x,count,ibegin,istart,iend,pos,unpack_buffer)
3607  do l = 1, lsize
3608  ptr_x = x_addrs(l)
3609  do p = 1, comm%nrecv
3610  count = comm%recv(p)%count
3611  ibegin = comm%recv(p)%buffer_pos*lsize + 1
3612  istart = ibegin + (l-1)*count
3613  iend = istart + count - 1
3614  pos = comm%recv(p)%buffer_pos
3615  do n = istart, iend
3616  pos = pos + 1
3617  unpack_buffer(pos) = recv_buffer(n)
3618  enddo
3619  enddo
3620  do i=1,xmap%size_put1
3621  x(i) = unpack_buffer(xmap%x1_put(i)%pos)
3622  end do
3623  enddo
3624  endif
3625 
3626  call mpp_sync_self()
3627 
3628  call mpp_clock_end(id_put_1_to_xgrid_order_1)
3629 
3630 end subroutine put_1_to_xgrid_order_1
3631 
3632 !#######################################################################
3633 
3634 
3635 subroutine put_1_to_xgrid_order_2(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3636  integer(i8_kind), dimension(:), intent(in) :: d_addrs
3637  integer(i8_kind), dimension(:), intent(in) :: x_addrs
3638  type (xmap_type), intent(inout) :: xmap
3639  integer, intent(in) :: isize, jsize, xsize, lsize
3640 
3641  ! NOTE: halo size is assumed to be 1 in setup_xmap
3642  real(r8_kind), dimension(0:isize+1, 0:jsize+1, lsize) :: tmp
3643  real(r8_kind), dimension(isize, jsize, lsize) :: tmpx, tmpy
3644  real(r8_kind), dimension(isize, jsize, lsize) :: d_bar_max, d_bar_min
3645  real(r8_kind), dimension(isize, jsize, lsize) :: d_max, d_min
3646  real(r8_kind) :: d_bar
3647  integer :: i, is, ie, j, js, je, ii, jj
3648  integer :: p, l, isd, jsd
3649  type (grid_type), pointer, save :: grid1 =>null()
3650  type (comm_type), pointer, save :: comm =>null()
3651  integer :: buffer_pos, msgsize, from_pe, to_pe, pos, n
3652  integer :: ibegin, count, istart, iend
3653  real(r8_kind) :: recv_buffer(xmap%put1%recvsize*lsize*3)
3654  real(r8_kind) :: send_buffer(xmap%put1%sendsize*lsize*3)
3655  real(r8_kind) :: unpack_buffer(xmap%put1%recvsize*3)
3656  logical :: on_west_edge, on_east_edge, on_south_edge, on_north_edge
3657  real(r8_kind), dimension(isize, jsize) :: d
3658  real(r8_kind), dimension(xsize) :: x
3659  pointer(ptr_d, d)
3660  pointer(ptr_x, x)
3661 
3662  call mpp_clock_begin(id_put_1_to_xgrid_order_2)
3663  grid1 => xmap%grids(1)
3664 
3665  is = grid1%is_me; ie = grid1%ie_me
3666  js = grid1%js_me; je = grid1%je_me
3667  isd = grid1%isd_me
3668  jsd = grid1%jsd_me
3669 
3670 !$OMP parallel do default(none) shared(lsize,tmp,d_addrs,isize,jsize) private(ptr_d)
3671  do l = 1, lsize
3672  tmp(:,:,l) = large_number
3673  ptr_d = d_addrs(l)
3674  tmp(1:isize,1:jsize,l) = d(:,:)
3675  enddo
3676 
3677  if(grid1%is_latlon) then
3678  call mpp_update_domains(tmp,grid1%domain_with_halo)
3679 !$OMP parallel do default(none) shared(lsize,tmp,grid1,is,ie,js,je,isd,jsd,tmpx,tmpy)
3680  do l = 1, lsize
3681  tmpy(:,:,l) = grad_merid_latlon(tmp(:,:,l), grid1%lat, is, ie, js, je, isd, jsd)
3682  tmpx(:,:,l) = grad_zonal_latlon(tmp(:,:,l), grid1%lon, grid1%lat, is, ie, js, je, isd, jsd)
3683  enddo
3684  else
3685  call mpp_update_domains(tmp,grid1%domain)
3686  on_west_edge = (is==1)
3687  on_east_edge = (ie==grid1%im)
3688  on_south_edge = (js==1)
3689  on_north_edge = (je==grid1%jm)
3690 !$OMP parallel do default(none) shared(lsize,tmp,grid1,tmpx,tmpy, &
3691 !$OMP on_west_edge,on_east_edge,on_south_edge,on_north_edge)
3692  do l = 1, lsize
3693  call gradient_cubic(tmp(:,:,l), grid1%box%dx, grid1%box%dy, grid1%box%area, &
3694  grid1%box%edge_w, grid1%box%edge_e, grid1%box%edge_s, &
3695  grid1%box%edge_n, grid1%box%en1, grid1%box%en2, &
3696  grid1%box%vlon, grid1%box%vlat, tmpx(:,:,l), tmpy(:,:,l), &
3697  on_west_edge, on_east_edge, on_south_edge, on_north_edge)
3698  enddo
3699  end if
3700 
3701  !--- pre-post receiving
3702  buffer_pos = 0
3703  comm => xmap%put1
3704  do p = 1, comm%nrecv
3705  msgsize = comm%recv(p)%count*lsize
3706  buffer_pos = comm%recv(p)%buffer_pos*lsize
3707  if(.NOT. monotonic_exchange) then
3708  msgsize = msgsize*3
3709  buffer_pos = buffer_pos*3
3710  endif
3711  from_pe = comm%recv(p)%pe
3712  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = from_pe, block=.false., tag=comm_tag_8)
3713  enddo
3714 
3715  !--- compute d_bar_max and d_bar_min.
3716  if(monotonic_exchange) then
3717 !$OMP parallel do default(none) shared(lsize,isize,jsize,d_bar_max,d_bar_min,d_max,d_min,tmp)
3718  do l = 1, lsize
3719  do j = 1, jsize
3720  do i = 1, isize
3721  d_bar_max(i,j,l) = -large_number
3722  d_bar_min(i,j,l) = large_number
3723  d_max(i,j,l) = -large_number
3724  d_min(i,j,l) = large_number
3725  do jj = j-1, j+1
3726  do ii = i-1, i+1
3727  if(tmp(ii,jj,l) .NE. large_number) then ! scan the 3x3 neighborhood (halo size 1)
3728  if(tmp(ii,jj,l) > d_bar_max(i,j,l)) d_bar_max(i,j,l) = tmp(ii,jj,l)
3729  if(tmp(ii,jj,l) < d_bar_min(i,j,l)) d_bar_min(i,j,l) = tmp(ii,jj,l)
3730  endif
3731  enddo
3732  enddo
3733  enddo
3734  enddo
3735  enddo
3736  endif
3737 
3738  !--- send the data
3739  buffer_pos = 0
3740  if(monotonic_exchange) then
3741  pos = 0
3742  do p = 1, comm%nsend
3743  msgsize = comm%send(p)%count*lsize
3744  to_pe = comm%send(p)%pe
3745  do l = 1, lsize
3746  ptr_d = d_addrs(l)
3747  do n = 1, comm%send(p)%count
3748  pos = pos + 1
3749  i = comm%send(p)%i(n)
3750  j = comm%send(p)%j(n)
3751  send_buffer(pos) = d(i,j) + tmpy(i,j,l)*comm%send(p)%dj(n) + tmpx(i,j,l)*comm%send(p)%di(n)
3752  if(send_buffer(pos) > d_max(i,j,l)) d_max(i,j,l) = send_buffer(pos)
3753  if(send_buffer(pos) < d_min(i,j,l)) d_min(i,j,l) = send_buffer(pos)
3754  enddo
3755  enddo
3756  enddo
3757 
3758  do p = 1, comm%nsend
3759  msgsize = comm%send(p)%count*lsize
3760  to_pe = comm%send(p)%pe
3761  pos = buffer_pos
3762  do l = 1, lsize
3763  ptr_d = d_addrs(l)
3764  do n = 1, comm%send(p)%count
3765  pos = pos + 1
3766  i = comm%send(p)%i(n)
3767  j = comm%send(p)%j(n)
3768  d_bar = d(i,j)
3769  if( d_max(i,j,l) > d_bar_max(i,j,l) ) then
3770  send_buffer(pos) = d_bar + ((send_buffer(pos)-d_bar)/(d_max(i,j,l)-d_bar)) * (d_bar_max(i,j,l)-d_bar)
3771  else if( d_min(i,j,l) < d_bar_min(i,j,l) ) then
3772  send_buffer(pos) = d_bar + ((send_buffer(pos)-d_bar)/(d_min(i,j,l)-d_bar)) * (d_bar_min(i,j,l)-d_bar)
3773  endif
3774  enddo
3775  enddo
3776  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_8 )
3777  buffer_pos = buffer_pos + msgsize
3778  enddo
3779  else
3780  do p = 1, comm%nsend
3781  msgsize = comm%send(p)%count*lsize*3
3782  to_pe = comm%send(p)%pe
3783  pos = buffer_pos
3784  do l = 1, lsize
3785  ptr_d = d_addrs(l)
3786  do n = 1, comm%send(p)%count
3787  pos = pos + 3
3788  i = comm%send(p)%i(n)
3789  j = comm%send(p)%j(n)
3790  send_buffer(pos-2) = d(i,j)
3791  send_buffer(pos-1) = tmpy(i,j,l)
3792  send_buffer(pos ) = tmpx(i,j,l)
3793  enddo
3794  enddo
3795  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_8 )
3796  buffer_pos = buffer_pos + msgsize
3797  enddo
3798  endif
3799 
3800  call mpp_sync_self(check=event_recv)
3801 
3802  !--- unpack the buffer
3803  if(monotonic_exchange) then
3804  if( lsize == 1) then
3805  ptr_x = x_addrs(1)
3806  do l=1,xmap%size_put1
3807  pos = xmap%x1_put(l)%pos
3808  x(l) = recv_buffer(pos)
3809  end do
3810  else
3811  do l = 1, lsize
3812  ptr_x = x_addrs(l)
3813  pos = 0
3814  do p = 1, comm%nrecv ! unpack walks the receive list
3815  count = comm%recv(p)%count
3816  ibegin = comm%recv(p)%buffer_pos*lsize + 1
3817  istart = ibegin + (l-1)*count
3818  iend = istart + count - 1
3819  pos = comm%recv(p)%buffer_pos
3820  do n = istart, iend
3821  pos = pos + 1
3822  unpack_buffer(pos) = recv_buffer(n)
3823  enddo
3824  enddo
3825  do i=1,xmap%size_put1
3826  pos = xmap%x1_put(i)%pos
3827  x(i) = unpack_buffer(pos)
3828  end do
3829  enddo
3830  endif
3831  else
3832  if( lsize == 1) then
3833  ptr_x = x_addrs(1)
3834 !$OMP parallel do default(none) shared(xmap,recv_buffer,ptr_x) private(pos)
3835  do l=1,xmap%size_put1
3836  pos = xmap%x1_put(l)%pos
3837  x(l) = recv_buffer(3*pos-2) + recv_buffer(3*pos-1)*xmap%x1_put(l)%dj + recv_buffer(3*pos)*xmap%x1_put(l)%di
3838  end do
3839  else
3840 !$OMP parallel do default(none) shared(lsize,comm,xmap,recv_buffer,x_addrs) &
3841 !$OMP private(ptr_x,pos,ibegin,istart,iend,count,unpack_buffer)
3842  do l = 1, lsize
3843  ptr_x = x_addrs(l)
3844  pos = 0
3845  ibegin = 1
3846  do p = 1, comm%nrecv
3847  count = comm%recv(p)%count*3
3848  ibegin = comm%recv(p)%buffer_pos*lsize*3 + 1
3849  istart = ibegin + (l-1)*count
3850  iend = istart + count - 1
3851  pos = comm%recv(p)%buffer_pos*3
3852  do n = istart, iend
3853  pos = pos + 1
3854  unpack_buffer(pos) = recv_buffer(n)
3855  enddo
3856  enddo
3857  do i=1,xmap%size_put1
3858  pos = xmap%x1_put(i)%pos
3859  x(i) = unpack_buffer(3*pos-2) + unpack_buffer(3*pos-1)*xmap%x1_put(i)%dj + unpack_buffer(3*pos) &
3860  & * xmap%x1_put(i)%di
3861  end do
3862  enddo
3863  endif
3864  endif
3865 
3866  call mpp_sync_self()
3867  call mpp_clock_end(id_put_1_to_xgrid_order_2)
3868 
3869 end subroutine put_1_to_xgrid_order_2
3870 
3871 !#######################################################################
3872 
3873 subroutine get_1_from_xgrid(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3874  integer(i8_kind), dimension(:), intent(in) :: d_addrs
3875  integer(i8_kind), dimension(:), intent(in) :: x_addrs
3876  type (xmap_type), intent(inout) :: xmap
3877  integer, intent(in) :: isize, jsize, xsize, lsize
3878 
3879  real(r8_kind), dimension(xmap%size, lsize), target :: dg
3880  integer :: i, j, l, p, n, m
3881  integer :: msgsize, buffer_pos, pos
3882  integer :: istart, iend, count
3883  real(r8_kind) , pointer, save :: dgp =>null()
3884  type(grid_type) , pointer, save :: grid1 =>null()
3885  type(comm_type) , pointer, save :: comm =>null()
3886  type(overlap_type), pointer, save :: send => null()
3887  type(overlap_type), pointer, save :: recv => null()
3888  real(r8_kind) :: recv_buffer(xmap%get1%recvsize*lsize*3)
3889  real(r8_kind) :: send_buffer(xmap%get1%sendsize*lsize*3)
3890  real(r8_kind) :: d(isize,jsize)
3891  real(r8_kind), dimension(xsize) :: x
3892  pointer(ptr_d, d)
3893  pointer(ptr_x, x)
3894 
3895  call mpp_clock_begin(id_get_1_from_xgrid)
3896 
3897  comm => xmap%get1
3898  grid1 => xmap%grids(1)
3899 
3900  do p = 1, comm%nrecv
3901  recv => comm%recv(p)
3902  msgsize = recv%count*lsize
3903  buffer_pos = recv%buffer_pos*lsize
3904  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_9)
3905  enddo
3906 
3907  dg = 0.0_r8_kind;
3908 !$OMP parallel do default(none) shared(lsize,xmap,dg,x_addrs) private(dgp,ptr_x)
3909  do l = 1, lsize
3910  ptr_x = x_addrs(l)
3911  do i=1,xmap%size
3912  dgp => dg(xmap%x1(i)%pos,l)
3913  dgp = dgp + xmap%x1(i)%area*x(i)
3914  enddo
3915  enddo
3916 
3917 
3918  !--- send the data
3919  buffer_pos = 0
3920  istart = 1
3921  do p = 1, comm%nsend
3922  send => comm%send(p)
3923  msgsize = send%count*lsize
3924  pos = buffer_pos
3925  istart = send%buffer_pos+1
3926  iend = istart + send%count - 1
3927  do l = 1, lsize
3928  do n = istart, iend
3929  pos = pos + 1
3930  send_buffer(pos) = dg(n,l)
3931  enddo
3932  enddo
3933  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = send%pe, tag=comm_tag_9 )
3934  buffer_pos = buffer_pos + msgsize
3935  istart = iend + 1
3936  enddo
3937 
3938  call mpp_sync_self(check=event_recv)
3939 
3940  !--- unpack the buffer
3941  do l = 1, lsize
3942  ptr_d = d_addrs(l)
3943  d = 0.0_r8_kind
3944  enddo
3945  !--- To bitwise reproduce old results, first copy the data onto its own pe.
3946 
3947  do p = 1, comm%nrecv
3948  recv => comm%recv(p)
3949  count = recv%count
3950  buffer_pos = recv%buffer_pos*lsize
3951  if( recv%pe == xmap%me ) then
3952 !$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs,count) &
3953 !$OMP private(ptr_d,i,j,pos)
3954  do l = 1, lsize
3955  pos = buffer_pos + (l-1)*count
3956  ptr_d = d_addrs(l)
3957  do n = 1,count
3958  i = recv%i(n)
3959  j = recv%j(n)
3960  pos = pos + 1
3961  d(i,j) = recv_buffer(pos)
3962  enddo
3963  enddo
3964  exit
3965  endif
3966  enddo
3967 
3968  pos = 0
3969  do m = 1, comm%nrecv
3970  p = comm%unpack_ind(m)
3971  recv => comm%recv(p)
3972  if( recv%pe == xmap%me ) then
3973  cycle
3974  endif
3975  buffer_pos = recv%buffer_pos*lsize
3976 !$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs) &
3977 !$OMP private(ptr_d,i,j,pos)
3978  do l = 1, lsize
3979  pos = buffer_pos + (l-1)*recv%count
3980  ptr_d = d_addrs(l)
3981  do n = 1, recv%count
3982  i = recv%i(n)
3983  j = recv%j(n)
3984  pos = pos + 1
3985  d(i,j) = d(i,j) + recv_buffer(pos)
3986  enddo
3987  enddo
3988  enddo
3989 
3990  !
3991  ! normalize with side 1 grid cell areas
3992  !
3993 !$OMP parallel do default(none) shared(lsize,d_addrs,grid1) private(ptr_d)
3994  do l = 1, lsize
3995  ptr_d = d_addrs(l)
3996  d = d * grid1%area_inv
3997  enddo
3998  call mpp_sync_self()
3999  call mpp_clock_end(id_get_1_from_xgrid)
4000 
4001 end subroutine get_1_from_xgrid
4002 
4003 !#######################################################################
4004 
4005 subroutine get_1_from_xgrid_repro(d_addrs, x_addrs, xmap, xsize, lsize)
4006  integer(i8_kind), dimension(:), intent(in) :: d_addrs
4007  integer(i8_kind), dimension(:), intent(in) :: x_addrs
4008  type (xmap_type), intent(inout) :: xmap
4009  integer, intent(in) :: xsize, lsize
4010 
4011  integer :: g, i, j, k, p, l, n, l2, l3
4012  integer :: msgsize, buffer_pos, pos
4013  type (grid_type), pointer, save :: grid =>null()
4014  type(comm_type), pointer, save :: comm => null()
4015  type(overlap_type), pointer, save :: send => null()
4016  type(overlap_type), pointer, save :: recv => null()
4017  integer, dimension(0:xmap%npes-1) :: pl, ml
4018  real(r8_kind) :: recv_buffer(xmap%recv_count_repro_tot*lsize)
4019  real(r8_kind) :: send_buffer(xmap%send_count_repro_tot*lsize)
4020  real(r8_kind) :: d(xmap%grids(1)%is_me:xmap%grids(1)%ie_me, &
4021  xmap%grids(1)%js_me:xmap%grids(1)%je_me)
4022  real(r8_kind), dimension(xsize) :: x
4023  pointer(ptr_d, d)
4024  pointer(ptr_x, x)
4025 
4026  call mpp_clock_begin(id_get_1_from_xgrid_repro)
4027  comm => xmap%get1_repro
4028  !--- pre-post receiving
4029  do p = 1, comm%nrecv
4030  recv => comm%recv(p)
4031  msgsize = recv%count*lsize
4032  buffer_pos = recv%buffer_pos*lsize
4033  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_10)
4034  n = recv%pe -xmap%root_pe
4035  pl(n) = buffer_pos
4036  ml(n) = recv%count
4037  enddo
4038 
4039  !pack the data
4040  send_buffer(:) = 0.0_r8_kind
4041 !$OMP parallel do default(none) shared(lsize,x_addrs,comm,xmap,send_buffer) &
4042 !$OMP private(ptr_x,i,j,g,l2,pos,send)
4043  do p = 1, comm%nsend
4044  pos = comm%send(p)%buffer_pos*lsize
4045  send => comm%send(p)
4046  do l = 1,lsize
4047  ptr_x = x_addrs(l)
4048  do n = 1, send%count
4049  i = send%i(n)
4050  j = send%j(n)
4051  g = send%g(n)
4052  l2 = send%xloc(n)
4053  pos = pos + 1
4054  do k =1, xmap%grids(g)%km
4055  if(xmap%grids(g)%frac_area(i,j,k)/=0.0_r8_kind) then
4056  l2 = l2+1
4057  send_buffer(pos) = send_buffer(pos) + xmap%x1(l2)%area *x(l2)
4058  endif
4059  enddo
4060  enddo
4061  enddo
4062  enddo
4063 
4064  do p =1, comm%nsend
4065  buffer_pos = comm%send(p)%buffer_pos*lsize
4066  msgsize = comm%send(p)%count*lsize
4067  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe=comm%send(p)%pe, tag=comm_tag_10)
4068  enddo
4069 
4070  do l = 1, lsize
4071  ptr_d = d_addrs(l)
4072  d = 0.0_r8_kind
4073  enddo
4074 
4075  call mpp_sync_self(check=event_recv)
4076 
4077 !$OMP parallel do default(none) shared(lsize,d_addrs,xmap,recv_buffer,pl,ml) &
4078 !$OMP private(ptr_d,grid,i,j,p,pos)
4079  do l = 1, lsize
4080  ptr_d = d_addrs(l)
4081  do g=2,size(xmap%grids(:))
4082  grid => xmap%grids(g)
4083  do l3=1,grid%size_repro ! index into side1 grid's patterns
4084  i = grid%x_repro(l3)%i1
4085  j = grid%x_repro(l3)%j1
4086  p = grid%x_repro(l3)%pe-xmap%root_pe
4087  pos = pl(p) + (l-1)*ml(p) + grid%x_repro(l3)%recv_pos
4088  d(i,j) = d(i,j) + recv_buffer(pos)
4089  end do
4090  end do
4091  ! normalize with side 1 grid cell areas
4092  d = d * xmap%grids(1)%area_inv
4093  enddo
4094 
4095  call mpp_sync_self()
4096 
4097  call mpp_clock_end(id_get_1_from_xgrid_repro)
4098 
4099 end subroutine get_1_from_xgrid_repro
4100 
4101 !#######################################################################
4102 
4103 !> @brief conservation_check - returns three numbers which are the global sum of a
4104 !! variable (1) on its home model grid, (2) after interpolation to the other
4105 !! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
4106 !! @return real(r8_kind) conservation_check_side1
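!! <br>Example usage (a sketch via the conservation_check interface; names are illustrative):
!! @code{.F90}
!! real(r8_kind) :: cc(3)
!! cc = conservation_check(t_atm, 'ATM', xmap_sfc, remap_method=remap)
!! if (mpp_pe() == mpp_root_pe()) write(*,*) 'home, xgrid, and round-trip sums: ', cc
!! @endcode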
4107 function conservation_check_side1(d, grid_id, xmap,remap_method) ! this one for 1->2->1
4108 real(r8_kind), dimension(:,:), intent(in) :: d !< model data to check
4109 character(len=3), intent(in) :: grid_id !< 3 character grid id
4110 type (xmap_type), intent(inout) :: xmap !< exchange grid
4111 real(r8_kind), dimension(3) :: conservation_check_side1
4112 integer, intent(in), optional :: remap_method
4113 
4114 
4115  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4116  real(r8_kind), dimension(size(d,1),size(d,2)) :: d1
4117  real(r8_kind), dimension(:,:,:), allocatable :: d2
4118  integer :: g
4119  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4120 
4121  grid1 => xmap%grids(1)
4122  conservation_check_side1 = 0.0_r8_kind
4123  if(grid1%tile_me .NE. tile_nest) conservation_check_side1(1) = sum(grid1%area*d)
4124 ! if(grid1%tile_me .NE. tile_parent .OR. grid1%id .NE. "ATM") &
4125 ! conservation_check_side1(1) = sum(grid1%area*d)
4126 
4127  call put_to_xgrid (d, grid1%id, x_over, xmap, remap_method) ! put from side 1
4128  do g=2,size(xmap%grids(:))
4129  grid2 => xmap%grids(g)
4130  if(grid2%on_this_pe) then
4131  allocate (d2(grid2%is_me:grid2%ie_me, grid2%js_me:grid2%je_me, grid2%km) )
4132  endif
4133  call get_from_xgrid (d2, grid2%id, x_over, xmap) ! get onto side 2's
4134  if(grid2%on_this_pe) then
4135  conservation_check_side1(2) = conservation_check_side1(2) + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4136  endif
4137  call put_to_xgrid (d2, grid2%id, x_back, xmap) ! put from side 2's
4138  if(allocated(d2))deallocate (d2)
4139  end do
4140  call get_from_xgrid(d1, grid1%id, x_back, xmap) ! get onto side 1
4141  if(grid1%tile_me .NE. tile_nest) conservation_check_side1(3) = sum(grid1%area*d1)
4142 ! if(grid1%tile_me .NE. tile_parent .OR. grid1%id .NE. "ATM") &
4143 ! conservation_check_side1(3) = sum(grid1%area*d1)
4144  call mpp_sum(conservation_check_side1,3)
4145 
4146 end function conservation_check_side1
4147 
4148 !#######################################################################
4149 
4150 !> @brief conservation_check - returns three numbers which are the global sum of a
4151 !! variable (1) on its home model grid, (2) after interpolation to the other
4152 !! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
4153 !! @return real(r8_kind) conservation_check_side2
4154 function conservation_check_side2(d, grid_id, xmap,remap_method) ! this one for 2->1->2
4155 real(r8_kind), dimension(:,:,:), intent(in) :: d !< model data to check
4156 character(len=3), intent(in) :: grid_id !< 3 character grid ID
4157 type (xmap_type), intent(inout) :: xmap !< exchange grid
4158 real(r8_kind), dimension(3) :: conservation_check_side2
4159 integer, intent(in), optional :: remap_method
4160 
4161 
4162  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4163  real(r8_kind), dimension(:,: ), allocatable :: d1
4164  real(r8_kind), dimension(:,:,:), allocatable :: d2
4165  integer :: g
4166  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4167 
4168  grid1 => xmap%grids(1)
4169  conservation_check_side2 = 0.0_r8_kind
4170  do g = 2,size(xmap%grids(:))
4171  grid2 => xmap%grids(g)
4172  if (grid_id==grid2%id) then
4173  if(grid2%on_this_pe) then
4174  conservation_check_side2(1) = sum( grid2%area * sum(grid2%frac_area*d,dim=3) )
4175  endif
4176  call put_to_xgrid(d, grid_id, x_over, xmap) ! put from this side 2
4177  else
4178  call put_to_xgrid(0.0_r8_kind * grid2%frac_area, grid2%id, x_over, xmap) ! zero rest
4179  end if
4180  end do
4181 
4182  allocate ( d1(size(grid1%area,1),size(grid1%area,2)) )
4183  call get_from_xgrid(d1, grid1%id, x_over, xmap) ! get onto side 1
4184  if(grid1%tile_me .NE. tile_nest)conservation_check_side2(2) = sum(grid1%area*d1)
4185  call put_to_xgrid(d1, grid1%id, x_back, xmap,remap_method) ! put from side 1
4186  deallocate ( d1 )
4187 
4188  conservation_check_side2(3) = 0.0_r8_kind;
4189  do g = 2,size(xmap%grids(:))
4190  grid2 => xmap%grids(g)
4191  if(grid2%on_this_pe) then
4192  allocate ( d2( size(grid2%frac_area, 1), size(grid2%frac_area, 2), &
4193  size(grid2%frac_area, 3) ) )
4194  endif
4195  call get_from_xgrid(d2, grid2%id, x_back, xmap) ! get onto side 2's
4196  conservation_check_side2(3) = conservation_check_side2(3) + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4197  if(allocated(d2) )deallocate ( d2 )
4198  end do
4199  call mpp_sum(conservation_check_side2, 3)
4200 
4201 end function conservation_check_side2
4203 
4204 !#######################################################################
4205 
4206 !> @brief conservation_check_ug - returns three numbers which are the global sum of a
4207 !! variable (1) on its home model grid, (2) after interpolation to the other
4208 !! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
4209 !! @return real(r8_kind) conservation_check_ug_side1
4210 function conservation_check_ug_side1(d, grid_id, xmap,remap_method) ! this one for 1->2->1
4211 real(r8_kind), dimension(:,:), intent(in) :: d !< model data to check
4212 character(len=3), intent(in) :: grid_id !< 3 character grid ID
4213 type (xmap_type), intent(inout) :: xmap !< exchange grid
4214 real(r8_kind), dimension(3) :: conservation_check_ug_side1
4215 integer, intent(in), optional :: remap_method
4216 
4217  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4218  real(r8_kind), dimension(size(d,1),size(d,2)) :: d1
4219  real(r8_kind), dimension(:,:,:), allocatable :: d2
4220  real(r8_kind), dimension(: ), allocatable :: d_ug
4221  real(r8_kind), dimension(:,:), allocatable :: d2_ug
4222  integer :: g
4223  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4224 
4225  grid1 => xmap%grids(1)
4226  conservation_check_ug_side1 = 0.0_r8_kind
4227 
4228 
4229  if(grid1%is_ug) then
4230  allocate(d_ug(grid1%ls_me:grid1%le_me))
4231  call mpp_pass_sg_to_ug(grid1%ug_domain, d, d_ug)
4232  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(1) = sum(grid1%area(:,1)*d_ug)
4233  call put_to_xgrid_ug (d_ug, grid1%id, x_over, xmap) ! put from side 1
4234  else
4235  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(1) = sum(grid1%area*d)
4236  call put_to_xgrid (d, grid1%id, x_over, xmap, remap_method) ! put from side 1
4237  endif
4238  do g=2,size(xmap%grids(:))
4239  grid2 => xmap%grids(g)
4240  if(grid2%is_ug) then
4241  if(grid2%on_this_pe) then
4242  allocate (d2_ug(grid2%ls_me:grid2%le_me, grid2%km) )
4243  d2_ug = 0.0_r8_kind
4244  endif
4245  call get_from_xgrid_ug (d2_ug, grid2%id, x_over, xmap) ! get onto side 2's
4246  if(grid2%on_this_pe) then
4247  conservation_check_ug_side1(2) = conservation_check_ug_side1(2) + &
4248  sum( grid2%area(:,1) * sum(grid2%frac_area(:,1,:)*d2_ug,dim=2) )
4249  endif
4250  call put_to_xgrid_ug (d2_ug, grid2%id, x_back, xmap) ! put from side 2's
4251  if(allocated(d2_ug))deallocate (d2_ug)
4252  else
4253  if(grid2%on_this_pe) then
4254  allocate (d2(grid2%is_me:grid2%ie_me, grid2%js_me:grid2%je_me, grid2%km) )
4255  endif
4256  call get_from_xgrid (d2, grid2%id, x_over, xmap) ! get onto side 2's
4257  if(grid2%on_this_pe) then
4258  conservation_check_ug_side1(2) = conservation_check_ug_side1(2) &
4259  & + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4260  endif
4261  call put_to_xgrid (d2, grid2%id, x_back, xmap) ! put from side 2's
4262  if(allocated(d2))deallocate (d2)
4263  endif
4264  end do
4265  if(grid1%is_ug) then
4266 ! call get_from_xgrid_ug(d_ug, grid1%id, x_back, xmap) ! get onto side 1
4267  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(3) = sum(grid1%area(:,1)*d_ug)
4268  else
4269  call get_from_xgrid(d1, grid1%id, x_back, xmap) ! get onto side 1
4270  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(3) = sum(grid1%area*d1)
4271  endif
4272  if(allocated(d_ug)) deallocate(d_ug)
4273  call mpp_sum(conservation_check_ug_side1,3)
4274 
4275 end function conservation_check_ug_side1
4276 
4277 !#######################################################################
4278 
4279 !> @brief conservation_check_ug - returns three numbers which are the global sum of a
4280 !! variable (1) on its home model grid, (2) after interpolation to the other
4281 !! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
4282 !! @return real(r8_kind) conservation_check_ug_side2
4283 function conservation_check_ug_side2(d, grid_id, xmap,remap_method) ! this one for 2->1->2
4284 real(r8_kind), dimension(:,:,:), intent(in) :: d !< model data to check
4285 character(len=3), intent(in) :: grid_id !< 3 character grid ID
4286 type (xmap_type), intent(inout) :: xmap !< exchange grid
4287 real(r8_kind), dimension(3) :: conservation_check_ug_side2
4288 integer, intent(in), optional :: remap_method
4289 
4290 
4291  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4292  real(r8_kind), dimension(:,: ), allocatable :: d1, d_ug
4293  real(r8_kind), dimension(:,:,:), allocatable :: d2
4294  integer :: g
4295  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4296 
4297  grid1 => xmap%grids(1)
4298  conservation_check_ug_side2 = 0.0_r8_kind
4299  do g = 2,size(xmap%grids(:))
4300  grid2 => xmap%grids(g)
4301  if (grid_id==grid2%id) then
4302  if(grid2%on_this_pe) then
4303  if(grid2%is_ug) then
4304  allocate(d_ug(grid2%ls_me:grid2%le_me,grid2%km))
4305  call mpp_pass_sg_to_ug(grid2%ug_domain, d, d_ug)
4306  conservation_check_ug_side2(1) = sum( grid2%area(:,1) * sum(grid2%frac_area(:,1,:)*d_ug,dim=2) )
4307  else
4308  conservation_check_ug_side2(1) = sum( grid2%area(:,:) * sum(grid2%frac_area(:,:,:)*d,dim=3) )
4309  endif
4310  endif
4311  if(grid2%is_ug) then
4312  call put_to_xgrid_ug(d_ug, grid_id, x_over, xmap) ! put from this side 2
4313  else
4314  call put_to_xgrid(d, grid_id, x_over, xmap) ! put from this side 2
4315  endif
4316  if(allocated(d_ug)) deallocate(d_ug)
4317  else
4318  if(grid2%is_ug) then
4319  call put_to_xgrid_ug(0.0_r8_kind * grid2%frac_area(:,1,:), grid2%id, x_over, xmap) ! zero rest
4320  else
4321  call put_to_xgrid(0.0_r8_kind * grid2%frac_area, grid2%id, x_over, xmap) ! zero rest
4322  endif
4323  end if
4324  end do
4325 
4326  allocate ( d1(size(grid1%area,1),size(grid1%area,2)) )
4327  if(grid1%is_ug) then
4328  call get_from_xgrid_ug(d1(:,1), grid1%id, x_over, xmap) ! get onto side 1
4329  else
4330  call get_from_xgrid(d1, grid1%id, x_over, xmap) ! get onto side 1
4331  endif
4332  if(grid1%tile_me .NE. tile_nest)conservation_check_ug_side2(2) = sum(grid1%area*d1)
4333  if(grid1%is_ug) then
4334  call put_to_xgrid_ug(d1(:,1), grid1%id, x_back, xmap) ! put from side 1
4335  else
4336  call put_to_xgrid(d1, grid1%id, x_back, xmap,remap_method) ! put from side 1
4337  endif
4338  deallocate ( d1 )
4339 
4340  conservation_check_ug_side2(3) = 0.0_r8_kind;
4341  do g = 2,size(xmap%grids(:))
4342  grid2 => xmap%grids(g)
4343  if(grid2%on_this_pe) then
4344  allocate ( d2( size(grid2%frac_area, 1), size(grid2%frac_area, 2), &
4345  size(grid2%frac_area, 3) ) )
4346  endif
4347  if(grid2%is_ug) then
4348  call get_from_xgrid_ug(d2(:,1,:), grid2%id, x_back, xmap) ! get onto side 2's
4349  else
4350  call get_from_xgrid(d2, grid2%id, x_back, xmap) ! get onto side 2's
4351  endif
4352  conservation_check_ug_side2(3) = conservation_check_ug_side2(3) + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4353  if(allocated(d2) )deallocate ( d2 )
4354  end do
4355  call mpp_sum(conservation_check_ug_side2, 3)
4356 
4357 end function conservation_check_ug_side2
4359 
4360 
4361 !******************************************************************************
4362 !> @brief This routine is used to get the grid area of component model with id.
4363 subroutine get_xmap_grid_area(id, xmap, area)
4364  character(len=3), intent(in) :: id
4365  type (xmap_type), intent(inout) :: xmap
4366  real(r8_kind), dimension(:,:), intent(out) :: area
4367  integer :: g
4368  logical :: found
4369 
4370  found = .false.
4371  do g = 1, size(xmap%grids(:))
4372  if (id==xmap%grids(g)%id ) then
4373  if(size(area,1) .NE. size(xmap%grids(g)%area,1) .OR. size(area,2) .NE. size(xmap%grids(g)%area,2) ) &
4374  call error_mesg("xgrid_mod", "size mismatch between area and xmap%grids(g)%area", fatal)
4375  area = xmap%grids(g)%area
4376  found = .true.
4377  exit
4378  end if
4379  end do
4380 
4381  if(.not. found) call error_mesg("xgrid_mod", id//" is not found in xmap%grids id", fatal)
4382 
4383 end subroutine get_xmap_grid_area
4384 
4385 !#######################################################################
4386 
4387 !> @brief This function is used to calculate the gradient along zonal direction.
4388 !! May need to set up a limit for the gradient. The grid is assumed
4389 !! to be a regular lat-lon grid.
4390 !! @return real(r8_kind) grad_zonal_latlon
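!! <br>The scheme is a centered difference on the sphere (one-sided at the zonal ends), sketched as:
!! @code{.F90}
!! ! grad_zonal_latlon(i,j) = (d(i+1,j) - d(i-1,j)) / ((lon(i+1) - lon(i-1)) * cos(lat(j)))
!! @endcode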
4391 function grad_zonal_latlon(d, lon, lat, is, ie, js, je, isd, jsd)
4392 
4393  integer, intent(in) :: isd, jsd
4394  real(r8_kind), dimension(isd:,jsd:), intent(in) :: d
4395  real(r8_kind), dimension(:), intent(in) :: lon
4396  real(r8_kind), dimension(:), intent(in) :: lat
4397  integer, intent(in) :: is, ie, js, je
4398  real(r8_kind), dimension(is:ie,js:je) :: grad_zonal_latlon
4399  real(r8_kind) :: dx, costheta
4400  integer :: i, j, ip1, im1
4401 
4402  ! calculate the gradient of the data on each grid
4403  do i = is, ie
4404  if(i == 1) then
4405  ip1 = i+1; im1 = i
4406  else if(i==size(lon(:)) ) then
4407  ip1 = i; im1 = i-1
4408  else
4409  ip1 = i+1; im1 = i-1
4410  endif
4411  dx = lon(ip1) - lon(im1)
4412  if(abs(dx).lt.eps ) call error_mesg('xgrids_mod(grad_zonal_latlon)', 'Improper grid size in longitude', fatal)
4413  if(dx .gt. pi) dx = dx - 2.0_r8_kind* pi
4414  if(dx .lt. -pi) dx = dx + 2.0_r8_kind* pi
4415  do j = js, je
4416  costheta = cos(lat(j))
4417  if(abs(costheta) .lt. eps) call error_mesg('xgrids_mod(grad_zonal_latlon)', 'Improper latitude grid', fatal)
4418  grad_zonal_latlon(i,j) = (d(ip1,j)-d(im1,j))/(dx*costheta)
4419  enddo
4420  enddo
4421 
4422  return
4423 
4424 end function grad_zonal_latlon
4425 
4426 !#######################################################################
4427 
4428 !> @brief This function is used to calculate the gradient along the meridional direction.
4429 !! May need to set up a limit for the gradient. A regular lat-lon grid is assumed.
4430 !! @return grad_merid_latlon
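!! <br>The scheme is a centered difference (one-sided at the first and last rows), sketched as:
!! @code{.F90}
!! ! grad_merid_latlon(i,j) = (d(i,j+1) - d(i,j-1)) / (lat(j+1) - lat(j-1))
!! @endcode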
4431 function grad_merid_latlon(d, lat, is, ie, js, je, isd, jsd)
4432  integer, intent(in) :: isd, jsd
4433  real(r8_kind), dimension(isd:,jsd:), intent(in) :: d
4434  real(r8_kind), dimension(:), intent(in) :: lat
4435  integer, intent(in) :: is, ie, js, je
4436  real(r8_kind), dimension(is:ie,js:je) :: grad_merid_latlon
4437  real(r8_kind) :: dy
4438  integer :: i, j, jp1, jm1
4439 
4440  ! calculate the gradient of the data on each grid
4441  do j = js, je
4442  if(j == 1) then
4443  jp1 = j+1; jm1 = j
4444  else if(j == size(lat(:)) ) then
4445  jp1 = j; jm1 = j-1
4446  else
4447  jp1 = j+1; jm1 = j-1
4448  endif
4449  dy = lat(jp1) - lat(jm1)
4450  if(abs(dy).lt.eps) call error_mesg('xgrids_mod(grad_merid_latlon)', 'Improper grid size in latitude', fatal)
4451 
4452  do i = is, ie
4453  grad_merid_latlon(i,j) = (d(i,jp1) - d(i,jm1))/dy
4454  enddo
4455  enddo
4456 
4457  return
4458 end function grad_merid_latlon
4459 
4460 !#######################################################################
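!> @brief Returns the compute-domain index range and the number of partitions (km) of the
!! grid at position grid_index in xmap%grids.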
4461 subroutine get_index_range(xmap, grid_index, is, ie, js, je, km)
4462 
4463  type(xmap_type), intent(in) :: xmap
4464  integer, intent(in) :: grid_index
4465  integer, intent(out) :: is, ie, js, je, km
4466 
4467  is = xmap % grids(grid_index) % is_me
4468  ie = xmap % grids(grid_index) % ie_me
4469  js = xmap % grids(grid_index) % js_me
4470  je = xmap % grids(grid_index) % je_me
4471  km = xmap % grids(grid_index) % km
4472 
4473 end subroutine get_index_range
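
! Typical use (a hypothetical sketch): fetch the local extents of grid g
! before allocating a per-grid work array.
!
!   call get_index_range(xmap, g, is, ie, js, je, km)
!   allocate(work(is:ie, js:je, km))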
!#######################################################################

!> @brief This version takes rank 3 data; it can be used to compute the flux on any grid
!! but the first, which is typically on the atmosphere side.
!! Note that "from" and "to" are optional; the stocks will be subtracted, resp. added,
!! only if they are present.
subroutine stock_move_3d(from, to, grid_index, stock_data3d, xmap, &
     & delta_t, from_side, to_side, radius, verbose, ier)

  use mpp_mod,         only : mpp_sum
  use mpp_domains_mod, only : domain2d, mpp_redistribute, mpp_get_compute_domain

  type(stock_type), intent(inout), optional :: from, to
  integer, intent(in)           :: grid_index !< grid index
  real(r8_kind), intent(in)     :: stock_data3d(:,:,:) !< data array is 3d
  type(xmap_type), intent(in)   :: xmap
  real(r8_kind), intent(in)     :: delta_t
  integer, intent(in)           :: from_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  integer, intent(in)           :: to_side   !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  real(r8_kind), intent(in)     :: radius    !< earth radius
  character(len=*), intent(in), optional :: verbose
  integer, intent(out)          :: ier

  real(r8_kind) :: from_dq, to_dq

  ier = 0
  if(grid_index == 1) then
    ! data has rank 3 so grid index must be > 1
    ier = 1
    return
  endif

  if(.not. associated(xmap%grids) ) then
    ier = 2
    return
  endif

  from_dq = delta_t * 4.0_r8_kind * pi * radius**2 * sum( sum(xmap%grids(grid_index)%area * &
       & sum(xmap%grids(grid_index)%frac_area * stock_data3d, dim=3), dim=1))
  to_dq = from_dq

  ! update only if argument is present.
  if(present(to  )) to   % dq(  to_side) = to   % dq(  to_side) + to_dq
  if(present(from)) from % dq(from_side) = from % dq(from_side) - from_dq

  if(present(verbose) .and. debug_stocks) then
    call mpp_sum(from_dq)
    call mpp_sum(to_dq)
    from_dq = from_dq/(4.0_r8_kind*pi*radius**2)
    to_dq   = to_dq  /(4.0_r8_kind*pi*radius**2)
    if(mpp_pe()==mpp_root_pe()) then
      write(stocks_file,'(a,es19.12,a)') verbose, from_dq,' [*/m^2]'
    endif
  endif

end subroutine stock_move_3d
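
! A minimal usage sketch for stock_move_3d (lnd_stock, ice_stock, data3d, g
! and dt are hypothetical); the flux is subtracted from "from" and added to
! "to" only because both optional stocks are passed here:
!
!   call stock_move_3d(from=lnd_stock, to=ice_stock, grid_index=g, &
!        & stock_data3d=data3d, xmap=xmap, delta_t=dt, &
!        & from_side=ISTOCK_BOTTOM, to_side=ISTOCK_TOP, &
!        & radius=radius, verbose='stock flux ', ier=ier)
!   if(ier /= 0) call error_mesg('xgrid_mod', 'stock_move_3d failed', fatal)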

!...................................................................
!> @brief This version takes rank 2 data; it can be used to compute the flux on the
!! atmosphere side.
!! Note that "from" and "to" are optional; the stocks will be subtracted, resp. added,
!! only if they are present.
subroutine stock_move_2d(from, to, grid_index, stock_data2d, xmap, &
     & delta_t, from_side, to_side, radius, verbose, ier)

  use mpp_mod,         only : mpp_sum
  use mpp_domains_mod, only : domain2d, mpp_redistribute, mpp_get_compute_domain

  type(stock_type), intent(inout), optional :: from, to
  integer, optional, intent(in) :: grid_index
  real(r8_kind), intent(in)     :: stock_data2d(:,:) !< data array is 2d
  type(xmap_type), intent(in)   :: xmap
  real(r8_kind), intent(in)     :: delta_t
  integer, intent(in)           :: from_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  integer, intent(in)           :: to_side   !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  real(r8_kind), intent(in)     :: radius    !< earth radius
  character(len=*), intent(in)  :: verbose
  integer, intent(out)          :: ier

  real(r8_kind) :: to_dq, from_dq

  ier = 0

  if(.not. associated(xmap%grids) ) then
    ier = 3
    return
  endif

  if( .not. present(grid_index) .or. grid_index==1 ) then

    ! only makes sense if grid_index == 1
    from_dq = delta_t * 4.0_r8_kind*pi*radius**2 * sum(sum(xmap%grids(1)%area * stock_data2d, dim=1))
    to_dq = from_dq

  else

    ier = 4
    return

  endif

  ! update only if argument is present.
  if(present(to  )) to   % dq(  to_side) = to   % dq(  to_side) + to_dq
  if(present(from)) from % dq(from_side) = from % dq(from_side) - from_dq

  if(debug_stocks) then
    call mpp_sum(from_dq)
    call mpp_sum(to_dq)
    from_dq = from_dq/(4.0_r8_kind*pi*radius**2)
    to_dq   = to_dq  /(4.0_r8_kind*pi*radius**2)
    if(mpp_pe()==mpp_root_pe()) then
      write(stocks_file,'(a,es19.12,a)') verbose, from_dq,' [*/m^2]'
    endif
  endif

end subroutine stock_move_2d
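
! The rank 2 flavor serves the side 1 (atmosphere) grid; grid_index may be
! omitted and must otherwise be 1. A hypothetical sketch (atm_stock,
! lnd_stock, data2d and dt assumed):
!
!   call stock_move_2d(from=atm_stock, to=lnd_stock, stock_data2d=data2d, &
!        & xmap=xmap, delta_t=dt, from_side=ISTOCK_BOTTOM, to_side=ISTOCK_TOP, &
!        & radius=radius, verbose='stock flux ', ier=ier)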

!#######################################################################
!> @brief Unstructured grid version: takes level-resolved (logically rank 3) data,
!! stored as a rank 2 array (grid point, level); it can be used to compute the flux
!! on any grid but the first, which is typically on the atmosphere side.
!! Note that "from" and "to" are optional; the stocks will be subtracted, resp. added,
!! only if they are present.
subroutine stock_move_ug_3d(from, to, grid_index, stock_ug_data3d, xmap, &
     & delta_t, from_side, to_side, radius, verbose, ier)

  use mpp_mod,         only : mpp_sum
  use mpp_domains_mod, only : domain2d, mpp_redistribute, mpp_get_compute_domain

  type(stock_type), intent(inout), optional :: from, to
  integer, intent(in)           :: grid_index !< grid index
  real(r8_kind), intent(in)     :: stock_ug_data3d(:,:) !< unstructured data array (grid point, level)
  type(xmap_type), intent(in)   :: xmap
  real(r8_kind), intent(in)     :: delta_t
  integer, intent(in)           :: from_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  integer, intent(in)           :: to_side   !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  real(r8_kind), intent(in)     :: radius    !< earth radius
  character(len=*), intent(in), optional :: verbose
  integer, intent(out)          :: ier
  real(r8_kind), dimension(size(stock_ug_data3d,1),size(stock_ug_data3d,2)) :: tmp

  real(r8_kind) :: from_dq, to_dq

  ier = 0
  if(grid_index == 1) then
    ! level-resolved data implies a side 2 grid, so grid index must be > 1
    ier = 1
    return
  endif

  if(.not. associated(xmap%grids) ) then
    ier = 2
    return
  endif

  tmp = xmap%grids(grid_index)%frac_area(:,1,:) * stock_ug_data3d
  from_dq = delta_t * 4.0_r8_kind * pi * radius**2 * sum( xmap%grids(grid_index)%area(:,1) * &
       & sum(tmp, dim=2))
  to_dq = from_dq

  ! update only if argument is present.
  if(present(to  )) to   % dq(  to_side) = to   % dq(  to_side) + to_dq
  if(present(from)) from % dq(from_side) = from % dq(from_side) - from_dq

  if(present(verbose) .and. debug_stocks) then
    call mpp_sum(from_dq)
    call mpp_sum(to_dq)
    from_dq = from_dq/(4.0_r8_kind*pi*radius**2)
    to_dq   = to_dq  /(4.0_r8_kind*pi*radius**2)
    if(mpp_pe()==mpp_root_pe()) then
      write(stocks_file,'(a,es19.12,a)') verbose, from_dq,' [*/m^2]'
    endif
  endif

end subroutine stock_move_ug_3d


!#######################################################################
!> @brief Surface/time integral of a 2d array over the side 1 grid. The result is
!! this PE's partial sum; it is not reduced across PEs here.
subroutine stock_integrate_2d(integrate_data2d, xmap, delta_t, radius, res, ier)

  real(r8_kind), intent(in)   :: integrate_data2d(:,:) !< data array is 2d
  type(xmap_type), intent(in) :: xmap
  real(r8_kind), intent(in)   :: delta_t
  real(r8_kind), intent(in)   :: radius !< earth radius
  real(r8_kind), intent(out)  :: res
  integer, intent(out)        :: ier

  ier = 0
  res = 0.0_r8_kind

  if(.not. associated(xmap%grids) ) then
    ier = 6
    return
  endif

  res = delta_t * 4.0_r8_kind * pi * radius**2 * sum(sum(xmap%grids(1)%area * integrate_data2d, dim=1))

end subroutine stock_integrate_2d
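
! A minimal usage sketch for stock_integrate_2d (flux2d, dt and total are
! hypothetical): on return, total holds this PE's contribution
! dt * 4*pi*radius**2 * sum(area * flux2d); reduce it with mpp_sum if a
! global integral is needed.
!
!   call stock_integrate_2d(flux2d, xmap, delta_t=dt, radius=radius, res=total, ier=ier)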
!#######################################################################

!> @brief Sums and prints a component's stock changes: the accumulated flux-based
!! change (f_value), the component-reported change (c_value), and their difference.
subroutine stock_print(stck, Time, comp_name, index, ref_value, radius, pelist)

  use mpp_mod,          only : mpp_pe, mpp_root_pe, mpp_sum
  use time_manager_mod, only : time_type, get_time
  use diag_manager_mod, only : register_diag_field, send_data

  type(stock_type), intent(in) :: stck
  type(time_type), intent(in)  :: Time
  character(len=*), intent(in) :: comp_name
  integer, intent(in)          :: index     !< to map stock element (water, heat, ..) to a name
  real(r8_kind), intent(in)    :: ref_value !< the stock value returned by the component per PE
  real(r8_kind), intent(in)    :: radius
  integer, intent(in), optional :: pelist(:)

  integer, parameter :: initID = -2 !< initial value for diag IDs. Must not be equal to the value
                                    !! that register_diag_field returns when it can't register the field --
                                    !! otherwise the registration is attempted every time this subroutine
                                    !! is called

  real(r8_kind) :: f_value, c_value, planet_area
  character(len=80) :: formatString
  integer :: iday, isec, hours
  integer :: diagID, compInd
  integer, dimension(NELEMS,4), save :: f_valueDiagID   = initID
  integer, dimension(NELEMS,4), save :: c_valueDiagID   = initID
  integer, dimension(NELEMS,4), save :: fmc_valueDiagID = initID

  real(r8_kind) :: diagField
  logical :: used
  character(len=30) :: field_name, units

  f_value = sum(stck % dq)
  c_value = ref_value - stck % q_start
  if(present(pelist)) then
    call mpp_sum(f_value, pelist=pelist)
    call mpp_sum(c_value, pelist=pelist)
  else
    call mpp_sum(f_value)
    call mpp_sum(c_value)
  endif

  if(mpp_pe() == mpp_root_pe()) then
    ! normalize to a per-square-meter value over the planet surface
    planet_area = 4.0_r8_kind * pi * radius**2
    f_value = f_value / planet_area
    c_value = c_value / planet_area

    compInd = -1
    if(comp_name == 'ATM') compInd = 1
    if(comp_name == 'LND') compInd = 2
    if(comp_name == 'ICE') compInd = 3
    if(comp_name == 'OCN') compInd = 4
    if(compInd < 0) call error_mesg('xgrid_mod', 'stock_print: unknown component name '//trim(comp_name), fatal)

    if(f_valueDiagID(index,compInd) == initID) then
      field_name = trim(comp_name) // trim(stock_names(index))
      field_name = trim(field_name) // 'StocksChange_Flux'
      units = trim(stock_units(index))
      f_valueDiagID(index,compInd) = register_diag_field('stock_print', field_name, Time, &
           units=units)
    endif

    if(c_valueDiagID(index,compInd) == initID) then
      field_name = trim(comp_name) // trim(stock_names(index))
      field_name = trim(field_name) // 'StocksChange_Comp'
      units = trim(stock_units(index))
      c_valueDiagID(index,compInd) = register_diag_field('stock_print', field_name, Time, &
           units=units)
    endif

    if(fmc_valueDiagID(index,compInd) == initID) then
      field_name = trim(comp_name) // trim(stock_names(index))
      field_name = trim(field_name) // 'StocksChange_Diff'
      units = trim(stock_units(index))
      fmc_valueDiagID(index,compInd) = register_diag_field('stock_print', field_name, Time, &
           units=units)
    endif

    diagID = f_valueDiagID(index,compInd)
    diagField = f_value
    if (diagID > 0) used = send_data(diagID, diagField, time=Time)
    diagID = c_valueDiagID(index,compInd)
    diagField = c_value
    if (diagID > 0) used = send_data(diagID, diagField, time=Time)
    diagID = fmc_valueDiagID(index,compInd)
    diagField = f_value - c_value
    if (diagID > 0) used = send_data(diagID, diagField, time=Time)

    call get_time(Time, isec, iday)
    hours = iday*24 + isec/3600
    formatString = '(a,a,a,i16,2x,es22.15,2x,es22.15,2x,es22.15)'
    write(stocks_file,formatString) trim(comp_name), stock_names(index), stock_units(index), &
         hours, f_value, c_value, f_value-c_value

  endif

end subroutine stock_print


!###############################################################################
!> @brief Determines whether a grid is a regular lat-lon grid, i.e. latitude is
!! constant along each row and longitude is constant along each column; the
!! result is reduced across PEs with mpp_min so every PE returns the same answer.
!> @return logical is_lat_lon
function is_lat_lon(lon, lat)
  real(r8_kind), dimension(:,:), intent(in) :: lon, lat
  logical :: is_lat_lon
  integer :: i, j, nlon, nlat, num

  is_lat_lon = .true.
  nlon = size(lon,1)
  nlat = size(lon,2)
  loop_lat: do j = 1, nlat
    do i = 2, nlon
      if(lat(i,j) .NE. lat(1,j)) then
        is_lat_lon = .false.
        exit loop_lat
      end if
    end do
  end do loop_lat

  if(is_lat_lon) then
    loop_lon: do i = 1, nlon
      do j = 2, nlat
        if(lon(i,j) .NE. lon(i,1)) then
          is_lat_lon = .false.
          exit loop_lon
        end if
      end do
    end do loop_lon
  end if

  num = 0
  if(is_lat_lon) num = 1
  call mpp_min(num)
  is_lat_lon = (num == 1)

  return
end function is_lat_lon
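
! A hypothetical sketch of how this test is typically used: the gradient
! helpers grad_zonal_latlon/grad_merid_latlon above assume a regular lat-lon
! grid, so a second-order remap path should only take the gradient route when
! is_lat_lon(grid%lon, grid%lat) holds (the field names here are illustrative).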

!#######################################################################

! <SUBROUTINE NAME="get_side1_from_xgrid_ug" INTERFACE="get_from_xgrid_ug">
!   <IN NAME="x" TYPE="real(r8_kind)" DIM="(:)" > </IN>
!   <IN NAME="grid_id" TYPE=" character(len=3)" > </IN>
!   <OUT NAME="d" TYPE="real(r8_kind)" DIM="(:,:)" > </OUT>
!   <INOUT NAME="xmap" TYPE="xmap_type" > </INOUT>

!> @brief Sums exchange grid data x onto a side 1 unstructured grid d. Fields are
!! batched: calls with complete=.false. only record addresses, and the completing
!! call communicates the whole group at once.
subroutine get_side1_from_xgrid_ug(d, grid_id, x, xmap, complete)
  real(r8_kind), dimension(:), intent(out) :: d
  character(len=3), intent(in)             :: grid_id
  real(r8_kind), dimension(:), intent(in)  :: x
  type (xmap_type), intent(inout)          :: xmap
  logical, intent(in), optional            :: complete

  logical :: is_complete, set_mismatch
  integer :: g
  character(len=2) :: text
  integer, save :: isize=0
  integer, save :: lsize=0
  integer, save :: xsize=0
  character(len=3), save :: grid_id_saved=""
  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind

  d = 0.0_r8_kind
  if (grid_id==xmap%grids(1)%id) then
    is_complete = .true.
    if(present(complete)) is_complete=complete
    lsize = lsize + 1
    if( lsize > MAX_FIELDS ) then
      write( text,'(i2)' ) MAX_FIELDS
      call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group get_side1_from_xgrid_ug', fatal)
    endif
    d_addrs(lsize) = loc(d)
    x_addrs(lsize) = loc(x)

    if(lsize == 1) then
      isize = size(d(:))
      xsize = size(x(:))
      grid_id_saved = grid_id
    else
      set_mismatch = .false.
      set_mismatch = set_mismatch .OR. (isize /= size(d(:)))
      set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
      set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
      if(set_mismatch)then
        write( text,'(i2)' ) lsize
        call error_mesg ('xgrid_mod', 'Incompatible field at count '//text// &
             & ' for group get_side1_from_xgrid_ug', fatal )
      endif
    endif

    if(is_complete) then
      if (make_exchange_reproduce) then
        call get_1_from_xgrid_ug_repro(d_addrs, x_addrs, xmap, xsize, lsize)
      else
        call get_1_from_xgrid_ug(d_addrs, x_addrs, xmap, isize, xsize, lsize)
      end if
      d_addrs(1:lsize) = -9999_i8_kind
      x_addrs(1:lsize) = -9999_i8_kind
      isize = 0
      xsize = 0
      lsize = 0
      grid_id_saved = ""
    endif
    return
  end if

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) &
         call error_mesg ('xgrid_mod', &
         'get_from_xgrid_ug expects a 3D side 2 grid', fatal)
  end do

  call error_mesg ('xgrid_mod', 'get_from_xgrid_ug: could not find grid id', fatal)

end subroutine get_side1_from_xgrid_ug
! </SUBROUTINE>
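
! A minimal batching sketch (hypothetical fields t_surf and albedo on the
! 'ATM' side 1 grid): every call but the last passes complete=.false., and the
! completing call triggers one communication pass for the whole group.
! put_side1_to_xgrid_ug below batches in exactly the same way.
!
!   call get_from_xgrid_ug(t_surf, 'ATM', x_t,   xmap, complete=.false.)
!   call get_from_xgrid_ug(albedo, 'ATM', x_alb, xmap, complete=.true.)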

!#######################################################################

! <SUBROUTINE NAME="put_side1_to_xgrid_ug" INTERFACE="put_to_xgrid_ug">
!   <IN NAME="d" TYPE="real(r8_kind)" DIM="(:,:)" > </IN>
!   <IN NAME="grid_id" TYPE=" character(len=3)" > </IN>
!   <INOUT NAME="x" TYPE="real(r8_kind)" DIM="(:)" > </INOUT>
!   <INOUT NAME="xmap" TYPE="xmap_type" > </INOUT>
!   <IN NAME="remap_method" TYPE="integer,optional"></IN>

!> @brief Scatters side 1 unstructured grid data onto the exchange grid.
!! Currently only first order remapping is supported.
subroutine put_side1_to_xgrid_ug(d, grid_id, x, xmap, complete)
  real(r8_kind), dimension(:), intent(in)    :: d !< data to put onto the exchange grid
  character(len=3), intent(in)               :: grid_id
  real(r8_kind), dimension(:), intent(inout) :: x
  type (xmap_type), intent(inout)            :: xmap
  logical, intent(in), optional              :: complete

  logical :: is_complete, set_mismatch
  integer :: g
  character(len=2) :: text
  integer, save :: dsize=0
  integer, save :: lsize=0
  integer, save :: xsize=0
  character(len=3), save :: grid_id_saved=""
  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind

  if (grid_id==xmap%grids(1)%id) then
    is_complete = .true.
    if(present(complete)) is_complete=complete
    lsize = lsize + 1
    if( lsize > MAX_FIELDS ) then
      write( text,'(i2)' ) MAX_FIELDS
      call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group put_side1_to_xgrid_ug', fatal)
    endif
    d_addrs(lsize) = loc(d)
    x_addrs(lsize) = loc(x)

    if(lsize == 1) then
      dsize = size(d(:))
      xsize = size(x(:))
      grid_id_saved = grid_id
    else
      set_mismatch = .false.
      set_mismatch = set_mismatch .OR. (dsize /= size(d(:)))
      set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
      set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
      if(set_mismatch)then
        write( text,'(i2)' ) lsize
        call error_mesg ('xgrid_mod', 'Incompatible field at count '//text// &
             & ' for group put_side1_to_xgrid_ug', fatal )
      endif
    endif

    if(is_complete) then
      call put_1_to_xgrid_ug_order_1(d_addrs, x_addrs, xmap, dsize, xsize, lsize)
      d_addrs(1:lsize) = -9999_i8_kind
      x_addrs(1:lsize) = -9999_i8_kind
      dsize = 0
      xsize = 0
      lsize = 0
      grid_id_saved = ""
    endif
    return
  end if

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) &
         call error_mesg ('xgrid_mod', &
         'put_to_xgrid_ug expects a 2D side 2 grid', fatal)
  end do

  call error_mesg ('xgrid_mod', 'put_to_xgrid_ug: could not find grid id', fatal)

end subroutine put_side1_to_xgrid_ug
! </SUBROUTINE>

!#######################################################################

! <SUBROUTINE NAME="put_side2_to_xgrid_ug" INTERFACE="put_to_xgrid_ug">
!   <IN NAME="d" TYPE="real(r8_kind)" DIM="(:,:)" > </IN>
!   <IN NAME="grid_id" TYPE=" character(len=3)" > </IN>
!   <INOUT NAME="x" TYPE="real(r8_kind)" DIM="(:)" > </INOUT>
!   <INOUT NAME="xmap" TYPE="xmap_type" > </INOUT>

!> @brief Scatters side 2 unstructured grid data onto the exchange grid.
subroutine put_side2_to_xgrid_ug(d, grid_id, x, xmap)
  real(r8_kind), dimension(:,:), intent(in)  :: d
  character(len=3), intent(in)               :: grid_id
  real(r8_kind), dimension(:), intent(inout) :: x
  type (xmap_type), intent(inout)            :: xmap

  integer :: g

  if (grid_id==xmap%grids(1)%id) &
       call error_mesg ('xgrid_mod', &
       'put_to_xgrid_ug expects a 2D side 1 grid', fatal)

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) then
      call put_2_to_xgrid_ug(d, xmap%grids(g), x, xmap)
      return
    end if
  end do

  call error_mesg ('xgrid_mod', 'put_to_xgrid_ug: could not find grid id', fatal)

end subroutine put_side2_to_xgrid_ug
! </SUBROUTINE>

!#######################################################################

! <SUBROUTINE NAME="get_side2_from_xgrid_ug" INTERFACE="get_from_xgrid_ug">
!   <IN NAME="x" TYPE="real(r8_kind)" DIM="(:)" > </IN>
!   <IN NAME="grid_id" TYPE=" character(len=3)" > </IN>
!   <OUT NAME="d" TYPE="real(r8_kind)" DIM="(:,:)" > </OUT>
!   <INOUT NAME="xmap" TYPE="xmap_type" > </INOUT>

!> @brief Sums exchange grid data onto a side 2 unstructured grid.
subroutine get_side2_from_xgrid_ug(d, grid_id, x, xmap)
  real(r8_kind), dimension(:,:), intent(out) :: d
  character(len=3), intent(in)               :: grid_id
  real(r8_kind), dimension(:), intent(in)    :: x
  type (xmap_type), intent(in)               :: xmap

  integer :: g

  if (grid_id==xmap%grids(1)%id) &
       call error_mesg ('xgrid_mod', &
       'get_from_xgrid_ug expects a 2D side 1 grid', fatal)

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) then
      call get_2_from_xgrid_ug(d, xmap%grids(g), x, xmap)
      return
    end if
  end do

  call error_mesg ('xgrid_mod', 'get_from_xgrid_ug: could not find grid id', fatal)

end subroutine get_side2_from_xgrid_ug
! </SUBROUTINE>


!#######################################################################

!> @brief Private helper: first order put of a batched group of side 1 fields onto
!! the exchange grid, using the field addresses recorded by put_side1_to_xgrid_ug.
subroutine put_1_to_xgrid_ug_order_1(d_addrs, x_addrs, xmap, dsize, xsize, lsize)
  integer(i8_kind), dimension(:), intent(in) :: d_addrs
  integer(i8_kind), dimension(:), intent(in) :: x_addrs
  type (xmap_type), intent(inout)            :: xmap
  integer, intent(in)                        :: dsize, xsize, lsize

  integer :: i, p, buffer_pos, msgsize
  integer :: from_pe, to_pe, pos, n, l, count
  integer :: ibegin, istart, iend, start_pos
  type (comm_type), pointer, save :: comm =>null()
  real(r8_kind) :: recv_buffer(xmap%put1%recvsize*lsize)
  real(r8_kind) :: send_buffer(xmap%put1%sendsize*lsize)
  real(r8_kind) :: unpack_buffer(xmap%put1%recvsize)

  real(r8_kind), dimension(dsize) :: d
  real(r8_kind), dimension(xsize) :: x
  pointer(ptr_d, d)
  pointer(ptr_x, x)
  integer :: lll

  call mpp_clock_begin(id_put_1_to_xgrid_order_1)

  !--- pre-post receiving
  comm => xmap%put1
  do p = 1, comm%nrecv
    msgsize    = comm%recv(p)%count*lsize
    from_pe    = comm%recv(p)%pe
    buffer_pos = comm%recv(p)%buffer_pos*lsize
    call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = from_pe, block=.false., tag=comm_tag_7)
  enddo

  !--- send the data
  buffer_pos = 0
  do p = 1, comm%nsend
    msgsize = comm%send(p)%count*lsize
    to_pe = comm%send(p)%pe
    pos = buffer_pos
    do l = 1, lsize
      ptr_d = d_addrs(l)
      do n = 1, comm%send(p)%count
        pos = pos + 1
        lll = comm%send(p)%i(n)
        send_buffer(pos) = d(lll)
      enddo
    enddo
    call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_7 )
    buffer_pos = buffer_pos + msgsize
  enddo

  call mpp_sync_self(check=event_recv)

  !--- unpack the buffer
  if( lsize == 1) then
    ptr_x = x_addrs(1)
    do l=1,xmap%size_put1
      x(l) = recv_buffer(xmap%x1_put(l)%pos)
    end do
  else
    start_pos = 0
!$OMP parallel do default(none) shared(lsize,x_addrs,comm,recv_buffer,xmap) &
!$OMP                           private(ptr_x,count,ibegin,istart,iend,pos,unpack_buffer)
    do l = 1, lsize
      ptr_x = x_addrs(l)
      do p = 1, comm%nrecv
        count  = comm%recv(p)%count
        ibegin = comm%recv(p)%buffer_pos*lsize + 1
        istart = ibegin + (l-1)*count
        iend   = istart + count - 1
        pos    = comm%recv(p)%buffer_pos
        do n = istart, iend
          pos = pos + 1
          unpack_buffer(pos) = recv_buffer(n)
        enddo
      enddo
      do i=1,xmap%size_put1
        x(i) = unpack_buffer(xmap%x1_put(i)%pos)
      end do
    enddo
  endif

  call mpp_sync_self()

  call mpp_clock_end(id_put_1_to_xgrid_order_1)

end subroutine put_1_to_xgrid_ug_order_1
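
! Note on the batching mechanism above: d_addrs/x_addrs carry loc() addresses
! of the grouped fields, and the non-standard (but widely supported) Cray
! pointer declarations, e.g. pointer(ptr_d, d), alias the local arrays d and x
! onto each stored address in turn, so a single communication pass serves
! every field in the group.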

!#######################################################################

!> @brief Private helper: copies side 2 unstructured grid data onto its exchange
!! grid cells.
subroutine put_2_to_xgrid_ug(d, grid, x, xmap)
  type (grid_type), intent(in) :: grid
  real(r8_kind), dimension(grid%ls_me:grid%le_me, grid%km), intent(in) :: d
  real(r8_kind), dimension(:), intent(inout) :: x
  type (xmap_type), intent(in) :: xmap

  integer :: l

  call mpp_clock_begin(id_put_2_to_xgrid)

  do l=grid%first,grid%last
    x(l) = d(xmap%x2(l)%l,xmap%x2(l)%k)
  end do

  call mpp_clock_end(id_put_2_to_xgrid)
end subroutine put_2_to_xgrid_ug

!#######################################################################

!> @brief Private helper: sums a batched group of exchange grid fields onto the
!! side 1 unstructured grid and normalizes by the side 1 cell areas.
subroutine get_1_from_xgrid_ug(d_addrs, x_addrs, xmap, isize, xsize, lsize)
  integer(i8_kind), dimension(:), intent(in) :: d_addrs
  integer(i8_kind), dimension(:), intent(in) :: x_addrs
  type (xmap_type), intent(inout)            :: xmap
  integer, intent(in)                        :: isize, xsize, lsize

  real(r8_kind), dimension(xmap%size, lsize), target :: dg
  integer :: i, j, l, p, n, m
  integer :: msgsize, buffer_pos, pos
  integer :: istart, iend, count
  real(r8_kind),      pointer, save :: dgp   =>null()
  type (grid_type),   pointer, save :: grid1 =>null()
  type (comm_type),   pointer, save :: comm  =>null()
  type(overlap_type), pointer, save :: send  =>null()
  type(overlap_type), pointer, save :: recv  =>null()
  real(r8_kind) :: recv_buffer(xmap%get1%recvsize*lsize*3)
  real(r8_kind) :: send_buffer(xmap%get1%sendsize*lsize*3)
  real(r8_kind) :: d(isize)
  real(r8_kind), dimension(xsize) :: x
  pointer(ptr_d, d)
  pointer(ptr_x, x)

  call mpp_clock_begin(id_get_1_from_xgrid)

  comm  => xmap%get1
  grid1 => xmap%grids(1)

  !--- pre-post receiving
  do p = 1, comm%nrecv
    recv => comm%recv(p)
    msgsize = recv%count*lsize
    buffer_pos = recv%buffer_pos*lsize
    call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_9)
  enddo

  !--- area-weight the exchange grid data onto the side 1 positions
  dg = 0.0_r8_kind
!$OMP parallel do default(none) shared(lsize,xmap,dg,x_addrs) private(dgp,ptr_x)
  do l = 1, lsize
    ptr_x = x_addrs(l)
    do i=1,xmap%size
      dgp => dg(xmap%x1(i)%pos,l)
      dgp = dgp + xmap%x1(i)%area*x(i)
    enddo
  enddo

  !--- send the data
  buffer_pos = 0
  istart = 1
  do p = 1, comm%nsend
    send => comm%send(p)
    msgsize = send%count*lsize
    pos = buffer_pos
    istart = send%buffer_pos+1
    iend = istart + send%count - 1
    do l = 1, lsize
      do n = istart, iend
        pos = pos + 1
        send_buffer(pos) = dg(n,l)
      enddo
    enddo
    call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = send%pe, tag=comm_tag_9 )
    buffer_pos = buffer_pos + msgsize
    istart = iend + 1
  enddo

  call mpp_sync_self(check=event_recv)

  !--- unpack the buffer
  do l = 1, lsize
    ptr_d = d_addrs(l)
    d = 0.0_r8_kind
  enddo

  !--- To bitwise reproduce old results, first copy the data onto its own pe.
  do p = 1, comm%nrecv
    recv => comm%recv(p)
    count = recv%count
    buffer_pos = recv%buffer_pos*lsize
    if( recv%pe == xmap%me ) then
!$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs,count) &
!$OMP                           private(ptr_d,i,pos)
      do l = 1, lsize
        pos = buffer_pos + (l-1)*count
        ptr_d = d_addrs(l)
        do n = 1,count
          i = recv%i(n)
          pos = pos + 1
          d(i) = recv_buffer(pos)
        enddo
      enddo
      exit
    endif
  enddo

  pos = 0
  do m = 1, comm%nrecv
    p = comm%unpack_ind(m)
    recv => comm%recv(p)
    if( recv%pe == xmap%me ) then
      cycle
    endif
    buffer_pos = recv%buffer_pos*lsize
!$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs) &
!$OMP                           private(ptr_d,i,j,pos)
    do l = 1, lsize
      pos = buffer_pos + (l-1)*recv%count
      ptr_d = d_addrs(l)
      do n = 1, recv%count
        i = recv%i(n)
        pos = pos + 1
        d(i) = d(i) + recv_buffer(pos)
      enddo
    enddo
  enddo

  !
  ! normalize with side 1 grid cell areas
  !
!$OMP parallel do default(none) shared(lsize,d_addrs,grid1) private(ptr_d)
  do l = 1, lsize
    ptr_d = d_addrs(l)
    d = d * grid1%area_inv(:,1)
  enddo
  call mpp_sync_self()
  call mpp_clock_end(id_get_1_from_xgrid)

end subroutine get_1_from_xgrid_ug

!#######################################################################

!> @brief Private helper: reproducing variant of get_1_from_xgrid_ug, used when
!! make_exchange_reproduce is set so that answers do not depend on the PE count.
subroutine get_1_from_xgrid_ug_repro(d_addrs, x_addrs, xmap, xsize, lsize)
  integer(i8_kind), dimension(:), intent(in) :: d_addrs
  integer(i8_kind), dimension(:), intent(in) :: x_addrs
  type (xmap_type), intent(inout)            :: xmap
  integer, intent(in)                        :: xsize, lsize

  integer :: g, i, j, k, p, l, n, l2, l3
  integer :: msgsize, buffer_pos, pos
  type (grid_type),   pointer, save :: grid =>null()
  type(comm_type),    pointer, save :: comm =>null()
  type(overlap_type), pointer, save :: send =>null()
  type(overlap_type), pointer, save :: recv =>null()
  integer, dimension(0:xmap%npes-1) :: pl, ml
  real(r8_kind) :: recv_buffer(xmap%recv_count_repro_tot*lsize)
  real(r8_kind) :: send_buffer(xmap%send_count_repro_tot*lsize)
  real(r8_kind) :: d(xmap%grids(1)%ls_me:xmap%grids(1)%le_me)
  real(r8_kind), dimension(xsize) :: x
  pointer(ptr_d, d)
  pointer(ptr_x, x)

  call mpp_clock_begin(id_get_1_from_xgrid_repro)
  comm => xmap%get1_repro
  !--- pre-post receiving
  do p = 1, comm%nrecv
    recv => comm%recv(p)
    msgsize = recv%count*lsize
    buffer_pos = recv%buffer_pos*lsize
    call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_10)
    n = recv%pe - xmap%root_pe
    pl(n) = buffer_pos
    ml(n) = recv%count
  enddo

  !--- pack the data
  send_buffer(:) = 0.0_r8_kind
!$OMP parallel do default(none) shared(lsize,x_addrs,comm,xmap,send_buffer) &
!$OMP                           private(ptr_x,i,j,g,l2,pos,send)
  do p = 1, comm%nsend
    pos = comm%send(p)%buffer_pos*lsize
    send => comm%send(p)
    do l = 1,lsize
      ptr_x = x_addrs(l)
      do n = 1, send%count
        i = send%i(n)
        j = send%j(n)
        g = send%g(n)
        l2 = send%xloc(n)
        pos = pos + 1
        do k =1, xmap%grids(g)%km
          if(xmap%grids(g)%frac_area(i,j,k)/=0.0_r8_kind) then
            l2 = l2+1
            send_buffer(pos) = send_buffer(pos) + xmap%x1(l2)%area * x(l2)
          endif
        enddo
      enddo
    enddo
  enddo

  do p =1, comm%nsend
    buffer_pos = comm%send(p)%buffer_pos*lsize
    msgsize = comm%send(p)%count*lsize
    call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe=comm%send(p)%pe, tag=comm_tag_10)
  enddo

  do l = 1, lsize
    ptr_d = d_addrs(l)
    d = 0.0_r8_kind
  enddo

  call mpp_sync_self(check=event_recv)

!$OMP parallel do default(none) shared(lsize,d_addrs,xmap,recv_buffer,pl,ml) &
!$OMP                           private(ptr_d,grid,i,j,p,pos)
  do l = 1, lsize
    ptr_d = d_addrs(l)
    do g=2,size(xmap%grids(:))
      grid => xmap%grids(g)
      do l3=1,grid%size_repro ! index into side 1 grid's patterns
        i = grid%x_repro(l3)%l1
        p = grid%x_repro(l3)%pe - xmap%root_pe
        pos = pl(p) + (l-1)*ml(p) + grid%x_repro(l3)%recv_pos
        d(i) = d(i) + recv_buffer(pos)
      end do
    end do
    ! normalize with side 1 grid cell areas
    d = d * xmap%grids(1)%area_inv(:,1)
  enddo

  call mpp_sync_self()

  call mpp_clock_end(id_get_1_from_xgrid_repro)

end subroutine get_1_from_xgrid_ug_repro

!#######################################################################

!> @brief Private helper: sums exchange grid data onto a side 2 unstructured grid
!! and normalizes by the side 2 cell areas.
subroutine get_2_from_xgrid_ug(d, grid, x, xmap)
  type (grid_type), intent(in) :: grid
  real(r8_kind), dimension(grid%ls_me:grid%le_me, grid%km), intent(out) :: d
  real(r8_kind), dimension(:), intent(in) :: x
  type (xmap_type), intent(in) :: xmap

  integer :: l, k

  call mpp_clock_begin(id_get_2_from_xgrid)

  d = 0.0_r8_kind
  do l=grid%first_get,grid%last_get
    d(xmap%x2_get(l)%l,xmap%x2_get(l)%k) = &
         d(xmap%x2_get(l)%l,xmap%x2_get(l)%k) + xmap%x2_get(l)%area*x(xmap%x2_get(l)%pos)
  end do
  !
  ! normalize with side 2 grid cell areas
  !
  do k=1,size(d,2)
    d(:,k) = d(:,k) * grid%area_inv(:,1)
  end do

  call mpp_clock_end(id_get_2_from_xgrid)

end subroutine get_2_from_xgrid_ug

!######################################################################
!> @brief Returns .true. if point (i,j) lies in this PE's portion of the grid;
!! for an unstructured grid the pair is first collapsed to the global index
!! g = (j-1)*ni + i.
!> @return logical in_box_me
logical function in_box_me(i, j, grid)
  integer, intent(in) :: i, j
  type (grid_type), intent(in) :: grid
  integer :: g

  if(grid%is_ug) then
    g = (j-1)*grid%ni + i
    in_box_me = (g>=grid%gs_me) .and. (g<=grid%ge_me)
  else
    in_box_me = (i>=grid%is_me) .and. (i<=grid%ie_me) .and. (j>=grid%js_me) .and. (j<=grid%je_me)
  endif

end function in_box_me

!######################################################################
!> @brief Returns .true. if point (i,j) lies in PE p's portion of the grid;
!! the unstructured case collapses (i,j) to a global index as in in_box_me.
!> @return logical in_box_nbr
logical function in_box_nbr(i, j, grid, p)
  integer, intent(in) :: i, j, p
  type (grid_type), intent(in) :: grid
  integer :: g

  if(grid%is_ug) then
    g = (j-1)*grid%ni + i
    in_box_nbr = (g>=grid%gs(p)) .and. (g<=grid%ge(p))
  else
    in_box_nbr = (i>=grid%is(p)) .and. (i<=grid%ie(p)) .and. (j>=grid%js(p)) .and. (j<=grid%je(p))
  endif

end function in_box_nbr
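
! Worked example of the index collapse above: on an unstructured grid with
! ni = 4 points per row, the structured point (i=3, j=2) maps to the global
! index g = (2-1)*4 + 3 = 7, which is then tested against the [gs, ge]
! ownership range of this PE (in_box_me) or of PE p (in_box_nbr).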

end module xgrid_mod
!> @}
! close documentation grouping