FMS  2024.03
Flexible Modeling System
xgrid.F90
!***********************************************************************
!*                   GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
!> @defgroup xgrid_mod xgrid_mod
!> @ingroup exchange
!> @brief Implements exchange grids for coupled models running on multiple processors
!> @author Michael Winton, Zhi Liang
!!
!! An exchange grid is formed from the union of
!! the bounding lines of the two (logically rectangular) participating
!! grids. The exchange grid is therefore the coarsest grid that is a
!! refinement of both participating grids. Exchange grids are used for
!! two purposes by coupled models:
!!   1. conservative interpolation of fields
!!      between models uses the exchange grid cell areas as weights and
!!   2. the surface flux calculation takes place on the exchange grid, thereby
!!      using the finest scale data available.
!! <TT>xgrid_mod</TT> uses a NetCDF grid
!! specification file containing the grid cell overlaps in combination with
!! the @link ftp://ftp.gfdl.gov/pub/vb/mpp/mpp_domains.F90 @endlink domain
!! decomposition information to determine
!! the grid and processor connectivities.
!!
!!
!! xgrid_mod - implements exchange grids. An exchange grid is the grid whose
!!             boundary set is the union of the boundaries of the participating
!!             grids. The exchange grid is the coarsest grid that is a
!!             refinement of each of the participating grids. Every exchange
!!             grid cell is a subarea of one and only one cell in each of the
!!             participating grids. The exchange grid has two purposes:
!!
!!               (1) The exchange cell areas are used as weights for
!!                   conservative interpolation between model grids.
!!
!!               (2) Computation of surface fluxes takes place on it,
!!                   thereby using the finest scale data obtainable.
!!
!! The exchange cells are the 2D intersections between cells of the
!! participating grids. They are computed elsewhere and are
!! read here from a NetCDF grid file as a sequence of quintuples
!! (i and j on each of two grids and the cell area).
!!
!! Each processing element (PE) computes a subdomain of each of the
!! participating grids as well as a subset of the exchange cells.
!! The geographic regions corresponding to these subdomains will,
!! in general, not be the same so communication must occur between
!! the PEs. The scheme for doing this is as follows. A distinction
!! is drawn between the participating grids. There is a single
!! "side 1" grid and it does not have partitions (sub-grid surface
!! types). There are one or more "side 2" grids and they may have
!! more than 1 partition. In standard usage, the atmosphere grid is
!! on side 1 and the land and sea ice grids are on side 2. The set
!! of exchange cells computed on a PE corresponds to its side 2
!! geographic region(s). Communication between the PEs takes place
!! on the side 1 grid. Note: this scheme does not generally allow
!! reproduction of answers across varying PE counts. This is
!! because, in the side 1 "get", exchange cells are first summed
!! locally onto a side 1 grid, then these side 1 contributions are
!! further summed after they have been communicated to their target
!! PE. For the make_exchange_reproduce option, a special side 1 get
!! is used. This get communicates individual exchange cells. The
!! cells are summed in the order they appear in the grid spec. file.
!!
!! <TT>xgrid_mod</TT> reads a NetCDF grid specification file to determine the
!! grid and processor connectivities. The exchange grids are defined
!! by a sequence of quintuples: the <TT>i/j</TT> indices of the intersecting
!! cells of the two participating grids and their areal overlap.
!! The names of the five fields are generated automatically from the
!! three character ids of the participating grids. For example, if
!! the side one grid id is "ATM" and the side two grid id is "OCN",
!! <TT>xgrid_mod</TT> expects to find the following five fields in the grid
!! specification file: <TT>I_ATM_ATMxOCN, J_ATM_ATMxOCN, I_OCN_ATMxOCN,
!! J_OCN_ATMxOCN, and AREA_ATMxOCN</TT>. These fields may be generated
!! by the <TT>make_xgrids</TT> utility.
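!!
!! A typical calling sequence is sketched below for illustration. The data
!! arrays, domain variables, and file name are placeholders, not part of
!! this module:
!! @code{.F90}
!! type(xmap_type) :: xmap_sfc
!! integer :: remap_method
!! real(r8_kind), allocatable :: t_atm(:,:), t_ocn(:,:), t_x(:)
!!
!! call xgrid_init(remap_method)
!! call setup_xmap(xmap_sfc, (/ 'ATM', 'OCN' /), &
!!                 (/ atm_domain, ocn_domain /), 'INPUT/grid_spec.nc')
!! allocate(t_x(xgrid_count(xmap_sfc)))
!! call put_to_xgrid(t_atm, 'ATM', t_x, xmap_sfc, remap_method) ! side 1 -> exchange grid
!! call get_from_xgrid(t_ocn, 'OCN', t_x, xmap_sfc)             ! exchange grid -> side 2
!! @endcode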

!> @addtogroup xgrid_mod
!> @{
module xgrid_mod


use fms_mod,       only: check_nml_error, &
                         error_mesg, fatal, note, stdlog, &
                         write_version_number, lowercase, string
use mpp_mod,       only: mpp_npes, mpp_pe, mpp_root_pe, mpp_send, mpp_recv, &
                         mpp_sync_self, stdout, mpp_max, event_recv, &
                         mpp_get_current_pelist, mpp_clock_id, mpp_min, &
                         mpp_alltoall, &
                         mpp_clock_begin, mpp_clock_end, mpp_clock_sync, &
                         comm_tag_1, comm_tag_2, comm_tag_3, comm_tag_4, &
                         comm_tag_5, comm_tag_6, comm_tag_7, comm_tag_8, &
                         comm_tag_9, comm_tag_10
use mpp_mod,       only: input_nml_file, mpp_set_current_pelist, mpp_sum, mpp_sync
use mpp_domains_mod, only: mpp_get_compute_domain, mpp_get_compute_domains, &
                           domain2d, mpp_global_sum, mpp_update_domains, &
                           mpp_modify_domain, mpp_get_data_domain, XUPDATE, &
                           YUPDATE, mpp_get_current_ntile, mpp_get_tile_id, &
                           mpp_get_ntile_count, mpp_get_tile_list, &
                           mpp_get_global_domain, domain1d, &
                           mpp_deallocate_domain, mpp_define_domains, &
                           mpp_get_domain_npes, mpp_get_domain_root_pe, &
                           mpp_domain_is_initialized, mpp_broadcast_domain, &
                           mpp_get_domain_pelist, mpp_compute_extent, &
                           domainug, mpp_get_ug_compute_domains, &
                           mpp_get_ug_domains_index, mpp_get_ug_domain_grid_index, &
                           mpp_get_ug_domain_tile_list, mpp_pass_sg_to_ug
use constants_mod,   only: pi, radius
use mosaic2_mod,     only: get_mosaic_xgrid, get_mosaic_xgrid_size, &
                           get_mosaic_ntiles, get_mosaic_ncontacts, &
                           get_mosaic_contact, get_mosaic_grid_sizes
use stock_constants_mod, only: istock_top, istock_bottom, istock_side, stock_names, &
                               stock_units, nelems, stocks_file, stock_type
use gradient_mod,    only: gradient_cubic
use fms2_io_mod,     only: fmsnetcdffile_t, open_file, variable_exists, close_file
use fms2_io_mod,     only: fmsnetcdfdomainfile_t, read_data, get_dimension_size
use fms2_io_mod,     only: get_variable_units, dimension_exists
use platform_mod,    only: r8_kind, i8_kind, fms_file_len

implicit none
private

public xmap_type, setup_xmap, set_frac_area, put_to_xgrid, get_from_xgrid, &
       xgrid_count, some, conservation_check, conservation_check_ug, xgrid_init, &
       area_atm_sphere, area_ocn_sphere, &
       area_atm_model, area_ocn_model, &
       get_ocean_model_area_elements, grid_box_type, get_xmap_grid_area, &
       put_to_xgrid_ug, get_from_xgrid_ug, set_frac_area_ug

!--- parameters that determine the remapping method
integer, parameter :: FIRST_ORDER  = 1
integer, parameter :: SECOND_ORDER = 2
integer, parameter :: version1 = 1 !< grid spec file
integer, parameter :: version2 = 2 !< mosaic grid file
integer, parameter :: max_fields = 80

logical :: make_exchange_reproduce = .false. !< Set to .true. to make <TT>xgrid_mod</TT> reproduce answers
                                             !! exactly on different numbers of PEs. This option has a
                                             !! considerable performance impact.
character(len=64) :: interp_method = 'first_order' !< Exchange grid interpolation method.
                                                   !! It has two options: "first_order", "second_order".
logical :: debug_stocks = .false.
logical :: xgrid_clocks_on = .false.
logical :: monotonic_exchange = .false.
integer :: nsubset = 0 !< Number of processors that read the exchange grid information. The processors
                       !! that read the exchange grid information send the data to the other processors
                       !! to prepare for the flux exchange. The default value is 0: each
                       !! processor reads part of the exchange grid information. The purpose of
                       !! this namelist variable is to improve the performance of setup_xmap at
                       !! high processor counts and to avoid a receive-size mismatch issue at
                       !! high processor counts. Try setting nsubset = mpp_npes/MPI_ranks_per_node.
logical :: do_alltoall = .true.
logical :: do_alltoallv = .false.
logical :: use_mpp_io = .false. !< Default is .false., which uses fms2_io for IO. mpp_io is no longer
                                !! supported; setting this to .true. causes a fatal error in xgrid_init.
!> @brief xgrid nml
namelist /xgrid_nml/ make_exchange_reproduce, interp_method, debug_stocks, xgrid_clocks_on, &
                     monotonic_exchange, nsubset, do_alltoall, do_alltoallv, &
                     use_mpp_io
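! A representative input.nml entry (the values shown here are the defaults;
! this block is illustrative only):
!
!     &xgrid_nml
!         interp_method = 'first_order'
!         make_exchange_reproduce = .false.
!         nsubset = 0
!     /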

integer :: remapping_method !< remapping method determined by the interp_method namelist setting

!> Area elements used inside each model
real(r8_kind), allocatable, dimension(:,:) :: area_atm_model, area_lnd_model, area_ocn_model
!> Area elements based on the spherical model used by the ICE layer
real(r8_kind), allocatable, dimension(:,:) :: area_atm_sphere, area_lnd_sphere, area_ocn_sphere

!> @}

!> @brief Scatters data from model grid onto exchange grid.
!!
!> Example usage:
!! @code{.F90}
!! call put_to_xgrid(d, grid_id, x, xmap, remap_order)
!! @endcode
!!
!> @ingroup xgrid_mod
interface put_to_xgrid
   module procedure put_side1_to_xgrid
   module procedure put_side2_to_xgrid
end interface

!> @brief Sums data from exchange grid to model grid.
!!
!> <br>Example usage:
!! @code{.F90}
!! call get_from_xgrid(d, grid_id, x, xmap)
!! @endcode
!> @ingroup xgrid_mod
interface get_from_xgrid
   module procedure get_side1_from_xgrid
   module procedure get_side2_from_xgrid
end interface

!> @brief @ref put_to_xgrid for unstructured grids.
!!
!> Scatters data from unstructured grid onto exchange grid.
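!!
!! <br>Example usage (a sketch; d is a field on the unstructured compute
!! domain, and the argument names and grid id are placeholders):
!! @code{.F90}
!! call put_to_xgrid_ug(d, 'LND', x, xmap)
!! @endcode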
!> @ingroup xgrid_mod
interface put_to_xgrid_ug
   module procedure put_side1_to_xgrid_ug
   module procedure put_side2_to_xgrid_ug
end interface

!> @brief @ref get_from_xgrid for unstructured grids.
!!
!> Sums data from exchange grid to model grid.
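!!
!! <br>Example usage (a sketch; the argument names and grid id are placeholders):
!! @code{.F90}
!! call get_from_xgrid_ug(d, 'LND', x, xmap)
!! @endcode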
!> @ingroup xgrid_mod
interface get_from_xgrid_ug
   module procedure get_side2_from_xgrid_ug
   module procedure get_side1_from_xgrid_ug
end interface

!> @brief Sets sub-grid area and numbering in the given exchange grid.
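!!
!! <br>Example usage (a sketch; f holds the sub-grid partition fractions,
!! e.g. land tile or sea-ice category fractions, on the model grid):
!! @code{.F90}
!! call set_frac_area(f, 'LND', xmap)
!! @endcode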
!> @ingroup xgrid_mod
interface set_frac_area
   module procedure set_frac_area_sg
   module procedure set_frac_area_ug
end interface

!> @brief Returns three numbers which are the global sum of a variable.
!! @details Returns three numbers which are the global sum of a
!! variable (1) on its home model grid, (2) after interpolation to the other
!! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
!! Conservation_check must be called by all PEs to work properly.
!!
!! @param d real(r8_kind) data
!! @param grid_id 3 character grid ID
!! @param[inout] xmap exchange grid
!! @param[out] global sum of a variable on home model grid, after side grid interpolation and after
!! reinterpolation
!!
!! <br>Example usage:
!! @code{.F90}
!! xxx = conservation_check(d, grid_id, xmap, remap_order)
!! @endcode
!> @ingroup xgrid_mod
interface conservation_check
   module procedure conservation_check_side1
   module procedure conservation_check_side2
end interface

!> For an unstructured grid, returns three numbers which are the global sum of a
!! variable (1) on its home model grid, (2) after interpolation to the other
!! side grid(s), and (3) after re_interpolation back onto its home side grid(s).
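!!
!! <br>Example usage (a sketch; the result holds the three global sums):
!! @code{.F90}
!! xxx = conservation_check_ug(d, grid_id, xmap)
!! @endcode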
!> @ingroup xgrid_mod
interface conservation_check_ug
   module procedure conservation_check_ug_side1
   module procedure conservation_check_ug_side2
end interface


!> Private type for cell indices and data in the exchange grid
!> @ingroup xgrid_mod
type xcell_type
   integer :: i1 !< indices of cell in model arrays on both sides
   integer :: j1 !< indices of cell in model arrays on both sides
   integer :: i2 !< indices of cell in model arrays on both sides
   integer :: j2 !< indices of cell in model arrays on both sides
   integer :: l1, l2 !< cell indices for unstructured domains on sides 1 and 2
   integer :: recv_pos !< position in the receive buffer.
   integer :: pe   !< other side pe that has this cell
   integer :: tile !< tile index of side 1 mosaic.
   real(r8_kind) :: area !< geographic area of exchange cell
!  real(r8_kind) :: area1_ratio !(= x_area/grid1_area), will be added in the future to improve efficiency
!  real(r8_kind) :: area2_ratio !(= x_area/grid2_area), will be added in the future to improve efficiency
   real(r8_kind) :: di !< Weight for the gradient of flux
   real(r8_kind) :: dj !< Weight for the gradient of flux
   real(r8_kind) :: scale !< scale factor applied to the exchange cell area
end type xcell_type

!> Type to hold pointers for grid boxes
!> @ingroup xgrid_mod
type grid_box_type
   real(r8_kind), dimension(:,:),   pointer :: dx     => null()
   real(r8_kind), dimension(:,:),   pointer :: dy     => null()
   real(r8_kind), dimension(:,:),   pointer :: area   => null()
   real(r8_kind), dimension(:),     pointer :: edge_w => null()
   real(r8_kind), dimension(:),     pointer :: edge_e => null()
   real(r8_kind), dimension(:),     pointer :: edge_s => null()
   real(r8_kind), dimension(:),     pointer :: edge_n => null()
   real(r8_kind), dimension(:,:,:), pointer :: en1    => null()
   real(r8_kind), dimension(:,:,:), pointer :: en2    => null()
   real(r8_kind), dimension(:,:,:), pointer :: vlon   => null()
   real(r8_kind), dimension(:,:,:), pointer :: vlat   => null()
end type grid_box_type

!> Private type to hold all data needed from given grid for an exchange grid
!> @ingroup xgrid_mod
type grid_type
   character(len=3) :: id !< grid identifier
   integer :: npes        !< number of processor on this grid.
   logical :: on_this_pe  !< indicate the domain is defined on this pe
   integer :: root_pe     !< indicate the root pe of the domain
   integer, pointer, dimension(:) :: pelist !< pelist of the domain
   integer :: ntile !< number of tiles in mosaic
   integer :: ni    !< max of global size of all the tiles
   integer :: nj    !< max of global size of all the tiles
   integer, pointer, dimension(:) :: tile =>null() !< tile id ( pe index )
   integer, pointer, dimension(:) :: is =>null() !< domain - i-range (pe index)
   integer, pointer, dimension(:) :: ie =>null() !< domain - i-range (pe index)
   integer, pointer, dimension(:) :: js =>null() !< domain - j-range (pe index)
   integer, pointer, dimension(:) :: je =>null() !< domain - j-range (pe index)
   integer, pointer :: is_me =>null() !< my domain - i-range
   integer, pointer :: ie_me =>null() !< my domain - i-range
   integer, pointer :: js_me =>null() !< my domain - j-range
   integer, pointer :: je_me =>null() !< my domain - j-range
   integer :: isd_me !< my data domain - i-range
   integer :: ied_me !< my data domain - i-range
   integer :: jsd_me !< my data domain - j-range
   integer :: jed_me !< my data domain - j-range
   integer :: nxd_me !< data domain size
   integer :: nyd_me !< data domain size
   integer :: nxc_me !< compute domain size
   integer :: nyc_me !< compute domain size
   integer, pointer :: tile_me !< my tile id
   integer :: im !< global domain range
   integer :: jm !< global domain range
   integer :: km !< global domain range
   real(r8_kind), pointer, dimension(:) :: lon =>null() !< center of global grids
   real(r8_kind), pointer, dimension(:) :: lat =>null() !< center of global grids
   real(r8_kind), pointer, dimension(:,:) :: geolon=>null() !< geographical grid center
   real(r8_kind), pointer, dimension(:,:) :: geolat=>null() !< geographical grid center
   real(r8_kind), pointer, dimension(:,:,:) :: frac_area =>null() !< partition fractions
   real(r8_kind), pointer, dimension(:,:) :: area =>null() !< cell area
   real(r8_kind), pointer, dimension(:,:) :: area_inv =>null() !< 1 / area for normalization
   integer :: first !< xgrid index range
   integer :: last  !< xgrid index range
   integer :: first_get !< xgrid index range for get_2_from_xgrid
   integer :: last_get  !< xgrid index range for get_2_from_xgrid
   integer :: size !< # xcell patterns
   type(xcell_type), pointer :: x(:) =>null() !< xcell patterns
   integer :: size_repro !< # side 1 patterns for repro
   type(xcell_type), pointer :: x_repro(:) =>null() !< side 1 patterns for repro
   type(domain2d) :: domain !< used for conservation checks
   type(domain2d) :: domain_with_halo !< used for second order remapping
   logical :: is_latlon !< indicate if the grid is lat-lon grid or not.
   type(grid_box_type) :: box !< used for second order remapping.
   !--- The following is for land unstruct domain
   logical :: is_ug  !< indicate if the grid is on an unstructured domain
   integer :: nxl_me !< size of my unstructured compute domain
   integer, pointer :: ls_me =>null() !< unstruct domain
   integer, pointer :: le_me =>null() !< unstruct domain
   integer, pointer, dimension(:) :: ls =>null(), le =>null()
   integer, pointer :: gs_me =>null(), ge_me =>null()
   integer, pointer, dimension(:) :: gs =>null(), ge =>null()
   integer, pointer, dimension(:) :: l_index =>null()
   type(domainug) :: ug_domain

end type grid_type

!> Private type for exchange grid data
!> @ingroup xgrid_mod
type x1_type
   integer :: i, j
   real(r8_kind) :: area !< (= geographic area * frac_area)
!  real(r8_kind) :: area_ratio !(= x1_area/grid1_area) ! will be added in the future to improve efficiency
   real(r8_kind) :: di !< weight for the gradient of flux
   real(r8_kind) :: dj !< weight for the gradient of flux
   integer :: tile !< tile index of side 1 mosaic.
   integer :: pos
end type x1_type

!> Private type for exchange grid data
!> @ingroup xgrid_mod
type x2_type
   integer :: i, j, l, k, pos
   real(r8_kind) :: area !< geographic area of exchange cell
!  real(r8_kind) :: area_ratio !(=x2_area/grid2_area ) ! will be added in the future to improve efficiency
end type x2_type

!> Private type for overlap exchange grid data
!> @ingroup xgrid_mod
type overlap_type
   integer :: count
   integer :: pe
   integer :: buffer_pos
   integer, allocatable :: i(:)
   integer, allocatable :: j(:)
   integer, allocatable :: g(:)
   integer, allocatable :: xLoc(:)
   integer, allocatable :: tile(:)
   real(r8_kind), allocatable :: di(:)
   real(r8_kind), allocatable :: dj(:)
end type overlap_type

!> Private type used for exchange grid communication
!> @ingroup xgrid_mod
type comm_type
   integer :: nsend, nrecv
   integer :: sendsize, recvsize
   integer, pointer, dimension(:) :: unpack_ind=>null()
   type(overlap_type), pointer, dimension(:) :: send=>null()
   type(overlap_type), pointer, dimension(:) :: recv=>null()
end type comm_type

!> @brief Type for an exchange grid, holds pointers to included grids and any necessary data.
!> @ingroup xgrid_mod
type xmap_type
   private
   integer :: size      !< # of exchange grid cells with area > 0 on this pe
   integer :: size_put1 !< # of exchange grid cells for put_1_to_xgrid
   integer :: size_get2 !< # of exchange grid cells for get_2_from_xgrid
   integer :: me, npes, root_pe
   logical, pointer, dimension(:) :: your1my2 =>null() !< true if side 1 domain on
                                                       !! indexed pe overlaps side 2
                                                       !! domain on this pe
   logical, pointer, dimension(:) :: your2my1 =>null() !< true if a side 2 domain on
                                                       !! indexed pe overlaps side 1
                                                       !! domain on this pe
   integer, pointer, dimension(:) :: your2my1_size=>null() !< number of exchange grid of
                                                           !! a side 2 domain on
                                                           !! indexed pe overlaps side 1
                                                           !! domain on this pe

   type (grid_type), pointer, dimension(:) :: grids =>null() !< 1st grid is side 1;
                                                             !! rest on side 2
   !
   ! Description of the individual exchange grid cells (index is cell #)
   !
   type(x1_type), pointer, dimension(:) :: x1     =>null() !< side 1 info
   type(x1_type), pointer, dimension(:) :: x1_put =>null() !< side 1 info
   type(x2_type), pointer, dimension(:) :: x2     =>null() !< side 2 info
   type(x2_type), pointer, dimension(:) :: x2_get =>null() !< side 2 info

   integer, pointer, dimension(:) :: send_count_repro =>null()
   integer, pointer, dimension(:) :: recv_count_repro =>null()
   integer :: send_count_repro_tot !< sum(send_count_repro)
   integer :: recv_count_repro_tot !< sum(recv_count_repro)
   integer :: version !< version of xgrids. version=VERSION1 is for the grid_spec file
                      !! and version=VERSION2 is for the mosaic grid.
   integer, pointer, dimension(:) :: ind_get1 =>null() !< indx for side1 get and side2 put.
   integer, pointer, dimension(:) :: ind_put1 =>null() !< indx for side1 put and side 2 get.
   type(comm_type), pointer :: put1 =>null() !< for put_1_to_xgrid
   type(comm_type), pointer :: get1 =>null() !< for get_1_from_xgrid
   type(comm_type), pointer :: get1_repro =>null() !< for get_1_from_xgrid_repro
end type xmap_type

!> @addtogroup stock_constants_mod
!> @{
!-----------------------------------------------------------------------
! Include variable "version" to be written to log file.
#include<file_version.h>

real(r8_kind), parameter :: eps = 1.0e-10_r8_kind
real(r8_kind), parameter :: large_number = 1.e20_r8_kind
logical :: module_is_initialized = .false.
integer :: id_put_1_to_xgrid_order_1 = 0
integer :: id_put_1_to_xgrid_order_2 = 0
integer :: id_get_1_from_xgrid = 0
integer :: id_get_1_from_xgrid_repro = 0
integer :: id_get_2_from_xgrid = 0
integer :: id_put_2_to_xgrid = 0
integer :: id_setup_xmap = 0
integer :: id_load_xgrid1, id_load_xgrid2, id_load_xgrid3
integer :: id_load_xgrid4, id_load_xgrid5
integer :: id_load_xgrid, id_set_comm, id_regen, id_conservation_check


! The following is for nested model
integer :: nnest=0, tile_nest, tile_parent
integer :: is_nest=0, ie_nest=0, js_nest=0, je_nest=0
integer :: is_parent=0, ie_parent=0, js_parent=0, je_parent=0

!> @}
! The following is required to compute stocks of water, heat, ...

!> @ingroup xgrid_mod
interface stock_move
   module procedure stock_move_3d, stock_move_2d
end interface

!> @ingroup xgrid_mod
interface stock_move_ug
   module procedure stock_move_ug_3d
end interface

public stock_move, stock_type, stock_print, get_index_range, stock_integrate_2d
public first_order, second_order, stock_move_ug

!> @ingroup xgrid_mod
interface get_area_elements
   module procedure get_area_elements_fms2_io
end interface
!> @ingroup xgrid_mod
interface get_nest_contact
   module procedure get_nest_contact_fms2_io
end interface

contains

!> @addtogroup xgrid_mod
!> @{

!#######################################################################
!> @return logical in_box
logical function in_box(i, j, is, ie, js, je)
   integer, intent(in) :: i, j, is, ie, js, je

   in_box = (i>=is) .and. (i<=ie) .and. (j>=js) .and. (j<=je)
end function in_box

!#######################################################################

!> @brief Initialize the xgrid_mod.
!! @details Initialization routine for the xgrid module. It reads the xgrid_nml,
!! writes the version information and xgrid_nml to the log file.
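!!
!! <br>Example usage:
!! @code{.F90}
!! integer :: remap_method
!! call xgrid_init(remap_method)
!! @endcode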
subroutine xgrid_init(remap_method)
   integer, intent(out) :: remap_method !< exchange grid interpolation method. It has two possible
                                        !! values: FIRST_ORDER (=1) and SECOND_ORDER (=2).

   integer :: iunit, ierr, io, out_unit

   if (module_is_initialized) return
   module_is_initialized = .true.

   read (input_nml_file, xgrid_nml, iostat=io)
   ierr = check_nml_error( io, 'xgrid_nml' )

!--------- write version number and namelist ------------------
   call write_version_number("XGRID_MOD", version)

   iunit = stdlog( )
   out_unit = stdout()
   if ( mpp_pe() == mpp_root_pe() ) write (iunit,nml=xgrid_nml)

   if (use_mpp_io) then
      ! FATAL error if trying to use mpp_io
      call error_mesg('xgrid_init', &
           'MPP_IO is no longer supported. Please remove use_mpp_io from namelists',&
           fatal)
   endif
!--------- check interp_method has suitable value
!--- when monotonic_exchange is true, interp_method must be second order.

   select case(trim(interp_method))
   case('first_order')
      remap_method = first_order
      if( monotonic_exchange ) call error_mesg('xgrid_mod', &
           'xgrid_nml monotonic_exchange must be .false. when interp_method = first_order', fatal)
      write(out_unit,*)"NOTE from xgrid_mod: use first_order conservative exchange"
   case('second_order')
      if(monotonic_exchange) then
         write(out_unit,*)"NOTE from xgrid_mod: use monotonic second_order conservative exchange"
      else
         write(out_unit,*)"NOTE from xgrid_mod: use second_order conservative exchange"
      endif
      remap_method = second_order
   case default
      call error_mesg('xgrid_mod', ' nml interp_method = ' //trim(interp_method)// &
           ' is not a valid namelist option', fatal)
   end select

   if(xgrid_clocks_on) then
      id_put_1_to_xgrid_order_1 = mpp_clock_id("put_1_to_xgrid_order_1", flags=mpp_clock_sync)
      id_put_1_to_xgrid_order_2 = mpp_clock_id("put_1_to_xgrid_order_2", flags=mpp_clock_sync)
      id_get_1_from_xgrid = mpp_clock_id("get_1_from_xgrid", flags=mpp_clock_sync)
      id_get_1_from_xgrid_repro = mpp_clock_id("get_1_from_xgrid_repro", flags=mpp_clock_sync)
      id_get_2_from_xgrid = mpp_clock_id("get_2_from_xgrid", flags=mpp_clock_sync)
      id_put_2_to_xgrid = mpp_clock_id("put_2_to_xgrid", flags=mpp_clock_sync)
      id_setup_xmap = mpp_clock_id("setup_xmap", flags=mpp_clock_sync)
      id_set_comm = mpp_clock_id("set_comm")
      id_regen = mpp_clock_id("regen")
      id_conservation_check = mpp_clock_id("conservation_check")
      id_load_xgrid = mpp_clock_id("load_xgrid")
      id_load_xgrid1 = mpp_clock_id("load_xgrid1")
      id_load_xgrid2 = mpp_clock_id("load_xgrid2")
      id_load_xgrid3 = mpp_clock_id("load_xgrid3")
      id_load_xgrid4 = mpp_clock_id("load_xgrid4")
      id_load_xgrid5 = mpp_clock_id("load_xgrid5")
   endif

   remapping_method = remap_method

end subroutine xgrid_init

!#######################################################################

subroutine load_xgrid (xmap, grid, grid_file, grid1_id, grid_id, tile1, tile2, use_higher_order)
  type(xmap_type),  intent(inout) :: xmap
  type(grid_type),  intent(inout) :: grid
  character(len=*), intent(in)    :: grid_file
  character(len=3), intent(in)    :: grid1_id, grid_id
  integer,          intent(in)    :: tile1, tile2
  logical,          intent(in)    :: use_higher_order

  integer,       pointer, dimension(:) :: i1=>null(), j1=>null()
  integer,       pointer, dimension(:) :: i2=>null(), j2=>null()
  real(r8_kind), pointer, dimension(:) :: di=>null(), dj=>null()
  real(r8_kind), pointer, dimension(:) :: area =>null()
  integer,       pointer, dimension(:) :: i1_tmp=>null(), j1_tmp=>null()
  integer,       pointer, dimension(:) :: i2_tmp=>null(), j2_tmp=>null()
  real(r8_kind), pointer, dimension(:) :: di_tmp=>null(), dj_tmp=>null()
  real(r8_kind), pointer, dimension(:) :: area_tmp =>null()
  integer,       pointer, dimension(:) :: i1_side1=>null(), j1_side1=>null()
  integer,       pointer, dimension(:) :: i2_side1=>null(), j2_side1=>null()
  real(r8_kind), pointer, dimension(:) :: di_side1=>null(), dj_side1=>null()
  real(r8_kind), pointer, dimension(:) :: area_side1 =>null()

  real(r8_kind), allocatable, dimension(:,:) :: tmp
  real(r8_kind), allocatable, dimension(:)   :: send_buffer, recv_buffer
  type (grid_type), pointer, save :: grid1 =>null()
  integer :: l, ll, ll_repro, p, nxgrid, size_prev
  type(xcell_type), allocatable :: x_local(:)
  integer :: size_repro, out_unit
  logical :: scale_exist = .false.
  logical :: is_distribute = .false.
  real(r8_kind), allocatable, dimension(:) :: scale
  real(r8_kind) :: garea
  integer :: npes, isc, iec, nxgrid_local, pe, nxgrid_local_orig
  integer :: nxgrid1, nxgrid2, nset1, nset2, ndivs, cur_ind
  integer :: pos, nsend, nrecv, l1, l2, n, mypos
  integer :: start(4), nread(4)
  logical :: found
  character(len=128) :: attvalue
  integer, dimension(0:xmap%npes-1) :: pelist
  logical, dimension(0:xmap%npes-1) :: subset_rootpe
  integer, dimension(0:xmap%npes-1) :: nsend1, nsend2, nrecv1, nrecv2
  integer, dimension(0:xmap%npes-1) :: send_cnt, recv_cnt
  integer, dimension(0:xmap%npes-1) :: send_buffer_pos, recv_buffer_pos
  integer, dimension(0:xmap%npes-1) :: ibegin, iend, pebegin, peend
  integer, dimension(2*xmap%npes)   :: ibuf1, ibuf2
  integer, dimension(0:xmap%npes-1) :: pos_x, y2m1_size
  integer, allocatable, dimension(:) :: y2m1_pe
  integer, pointer,     save :: iarray(:), jarray(:)
  integer, allocatable, save :: pos_s(:)
  integer, pointer, dimension(:) :: iarray2(:)=>null(), jarray2(:)=>null()
  logical :: last_grid
  integer :: nxgrid1_old
  integer :: lll
  type(fmsnetcdffile_t) :: fileobj

  if(.not. open_file(fileobj, grid_file, 'read' )) then
     call error_mesg('xgrid_mod(load_xgrid)', 'Error in opening file '//trim(grid_file), fatal)
  endif

  scale_exist = .false.
  grid1 => xmap%grids(1)
  out_unit = stdout()
  npes  = xmap%npes
  pe    = mpp_pe()
  mypos = mpp_pe()-mpp_root_pe()

  call mpp_get_current_pelist(pelist)
  !--- make sure npes = pelist(npes-1) - pelist(0) + 1
  if( npes .NE. pelist(npes-1) - pelist(0) + 1 ) then
     print*, "npes =", npes, ", pelist(npes-1)=", pelist(npes-1), ", pelist(0)=", pelist(0)
     call error_mesg('xgrid_mod', 'npes .NE. pelist(npes-1) - pelist(0) + 1', fatal)
  endif

  select case(xmap%version)
  case(version1)
     nxgrid = 0
     if (dimension_exists(fileobj, 'i_'//lowercase(grid1_id)//'X'//lowercase(grid_id))) then
        call get_dimension_size(fileobj, 'i_'//lowercase(grid1_id)//'X'//lowercase(grid_id), nxgrid)
     endif
     if(nxgrid .LE. 0) return
  case(version2)
     !--- nxgrid is the size of the exchange grid between the supergrids.
     nxgrid = get_mosaic_xgrid_size(fileobj)
     if(nxgrid .LE. 0) return
  end select

  !--- define a domain to read exchange grid.
  if(nxgrid > npes) then
     ndivs = npes
     if(nsubset >0 .AND. nsubset < npes) ndivs = nsubset
     call mpp_compute_extent( 1, nxgrid, ndivs, ibegin, iend)
     if(npes == ndivs) then
        p = mpp_pe()-mpp_root_pe()
        isc = ibegin(p)
        iec = iend(p)
        subset_rootpe(:) = .true.
     else
        isc = 0; iec = -1
        call mpp_compute_extent(pelist(0), pelist(npes-1), ndivs, pebegin, peend)
        do n = 0, ndivs-1
           if(pe == pebegin(n)) then
              isc = ibegin(n)
              iec = iend(n)
              exit
           endif
        enddo
        cur_ind = 0
        subset_rootpe(:) = .false.

        do n = 0, npes-1
           if(pelist(n) == pebegin(cur_ind)) then
              subset_rootpe(n) = .true.
              cur_ind = cur_ind+1
              if(cur_ind == ndivs) exit
           endif
        enddo
     endif
     is_distribute = .true.
  else
     is_distribute = .false.
     isc = 1; iec = nxgrid
  endif

  nset1 = 5
  nset2 = 5
  if(use_higher_order) then
     nset1 = nset1 + 2
     nset2 = nset2 + 2
  end if
  if(scale_exist) nset2 = nset1 + 1

  call mpp_clock_begin(id_load_xgrid1)
  if(iec .GE. isc) then
     nxgrid_local = iec - isc + 1
     allocate(i1_tmp(isc:iec), j1_tmp(isc:iec), i2_tmp(isc:iec), j2_tmp(isc:iec), area_tmp(isc:iec) )
     if(use_higher_order) allocate(di_tmp(isc:iec), dj_tmp(isc:iec))

     start = 1; nread = 1

     select case(xmap%version)
     case(version1)
        start(1) = isc; nread(1) = nxgrid_local
        allocate(tmp(nxgrid_local,1))
        call read_data(fileobj, 'I_'//grid1_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
        i1_tmp = int(tmp(:,1))
        call read_data(fileobj, 'J_'//grid1_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
        j1_tmp = int(tmp(:,1))
        call read_data(fileobj, 'I_'//grid_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
        i2_tmp = int(tmp(:,1))
        call read_data(fileobj, 'J_'//grid_id//'_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
        j2_tmp = int(tmp(:,1))
        call read_data(fileobj, 'AREA_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
        area_tmp = tmp(:,1)
        if(use_higher_order) then
           call read_data(fileobj, 'DI_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
           di_tmp = tmp(:,1)
           call read_data(fileobj, 'DJ_'//grid1_id//'x'//grid_id, tmp, corner=start, edge_lengths=nread)
           dj_tmp = tmp(:,1)
        end if
        deallocate(tmp)
     case(version2)
        nread(1) = 2; start(2) = isc; nread(2) = nxgrid_local
        allocate(tmp(2, isc:iec))
        call read_data(fileobj, "tile1_cell", tmp, corner=start, edge_lengths=nread)
        i1_tmp(isc:iec) = int(tmp(1, isc:iec))
        j1_tmp(isc:iec) = int(tmp(2, isc:iec))
        call read_data(fileobj, "tile2_cell", tmp, corner=start, edge_lengths=nread)
        i2_tmp(isc:iec) = int(tmp(1, isc:iec))
        j2_tmp(isc:iec) = int(tmp(2, isc:iec))
        if(use_higher_order) then
           call read_data(fileobj, "tile1_distance", tmp, corner=start, edge_lengths=nread)
           di_tmp(isc:iec) = tmp(1, isc:iec)
           dj_tmp(isc:iec) = tmp(2, isc:iec)
        end if
        start = 1; nread = 1
        start(1) = isc; nread(1) = nxgrid_local
        deallocate(tmp)
        allocate(tmp(isc:iec,1) )

        call read_data(fileobj, "xgrid_area", tmp(:,1:1), corner=start, edge_lengths=nread)
        ! check the units of "xgrid_area"
        call get_variable_units(fileobj, "xgrid_area", attvalue)

        if( trim(attvalue) == 'm2' ) then
           garea = 4.0_r8_kind * pi * radius * radius
           area_tmp = tmp(:,1)/garea
        else if( trim(attvalue) == 'none' ) then
           area_tmp = tmp(:,1)
        else
           call error_mesg('xgrid_mod', 'In file '//trim(grid_file)//', xgrid_area units = '// &
                trim(attvalue)//' should be "m2" or "none"', fatal)
        endif

        !--- If the field "scale" exists, read it. Normally this field
        !--- only exists in the land X ocean exchange grid file.
        if(grid1_id == 'LND' .AND. grid_id == 'OCN') then
           if(variable_exists(fileobj, "scale")) then
              allocate(scale(isc:iec))
              write(out_unit, *)"NOTE from load_xgrid(xgrid_mod): field 'scale' exists in the file "// &
                   & trim(grid_file)//", this field will be read and the exchange grid cell area will be"// &
                   & " multiplied by scale"
              call read_data(fileobj, "scale", tmp, corner=start, edge_lengths=nread)
              scale = tmp(:,1)
              scale_exist = .true.
           endif
        endif
        deallocate(tmp)
     end select

     !---z1l: The following change is for the situation where some processors are masked out.
     !---Loop through all the PEs to see if side 1 and side 2 of each exchange grid cell are on some processor.
     nxgrid_local_orig = nxgrid_local
     allocate(i1(isc:iec), j1(isc:iec), i2(isc:iec), j2(isc:iec), area(isc:iec) )
     if(use_higher_order) allocate(di(isc:iec), dj(isc:iec))
     pos = isc-1
     do l = isc, iec
        found = .false.
        !--- first check if the exchange grid is on one of side 1 processor
        do p = 0, npes - 1
           if(grid1%tile(p) == tile1) then
              if(in_box_nbr(i1_tmp(l), j1_tmp(l), grid1, p)) then
                 found = .true.
                 exit
              endif
           endif
        enddo
        !--- Then check if the exchange grid is on one of side 2 processor
        if( found ) then
           do p = 0, npes - 1
              if(grid%tile(p) == tile2) then
                 if (in_box_nbr(i2_tmp(l), j2_tmp(l), grid, p)) then
                    pos = pos+1
                    i1(pos) = i1_tmp(l)
                    j1(pos) = j1_tmp(l)
                    i2(pos) = i2_tmp(l)
                    j2(pos) = j2_tmp(l)
                    area(pos) = area_tmp(l)
                    if(use_higher_order) then
                       di(pos) = di_tmp(l)
                       dj(pos) = dj_tmp(l)
                    endif
                    exit
                 endif
              endif
           enddo
        endif
     enddo

     deallocate(i1_tmp, i2_tmp, j1_tmp, j2_tmp, area_tmp)
     if(use_higher_order) deallocate( di_tmp, dj_tmp)
     iec = pos
     if(iec .GE. isc) then
        nxgrid_local = iec - isc + 1
     else
        nxgrid_local = 0
     endif
  else
     nxgrid_local = 0
     nxgrid_local_orig = 0
  endif

  call close_file(fileobj)

  call mpp_clock_end(id_load_xgrid1)

  if(is_distribute) then
     !--- Since the xgrid is distributed according to the side 2 grid, send each xgrid cell to
     !--- its own side 2 PE. Each cell must also be sent to its own side 1 PE so that answers
     !--- can be reproduced across different processor counts.
     !--- First find the number of points to send to the other PEs and fill the send buffer.
     nsend1(:) = 0; nrecv1(:) = 0
     nsend2(:) = 0; nrecv2(:) = 0
     ibuf1(:)= 0; ibuf2(:)= 0

     call mpp_clock_begin(id_load_xgrid2)
     if(nxgrid_local>0) then
        allocate( send_buffer(nxgrid_local * (nset1+nset2)) )
        pos = 0
        do p = 0, npes - 1
           send_buffer_pos(p) = pos
           if(grid%tile(p) == tile2) then
              do l = isc, iec
                 if(in_box_nbr(i2(l), j2(l), grid, p) ) then
                    nsend2(p) = nsend2(p) + 1
                    send_buffer(pos+1) = real(i1(l), r8_kind)
                    send_buffer(pos+2) = real(j1(l), r8_kind)
                    send_buffer(pos+3) = real(i2(l), r8_kind)
                    send_buffer(pos+4) = real(j2(l), r8_kind)
                    send_buffer(pos+5) = area(l)
                    if(use_higher_order) then
                       send_buffer(pos+6) = di(l)
                       send_buffer(pos+7) = dj(l)
                    endif
                    if(scale_exist) send_buffer(pos+nset2) = scale(l)
                    pos = pos + nset2
                 endif
              enddo
           endif
           if(grid1%tile(p) == tile1) then
              do l = isc, iec
                 if(in_box_nbr(i1(l), j1(l), grid1, p)) then
                    nsend1(p) = nsend1(p) + 1
                    send_buffer(pos+1) = real(i1(l), r8_kind)
                    send_buffer(pos+2) = real(j1(l), r8_kind)
                    send_buffer(pos+3) = real(i2(l), r8_kind)
                    send_buffer(pos+4) = real(j2(l), r8_kind)
                    send_buffer(pos+5) = area(l)
                    if(use_higher_order) then
                       send_buffer(pos+6) = di(l)
                       send_buffer(pos+7) = dj(l)
                    endif
                    pos = pos + nset1
                 endif
              enddo
           endif
        enddo
     endif
     call mpp_clock_end(id_load_xgrid2)

     !--- send the size of the data on side 1 to be sent over.
     call mpp_clock_begin(id_load_xgrid3)

     if (do_alltoall) then
        do p = 0, npes-1
           ibuf1(2*p+1) = nsend1(p)
           ibuf1(2*p+2) = nsend2(p)
        enddo
        call mpp_alltoall(ibuf1, 2, ibuf2, 2)
     else
        do n = 0, npes-1
           p = mod(mypos+npes-n, npes)
           if(.not. subset_rootpe(p)) cycle
           call mpp_recv( ibuf2(2*p+1), glen=2, from_pe=pelist(p), block=.false., tag=comm_tag_1)
        enddo

        if(nxgrid_local_orig>0) then
           do n = 0, npes-1
              p = mod(mypos+n, npes)
              ibuf1(2*p+1) = nsend1(p)
              ibuf1(2*p+2) = nsend2(p)
              call mpp_send( ibuf1(2*p+1), plen=2, to_pe=pelist(p), tag=comm_tag_1)
           enddo
        endif
        call mpp_sync_self(check=event_recv)
     endif
     do p = 0, npes-1
        nrecv1(p) = ibuf2(2*p+1)
        nrecv2(p) = ibuf2(2*p+2)
     enddo

     if(.not. do_alltoall) call mpp_sync_self()
     call mpp_clock_end(id_load_xgrid3)
     call mpp_clock_begin(id_load_xgrid4)
     pos = 0
     do p = 0, npes - 1
        recv_buffer_pos(p) = pos
        pos = pos + nrecv1(p) * nset1 + nrecv2(p) * nset2
     end do

     !--- now get the data
     nxgrid1 = sum(nrecv1)
     nxgrid2 = sum(nrecv2)
     if(nxgrid1>0 .OR. nxgrid2>0) allocate(recv_buffer(nxgrid1*nset1+nxgrid2*nset2))

     if (do_alltoallv) then
        ! Construct the send and receive counters
        send_cnt(:) = nset1 * nsend1(:) + nset2 * nsend2(:)
        recv_cnt(:) = nset1 * nrecv1(:) + nset2 * nrecv2(:)

        call mpp_alltoall(send_buffer, send_cnt, send_buffer_pos, &
                          recv_buffer, recv_cnt, recv_buffer_pos)
     else
        do n = 0, npes-1
           p = mod(mypos+npes-n, npes)
           nrecv = nrecv1(p)*nset1+nrecv2(p)*nset2
           if(nrecv==0) cycle
           pos = recv_buffer_pos(p)
           call mpp_recv(recv_buffer(pos+1), glen=nrecv, from_pe=pelist(p), &
                         block=.false., tag=comm_tag_2)
        end do

        do n = 0, npes-1
           p = mod(mypos+n, npes)
           nsend = nsend1(p)*nset1 + nsend2(p)*nset2
           if(nsend==0) cycle
           pos = send_buffer_pos(p)
           call mpp_send(send_buffer(pos+1), plen=nsend, to_pe=pelist(p), &
                         tag=comm_tag_2)
        end do
        call mpp_sync_self(check=event_recv)
     end if
     call mpp_clock_end(id_load_xgrid4)
     !--- unpack buffer.
     if( nxgrid_local>0) then
        deallocate(i1,j1,i2,j2,area)
     endif

     allocate(i1(nxgrid2), j1(nxgrid2))
     allocate(i2(nxgrid2), j2(nxgrid2))
     allocate(area(nxgrid2))
     allocate(i1_side1(nxgrid1), j1_side1(nxgrid1))
     allocate(i2_side1(nxgrid1), j2_side1(nxgrid1))
     allocate(area_side1(nxgrid1))
     if(use_higher_order) then
        if(nxgrid_local>0) deallocate(di,dj)
        allocate(di(nxgrid2), dj(nxgrid2))
        allocate(di_side1(nxgrid1), dj_side1(nxgrid1))
     endif
     if(scale_exist) then
        if(nxgrid_local>0)deallocate(scale)
        allocate(scale(nxgrid2))
     endif
     pos = 0
     l1 = 0; l2 = 0
     do p = 0,npes-1
        do n = 1, nrecv2(p)
           l2 = l2+1
           i1(l2) = int(recv_buffer(pos+1))
           j1(l2) = int(recv_buffer(pos+2))
           i2(l2) = int(recv_buffer(pos+3))
           j2(l2) = int(recv_buffer(pos+4))
           area(l2) = recv_buffer(pos+5)
           if(use_higher_order) then
              di(l2) = recv_buffer(pos+6)
              dj(l2) = recv_buffer(pos+7)
           endif
           if(scale_exist)scale(l2) = recv_buffer(pos+nset2)
           pos = pos + nset2
        enddo
        do n = 1, nrecv1(p)
           l1 = l1+1
           i1_side1(l1) = int(recv_buffer(pos+1))
           j1_side1(l1) = int(recv_buffer(pos+2))
           i2_side1(l1) = int(recv_buffer(pos+3))
           j2_side1(l1) = int(recv_buffer(pos+4))
           area_side1(l1) = recv_buffer(pos+5)
           if(use_higher_order) then
              di_side1(l1) = recv_buffer(pos+6)
              dj_side1(l1) = recv_buffer(pos+7)
           endif
           pos = pos + nset1
        enddo
     enddo
     call mpp_sync_self()
     if(allocated(send_buffer)) deallocate(send_buffer)
     if(allocated(recv_buffer)) deallocate(recv_buffer)

  else
     nxgrid1 = nxgrid
     nxgrid2 = nxgrid
     i1_side1 => i1; j1_side1 => j1
     i2_side1 => i2; j2_side1 => j2
     area_side1 => area
     if(use_higher_order) then
        di_side1 => di
        dj_side1 => dj
     endif
  endif

  call mpp_clock_begin(id_load_xgrid5)


  size_prev = grid%size

  if(grid%tile_me == tile2) then
     do l=1,nxgrid2
        if (in_box_me(i2(l), j2(l), grid) ) then
           grid%size = grid%size + 1
           ! exclude the area overlapped with parent grid
           if( grid1_id .NE. "ATM" .OR. tile1 .NE. tile_parent .OR. &
               .NOT. in_box(i1(l), j1(l), is_parent, ie_parent, js_parent, je_parent) ) then
              if(grid%is_ug) then
                 lll = grid%l_index((j2(l)-1)*grid%im+i2(l))
                 grid%area(lll,1) = grid%area(lll,1)+area(l)
              else
                 grid%area(i2(l),j2(l)) = grid%area(i2(l),j2(l))+area(l)
              endif
           endif
           do p=0,xmap%npes-1
              if(grid1%tile(p) == tile1) then
                 if (in_box_nbr(i1(l), j1(l), grid1, p)) then
                    xmap%your1my2(p) = .true.
                 end if
              end if
           end do
        end if
     end do
  end if

  if(grid%size > size_prev) then
     if(size_prev > 0) then ! need to extend data
        allocate(x_local(size_prev))
        x_local = grid%x
        if(ASSOCIATED(grid%x)) deallocate(grid%x)
        allocate( grid%x( grid%size ) )
        grid%x(1:size_prev) = x_local
        deallocate(x_local)
     else
        if(ASSOCIATED(grid%x)) deallocate(grid%x) !< Check if allocated
        allocate( grid%x( grid%size ) )
        grid%x%di = 0.0_r8_kind; grid%x%dj = 0.0_r8_kind
     end if
  end if

  ll = size_prev
  if( grid%tile_me == tile2 ) then ! me is tile2
     do l=1,nxgrid2
        if (in_box_me(i2(l), j2(l), grid)) then
           ! insert into this grid's cell pattern list and add area to side 2 area
           ll = ll + 1
           grid%x(ll)%i1 = i1(l); grid%x(ll)%i2 = i2(l)
           grid%x(ll)%j1 = j1(l); grid%x(ll)%j2 = j2(l)
           if(grid%is_ug) then
              grid%x(ll)%l2 = grid%l_index((j2(l)-1)*grid%im + i2(l))
           endif
!          if(grid1%is_ug) then
!             grid1%x(ll)%l1 = grid1%l_index((j1(l)-1)*grid1%im + i1(l))
!          endif
           grid%x(ll)%tile = tile1
           grid%x(ll)%area = area(l)
           if(scale_exist) then
              grid%x(ll)%scale = scale(l)
           else
              grid%x(ll)%scale = 1.0_r8_kind
           endif
           if(use_higher_order) then
              grid%x(ll)%di = di(l)
              grid%x(ll)%dj = dj(l)
           end if

           if (make_exchange_reproduce) then
              do p=0,xmap%npes-1
                 if(grid1%tile(p) == tile1) then
                    if (in_box_nbr(i1(l), j1(l), grid1, p)) then
                       grid%x(ll)%pe = p + xmap%root_pe
                    end if
                 end if
              end do
           end if ! make_exchange_reproduce
        end if
     end do
  end if

  if(grid%id == xmap%grids(size(xmap%grids(:)))%id) then
     last_grid = .true.
  else
     last_grid = .false.
  endif

  size_repro = 0
  if(grid1%tile_me == tile1) then
     if(associated(iarray)) then
        nxgrid1_old = size(iarray(:))
     else
        nxgrid1_old = 0
     endif

     allocate(y2m1_pe(nxgrid1))
     if(.not. last_grid ) allocate(pos_s(0:xmap%npes-1))
     y2m1_pe = -1
     if(nxgrid1_old > 0) then
        do p=0,xmap%npes-1
           y2m1_size(p) = xmap%your2my1_size(p)
        enddo
     else
        y2m1_size = 0
     endif

     do l=1,nxgrid1
        if (in_box_me(i1_side1(l), j1_side1(l), grid1) ) then
           if(grid1%is_ug) then
              lll = grid1%l_index((j1_side1(l)-1)*grid1%im+i1_side1(l))
              grid1%area(lll,1) = grid1%area(lll,1) + area_side1(l)
           else
              grid1%area(i1_side1(l),j1_side1(l)) = grid1%area(i1_side1(l),j1_side1(l))+area_side1(l)
           endif
           do p=0,xmap%npes-1
              if (grid%tile(p) == tile2) then
                 if (in_box_nbr(i2_side1(l), j2_side1(l), grid, p)) then
                    xmap%your2my1(p) = .true.
                    y2m1_pe(l) = p
                    y2m1_size(p) = y2m1_size(p) + 1
                 endif
              endif
           enddo
           size_repro = size_repro + 1
        endif
     enddo
     pos_x = 0
     do p = 1, npes-1
        pos_x(p) = pos_x(p-1) + y2m1_size(p-1)
     enddo

     if(.not. last_grid) pos_s(:) = pos_x(:)

     if(nxgrid1_old > 0) then
        y2m1_size(:) = xmap%your2my1_size(:)
        iarray2 => iarray
        jarray2 => jarray
        allocate(iarray(nxgrid1+nxgrid1_old), jarray(nxgrid1+nxgrid1_old))
        ! copy the i-j index
        do p=0,xmap%npes-1
           do n = 1, xmap%your2my1_size(p)
              iarray(pos_x(p)+n) = iarray2(pos_s(p)+n)
              jarray(pos_x(p)+n) = jarray2(pos_s(p)+n)
           enddo
        enddo
        deallocate(iarray2, jarray2)
     else
        allocate(iarray(nxgrid1), jarray(nxgrid1))
        iarray(:) = 0
        jarray(:) = 0
        y2m1_size(:) = 0
     endif

     do l=1,nxgrid1
        p = y2m1_pe(l)
        if(p<0) cycle
        found = .false.
        if(y2m1_size(p) > 0) then
           pos = pos_x(p)+y2m1_size(p)
           if( i1_side1(l) == iarray(pos) .AND. j1_side1(l) == jarray(pos) ) then
              found = .true.
           else
              !---may need to replace with a fast search algorithm
              do n = 1, y2m1_size(p)
                 pos = pos_x(p)+n
                 if(i1_side1(l) == iarray(pos) .AND. j1_side1(l) == jarray(pos)) then
                    found = .true.
                    exit
                 endif
              enddo
           endif
        endif
        if( (.NOT. found) .OR. monotonic_exchange ) then
           y2m1_size(p) = y2m1_size(p)+1
           pos = pos_x(p)+y2m1_size(p)
           iarray(pos) = i1_side1(l)
           jarray(pos) = j1_side1(l)
        endif
     end do
     xmap%your2my1_size(:) = y2m1_size(:)
     deallocate(y2m1_pe)
     if(last_grid) then
        deallocate(iarray, jarray)
        if(allocated(pos_s)) deallocate(pos_s)
     end if
  end if

  if (grid1%tile_me == tile1 .and. size_repro > 0) then
     ll_repro = grid%size_repro
     grid%size_repro = ll_repro + size_repro
     if(ll_repro > 0) then ! extend data
        allocate(x_local(ll_repro))
        x_local = grid%x_repro
        if(ASSOCIATED(grid%x_repro)) deallocate(grid%x_repro)
        allocate( grid%x_repro(grid%size_repro ) )
        grid%x_repro(1:ll_repro) = x_local
        deallocate(x_local)
     else
        if(ASSOCIATED(grid%x_repro)) deallocate(grid%x_repro) !< Check if allocated
        allocate( grid%x_repro( grid%size_repro ) )
        grid%x_repro%di = 0.0_r8_kind; grid%x_repro%dj = 0.0_r8_kind
     end if
     do l=1,nxgrid1
        if (in_box_me(i1_side1(l),j1_side1(l), grid1) ) then
           ll_repro = ll_repro + 1
           grid%x_repro(ll_repro)%i1 = i1_side1(l); grid%x_repro(ll_repro)%i2 = i2_side1(l)
           grid%x_repro(ll_repro)%j1 = j1_side1(l); grid%x_repro(ll_repro)%j2 = j2_side1(l)
           if(grid1%is_ug) then
              grid%x_repro(ll_repro)%l1 = grid1%l_index((j1_side1(l)-1)*grid1%im+i1_side1(l))
           endif
           if(grid%is_ug) then
!             grid%x_repro(ll_repro)%l2 = grid%l_index((j2_side1(l)-1)*grid%im+i2_side1(l))
           endif
           grid%x_repro(ll_repro)%tile = tile1
           grid%x_repro(ll_repro)%area = area_side1(l)
           if(use_higher_order) then
              grid%x_repro(ll_repro)%di = di_side1(l)
              grid%x_repro(ll_repro)%dj = dj_side1(l)
           end if

           do p=0,xmap%npes-1
              if(grid%tile(p) == tile2) then
                 if (in_box_nbr(i2_side1(l), j2_side1(l), grid, p)) then
                    grid%x_repro(ll_repro)%pe = p + xmap%root_pe
                 end if
              end if
           end do
        end if ! in_box_me
     end do
  end if

  deallocate(i1, j1, i2, j2, area)
  if(use_higher_order) deallocate(di, dj)
  if(scale_exist) deallocate(scale)
  if(is_distribute) then
     deallocate(i1_side1, j1_side1, i2_side1, j2_side1, area_side1)
     if(use_higher_order) deallocate(di_side1, dj_side1)
  endif

  i1=>null(); j1=>null(); i2=>null(); j2=>null()
  call mpp_clock_end(id_load_xgrid5)



end subroutine load_xgrid

!#######################################################################

!> @brief Read the grid cell centers from a version 1 grid file.
!! Only the side 1 grids are needed, so only the atm and land
!! grids are read.
subroutine get_grid_version1(grid, grid_id, grid_file)
  type(grid_type),  intent(inout) :: grid
  character(len=3), intent(in)    :: grid_id
  character(len=*), intent(in)    :: grid_file

  real(r8_kind), dimension(grid%im) :: lonb
  real(r8_kind), dimension(grid%jm) :: latb
  real(r8_kind) :: d2r
  integer :: is, ie, js, je
  type(fmsnetcdfdomainfile_t) :: fileobj

  d2r = pi / 180.0_r8_kind

  if(.not. open_file(fileobj, grid_file, 'read', grid%domain) ) then
     call error_mesg('xgrid_mod(get_grid_version1)', 'Error in opening file '//trim(grid_file), fatal)
  endif

  call mpp_get_compute_domain(grid%domain, is, ie, js, je)
  if (associated(grid%lon)) deallocate(grid%lon) !< Check if allocated
  if (associated(grid%lat)) deallocate(grid%lat) !< Check if allocated
  allocate(grid%lon(grid%im), grid%lat(grid%jm))
  if(grid_id == 'ATM') then
     call read_data(fileobj, 'xta', lonb)
     call read_data(fileobj, 'yta', latb)

     if(.not. allocated(area_atm_model)) then
        allocate(area_atm_model(is:ie, js:je))
        call get_area_elements(fileobj, 'AREA_ATM_MODEL', area_atm_model)
     endif
     if(.not. allocated(area_atm_sphere)) then
        allocate(area_atm_sphere(is:ie, js:je))
        call get_area_elements(fileobj, 'AREA_ATM', area_atm_sphere)
     endif
  else if(grid_id == 'LND') then
     call read_data(fileobj, 'xtl', lonb)
     call read_data(fileobj, 'ytl', latb)
     if(.not. allocated(area_lnd_model)) then
        allocate(area_lnd_model(is:ie, js:je))
        call get_area_elements(fileobj, 'AREA_LND_MODEL', area_lnd_model)
     endif
     if(.not. allocated(area_lnd_sphere)) then
        allocate(area_lnd_sphere(is:ie, js:je))
        call get_area_elements(fileobj, 'AREA_LND', area_lnd_sphere)
     endif
  else if(grid_id == 'OCN' ) then
     if(.not. allocated(area_ocn_sphere)) then
        allocate(area_ocn_sphere(is:ie, js:je))
        call get_area_elements(fileobj, 'AREA_OCN', area_ocn_sphere)
     endif
  endif
  !--- grid centers are only needed on side 1 grids (used for second order remapping)
  if(grid_id == 'LND' .or. grid_id == 'ATM') then
     grid%lon = lonb * d2r
     grid%lat = latb * d2r
  endif
  grid%is_latlon = .true.

  call close_file(fileobj)

  return

end subroutine get_grid_version1

!#######################################################################

!> @brief Read the grid cell centers from a version 2 (mosaic) grid file.
!! Only the side 1 grids are needed, so only the atm and land
!! grids are read.
subroutine get_grid_version2(grid, grid_id, grid_file)
  type(grid_type),  intent(inout) :: grid
  character(len=3), intent(in)    :: grid_id
  character(len=*), intent(in)    :: grid_file

  real(r8_kind), allocatable :: tmpx(:,:), tmpy(:,:)
  real(r8_kind) :: d2r
  integer :: is, ie, js, je, nlon, nlat, i, j
  integer :: start(4), nread(4), isc2, iec2, jsc2, jec2
  type(fmsnetcdffile_t) :: fileobj

  if(.not. open_file(fileobj, grid_file, 'read') ) then
     call error_mesg('xgrid_mod(get_grid_version2)', 'Error in opening file '//trim(grid_file), fatal)
  endif

  d2r = pi / 180.0_r8_kind

  call mpp_get_compute_domain(grid%domain, is, ie, js, je)

  call get_dimension_size(fileobj, "nx", nlon)
  call get_dimension_size(fileobj, "ny", nlat)
  if( mod(nlon,2) .NE. 0) call error_mesg('xgrid_mod', &
       'flux_exchange_mod: atmos supergrid longitude size can not be divided by 2', fatal)
  if( mod(nlat,2) .NE. 0) call error_mesg('xgrid_mod', &
       'flux_exchange_mod: atmos supergrid latitude size can not be divided by 2', fatal)
  nlon = nlon/2
  nlat = nlat/2
  if(nlon .NE. grid%im .OR. nlat .NE. grid%jm) call error_mesg('xgrid_mod', &
       'grid size in tile_file does not match the global grid size', fatal)

  if( grid_id == 'LND' .or. grid_id == 'ATM' .or. grid_id == 'WAV' ) then
     isc2 = 2*grid%is_me-1; iec2 = 2*grid%ie_me+1
     jsc2 = 2*grid%js_me-1; jec2 = 2*grid%je_me+1
     allocate(tmpx(isc2:iec2, jsc2:jec2) )
     allocate(tmpy(isc2:iec2, jsc2:jec2) )
     start = 1; nread = 1
     start(1) = isc2; nread(1) = iec2 - isc2 + 1
     start(2) = jsc2; nread(2) = jec2 - jsc2 + 1
     call read_data(fileobj, 'x', tmpx, corner=start, edge_lengths=nread)
     call read_data(fileobj, 'y', tmpy, corner=start, edge_lengths=nread)
     if(is_lat_lon(tmpx, tmpy) ) then
        deallocate(tmpx, tmpy)
        start = 1; nread = 1
        start(2) = 2; nread(1) = nlon*2+1
        allocate(tmpx(nlon*2+1, 1), tmpy(1, nlat*2+1))
        call read_data(fileobj, "x", tmpx, corner=start, edge_lengths=nread)
        if (associated(grid%lon)) deallocate(grid%lon) !< Check if allocated
        if (associated(grid%lat)) deallocate(grid%lat) !< Check if allocated
        allocate(grid%lon(grid%im), grid%lat(grid%jm))
        do i = 1, grid%im
           grid%lon(i) = tmpx(2*i,1) * d2r
        end do
        start = 1; nread = 1
        start(1) = 2; nread(2) = nlat*2+1
        call read_data(fileobj, "y", tmpy, corner=start, edge_lengths=nread)
        do j = 1, grid%jm
           grid%lat(j) = tmpy(1, 2*j) * d2r
        end do
        grid%is_latlon = .true.
     else
        if (associated(grid%geolon)) deallocate(grid%geolon) !< Check if allocated
        if (associated(grid%geolat)) deallocate(grid%geolat) !< Check if allocated
        allocate(grid%geolon(grid%isd_me:grid%ied_me, grid%jsd_me:grid%jed_me))
        allocate(grid%geolat(grid%isd_me:grid%ied_me, grid%jsd_me:grid%jed_me))
        grid%geolon = 1.0e10_r8_kind
        grid%geolat = 1.0e10_r8_kind
        !--- area_ocn_sphere, area_lnd_sphere, and area_atm_sphere are not defined here.
        do j = grid%js_me,grid%je_me
           do i = grid%is_me,grid%ie_me
              grid%geolon(i, j) = tmpx(i*2,j*2)*d2r
              grid%geolat(i, j) = tmpy(i*2,j*2)*d2r
           end do
        end do
        call mpp_update_domains(grid%geolon, grid%domain)
        call mpp_update_domains(grid%geolat, grid%domain)
        grid%is_latlon = .false.
     end if
     deallocate(tmpx, tmpy)
  end if

  call close_file(fileobj)

  return

end subroutine get_grid_version2
1457 
1458 !#######################################################################
1459 !> @brief Read the area elements from NetCDF file
1460 subroutine get_area_elements_fms2_io(fileobj, name, get_area_data)
1461  type(fmsnetcdfdomainfile_t), intent(in) :: fileobj
1462  character(len=*), intent(in) :: name
1463  real(r8_kind), intent(out) :: get_area_data(:,:)
1464 
1465  if(variable_exists(fileobj, name)) then
1466  call read_data(fileobj, name, get_area_data)
1467  else
1468  call error_mesg('xgrid_mod', 'no field named '//trim(name)//' in grid file '//trim(fileobj%path)// &
1469  '. Will set data to negative values...', note)
1470  ! area elements not present in grid_spec file, set to negative values
1471  get_area_data = -1.0_r8_kind
1472  endif
1473 
1474 end subroutine get_area_elements_fms2_io
1475 
1476 !#######################################################################
1477 
1478 !> @brief Read Ocean area element data from netCDF file.
1479 !! @details If present in the NetCDF file, this routine reads the
1480 !! AREA_OCN_MODEL field and loads the data into the module-global array
1481 !! area_ocn_model. If the field is absent, area_ocn_model is left
1482 !! unallocated. Must be called by all PEs.
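!! An illustrative call (a minimal sketch; the domain and path are assumptions):
!!
!!   call get_ocean_model_area_elements(ocean_domain, 'INPUT/grid_spec.nc')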
1483 subroutine get_ocean_model_area_elements(domain, grid_file)
1484 
1485  type(domain2d), intent(in) :: domain
1486  character(len=*), intent(in) :: grid_file
1487  integer :: is, ie, js, je
1488  type(fmsnetcdffile_t) :: fileobj
1489 
1490  if(allocated(area_ocn_model)) return
1491 
1492  call mpp_get_compute_domain(domain, is, ie, js, je)
1493  ! allocate even if ie<is, ... in which case the array will have zero size
1494  ! but will still return .T. for allocated(...)
1495  allocate(area_ocn_model(is:ie, js:je))
1496  if(ie < is .or. je < js ) return
1497 
1498  if(.not. open_file(fileobj, grid_file, 'read') ) then
1499  call error_mesg('xgrid_mod(get_ocean_model_area_elements)', 'Error in opening file '//trim(grid_file), fatal)
1500  endif
1501 
1502  if(variable_exists(fileobj, 'AREA_OCN_MODEL') )then
1503  call read_data(fileobj, 'AREA_OCN_MODEL', area_ocn_model)
1504  else
1505  deallocate(area_ocn_model)
1506  endif
1507  call close_file(fileobj)
1508 
1509 
1510 end subroutine get_ocean_model_area_elements
1511 
1512 !#######################################################################
1513 
1514 !> @brief Sets up exchange grid connectivity using grid specification file and
1515 !! processor domain decomposition.
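!! An illustrative call (a minimal sketch; the grid ids, domains and path are
!! assumptions, not requirements):
!!
!!   call setup_xmap(xmap, (/ 'ATM', 'LND', 'OCN' /), &
!!                   (/ atm_domain, lnd_domain, ocn_domain /), 'INPUT/grid_spec.nc')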
1516 subroutine setup_xmap(xmap, grid_ids, grid_domains, grid_file, atm_grid, lnd_ug_domain)
1517  type(xmap_type), intent(inout) :: xmap
1518  character(len=3), dimension(:), intent(in ) :: grid_ids
1519  type(domain2d), dimension(:), intent(in ) :: grid_domains
1520  character(len=*), intent(in ) :: grid_file
1521  type(grid_box_type), optional, intent(in ) :: atm_grid
1522  type(domainug), optional, intent(in ) :: lnd_ug_domain
1523 
1524  integer :: g, p, i
1525  integer :: nxgrid_file, i1, i2, i3, tile1, tile2, j
1526  integer :: nxc, nyc, out_unit
1527  type(grid_type), pointer :: grid => null()!< pointer to loop through grid_type's in list
1528  type(grid_type), pointer, save :: grid1 => null() !< saved pointer to the first grid in the list
1529  real(r8_kind), dimension(3) :: xxx
1530  real(r8_kind), dimension(:,:), allocatable :: check_data
1531  real(r8_kind), dimension(:,:,:), allocatable :: check_data_3d
1532  real(r8_kind), allocatable :: tmp_2d(:,:), tmp_3d(:,:,:)
1533  character(len=FMS_FILE_LEN) :: xgrid_file, xgrid_name
1534  character(len=FMS_FILE_LEN) :: tile_file, mosaic_file
1535  character(len=256) :: mosaic1, mosaic2, contact, xgrid_dimname
1536  character(len=256) :: tile1_name, tile2_name
1537  character(len=256), allocatable :: tile1_list(:), tile2_list(:)
1538  character(len=FMS_FILE_LEN), allocatable :: xgrid_filelist(:)
1539  integer :: npes, npes2
1540  integer, allocatable :: pelist(:)
1541  type(domain2d), save :: domain2
1542  logical :: use_higher_order = .false.
1543  integer :: lnd_ug_id, l
1544  integer, allocatable :: grid_index(:)
1545  type(fmsnetcdffile_t) :: gridfileobj, mosaicfileobj, fileobj
1546  type(grid_type), allocatable, target :: grids_tmp(:) !< added for nvhpc workaround, stores xmap's
1547  !! grid_type array so we can safely point to it
1548 
1549  call mpp_clock_begin(id_setup_xmap)
1550 
1551  if(interp_method .ne. 'first_order') use_higher_order = .true.
1552 
1553  out_unit = stdout()
1554  xmap%me = mpp_pe()
1555  xmap%npes = mpp_npes()
1556  xmap%root_pe = mpp_root_pe()
1557 
1558  if (associated(xmap%grids)) deallocate(xmap%grids) !< Check if allocated
1559  allocate( xmap%grids(1:size(grid_ids(:))) )
1560 
1561  if (associated(xmap%your1my2)) deallocate(xmap%your1my2) !< Check if allocated
1562  if (associated(xmap%your2my1)) deallocate(xmap%your2my1) !< Check if allocated
1563  if (associated(xmap%your2my1_size)) deallocate(xmap%your2my1_size) !< Check if allocated
1564  allocate ( xmap%your1my2(0:xmap%npes-1), xmap%your2my1(0:xmap%npes-1) )
1565  allocate ( xmap%your2my1_size(0:xmap%npes-1) )
1566 
1567  xmap%your1my2 = .false.; xmap%your2my1 = .false.;
1568  xmap%your2my1_size = 0
1569 
1570  if(.not. open_file(gridfileobj,trim(grid_file), "read")) then
1571  call error_mesg('xgrid_mod', 'Error when opening file '//trim(grid_file), fatal)
1572  endif
1573 
1574 ! determine which exchange grid file version to use by checking which fields are present in the file
1575  if(variable_exists(gridfileobj, "AREA_ATMxOCN" ) ) then
1576  call close_file(gridfileobj)
1577  xmap%version = version1
1578  else if(variable_exists(gridfileobj, "ocn_mosaic_file" ) ) then
1579  xmap%version = version2
1580  else
1581  call error_mesg('xgrid_mod', 'neither AREA_ATMxOCN nor ocn_mosaic_file exists in '//trim(grid_file), fatal)
1582  end if
1583 
1584 
1585  if(xmap%version==version1) then
1586  call error_mesg('xgrid_mod', 'reading exchange grid information from grid spec file', note)
1587  else
1588  call error_mesg('xgrid_mod', 'reading exchange grid information from mosaic grid file', note)
1589  end if
1590 
1591  ! find the grid index of the land (LND) grid, if an unstructured land domain is present
1592  lnd_ug_id = 0
1593  if(present(lnd_ug_domain)) then
1594  do g=1,size(grid_ids(:))
1595  if(grid_ids(g) == 'LND') lnd_ug_id = g
1596  enddo
1597  endif
1598 
1599  call mpp_clock_begin(id_load_xgrid)
1600 
1601  ! nvhpc compiler workaround
1602  ! save the grids array into an allocatable copy and point into that copy, to avoid an error from pointing directly into xmap%grids inside the loop
1603  grids_tmp = xmap%grids
1604 
1605  grid1 => xmap%grids(1)
1606 
1607  do g=1, size(grid_ids(:))
1608 
1609  grid => grids_tmp(g)
1610 
1611  grid%id = grid_ids(g)
1612  grid%domain = grid_domains(g)
1613  grid%on_this_pe = mpp_domain_is_initialized(grid_domains(g))
1614  if (associated(grid%is)) deallocate(grid%is) !< Check if allocated
1615  if (associated(grid%ie)) deallocate(grid%ie) !< Check if allocated
1616  if (associated(grid%js)) deallocate(grid%js) !< Check if allocated
1617  if (associated(grid%je)) deallocate(grid%je) !< Check if allocated
1618  if (associated(grid%tile)) deallocate(grid%tile) !< Check if allocated
1619  allocate ( grid%is(0:xmap%npes-1), grid%ie(0:xmap%npes-1) )
1620  allocate ( grid%js(0:xmap%npes-1), grid%je(0:xmap%npes-1) )
1621  allocate ( grid%tile(0:xmap%npes-1) )
1622  grid%npes = 0
1623  grid%ni = 0
1624  grid%nj = 0
1625  grid%is = 0
1626  grid%ie = -1
1627  grid%js = 0
1628  grid%je = -1
1629  grid%tile = -1
1630 
1631  select case(xmap%version)
1632  case(version1)
1633  grid%ntile = 1
1634  case(version2)
1635  call read_data(gridfileobj, lowercase(grid_ids(g))//'_mosaic_file', mosaic_file)
1636  if(.not. open_file(mosaicfileobj,'INPUT/'//trim(mosaic_file), "read")) then
1637  call error_mesg('xgrid_mod', 'Error when opening solo mosaic file INPUT/'//trim(mosaic_file), fatal)
1638  endif
1639  call get_dimension_size(mosaicfileobj, 'ntiles', grid%ntile)
1640  end select
1641 
1642  if( g == 1 .AND. grid_ids(1) == 'ATM' ) then
1643  if( .NOT. grid%on_this_pe ) call error_mesg('xgrid_mod', 'ATM domain is not defined on some processor' ,fatal)
1644  endif
1645  grid%npes = mpp_get_domain_npes(grid%domain)
1646  if( xmap%npes > grid%npes .AND. g == 1 .AND. grid_ids(1) == 'ATM' ) then
1647  call mpp_broadcast_domain(grid%domain, domain2)
1648  else if(xmap%npes > grid%npes) then
1649  call mpp_broadcast_domain(grid%domain)
1650  grid%npes = mpp_get_domain_npes(grid%domain)
1651  endif
1652 
1653  npes = grid%npes
1654  allocate(grid%pelist(0:npes-1))
1655  call mpp_get_domain_pelist(grid%domain, grid%pelist)
1656  grid%root_pe = mpp_get_domain_root_pe(grid%domain)
1657 
1658  call mpp_get_data_domain(grid%domain, grid%isd_me, grid%ied_me, grid%jsd_me, grid%jed_me, &
1659  xsize=grid%nxd_me, ysize=grid%nyd_me)
1660  call mpp_get_global_domain(grid%domain, xsize=grid%ni, ysize=grid%nj)
1661 
1662  if( grid%root_pe == xmap%root_pe ) then
1663  call mpp_get_compute_domains(grid%domain, xbegin=grid%is(0:npes-1), xend=grid%ie(0:npes-1), &
1664  ybegin=grid%js(0:npes-1), yend=grid%je(0:npes-1) )
1665  call mpp_get_tile_list(grid%domain, grid%tile(0:npes-1))
1666  if( xmap%npes > npes .AND. g == 1 .AND. grid_ids(1) == 'ATM' ) then
1667  call mpp_get_compute_domains(domain2, xbegin=grid%is(npes:xmap%npes-1), xend=grid%ie(npes:xmap%npes-1), &
1668  ybegin=grid%js(npes:xmap%npes-1), yend=grid%je(npes:xmap%npes-1) )
1669  call mpp_get_tile_list(domain2, grid%tile(npes:xmap%npes-1))
1670  endif
1671  else
1672  npes2 = xmap%npes-npes
1673  call mpp_get_compute_domains(domain2, xbegin=grid%is(0:npes2-1), xend=grid%ie(0:npes2-1), &
1674  ybegin=grid%js(0:npes2-1), yend=grid%je(0:npes2-1) )
1675  call mpp_get_compute_domains(grid%domain, xbegin=grid%is(npes2:xmap%npes-1), xend=grid%ie(npes2:xmap%npes-1), &
1676  ybegin=grid%js(npes2:xmap%npes-1), yend=grid%je(npes2:xmap%npes-1) )
1677  call mpp_get_tile_list(domain2, grid%tile(0:npes2-1))
1678  call mpp_get_tile_list(grid%domain, grid%tile(npes2:xmap%npes-1))
1679  endif
1680  if( xmap%npes > grid%npes .AND. g == 1 .AND. grid_ids(1) == 'ATM' ) then
1681  call mpp_deallocate_domain(domain2)
1682  endif
1683  npes = grid%npes
1684  if( g == 1 .AND. grid_ids(1) == 'ATM' ) npes = xmap%npes
1685  do p = 0, npes-1
1686  if(grid%tile(p) > grid%ntile .or. grid%tile(p) < 1) call error_mesg('xgrid_mod', &
1687  'tile id should be between 1 and ntile', fatal)
1688  end do
1689 
1690  grid%im = grid%ni
1691  grid%jm = grid%nj
1692  call mpp_max(grid%ni)
1693  call mpp_max(grid%nj)
1694 
1695  grid%is_me => grid%is(xmap%me-xmap%root_pe); grid%ie_me => grid%ie(xmap%me-xmap%root_pe)
1696  grid%js_me => grid%js(xmap%me-xmap%root_pe); grid%je_me => grid%je(xmap%me-xmap%root_pe)
1697  grid%nxc_me = grid%ie_me - grid%is_me + 1
1698  grid%nyc_me = grid%je_me - grid%js_me + 1
1699  grid%tile_me => grid%tile(xmap%me-xmap%root_pe)
1700 
1701  grid%km = 1
1702  grid%is_ug = .false.
1703  !--- setup for land unstructured grid
1704  if( g == lnd_ug_id ) then
1705  if(xmap%version == version1) call error_mesg('xgrid_mod', &
1706  'does not support unstructured grid for VERSION1 grid' ,fatal)
1707  grid%is_ug = .true.
1708  grid%ug_domain = lnd_ug_domain
1709  if (associated(grid%ls)) deallocate(grid%ls) !< Check if allocated
1710  if (associated(grid%le)) deallocate(grid%le) !< Check if allocated
1711  if (associated(grid%gs)) deallocate(grid%gs) !< Check if allocated
1712  if (associated(grid%ge)) deallocate(grid%ge) !< Check if allocated
1713  allocate ( grid%ls(0:xmap%npes-1), grid%le(0:xmap%npes-1) )
1714  allocate ( grid%gs(0:xmap%npes-1), grid%ge(0:xmap%npes-1) )
1715  grid%ls = 0
1716  grid%le = -1
1717  grid%gs = 0
1718  grid%ge = -1
1719  if(xmap%npes > grid%npes) then
1720  call mpp_broadcast_domain(grid%ug_domain)
1721  endif
1722  call mpp_get_ug_compute_domains(grid%ug_domain, begin=grid%ls(0:npes-1), end=grid%le(0:npes-1) )
1723  call mpp_get_ug_domains_index(grid%ug_domain, grid%gs(0:npes-1), grid%ge(0:npes-1) )
1724  call mpp_get_ug_domain_tile_list(grid%ug_domain, grid%tile(0:npes-1))
1725  grid%ls_me => grid%ls(xmap%me-xmap%root_pe); grid%le_me => grid%le(xmap%me-xmap%root_pe)
1726  grid%gs_me => grid%gs(xmap%me-xmap%root_pe); grid%ge_me => grid%ge(xmap%me-xmap%root_pe)
1727  grid%tile_me => grid%tile(xmap%me-xmap%root_pe)
1728  grid%nxl_me = grid%le_me - grid%ls_me + 1
1729  if (associated(grid%l_index)) deallocate(grid%l_index) !< Check if allocated
1730  allocate(grid%l_index(grid%gs_me:grid%ge_me))
1731  allocate(grid_index(grid%ls_me:grid%le_me))
1732  call mpp_get_ug_domain_grid_index(grid%ug_domain, grid_index)
1733 
1734  grid%l_index = 0
1735  do l = grid%ls_me,grid%le_me
1736  grid%l_index(grid_index(l)) = l
1737  enddo
1738 
1739  if( grid%on_this_pe ) then
1740  if (associated(grid%area)) deallocate(grid%area) !< Check if allocated
1741  if (associated(grid%area_inv)) deallocate(grid%area_inv) !< Check if allocated
1742  allocate( grid%area (grid%ls_me:grid%le_me,1) )
1743  allocate( grid%area_inv(grid%ls_me:grid%le_me,1) )
1744  grid%area = 0.0_r8_kind
1745  grid%size = 0
1746  grid%size_repro = 0
1747  endif
1748  else if( grid%on_this_pe ) then
1749  if (associated(grid%area)) deallocate(grid%area) !< Check if allocated
1750  if (associated(grid%area_inv)) deallocate(grid%area_inv) !< Check if allocated
1751  allocate( grid%area (grid%is_me:grid%ie_me, grid%js_me:grid%je_me) )
1752  allocate( grid%area_inv(grid%is_me:grid%ie_me, grid%js_me:grid%je_me) )
1753  grid%area = 0.0_r8_kind
1754  grid%size = 0
1755  grid%size_repro = 0
1756  endif
1757 
1758  ! get the center point of the grid box
1759  if(.not. grid%is_ug) then
1760  select case(xmap%version)
1761  case(version1)
1762  if( grid%npes .NE. xmap%npes ) then
1763  call error_mesg('xgrid_mod', 'grid%npes .NE. xmap%npes', fatal)
1764  endif
1765  call get_grid_version1(grid, grid_ids(g), grid_file)
1766  case(version2)
1767  allocate(pelist(0:xmap%npes-1))
1768  call mpp_get_current_pelist(pelist)
1769  if( grid%on_this_pe ) then
1770  call mpp_set_current_pelist(grid%pelist)
1771  call get_mosaic_tile_grid(tile_file, mosaicfileobj, grid%domain)
1772  call get_grid_version2(grid, grid_ids(g), tile_file)
1773  endif
1774  call mpp_set_current_pelist(pelist)
1775  deallocate(pelist)
1776  ! read the contact information from mosaic_file to check whether the atmosphere is a nested model
1777  if( g == 1 .AND. grid_ids(1) == 'ATM' ) then
1778  nnest = get_nest_contact(mosaicfileobj, tile_nest, tile_parent, is_nest, &
1779  ie_nest, js_nest, je_nest, is_parent, ie_parent, js_parent, je_parent)
1780 
1781  endif
1782  end select
1783 
1784  if( use_higher_order .AND. grid%id == 'ATM') then
1785  if( nnest > 0 ) call error_mesg('xgrid_mod', 'second_order is not supported for nested coupler', fatal)
1786  if( grid%is_latlon ) then
1787  call mpp_modify_domain(grid%domain, grid%domain_with_halo, whalo=1, ehalo=1, shalo=1, nhalo=1)
1788  call mpp_get_data_domain(grid%domain_with_halo, grid%isd_me, grid%ied_me, grid%jsd_me, grid%jed_me, &
1789  xsize=grid%nxd_me, ysize=grid%nyd_me)
1790  else
1791  if(.NOT. present(atm_grid)) call error_mesg('xgrid_mod', &
1792  'when first grid is "ATM", atm_grid should be present', fatal)
1793  if(grid%is_me-grid%isd_me .NE. 1 .or. grid%ied_me-grid%ie_me .NE. 1 .or. &
1794  grid%js_me-grid%jsd_me .NE. 1 .or. grid%jed_me-grid%je_me .NE. 1 ) &
1795  & call error_mesg('xgrid_mod', 'for non-latlon grid (cubic grid), '//&
1796  & 'the halo size should be 1 in all four directions', fatal)
1797  if(.NOT.( ASSOCIATED(atm_grid%dx) .AND. ASSOCIATED(atm_grid%dy) .AND. ASSOCIATED(atm_grid%edge_w) .AND. &
1798  ASSOCIATED(atm_grid%edge_e) .AND. ASSOCIATED(atm_grid%edge_s) .AND.ASSOCIATED(atm_grid%edge_n).AND.&
1799  ASSOCIATED(atm_grid%en1) .AND. ASSOCIATED(atm_grid%en2) .AND. ASSOCIATED(atm_grid%vlon) .AND. &
1800  ASSOCIATED(atm_grid%vlat) ) ) call error_mesg( 'xgrid_mod', &
1801  'for non-latlon grid (cubic grid), all the fields in atm_grid data type should be allocated', fatal)
1802  nxc = grid%ie_me - grid%is_me + 1
1803  nyc = grid%je_me - grid%js_me + 1
1804  if(size(atm_grid%dx,1) .NE. nxc .OR. size(atm_grid%dx,2) .NE. nyc+1) &
1805  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%dx', fatal)
1806  if(size(atm_grid%dy,1) .NE. nxc+1 .OR. size(atm_grid%dy,2) .NE. nyc) &
1807  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%dy', fatal)
1808  if(size(atm_grid%area,1) .NE. nxc .OR. size(atm_grid%area,2) .NE. nyc) &
1809  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%area', fatal)
1810  if(size(atm_grid%edge_w(:)) .NE. nyc+1 .OR. size(atm_grid%edge_e(:)) .NE. nyc+1) &
1811  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%edge_w/edge_e', fatal)
1812  if(size(atm_grid%edge_s(:)) .NE. nxc+1 .OR. size(atm_grid%edge_n(:)) .NE. nxc+1) &
1813  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%edge_s/edge_n', fatal)
1814  if(size(atm_grid%en1,1) .NE. 3 .OR. size(atm_grid%en1,2) .NE. nxc .OR. size(atm_grid%en1,3) .NE. nyc+1) &
1815  call error_mesg( 'xgrid_mod', 'incorrect dimension size of atm_grid%en1', fatal)
1816  if(size(atm_grid%en2,1) .NE. 3 .OR. size(atm_grid%en2,2) .NE. nxc+1 .OR. size(atm_grid%en2,3) .NE. nyc) &
1817  call error_mesg( 'xgrid_mod', 'incorrect dimension size of atm_grid%en2', fatal)
1818  if(size(atm_grid%vlon,1) .NE. 3 .OR. size(atm_grid%vlon,2) .NE. nxc .OR. size(atm_grid%vlon,3) .NE. nyc)&
1819  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%vlon', fatal)
1820  if(size(atm_grid%vlat,1) .NE. 3 .OR. size(atm_grid%vlat,2) .NE. nxc .OR. size(atm_grid%vlat,3) .NE. nyc)&
1821  call error_mesg('xgrid_mod', 'incorrect dimension size of atm_grid%vlat', fatal)
1822  if (associated(grid%box%dx)) deallocate(grid%box%dx) !< Check if allocated
1823  if (associated(grid%box%dy)) deallocate(grid%box%dy) !< Check if allocated
1824  if (associated(grid%box%area)) deallocate(grid%box%area) !< Check if allocated
1825  if (associated(grid%box%edge_w)) deallocate(grid%box%edge_w) !< Check if allocated
1826  if (associated(grid%box%edge_e)) deallocate(grid%box%edge_e) !< Check if allocated
1827  if (associated(grid%box%edge_s)) deallocate(grid%box%edge_s) !< Check if allocated
1828  if (associated(grid%box%edge_n)) deallocate(grid%box%edge_n) !< Check if allocated
1829  if (associated(grid%box%en1)) deallocate(grid%box%en1) !< Check if allocated
1830  if (associated(grid%box%en2)) deallocate(grid%box%en2) !< Check if allocated
1831  if (associated(grid%box%vlon)) deallocate(grid%box%vlon) !< Check if allocated
1832  if (associated(grid%box%vlat)) deallocate(grid%box%vlat) !< Check if allocated
1833  allocate(grid%box%dx (grid%is_me:grid%ie_me, grid%js_me:grid%je_me+1 ))
1834  allocate(grid%box%dy (grid%is_me:grid%ie_me+1, grid%js_me:grid%je_me ))
1835  allocate(grid%box%area (grid%is_me:grid%ie_me, grid%js_me:grid%je_me ))
1836  allocate(grid%box%edge_w(grid%js_me:grid%je_me+1))
1837  allocate(grid%box%edge_e(grid%js_me:grid%je_me+1))
1838  allocate(grid%box%edge_s(grid%is_me:grid%ie_me+1))
1839  allocate(grid%box%edge_n(grid%is_me:grid%ie_me+1))
1840  allocate(grid%box%en1 (3, grid%is_me:grid%ie_me, grid%js_me:grid%je_me+1 ))
1841  allocate(grid%box%en2 (3, grid%is_me:grid%ie_me+1, grid%js_me:grid%je_me ))
1842  allocate(grid%box%vlon (3, grid%is_me:grid%ie_me, grid%js_me:grid%je_me ))
1843  allocate(grid%box%vlat (3, grid%is_me:grid%ie_me, grid%js_me:grid%je_me ))
1844  grid%box%dx = atm_grid%dx
1845  grid%box%dy = atm_grid%dy
1846  grid%box%area = atm_grid%area
1847  grid%box%edge_w = atm_grid%edge_w
1848  grid%box%edge_e = atm_grid%edge_e
1849  grid%box%edge_s = atm_grid%edge_s
1850  grid%box%edge_n = atm_grid%edge_n
1851  grid%box%en1 = atm_grid%en1
1852  grid%box%en2 = atm_grid%en2
1853  grid%box%vlon = atm_grid%vlon
1854  grid%box%vlat = atm_grid%vlat
1855  end if
1856  end if
1857  end if
1858  if(xmap%version==version2) call close_file(mosaicfileobj)
1859  if (g>1) then
1860  if(grid%on_this_pe) then
1861  if (associated(grid%frac_area)) deallocate(grid%frac_area) !< Check if allocated
1862  if(grid%is_ug) then
1863  allocate( grid%frac_area(grid%ls_me:grid%le_me, 1, grid%km) )
1864  else
1865  allocate( grid%frac_area(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, grid%km) )
1866  endif
1867  grid%frac_area = 1.0_r8_kind
1868  endif
1869 
1870  ! nvhpc workaround: copy the updated grid back into xmap%grids, since grid points into the grids_tmp copy
1871  xmap%grids(g) = grid
1872 
1873  ! load exchange cells, sum grid cell areas, set your1my2/your2my1
1874  select case(xmap%version)
1875  case(version1)
1876  call load_xgrid (xmap, grid, grid_file, grid_ids(1), grid_ids(g), 1, 1, use_higher_order)
1877  case(version2)
1878  select case(grid_ids(1))
1879  case( 'ATM' )
1880  xgrid_name = 'a'
1881  case( 'LND' )
1882  xgrid_name = 'l'
1883  case( 'WAV' )
1884  xgrid_name = 'w'
1885  case default
1886  call error_mesg('xgrid_mod', 'grid_ids(1) should be ATM, LND or WAV', fatal)
1887  end select
1888  select case(grid_ids(g))
1889  case( 'LND' )
1890  xgrid_dimname = 'nfile_'//trim(xgrid_name)//'Xl'
1891  xgrid_name = trim(xgrid_name)//'Xl_file'
1892  case( 'OCN' )
1893  xgrid_dimname = 'nfile_'//trim(xgrid_name)//'Xo'
1894  xgrid_name = trim(xgrid_name)//'Xo_file'
1895  case( 'WAV' )
1896  xgrid_dimname = 'nfile_'//trim(xgrid_name)//'Xw'
1897  xgrid_name = trim(xgrid_name)//'Xw_file'
1898  case default
1899  call error_mesg('xgrid_mod', 'grid_ids(g) should be LND, OCN or WAV', fatal)
1900  end select
1901  ! get the tile list for each mosaic
1902 
1903  call read_data(gridfileobj, lowercase(grid_ids(1))//'_mosaic_file', mosaic1)
1904  call read_data(gridfileobj, lowercase(grid_ids(g))//'_mosaic_file', mosaic2)
1905 
1906  mosaic1 = 'INPUT/'//trim(mosaic1)
1907  mosaic2 = 'INPUT/'//trim(mosaic2)
1908 
1909  allocate(tile1_list(grid1%ntile), tile2_list(grid%ntile) )
1910  if(.not. open_file(fileobj,mosaic1, "read")) then
1911  call error_mesg('xgrid_mod(setup_xmap)', 'Error when opening mosaic1 file '//trim(mosaic1), fatal)
1912  endif
1913  call read_data(fileobj, 'gridtiles', tile1_list)
1914  call close_file(fileobj)
1915 
1916  if(.not. open_file(fileobj,mosaic2, "read")) then
1917  call error_mesg('xgrid_mod(setup_xmap)', 'Error when opening mosaic2 file '//trim(mosaic2), fatal)
1918  endif
1919  call read_data(fileobj, 'gridtiles', tile2_list)
1920  call close_file(fileobj)
1921 
1922  if(variable_exists(gridfileobj, xgrid_name)) then
1923  call get_dimension_size(gridfileobj, xgrid_dimname, nxgrid_file)
1924  if(nxgrid_file>0) then
1925  allocate(xgrid_filelist(nxgrid_file))
1926  call read_data(gridfileobj, xgrid_name, xgrid_filelist)
1927  endif
1928  ! loop through all the exchange grid files
1929  do i = 1, nxgrid_file
1930  xgrid_file = 'INPUT/'//trim(xgrid_filelist(i))
1931  if(.not. open_file(fileobj,xgrid_file, "read")) then
1932  call error_mesg('xgrid_mod(setup_xmap)', 'Error when opening xgrid file '// &
1933  & trim(xgrid_file), fatal)
1934  endif
1935 
1936  ! find the tile numbers of the side 1 and side 2 mosaics, which are contained in the field "contact"
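      ! the contact string has the form "mosaic1:tileA::mosaic2:tileB" (names
      ! illustrative): tile1_name is the text between the first ":" and the "::",
      ! and tile2_name is the text after the last ":"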
1937  call read_data(fileobj, "contact", contact)
1938  i1 = index(contact, ":")
1939  i2 = index(contact, "::")
1940  i3 = index(contact, ":", back=.true. )
1941  if(i1 == 0 .OR. i2 == 0) call error_mesg('xgrid_mod', &
1942  'field contact in file '//trim(xgrid_file)//' should contain ":" and "::" ', fatal)
1943  if(i1 == i3) call error_mesg('xgrid_mod', &
1944  'field contact in file '//trim(xgrid_file)//' should contain two ":"', fatal)
1945  tile1_name = contact(i1+1:i2-1)
1946  tile2_name = contact(i3+1:len_trim(contact))
1947  tile1 = 0; tile2 = 0
1948  do j = 1, grid1%ntile
1949  if( trim(tile1_name) == trim(tile1_list(j)) ) then
1950  tile1 = j
1951  exit
1952  end if
1953  end do
1954  do j = 1, grid%ntile
1955  if( tile2_name == tile2_list(j) ) then
1956  tile2 = j
1957  exit
1958  end if
1959  end do
1960 
1961  if(tile1 == 0) call error_mesg('xgrid_mod', &
1962  trim(tile1_name)//' is not a tile of mosaic '//trim(mosaic1), fatal)
1963  if(tile2 == 0) call error_mesg('xgrid_mod', &
1964  trim(tile2_name)//' is not a tile of mosaic '//trim(mosaic2), fatal)
1965  call close_file(fileobj)
1966  call load_xgrid (xmap, grid, xgrid_file, grid_ids(1), grid_ids(g), tile1, tile2, &
1967  use_higher_order)
1968  end do
1969  if(allocated(xgrid_filelist)) deallocate(xgrid_filelist)
1970  endif
1971  deallocate(tile1_list, tile2_list)
1972  end select
1973  if(grid%on_this_pe) then
1974  grid%area_inv = 0.0_r8_kind;
1975  where (grid%area>0.0_r8_kind) grid%area_inv = 1.0_r8_kind/grid%area
1976  endif
1977  end if
1978 
1979  ! nvhpc workaround: copy the updated grid back into xmap%grids, since grid points into the grids_tmp copy
1980  xmap%grids(g) = grid
1981  end do
1982 
1983  if(xmap%version == version2) call close_file(gridfileobj)
1984 
1985  call mpp_clock_end(id_load_xgrid)
1986 
1987  grid1%area_inv = 0.0_r8_kind;
1988  where (grid1%area>0.0_r8_kind)
1989  grid1%area_inv = 1.0_r8_kind/grid1%area
1990  end where
1991 
1992  xmap%your1my2(xmap%me-xmap%root_pe) = .false. ! this is not necessarily true but keeps
1993  xmap%your2my1(xmap%me-xmap%root_pe) = .false. ! a PE from communicating with itself
1994 
1995  if (make_exchange_reproduce) then
1996  if (associated(xmap%send_count_repro)) deallocate(xmap%send_count_repro) !< Check if allocated
1997  if (associated(xmap%recv_count_repro)) deallocate(xmap%recv_count_repro) !< Check if allocated
1998  allocate( xmap%send_count_repro(0:xmap%npes-1) )
1999  allocate( xmap%recv_count_repro(0:xmap%npes-1) )
2000  xmap%send_count_repro = 0
2001  xmap%recv_count_repro = 0
2002  do g=2,size(xmap%grids(:))
2003  do p=0,xmap%npes-1
2004  if(xmap%grids(g)%size >0) &
2005  xmap%send_count_repro(p) = xmap%send_count_repro(p) &
2006  +count(xmap%grids(g)%x (:)%pe==p+xmap%root_pe)
2007  if(xmap%grids(g)%size_repro >0) &
2008  xmap%recv_count_repro(p) = xmap%recv_count_repro(p) &
2009  +count(xmap%grids(g)%x_repro(:)%pe==p+xmap%root_pe)
2010  end do
2011  end do
2012  xmap%send_count_repro_tot = sum(xmap%send_count_repro)
2013  xmap%recv_count_repro_tot = sum(xmap%recv_count_repro)
2014  else
2015  xmap%send_count_repro_tot = 0
2016  xmap%recv_count_repro_tot = 0
2017  end if
2018 
2019  if (associated(xmap%x1)) deallocate(xmap%x1) !< Check if allocated
2020  if (associated(xmap%x2)) deallocate(xmap%x2) !< Check if allocated
2021  if (associated(xmap%x1_put)) deallocate(xmap%x1_put) !< Check if allocated
2022  if (associated(xmap%x2_get)) deallocate(xmap%x2_get) !< Check if allocated
2023  allocate( xmap%x1(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2024  allocate( xmap%x2(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2025  allocate( xmap%x1_put(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2026  allocate( xmap%x2_get(1:sum(xmap%grids(2:size(xmap%grids(:)))%size)) )
2027 
2028  !--- The following will setup indx to be used in regen
2029  if (associated(xmap%get1)) deallocate(xmap%get1) !< Check if allocated
2030  if (associated(xmap%put1)) deallocate(xmap%put1) !< Check if allocated
2031  allocate(xmap%get1, xmap%put1)
2032  call mpp_clock_begin(id_set_comm)
2033 
2034  call set_comm_get1(xmap)
2035 
2036  call set_comm_put1(xmap)
2037 
2038  if(make_exchange_reproduce) then
2039  if (associated(xmap%get1_repro)) deallocate(xmap%get1_repro) !< Check if allocated
2040  allocate(xmap%get1_repro)
2041  call set_comm_get1_repro(xmap)
2042  endif
2043 
2044  call mpp_clock_end(id_set_comm)
2045 
2046  call mpp_clock_begin(id_regen)
2047  call regen(xmap)
2048  call mpp_clock_end(id_regen)
2049 
2050  call mpp_clock_begin(id_conservation_check)
2051 
2052  if(lnd_ug_id ==0) then
2053  xxx = conservation_check(grid1%area*0.0_r8_kind+1.0_r8_kind, grid1%id, xmap)
2054  else
2055  allocate(tmp_2d(grid1%is_me:grid1%ie_me, grid1%js_me:grid1%je_me))
2056  tmp_2d = 1.0_r8_kind
2057  xxx = conservation_check_ug(tmp_2d, grid1%id, xmap)
2058  deallocate(tmp_2d)
2059  endif
2060  write(out_unit,* )"Checked data is array of constant 1"
2061  write(out_unit,* )grid1%id,'(',xmap%grids(:)%id,')=', xxx
2062 
2063  if(lnd_ug_id == 0) then
2064  do g=2,size(xmap%grids(:))
2065  xxx = conservation_check(xmap%grids(g)%frac_area*0.0_r8_kind+1.0_r8_kind, xmap%grids(g)%id, xmap )
2066  write( out_unit,* )xmap%grids(g)%id,'(',xmap%grids(:)%id,')=', xxx
2067  enddo
2068  else
2069  do g=2,size(xmap%grids(:))
2070  grid => xmap%grids(g)
2071  allocate(tmp_3d(grid%is_me:grid%ie_me, grid%js_me:grid%je_me,grid%km))
2072  tmp_3d = 1.0_r8_kind
2073  xxx = conservation_check_ug(tmp_3d, xmap%grids(g)%id, xmap )
2074  write( out_unit,* )xmap%grids(g)%id,'(',xmap%grids(:)%id,')=', xxx
2075  deallocate(tmp_3d)
2076  enddo
2077  endif
2078  ! create a random-number 2d array
2079  if(grid1%id == "ATM") then
2080  allocate(check_data(size(grid1%area,1), size(grid1%area,2)))
2081  call random_number(check_data)
2082 
2083  !--- second order along both the zonal and meridional directions
2084  if(lnd_ug_id ==0) then
2085  xxx = conservation_check(check_data, grid1%id, xmap, remap_method = remapping_method )
2086  else
2087  xxx = conservation_check_ug(check_data, grid1%id, xmap, remap_method = remapping_method )
2088  endif
2089  write( out_unit,* ) &
2090  "Checked data is array of random numbers between 0 and 1 using "//trim(interp_method)
2091  write( out_unit,* )grid1%id,'(',xmap%grids(:)%id,')=', xxx
2092 
2093  deallocate(check_data)
2094  do g=2,size(xmap%grids(:))
2095  allocate(check_data_3d(xmap%grids(g)%is_me:xmap%grids(g)%ie_me, &
2096  xmap%grids(g)%js_me:xmap%grids(g)%je_me, grid1%km))
2097  call random_number(check_data_3d)
2098  if(lnd_ug_id ==0) then
2099  xxx = conservation_check(check_data_3d, xmap%grids(g)%id, xmap, remap_method = remapping_method )
2100  else
2101  xxx = conservation_check_ug(check_data_3d, xmap%grids(g)%id, xmap, remap_method = remapping_method )
2102  endif
2103  write( out_unit,* )xmap%grids(g)%id,'(',xmap%grids(:)%id,')=', xxx
2104  deallocate( check_data_3d)
2105  end do
2106  endif
2107  call mpp_clock_end(id_conservation_check)
2108 
2109  call mpp_clock_end(id_setup_xmap)
2110 
2111 end subroutine setup_xmap
2112 
2113 !----------------------------------------------------------------------------
2114 
2115 !> @brief Get the nest contact information from a mosaic file; currently we assume there is only one nest region
2116 !! @return integer get_nest_contact, the number of nest contacts found (0 if none)
2117 function get_nest_contact_fms2_io(fileobj, tile_nest_out, tile_parent_out, is_nest_out, &
2118  ie_nest_out, js_nest_out, je_nest_out, is_parent_out, &
2119  ie_parent_out, js_parent_out, je_parent_out) &
2120  result(get_nest_contact) !< This is needed for use_mpp_io
2121 type(fmsnetcdffile_t), intent(in) :: fileobj
2122 integer, intent(out) :: tile_nest_out, tile_parent_out
2123 integer, intent(out) :: is_nest_out, ie_nest_out
2124 integer, intent(out) :: js_nest_out, je_nest_out
2125 integer, intent(out) :: is_parent_out, ie_parent_out
2126 integer, intent(out) :: js_parent_out, je_parent_out
2127 integer :: get_nest_contact
2128 !--- local variables
2129 integer :: ntiles, ncontacts, n, t1, t2
2130 integer :: nx1_contact, ny1_contact
2131 integer :: nx2_contact, ny2_contact
2132 integer, allocatable, dimension(:) :: nx, ny
2133 integer, allocatable, dimension(:) :: tile1, tile2
2134 integer, allocatable, dimension(:) :: istart1, iend1, jstart1, jend1
2135 integer, allocatable, dimension(:) :: istart2, iend2, jstart2, jend2
2136 
2137  tile_nest_out = 0; tile_parent_out = 0
2138  is_nest_out = 0; ie_nest_out = 0
2139  js_nest_out = 0; je_nest_out = 0
2140  is_parent_out = 0; ie_parent_out = 0
2141  js_parent_out = 0; je_parent_out = 0
2142  get_nest_contact = 0
2143 
2144  ! first read the contact information
2145  ntiles = get_mosaic_ntiles(fileobj)
2146  if( ntiles == 1 ) return
2147 
2148  allocate(nx(ntiles), ny(ntiles))
2149  call get_mosaic_grid_sizes(fileobj, nx, ny)
2150 
2151  ncontacts = get_mosaic_ncontacts(fileobj)
2152  if(ncontacts == 0) return
2153  allocate(tile1(ncontacts), tile2(ncontacts))
2154  allocate(istart1(ncontacts), iend1(ncontacts))
2155  allocate(jstart1(ncontacts), jend1(ncontacts))
2156  allocate(istart2(ncontacts), iend2(ncontacts))
2157  allocate(jstart2(ncontacts), jend2(ncontacts))
2158 
2159  call get_mosaic_contact( fileobj, tile1, tile2, istart1, iend1, jstart1, jend1, &
2160  istart2, iend2, jstart2, jend2)
2161 
2162  do n = 1, ncontacts
2163  if( tile1(n) == tile2(n) ) cycle ! a tile cannot be nested within itself
2164 
2165  nx1_contact = iend1(n)-istart1(n)+1
2166  ny1_contact = jend1(n)-jstart1(n)+1
2167  nx2_contact = iend2(n)-istart2(n)+1
2168  ny2_contact = jend2(n)-jstart2(n)+1
2169  t1 = tile1(n);
2170  t2 = tile2(n);
2171  ! For nesting, the contact index of one tile must match its global domain
2172  if( (nx(t1) .NE. nx1_contact .OR. ny(t1) .NE. ny1_contact ) .AND. &
2173  (nx(t2) .NE. nx2_contact .OR. ny(t2) .NE. ny2_contact ) ) cycle
2174  if(nx1_contact == nx2_contact .AND. ny1_contact == ny2_contact) then
2175  call error_mesg('xgrid_mod', 'There is no refinement for the overlapping region', fatal)
2176  endif
2177 
2178  get_nest_contact = get_nest_contact + 1
2179  if(get_nest_contact>1) then
2180  call error_mesg('xgrid_mod', 'only one nest region is supported, contact developer' ,fatal)
2181  endif
2182  if(nx2_contact*ny2_contact > nx1_contact*ny1_contact) then
2183  is_nest_out = istart2(n);
2184  ie_nest_out = iend2(n);
2185  js_nest_out = jstart2(n);
2186  je_nest_out = jend2(n);
2187  tile_nest_out = tile2(n);
2188  is_parent_out = istart1(n);
2189  ie_parent_out = iend1(n);
2190  js_parent_out = jstart1(n);
2191  je_parent_out = jend1(n);
2192  tile_parent_out = tile1(n);
2193  else
2194  is_nest_out = istart1(n);
2195  ie_nest_out = iend1(n);
2196  js_nest_out = jstart1(n);
2197  je_nest_out = jend1(n);
2198  tile_nest_out = tile1(n);
2199  is_parent_out = istart2(n);
2200  ie_parent_out = iend2(n);
2201  js_parent_out = jstart2(n);
2202  je_parent_out = jend2(n);
2203  tile_parent_out = tile2(n);
2204  endif
2205  enddo
2206 
2207  deallocate(nx, ny, tile1, tile2)
2208  deallocate(istart1, iend1, jstart1, jend1)
2209  deallocate(istart2, iend2, jstart2, jend2)
2210 
2211 
2212  return
2213 
2214 end function get_nest_contact_fms2_io
2215 
2216 !#######################################################################
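!> @brief Sets up the communication pattern (send/receive counts, PE lists and
!! buffer positions) used for the bitwise-reproducible exchange when
!! make_exchange_reproduce is set.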
2217 subroutine set_comm_get1_repro(xmap)
2218  type (xmap_type), intent(inout) :: xmap
2219  integer, dimension(xmap%npes) :: pe_ind, cnt
2220  integer, dimension(0:xmap%npes-1) :: send_ind, pl
2221  integer :: npes, nsend, nrecv, mypos
2222  integer :: m, p, pos, n, g, l, im, i, j
2223  type(comm_type), pointer, save :: comm => null()
2224 
2225  comm => xmap%get1_repro
2226  npes = xmap%npes
2227 
2228  nrecv = 0
2229  mypos = mpp_pe() - mpp_root_pe()
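      ! walk the PE list in ring order relative to this PE (mypos-m for receives
      ! here, mypos+m for sends below) so that message traffic is staggered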
2230  do m=0,npes-1
2231  p = mod(mypos+npes-m, npes)
2232  if( xmap%recv_count_repro(p) > 0 ) then
2233  nrecv = nrecv + 1
2234  pe_ind(nrecv) = p
2235  endif
2236  enddo
2237 
2238  comm%nrecv = nrecv
2239  if( nrecv > 0 ) then
2240  if (associated(comm%recv)) deallocate(comm%recv) !< Check if allocated
2241  allocate(comm%recv(nrecv))
2242  pos = 0
2243  do n = 1, nrecv
2244  p = pe_ind(n)
2245  comm%recv(n)%count = xmap%recv_count_repro(p)
2246  comm%recv(n)%pe = p + xmap%root_pe
2247  comm%recv(n)%buffer_pos = pos
2248  pos = pos + comm%recv(n)%count
2249  enddo
2250  endif
2251 
2252 
2253  ! send information
2254  nsend = 0
2255  mypos = mpp_pe() - mpp_root_pe()
2256  do m=0,xmap%npes-1
2257  p = mod(mypos+m, npes)
2258  if( xmap%send_count_repro(p) > 0 ) then
2259  nsend = nsend + 1
2260  pe_ind(nsend) = p
2261  send_ind(p) = nsend
2262  endif
2263  enddo
2264 
2265  comm%nsend = nsend
2266  if( nsend > 0 ) then
2267  if (associated(comm%send)) deallocate(comm%send) !< Check if allocated
2268  allocate(comm%send(nsend))
2269  pos = 0
2270  cnt(:) = 0
2271  do n = 1, nsend
2272  p = pe_ind(n)
2273  comm%send(n)%count = xmap%send_count_repro(p)
2274  comm%send(n)%pe = p + xmap%root_pe
2275  comm%send(n)%buffer_pos = pos
2276  pos = pos + comm%send(n)%count
2277  allocate(comm%send(n)%i(comm%send(n)%count))
2278  allocate(comm%send(n)%j(comm%send(n)%count))
2279  allocate(comm%send(n)%g(comm%send(n)%count))
2280  allocate(comm%send(n)%xLoc(comm%send(n)%count))
2281  enddo
2282 
2283  do g=2,size(xmap%grids(:))
2284  im = xmap%grids(g)%im
2285  do l=1,xmap%grids(g)%size ! index into this side 2 grid's patterns
2286  p = xmap%grids(g)%x(l)%pe-xmap%root_pe
2287  n = send_ind(p)
2288  cnt(n) = cnt(n) + 1
2289  pos = cnt(n)
2290  i = xmap%grids(g)%x(l)%i2
2291  j = xmap%grids(g)%x(l)%j2
2292  if(xmap%grids(g)%is_ug) then
2293  comm%send(n)%i(pos) = xmap%grids(g)%l_index((j-1)*im+i)
2294  comm%send(n)%j(pos) = 1
2295  else
2296  comm%send(n)%i(pos) = xmap%grids(g)%x(l)%i2
2297  comm%send(n)%j(pos) = xmap%grids(g)%x(l)%j2
2298  endif
2299  comm%send(n)%g(pos) = g
2300  enddo
2301  enddo
2302  !--- make sure the count is correct
2303  do n = 1, nsend
2304  if( comm%send(n)%count .NE. cnt(n) ) call error_mesg('xgrid_mod', &
2305  'comm%send(n)%count .NE. cnt(n)', fatal)
2306  enddo
2307  endif
2308 
2309  !--- set up the recv_pos for unpacking the data
2310  pl(:) = 1
2311  do g=2,size(xmap%grids(:))
2312  do l=1,xmap%grids(g)%size_repro ! index into side1 grid's patterns
2313  p = xmap%grids(g)%x_repro(l)%pe-xmap%root_pe
2314  xmap%grids(g)%x_repro(l)%recv_pos = pl(p)
2315  pl(p) = pl(p) + 1
2316  end do
2317  end do
2318 
2319 
2320 
2321 end subroutine set_comm_get1_repro
2322 
2323 !#######################################################################
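!> @brief Builds the communication pattern for getting side 1 data from the
!! exchange grid: each exchange cell is assigned to the side 1 PE that owns its
!! (i1,j1,tile) cell, unique cells are counted per PE, and the index triplets
!! are exchanged between PEs to fill xmap%get1 and xmap%ind_get1.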
2324 subroutine set_comm_get1(xmap)
2325  type (xmap_type), intent(inout) :: xmap
2326  type (grid_type), pointer, save :: grid1 =>null()
2327  integer, allocatable :: send_size(:)
2328  integer, allocatable :: recv_size(:)
2329  integer :: max_size, g, npes, l, ll, nset, m
2330  integer :: i1, j1, tile1, p, n, pos, buffer_pos, mypos
2331  integer :: nsend, nrecv, rbuf_size, sbuf_size, msgsize
2332  logical :: found
2333  real(r8_kind), allocatable :: recv_buf(:), send_buf(:)
2334  real(r8_kind), allocatable :: diarray(:), djarray(:)
2335  integer, allocatable :: iarray(:), jarray(:), tarray(:)
2336  integer, allocatable :: pos_x(:), pelist(:), size_pe(:), pe_side1(:)
2337  integer :: recv_buffer_pos(0:xmap%npes)
2338  integer :: send_buffer_pos(0:xmap%npes)
2339  type(comm_type), pointer, save :: comm => null()
2340  integer :: i, j
2341 
2342  max_size = 0
2343  do g=2,size(xmap%grids(:))
2344  max_size = max_size + xmap%grids(g)%size
2345  enddo
2346  comm => xmap%get1
2347  grid1 => xmap%grids(1)
2348  comm%nsend = 0
2349  comm%nrecv = 0
2350  npes = xmap%npes
2351 
2352  allocate(pelist(0:npes-1))
2353  call mpp_get_current_pelist(pelist)
2354  allocate(send_size(0:npes-1))
2355  allocate(recv_size(0:npes-1))
2356  allocate(size_pe(0:npes-1))
2357  allocate(pos_x(0:npes-1))
2358  size_pe = 0
2359  send_size = 0
2360  recv_size = 0
2361 
2362  if(max_size > 0) then
2363  allocate(pe_side1(max_size))
2364  if (associated(xmap%ind_get1)) deallocate(xmap%ind_get1) !< Check if allocated
2365  allocate(xmap%ind_get1(max_size))
2366 
2367  !--- find the recv_indx
2368  ll = 0
2369  do g=2,size(xmap%grids(:))
2370  do l=1,xmap%grids(g)%size
2371  i1 = xmap%grids(g)%x(l)%i1
2372  j1 = xmap%grids(g)%x(l)%j1
2373  tile1 = xmap%grids(g)%x(l)%tile
2374  do p=0,npes-1
2375  if(grid1%tile(p) == tile1) then
2376  if(in_box_nbr(i1, j1, grid1, p)) then
2377  size_pe(p) = size_pe(p) + 1
2378  exit
2379  endif
2380  endif
2381  enddo
2382  if( p == npes ) then
2383  call error_mesg('xgrid_mod', 'tile is not in grid1%tile(:)', fatal)
2384  endif
2385  ll = ll + 1
2386  pe_side1(ll) = p
2387  enddo
2388  enddo
2389 
2390  pos_x = 0
2391  do p = 1, npes-1
2392  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2393  enddo
2394 
2395  !---find the send size for get_1_from_xgrid
2396  allocate(iarray(max_size))
2397  allocate(jarray(max_size))
2398  allocate(tarray(max_size))
2399  if(monotonic_exchange) then
2400  allocate(diarray(max_size))
2401  allocate(djarray(max_size))
2402  endif
2403 
2404  ll = 0
2405 
2406  do g=2,size(xmap%grids(:))
2407  do l=1,xmap%grids(g)%size
2408  i1 = xmap%grids(g)%x(l)%i1
2409  j1 = xmap%grids(g)%x(l)%j1
2410  tile1 = xmap%grids(g)%x(l)%tile
2411  ll = ll + 1
2412  p = pe_side1(ll)
2413 
2414  found = .false.
2415  if(send_size(p) > 0) then
2416  if( i1 == iarray(pos_x(p)+send_size(p)) .AND. j1 == jarray(pos_x(p)+send_size(p)) &
2417  .AND. tile1 == tarray(pos_x(p)+send_size(p))) then
2418  found = .true.
2419  n = send_size(p)
2420  else
2421  !--- may need to be replaced with a faster search algorithm
2422  do n = 1, send_size(p)
2423  if(i1 == iarray(pos_x(p)+n) .AND. j1 == jarray(pos_x(p)+n) .AND. tile1 == tarray(pos_x(p)+n)) then
2424  found = .true.
2425  exit
2426  endif
2427  enddo
2428  endif
2429  endif
2430  if( (.NOT. found) .OR. monotonic_exchange ) then
2431  send_size(p) = send_size(p)+1
2432  pos = pos_x(p)+send_size(p)
2433  iarray(pos) = i1
2434  jarray(pos) = j1
2435  tarray(pos) = tile1
2436  if(monotonic_exchange) then
2437  diarray(pos) = xmap%grids(g)%x(l)%di
2438  djarray(pos) = xmap%grids(g)%x(l)%dj
2439  endif
2440  n = send_size(p)
2441  endif
2442  xmap%ind_get1(ll) = n
2443  enddo
2444  enddo
2445 
2446  pos_x = 0
2447  do p = 1, npes-1
2448  pos_x(p) = pos_x(p-1) + send_size(p-1)
2449  enddo
2450 
2451  ll = 0
2452  do g=2,size(xmap%grids(:))
2453  do l=1,xmap%grids(g)%size
2454  ll = ll + 1
2455  p = pe_side1(ll)
2456  xmap%ind_get1(ll) = pos_x(p) + xmap%ind_get1(ll)
2457  enddo
2458  enddo
2459  endif
2460 
2461  mypos = mpp_pe()-mpp_root_pe()
2462 
2463  ! send/recv for get_1_from_xgrid_recv
2464  recv_size(:) = xmap%your2my1_size(:)
2465  nsend = count( send_size> 0)
2466  comm%nsend = nsend
2467  if(nsend>0) then
2468  if (associated(comm%send)) deallocate(comm%send) !< Check if allocated
2469  allocate(comm%send(nsend))
2470  comm%send(:)%count = 0
2471  endif
2472 
2473  pos = 0
2474  do p = 0, npes-1
2475  send_buffer_pos(p) = pos
2476  pos = pos + send_size(p)
2477  enddo
2478 
2479  pos = 0
2480  comm%sendsize = 0
2481  do n = 0, npes-1
2482  p = mod(mypos+n, npes)
2483  if(send_size(p)>0) then
2484  pos = pos + 1
2485  allocate(comm%send(pos)%i(send_size(p)))
2486  comm%send(pos)%buffer_pos = send_buffer_pos(p)
2487  comm%send(pos)%count = send_size(p)
2488  comm%send(pos)%pe = pelist(p)
2489  comm%sendsize = comm%sendsize + send_size(p)
2490  endif
2491  enddo
2492 
2493  nset = 3
2494  if(monotonic_exchange) nset = 5
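      ! each exchange cell is encoded as nset reals: (i, j, tile), plus (di, dj)
      ! when monotonic_exchange is set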
2495  rbuf_size = sum(recv_size)*nset
2496  sbuf_size = sum(send_size)*nset
2497  if(rbuf_size>0) allocate(recv_buf(rbuf_size))
2498  if(sbuf_size>0) allocate(send_buf(sbuf_size))
2499 
2500  pos = 0
2501  do n = 0, npes-1
2502  p = mod(mypos+npes-n, npes)
2503  if(recv_size(p) ==0) cycle
2504  msgsize = recv_size(p)*nset
2505  call mpp_recv(recv_buf(pos+1), glen=msgsize, from_pe=pelist(p), block=.false., tag=comm_tag_4)
2506  pos = pos + msgsize
2507  enddo
2508 
2509  pos_x = 0
2510  do p = 1, npes-1
2511  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2512  enddo
2513  ll = 0
2514  pos = 0
2515  do n = 0, npes-1
2516  p = mod(mypos+n, npes)
2517  do l = 1, send_size(p)
2518  send_buf(pos+1) = real(iarray(pos_x(p)+l), r8_kind)
2519  send_buf(pos+2) = real(jarray(pos_x(p)+l), r8_kind)
2520  send_buf(pos+3) = real(tarray(pos_x(p)+l), r8_kind)
2521  if(monotonic_exchange) then
2522  send_buf(pos+4) = diarray(pos_x(p)+l)
2523  send_buf(pos+5) = djarray(pos_x(p)+l)
2524  endif
2525  pos = pos + nset
2526  enddo
2527  enddo
2528 
2529  pos = 0
2530  do n = 0, npes-1
2531  p = mod(mypos+n, npes)
2532  if(send_size(p) ==0) cycle
2533  msgsize = send_size(p)*nset
2534  call mpp_send(send_buf(pos+1), plen=msgsize, to_pe=pelist(p), tag=comm_tag_4 )
2535  pos = pos + msgsize
2536  enddo
2537 
2538  call mpp_sync_self(check=event_recv)
2539  nrecv = count(recv_size>0)
2540  comm%nrecv = nrecv
2541  comm%recvsize = 0
2542 
2543  if(nrecv >0) then
2544  if (associated(comm%recv)) deallocate(comm%recv) !< Check if allocated
2545  allocate(comm%recv(nrecv))
2546  comm%recv(:)%count = 0
2547  !--- set up the buffer pos for each receive
2548  buffer_pos = 0
2549  do p = 0, npes-1
2550  recv_buffer_pos(p) = buffer_pos
2551  buffer_pos = buffer_pos + recv_size(p)
2552  enddo
2553  pos = 0
2554  buffer_pos = 0
2555  do m=0,npes-1
2556  p = mod(mypos+npes-m, npes)
2557  if(recv_size(p)>0) then
2558  pos = pos + 1
2559  allocate(comm%recv(pos)%i(recv_size(p)))
2560  allocate(comm%recv(pos)%j(recv_size(p)))
2561  allocate(comm%recv(pos)%tile(recv_size(p)))
2562  comm%recv(pos)%buffer_pos = recv_buffer_pos(p)
2563  comm%recv(pos)%pe = pelist(p)
2564  comm%recv(pos)%count = recv_size(p)
2565  comm%recvsize = comm%recvsize + recv_size(p)
2566  if(monotonic_exchange) then
2567  allocate(comm%recv(pos)%di(recv_size(p)))
2568  allocate(comm%recv(pos)%dj(recv_size(p)))
2569  endif
2570  if(grid1%is_ug) then
2571  do n = 1, recv_size(p)
2572  i = int(recv_buf(buffer_pos+1))
2573  j = int(recv_buf(buffer_pos+2))
2574  comm%recv(pos)%i(n) = grid1%l_index((j-1)*grid1%im+i)
2575  comm%recv(pos)%j(n) = 1
2576  comm%recv(pos)%tile(n) = int(recv_buf(buffer_pos+3))
2577  if(monotonic_exchange) then
2578  comm%recv(pos)%di(n) = recv_buf(buffer_pos+4)
2579  comm%recv(pos)%dj(n) = recv_buf(buffer_pos+5)
2580  endif
2581  buffer_pos = buffer_pos + nset
2582  enddo
2583  else
2584  do n = 1, recv_size(p)
2585  comm%recv(pos)%i(n) = int(recv_buf(buffer_pos+1) )- grid1%is_me + 1
2586  comm%recv(pos)%j(n) = int(recv_buf(buffer_pos+2) )- grid1%js_me + 1
2587  comm%recv(pos)%tile(n) = int(recv_buf(buffer_pos+3))
2588  if(monotonic_exchange) then
2589  comm%recv(pos)%di(n) = recv_buf(buffer_pos+4)
2590  comm%recv(pos)%dj(n) = recv_buf(buffer_pos+5)
2591  endif
2592  buffer_pos = buffer_pos + nset
2593  enddo
2594  endif
2595  endif
2596  enddo
2597  if (associated(comm%unpack_ind)) deallocate(comm%unpack_ind) !< Check if allocated
2598  allocate(comm%unpack_ind(nrecv))
2599  pos = 0
2600  do p = 0, npes-1
2601  if(recv_size(p)>0) then
2602  pos = pos + 1
2603  do m = 1, nrecv
2604  if(comm%recv(m)%pe == pelist(p)) then
2605  comm%unpack_ind(pos) = m
2606  exit
2607  endif
2608  enddo
2609  endif
2610  enddo
2611  endif
2612  call mpp_sync_self()
2613 
2614  if(allocated(send_buf) ) deallocate(send_buf)
2615  if(allocated(recv_buf) ) deallocate(recv_buf)
2616  if(allocated(pelist) ) deallocate(pelist)
2617  if(allocated(pos_x) ) deallocate(pos_x)
2619  if(allocated(iarray) ) deallocate(iarray)
2620  if(allocated(jarray) ) deallocate(jarray)
2621  if(allocated(tarray) ) deallocate(tarray)
2622  if(allocated(size_pe) ) deallocate(size_pe)
2623 
2624 end subroutine set_comm_get1
2625 
2626 !###############################################################################
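!> @brief Builds the communication pattern for putting side 1 data to the
!! exchange grid. Unless the first grid is a nested ATM (nnest > 0), the
!! pattern is simply the reverse of xmap%get1, with sends and receives swapped.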
2627 subroutine set_comm_put1(xmap)
2628  type (xmap_type), intent(inout) :: xmap
2629  type (grid_type), pointer, save :: grid1 =>null()
2630  integer, allocatable :: send_size(:)
2631  integer, allocatable :: recv_size(:)
2632  integer :: max_size, g, npes, l, ll, m, mypos
2633  integer :: i1, j1, tile1, p, n, pos, buffer_pos
2634  integer :: nsend, nrecv, msgsize, nset, rbuf_size, sbuf_size
2635  logical :: found
2636  real(r8_kind), allocatable :: recv_buf(:), send_buf(:)
2637  real(r8_kind), allocatable :: diarray(:), djarray(:)
2638  integer, allocatable :: iarray(:), jarray(:), tarray(:)
2639  integer, allocatable :: pos_x(:), pelist(:), size_pe(:), pe_put1(:)
2640  integer :: recv_buffer_pos(0:xmap%npes)
2641  type(comm_type), pointer, save :: comm => null()
2642 
2643 
2644  comm => xmap%put1
2645  if(nnest == 0 .OR. xmap%grids(1)%id .NE. 'ATM' ) then
2646  comm%nsend = xmap%get1%nrecv
2647  comm%nrecv = xmap%get1%nsend
2648  comm%sendsize = xmap%get1%recvsize
2649  comm%recvsize = xmap%get1%sendsize
2650  comm%send => xmap%get1%recv
2651  comm%recv => xmap%get1%send
2652  xmap%ind_put1 => xmap%ind_get1
2653  return
2654  endif
2655 
2656  max_size = 0
2657  do g=2,size(xmap%grids(:))
2658  max_size = max_size + xmap%grids(g)%size
2659  enddo
2660  grid1 => xmap%grids(1)
2661  comm%nsend = 0
2662  comm%nrecv = 0
2663  npes = xmap%npes
2664  allocate(pelist(0:npes-1))
2665  call mpp_get_current_pelist(pelist)
2666  allocate(send_size(0:npes-1))
2667  allocate(recv_size(0:npes-1))
2668  allocate(size_pe(0:npes-1))
2669  allocate(pos_x(0:npes-1))
2670  size_pe = 0
2671  send_size = 0
2672  recv_size = 0
2673 
2674  if(max_size > 0) then
2675  allocate(pe_put1(max_size))
2676  if (associated(xmap%ind_put1)) deallocate(xmap%ind_put1) !< Check if allocated
2677  allocate(xmap%ind_put1(max_size))
2678 
2679  !--- find the recv_indx
2680  ll = 0
2681  do g=2,size(xmap%grids(:))
2682  do l=1,xmap%grids(g)%size
2683  i1 = xmap%grids(g)%x(l)%i1
2684  j1 = xmap%grids(g)%x(l)%j1
2685  tile1 = xmap%grids(g)%x(l)%tile
2686  do p=0,npes-1
2687  if(grid1%tile(p) == tile1) then
2688  if(in_box(i1, j1, grid1%is(p), grid1%ie(p), grid1%js(p), grid1%je(p))) then
2689  size_pe(p) = size_pe(p) + 1
2690  exit
2691  endif
2692  endif
2693  enddo
2694  ll = ll + 1
2695  pe_put1(ll) = p
2696  enddo
2697  enddo
2698 
2699  pos_x = 0
2700  do p = 1, npes-1
2701  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2702  enddo
2703 
2704  !--- find the send size for put_1_to_xgrid
2705  allocate(iarray(max_size))
2706  allocate(jarray(max_size))
2707  allocate(tarray(max_size))
2708  if(monotonic_exchange) then
2709  allocate(diarray(max_size))
2710  allocate(djarray(max_size))
2711  endif
2712 
2713  ll = 0
2714 
2715  do g=2,size(xmap%grids(:))
2716  do l=1,xmap%grids(g)%size
2717  i1 = xmap%grids(g)%x(l)%i1
2718  j1 = xmap%grids(g)%x(l)%j1
2719  tile1 = xmap%grids(g)%x(l)%tile
2720  ll = ll + 1
2721  p = pe_put1(ll)
2722 
2723  found = .false.
2724  if(send_size(p) > 0) then
2725  if( i1 == iarray(pos_x(p)+send_size(p)) .AND. j1 == jarray(pos_x(p)+send_size(p)) &
2726  .AND. tile1 == tarray(pos_x(p)+send_size(p))) then
2727  found = .true.
2728  n = send_size(p)
2729  else
2730  !--- may need to be replaced with a faster search algorithm
2731  do n = 1, send_size(p)
2732  if(i1 == iarray(pos_x(p)+n) .AND. j1 == jarray(pos_x(p)+n) .AND. tile1 == tarray(pos_x(p)+n)) then
2733  found = .true.
2734  exit
2735  endif
2736  enddo
2737  endif
2738  endif
2739  if( (.NOT. found) .OR. monotonic_exchange ) then
2740  send_size(p) = send_size(p)+1
2741  pos = pos_x(p)+send_size(p)
2742  iarray(pos) = i1
2743  jarray(pos) = j1
2744  tarray(pos) = tile1
2745  if(monotonic_exchange) then
2746  diarray(pos) = xmap%grids(g)%x(l)%di
2747  djarray(pos) = xmap%grids(g)%x(l)%dj
2748  endif
2749  n = send_size(p)
2750  endif
2751  xmap%ind_put1(ll) = n
2752  enddo
2753  enddo
2754 
2755  pos_x = 0
2756  do p = 1, npes-1
2757  pos_x(p) = pos_x(p-1) + send_size(p-1)
2758  enddo
2759 
2760  ll = 0
2761  do g=2,size(xmap%grids(:))
2762  do l=1,xmap%grids(g)%size
2763  i1 = xmap%grids(g)%x(l)%i1
2764  j1 = xmap%grids(g)%x(l)%j1
2765  tile1 = xmap%grids(g)%x(l)%tile
2766  ll = ll + 1
2767  p = pe_put1(ll)
2768  xmap%ind_put1(ll) = pos_x(p) + xmap%ind_put1(ll)
2769  enddo
2770  enddo
2771  endif
2772 
2773 
2774  mypos = mpp_pe()-mpp_root_pe()
2775 
2776  if (do_alltoall) then
2777  call mpp_alltoall(send_size, 1, recv_size, 1)
2778  else
2779  do n = 0, npes-1
2780  p = mod(mypos+npes-n, npes)
2781  call mpp_recv(recv_size(p), glen=1, from_pe=pelist(p), block=.false., tag=comm_tag_5)
2782  enddo
2783 
2784  !--- send data
2785  do n = 0, npes-1
2786  p = mod(mypos+n, npes)
2787  call mpp_send(send_size(p), plen=1, to_pe=pelist(p), tag=comm_tag_5)
2788  enddo
2789 
2790  call mpp_sync_self(check=event_recv)
2791  call mpp_sync_self()
2792  endif
2793  !--- recv for put_1_to_xgrid
2794  nrecv = count( send_size> 0)
2795  comm%nrecv = nrecv
2796  if(nrecv>0) then
2797  if (associated(comm%recv)) deallocate(comm%recv) !< Check if allocated
2798  allocate(comm%recv(nrecv))
2799  comm%recv(:)%count = 0
2800  endif
2801  pos = 0
2802  comm%recvsize = 0
2803  do p = 0, npes-1
2804  recv_buffer_pos(p) = pos
2805  pos = pos + send_size(p)
2806  enddo
2807 
2808  pos = 0
2809  do n = 0, npes-1
2810  p = mod(mypos+npes-n, npes)
2811  if(send_size(p)>0) then
2812  pos = pos + 1
2813  allocate(comm%recv(pos)%i(send_size(p)))
2814  comm%recv(pos)%buffer_pos = recv_buffer_pos(p)
2815  comm%recv(pos)%count = send_size(p)
2816  comm%recv(pos)%pe = pelist(p)
2817  comm%recvsize = comm%recvsize + send_size(p)
2818  endif
2819  enddo
2820 
2821  nset = 3
2822  if(monotonic_exchange) nset = 5
2823  rbuf_size = sum(recv_size)*nset
2824  sbuf_size = sum(send_size)*nset
2825  if(rbuf_size>0) allocate(recv_buf(rbuf_size))
2826  if(sbuf_size>0) allocate(send_buf(sbuf_size))
2827 
2828  pos = 0
2829  do n = 0, npes-1
2830  p = mod(mypos+npes-n, npes)
2831  if(recv_size(p) ==0) cycle
2832  msgsize = recv_size(p)*nset
2833  call mpp_recv(recv_buf(pos+1), glen=msgsize, from_pe=pelist(p), block=.false., tag=comm_tag_6)
2834  pos = pos + msgsize
2835  enddo
2836 
2837  pos_x = 0
2838  do p = 1, npes-1
2839  pos_x(p) = pos_x(p-1) + size_pe(p-1)
2840  enddo
2841  ll = 0
2842  pos = 0
2843  do n = 0, npes-1
2844  p = mod(mypos+n, npes)
2845  do l = 1, send_size(p)
2846  send_buf(pos+1) = real(iarray(pos_x(p)+l), r8_kind)
2847  send_buf(pos+2) = real(jarray(pos_x(p)+l), r8_kind)
2848  send_buf(pos+3) = real(tarray(pos_x(p)+l), r8_kind)
2849  if(monotonic_exchange) then
2850  send_buf(pos+4) = diarray(pos_x(p)+l)
2851  send_buf(pos+5) = djarray(pos_x(p)+l)
2852  endif
2853  pos = pos + nset
2854  enddo
2855  enddo
2856 
2857  pos = 0
2858  do n = 0, npes-1
2859  p = mod(mypos+n, npes)
2860  if(send_size(p) ==0) cycle
2861  msgsize = send_size(p)*nset
2862  call mpp_send(send_buf(pos+1), plen=msgsize, to_pe=pelist(p), tag=comm_tag_6 )
2863  pos = pos + msgsize
2864  enddo
2865 
2866  call mpp_sync_self(check=event_recv)
2867  nsend = count(recv_size>0)
2868  comm%nsend = nsend
2869  comm%sendsize = 0
2870 
2871  if(nsend >0) then
2872  if (associated(comm%send)) deallocate(comm%send) !< Check if allocated
2873  allocate(comm%send(nsend))
2874  comm%send(:)%count = 0
2875  pos = 0
2876  buffer_pos = 0
2877  do m=0,npes-1
2878  p = mod(mypos+npes-m, npes)
2879  if(recv_size(p)>0) then
2880  pos = pos + 1
2881  allocate(comm%send(pos)%i(recv_size(p)))
2882  allocate(comm%send(pos)%j(recv_size(p)))
2883  allocate(comm%send(pos)%tile(recv_size(p)))
2884  comm%send(pos)%pe = pelist(p)
2885  comm%send(pos)%count = recv_size(p)
2886  comm%sendsize = comm%sendsize + recv_size(p)
2887  if(monotonic_exchange) then
2888  allocate(comm%send(pos)%di(recv_size(p)))
2889  allocate(comm%send(pos)%dj(recv_size(p)))
2890  endif
2891  do n = 1, recv_size(p)
2892  comm%send(pos)%i(n) = int(recv_buf(buffer_pos+1) )- grid1%is_me + 1
2893  comm%send(pos)%j(n) = int(recv_buf(buffer_pos+2) )- grid1%js_me + 1
2894  comm%send(pos)%tile(n) = int(recv_buf(buffer_pos+3))
2895  if(monotonic_exchange) then
2896  comm%send(pos)%di(n) = recv_buf(buffer_pos+4)
2897  comm%send(pos)%dj(n) = recv_buf(buffer_pos+5)
2898  endif
2899  buffer_pos = buffer_pos + nset
2900  enddo
2901  endif
2902  enddo
2903  endif
2904 
2905  call mpp_sync_self()
2906  if(allocated(send_buf) ) deallocate(send_buf)
2907  if(allocated(recv_buf) ) deallocate(recv_buf)
2908  if(allocated(pelist) ) deallocate(pelist)
2909  if(allocated(pos_x) ) deallocate(pos_x)
2911  if(allocated(iarray) ) deallocate(iarray)
2912  if(allocated(jarray) ) deallocate(jarray)
2913  if(allocated(tarray) ) deallocate(tarray)
2914  if(allocated(size_pe) ) deallocate(size_pe)
2915 
2916 end subroutine set_comm_put1
2917 
2918 
2919 !###############################################################################
2920 !> @brief Regenerate/Update the xmap
2921 !! @details This subroutine basically regenerates the exchange grid via updating the xmap.
2922 !! Practically xmap is the object specifying the exchange grid and has all the relevant information of Xgrid.
2923 !! Note in particular that regenerating the xmap/Xgrid accounts for dynamical changes in the subgrid
2924 !! parametrization of the side 2 components (land and ice-ocean).
2925 !! E.g., when side 2 is the ice, the xgrid is regenerated so that
2926 !! OCN grid cells that are partially or totally open water contribute to (are side 2 parents of) the Xgrid
2927 !! and, conversely,
2928 !! OCN grid cells that are totally ice covered do not contribute to (are kicked out of) the Xgrid.
2929 !! This makes xmap a dynamical object and a powerful tool for flux exchange calculations.
2930 !!
2931 !! Things to keep in mind about xmap/xgrid:
2932 !! xgrid contains two sides:
2933 !! side1: the side where 2d arrays are put to and gotten from the Xgrid.
2934 !! side2: the side where 3d arrays are put to and gotten from the Xgrid.
2935 !! The second side was designed to enable exchange along the sub-grid scale (3rd dimension) for component
2936 !! models that have a subgrid-scale parametrization (e.g., seaice categories and land tiles).
2937 !! @param[inout] xmap exchange grid
2938 !!
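!! As an illustrative sketch of the two sides (a minimal example using this
!! module's put/get interfaces; the data and exchange-array names are
!! assumptions):
!!
!!   call put_to_xgrid(atm_field_2d, 'ATM', x_exchange, xmap)   ! side 1, 2d
!!   call get_from_xgrid(lnd_field_3d, 'LND', x_exchange, xmap) ! side 2, 3d
!!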
2939 subroutine regen(xmap)
2940 type (xmap_type), intent(inout) :: xmap
2941 
2942  integer :: g, l, k, max_size
2943  integer :: i1, j1, i2, j2, p
2944  integer :: tile1
2945  integer :: ll, lll
2946  logical :: overlap_with_nest
2947  integer :: cnt(xmap%get1%nsend)
2948  integer :: i,j,n,xloc,pos,nsend,m,npes, mypos
2949  integer :: send_ind(0:xmap%npes-1)
2950 
2951  max_size = 0
2952 
2953  do g=2,size(xmap%grids(:))
2954  max_size = max_size + xmap%grids(g)%size * xmap%grids(g)%km
2955  end do
2956 
2957  if (max_size>size(xmap%x1(:))) then
2958  if (associated(xmap%x1)) deallocate(xmap%x1) !< Check x1 if allocated
2959  if (associated(xmap%x2)) deallocate(xmap%x2) !< Check x2 if allocated
2960  allocate( xmap%x1(1:max_size) )
2961  allocate( xmap%x2(1:max_size) )
2962  endif
2963 
2964 
2965  do g=2,size(xmap%grids(:))
2966  xmap%grids(g)%first = 1
2967  xmap%grids(g)%last = 0
2968  end do
2969 
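  ! first pass: rebuild the x1/x2 exchange-cell lists; cells whose fractional area
  ! is zero are skipped, so the regenerated xgrid reflects the current subgrid partitions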
2970  xmap%size = 0
2971  ll = 0
2972  do g=2,size(xmap%grids(:))
2973  xmap%grids(g)%first = xmap%size + 1;
2974 
2975  do l=1,xmap%grids(g)%size
2976  i1 = xmap%grids(g)%x(l)%i1
2977  j1 = xmap%grids(g)%x(l)%j1
2978  i2 = xmap%grids(g)%x(l)%i2
2979  j2 = xmap%grids(g)%x(l)%j2
2980  tile1 = xmap%grids(g)%x(l)%tile
2981  ll = ll + 1
2982  if(xmap%grids(g)%is_ug) then
2983  do k=1,xmap%grids(g)%km
2984  lll = xmap%grids(g)%l_index((j2-1)*xmap%grids(g)%im+i2)
2985  if (xmap%grids(g)%frac_area(lll,1,k)/=0.0_r8_kind) then
2986  xmap%size = xmap%size+1
2987  xmap%x1(xmap%size)%pos = xmap%ind_get1(ll)
2988  xmap%x1(xmap%size)%i = xmap%grids(g)%x(l)%i1
2989  xmap%x1(xmap%size)%j = xmap%grids(g)%x(l)%j1
2990  xmap%x1(xmap%size)%tile = xmap%grids(g)%x(l)%tile
2991  xmap%x1(xmap%size)%area = xmap%grids(g)%x(l)%area &
2992  *xmap%grids(g)%frac_area(lll,1,k)
2993  xmap%x1(xmap%size)%di = xmap%grids(g)%x(l)%di
2994  xmap%x1(xmap%size)%dj = xmap%grids(g)%x(l)%dj
2995  xmap%x2(xmap%size)%i = xmap%grids(g)%x(l)%i2
2996  xmap%x2(xmap%size)%j = xmap%grids(g)%x(l)%j2
2997  xmap%x2(xmap%size)%l = lll
2998  xmap%x2(xmap%size)%k = k
2999  xmap%x2(xmap%size)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
3000  endif
3001  enddo
3002  else
3003  do k=1,xmap%grids(g)%km
3004  if (xmap%grids(g)%frac_area(i2,j2,k)/=0.0_r8_kind) then
3005  xmap%size = xmap%size+1
3006  xmap%x1(xmap%size)%pos = xmap%ind_get1(ll)
3007  xmap%x1(xmap%size)%i = xmap%grids(g)%x(l)%i1
3008  xmap%x1(xmap%size)%j = xmap%grids(g)%x(l)%j1
3009  xmap%x1(xmap%size)%tile = xmap%grids(g)%x(l)%tile
3010  xmap%x1(xmap%size)%area = xmap%grids(g)%x(l)%area &
3011  *xmap%grids(g)%frac_area(i2,j2,k)
3012  xmap%x1(xmap%size)%di = xmap%grids(g)%x(l)%di
3013  xmap%x1(xmap%size)%dj = xmap%grids(g)%x(l)%dj
3014  xmap%x2(xmap%size)%i = xmap%grids(g)%x(l)%i2
3015  xmap%x2(xmap%size)%j = xmap%grids(g)%x(l)%j2
3016  xmap%x2(xmap%size)%k = k
3017  xmap%x2(xmap%size)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
3018  end if
3019  enddo
3020  end if
3021  end do
3022  xmap%grids(g)%last = xmap%size
3023  end do
3024 
3025 
3026  if (max_size>size(xmap%x1_put(:))) then
3027  if (associated(xmap%x1_put)) deallocate(xmap%x1_put) !< Check if allocated
3028  allocate( xmap%x1_put(1:max_size) )
3029  endif
3030  if (max_size>size(xmap%x2_get(:))) then
3031  if (associated(xmap%x2_get)) deallocate(xmap%x2_get) !< Check if allocated
3032  allocate( xmap%x2_get(1:max_size) )
3033  endif
3034 
3035  do g=2,size(xmap%grids(:))
3036  xmap%grids(g)%first_get = 1
3037  xmap%grids(g)%last_get = 0
3038  end do
3039 
3040  xmap%size_put1 = 0
3041  xmap%size_get2 = 0
3042  ll = 0
3043  do g=2,size(xmap%grids(:))
3044  xmap%grids(g)%first_get = xmap%size_get2 + 1;
3045 
3046  do l=1,xmap%grids(g)%size
3047  i1 = xmap%grids(g)%x(l)%i1
3048  j1 = xmap%grids(g)%x(l)%j1
3049  i2 = xmap%grids(g)%x(l)%i2
3050  j2 = xmap%grids(g)%x(l)%j2
3051  tile1 = xmap%grids(g)%x(l)%tile
3052  ll = ll + 1
3053  overlap_with_nest = .false.
3054  if( xmap%grids(1)%id == "ATM" .AND. tile1 == tile_parent .AND. &
3055  in_box(i1, j1, is_parent, ie_parent, js_parent, je_parent) ) overlap_with_nest = .true.
3056  if(xmap%grids(g)%is_ug) then
3057  do k=1,xmap%grids(g)%km
3058  lll = xmap%grids(g)%l_index((j2-1)*xmap%grids(g)%im+i2)
3059  if (xmap%grids(g)%frac_area(lll,1,k)/=0.0_r8_kind) then
3060  xmap%size_put1 = xmap%size_put1+1
3061  xmap%x1_put(xmap%size_put1)%pos = xmap%ind_put1(ll)
3062  xmap%x1_put(xmap%size_put1)%i = xmap%grids(g)%x(l)%i1
3063  xmap%x1_put(xmap%size_put1)%j = xmap%grids(g)%x(l)%j1
3064  xmap%x1_put(xmap%size_put1)%tile = xmap%grids(g)%x(l)%tile
3065  xmap%x1_put(xmap%size_put1)%area = xmap%grids(g)%x(l)%area &
3066  *xmap%grids(g)%frac_area(lll,1,k)
3067  xmap%x1_put(xmap%size_put1)%di = xmap%grids(g)%x(l)%di
3068  xmap%x1_put(xmap%size_put1)%dj = xmap%grids(g)%x(l)%dj
3069  if( .not. overlap_with_nest) then
3070  xmap%size_get2 = xmap%size_get2+1
3071  xmap%x2_get(xmap%size_get2)%i = xmap%grids(g)%x(l)%i2
3072  xmap%x2_get(xmap%size_get2)%j = xmap%grids(g)%x(l)%j2
3073  xmap%x2_get(xmap%size_get2)%l = lll
3074  xmap%x2_get(xmap%size_get2)%k = k
3075  xmap%x2_get(xmap%size_get2)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
3076  xmap%x2_get(xmap%size_get2)%pos = xmap%size_put1
3077  endif
3078  end if
3079  end do
3080  else
3081  do k=1,xmap%grids(g)%km
3082  if (xmap%grids(g)%frac_area(i2,j2,k)/=0.0_r8_kind) then
3083  xmap%size_put1 = xmap%size_put1+1
3084  xmap%x1_put(xmap%size_put1)%pos = xmap%ind_put1(ll)
3085  xmap%x1_put(xmap%size_put1)%i = xmap%grids(g)%x(l)%i1
3086  xmap%x1_put(xmap%size_put1)%j = xmap%grids(g)%x(l)%j1
3087  xmap%x1_put(xmap%size_put1)%tile = xmap%grids(g)%x(l)%tile
3088  xmap%x1_put(xmap%size_put1)%area = xmap%grids(g)%x(l)%area &
3089  *xmap%grids(g)%frac_area(i2,j2,k)
3090  xmap%x1_put(xmap%size_put1)%di = xmap%grids(g)%x(l)%di
3091  xmap%x1_put(xmap%size_put1)%dj = xmap%grids(g)%x(l)%dj
3092  if( .not. overlap_with_nest) then
3093  xmap%size_get2 = xmap%size_get2+1
3094  xmap%x2_get(xmap%size_get2)%i = xmap%grids(g)%x(l)%i2
3095  xmap%x2_get(xmap%size_get2)%j = xmap%grids(g)%x(l)%j2
3096  xmap%x2_get(xmap%size_get2)%k = k
3097  xmap%x2_get(xmap%size_get2)%area = xmap%grids(g)%x(l)%area * xmap%grids(g)%x(l)%scale
3098  xmap%x2_get(xmap%size_get2)%pos = xmap%size_put1
3099  endif
3100  end if
3101  end do
3102  endif
3103  end do
3104  xmap%grids(g)%last_get = xmap%size_get2
3105  end do
3106 
3107  !---set up information for get_1_from_xgrid_repro
3108  if (make_exchange_reproduce) then
3109  if (xmap%get1_repro%nsend > 0) then
3110  xloc = 0
3111  nsend = 0
3112  npes = xmap%npes
3113  mypos = mpp_pe() - mpp_root_pe()
3114  cnt(:) = 0
3115  do m=0,npes-1
3116  p = mod(mypos+m, npes)
3117  if( xmap%send_count_repro(p) > 0 ) then
3118  nsend = nsend + 1
3119  send_ind(p) = nsend
3120  endif
3121  enddo
3122  do g=2,size(xmap%grids(:))
3123  do l=1,xmap%grids(g)%size ! index into this side 2 grid's patterns
3124  p = xmap%grids(g)%x(l)%pe-xmap%root_pe
3125  n = send_ind(p)
3126  cnt(n) = cnt(n) + 1
3127  pos = cnt(n)
3128  xmap%get1_repro%send(n)%xLoc(pos) = xloc
3129  if( xmap%grids(g)%is_ug ) then
3130  i = xmap%grids(g)%x(l)%l2
3131  xloc = xloc + count(xmap%grids(g)%frac_area(i,1,:)/=0.0_r8_kind)
3132  else
3133  i = xmap%grids(g)%x(l)%i2
3134  j = xmap%grids(g)%x(l)%j2
3135  xloc = xloc + count(xmap%grids(g)%frac_area(i,j,:)/=0.0_r8_kind)
3136  endif
3137  enddo
3138  enddo
3139  endif
3140  endif
3141 
3142 end subroutine regen
3143 
3144 !#######################################################################
3145 !> @brief Changes sub-grid portion areas and/or number.
3146 !! @details (re)sets the "fraction area" of the side 2 component grid cell.
3147 !! "fraction area" is a dynamic property of the component model (seaice or land)
3148 !! that needs to be updated after each timestep of that component in order for the exchange mechanism to work properly.
3149 !! The input is a 3D array of numbers between 0 and 1. It signifies the
3150 !! fraction of the component grid cell area which has a model-specific property.
3151 !! This property is used for some sub-grid scale parametrization in the component model.
3152 !! E.g., for the seaice component model, the quantity of seaice in each grid cell (i,j)
3153 !! is distributed into N=grid%km partitions (ice categories), each parametrized with a weight (part_size); the weights add up to 1.
3154 !! E.g., for the 6+2 thickness (h) categories used in GFDL seaice models we have,
3155 !! given hlim(1, ..., 8) = [1.0e-10, 0.1, 0.3, 0.7, 1.1, 1.5, 2.0, 2.5] (meters):
3156 !! Category n=1 : h <= hlim(1), essentially no ice
3157 !! Category n=2...7 : hlim(n-1) < h <= hlim(n)
3158 !! Category n=8 : hlim(n-1) < h, unlimited ice thickness
3159 !! E.g., if seaice in grid cell (i,j) is parameterized as
3160 !! 10% open water, 0% category 1, 40% category 2, 50% category 3, then we have
3161 !! f(i,j,1:km) = part_size(i,j,1:8) = [0.1, 0.0, 0.4, 0.5, 0.0, 0.0, 0.0, 0.0]
3162 !!
3163 !! @param[in] f real(r8_kind) 3D array
3164 !! @param[in] grid_id 3 character grid ID
3165 !! @param[inout] xmap exchange grid
3166 !!
3167 !! <br>Example usage:
3168 !! @code{.F90}
3169 !! call fms_xgrid_set_frac_area (Ice%part_size(isc:iec,jsc:jec,:), 'OCN', xmap_sfc)
3170 !! @endcode
3171 !!
3172 subroutine set_frac_area_sg(f, grid_id, xmap)
3173 real(r8_kind), dimension(:,:,:), intent(in) :: f !< fraction area to be set
3174 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3175 type (xmap_type), intent(inout) :: xmap !< exchange grid with given grid ID
3176 
3177  integer :: g
3178  type(grid_type), pointer, save :: grid =>null()
3179 
3180  if (grid_id==xmap%grids(1)%id) call error_mesg ('xgrid_mod', &
3181  'set_frac_area called on side 1 grid', fatal)
3182  do g=2,size(xmap%grids(:))
3183  grid => xmap%grids(g)
3184  if (grid_id==grid%id) then
3185  if (size(f,3)/=size(grid%frac_area,3)) then
3186  if (associated(grid%frac_area)) deallocate (grid%frac_area) !< Check if allocated
3187  grid%km = size(f,3);
3188  allocate( grid%frac_area(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, &
3189  grid%km) )
3190  end if
3191  grid%frac_area = f;
3192  call regen(xmap)
3193  return;
3194  end if
3195  end do
3196 
3197  call error_mesg ('xgrid_mod', 'set_frac_area: could not find grid id', fatal)
3198 
3199 end subroutine set_frac_area_sg
3200 
3201 !#######################################################################
3202 
3203 !> @brief Changes sub-grid portion areas and/or number.
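!! @details Unstructured-grid counterpart of set_frac_area_sg; calling it with any
!! grid_id other than 'LND' is a fatal error, and it triggers regen(xmap).
!! <br>Example usage (a sketch; the land fraction array name is illustrative):
!! @code{.F90}
!! call set_frac_area_ug (land_tile_frac(ls:le,:), 'LND', xmap_sfc)
!! @endcode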
3204 subroutine set_frac_area_ug(f, grid_id, xmap)
3205 real(r8_kind), dimension(:,:), intent(in) :: f !< fractional area to set
3206 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3207 type (xmap_type), intent(inout) :: xmap !< exchange grid with given grid ID
3208 
3209  integer :: g
3210  type(grid_type), pointer, save :: grid =>null()
3211 
3212  if (grid_id==xmap%grids(1)%id) call error_mesg ('xgrid_mod', &
3213  'set_frac_area_ug called on side 1 grid', fatal)
3214  if (grid_id .NE. 'LND' ) call error_mesg ('xgrid_mod', &
3215       'set_frac_area_ug called for grid_id .NE. LND', fatal)
3216  do g=2,size(xmap%grids(:))
3217  grid => xmap%grids(g)
3218  if (grid_id==grid%id) then
3219  if (size(f,2)/=size(grid%frac_area,3)) then
3220  if (associated(grid%frac_area)) deallocate (grid%frac_area) !< Check if allocated
3221  grid%km = size(f,2);
3222  allocate( grid%frac_area(grid%ls_me:grid%le_me, 1, grid%km) )
3223  end if
3224  grid%frac_area(:,1,:) = f(:,:);
3225  call regen(xmap)
3226  return;
3227  end if
3228  end do
3229 
3230  call error_mesg ('xgrid_mod', 'set_frac_area_ug: could not find grid id', fatal)
3231 
3232 end subroutine set_frac_area_ug
3233 
3234 !#######################################################################
3235 
3236 !> @brief Returns current size of exchange grid variables.
3237 !! @return size of given exchange grid's variable
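!! <br>Example usage (a sketch; x_flux is an illustrative exchange-grid array):
!! @code{.F90}
!! allocate( x_flux(xgrid_count(xmap_sfc)) ) ! size a field that lives on the exchange grid
!! @endcode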
3238 integer function xgrid_count(xmap)
3239 type (xmap_type), intent(inout) :: xmap
3240 
3241  xgrid_count = xmap%size
3242 end function xgrid_count
3243 
3244 !#######################################################################
3245 
3246 !> Scatters data to exchange grid
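!! <br>Example usage (a sketch; typically reached through the generic put_to_xgrid
!! interface, and the atmosphere field name is illustrative):
!! @code{.F90}
!! call put_to_xgrid (Atm%t_bot(is:ie,js:je), 'ATM', x_temp, xmap_sfc, remap_method)
!! @endcode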
3247 subroutine put_side1_to_xgrid(d, grid_id, x, xmap, remap_method, complete)
3248  real(r8_kind), dimension(:,:), intent(in) :: d !< data to send
3249  character(len=3), intent(in) :: grid_id !< 3 character grid ID
3250  real(r8_kind), dimension(:), intent(inout) :: x !< xgrid data
3251  type (xmap_type), intent(inout) :: xmap !< exchange grid
3252  integer, intent(in), optional :: remap_method !< exchange grid interpolation method can
3253  !! be FIRST_ORDER(=1) or SECOND_ORDER(=2)
3254  logical, intent(in), optional :: complete
3255 
3256  logical :: is_complete, set_mismatch
3257  integer :: g, method
3258  character(len=2) :: text
3259  integer, save :: isize=0
3260  integer, save :: jsize=0
3261  integer, save :: lsize=0
3262  integer, save :: xsize=0
3263  integer, save :: method_saved=0
3264  character(len=3), save :: grid_id_saved=""
3265  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
3266  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind
3267 
3268  if (grid_id==xmap%grids(1)%id) then
3269  method = first_order ! default
3270  if(present(remap_method)) method = remap_method
3271  is_complete = .true.
3272  if(present(complete)) is_complete=complete
3273  lsize = lsize + 1
3274  if( lsize > max_fields ) then
3275  write( text,'(i2)' ) max_fields
3276  call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group put_side1_to_xgrid', fatal)
3277  endif
3278  d_addrs(lsize) = loc(d)
3279  x_addrs(lsize) = loc(x)
3280 
3281  if(lsize == 1) then
3282  isize = size(d,1)
3283  jsize = size(d,2)
3284  xsize = size(x(:))
3285  method_saved = method
3286  grid_id_saved = grid_id
3287  else
3288  set_mismatch = .false.
3289  set_mismatch = set_mismatch .OR. (isize /= size(d,1))
3290  set_mismatch = set_mismatch .OR. (jsize /= size(d,2))
3291  set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
3292  set_mismatch = set_mismatch .OR. (method_saved /= method)
3293  set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
3294  if(set_mismatch)then
3295  write( text,'(i2)' ) lsize
3296  call error_mesg ('xgrid_mod', 'Incompatible field at count '//text//' for group put_side1_to_xgrid', fatal )
3297  endif
3298  endif
3299 
3300  if(is_complete) then
3301  !--- when monotonic_exchange is true and side 1 is ATM, always use monotonic
3302  !--- second order conservative interpolation.
3303  if(monotonic_exchange .AND. grid_id == 'ATM') then
3304  call put_1_to_xgrid_order_2(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3305  else if(method == first_order) then
3306  call put_1_to_xgrid_order_1(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3307  else
3308  if(grid_id .NE. 'ATM') call error_mesg ('xgrid_mod', &
3309  "second order put_to_xgrid should only be applied to 'ATM' model, "//&
3310  "contact developer", fatal)
3311  call put_1_to_xgrid_order_2(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3312  endif
3313 
3314  d_addrs = -9999_i8_kind
3315  x_addrs = -9999_i8_kind
3316  isize = 0
3317  jsize = 0
3318  xsize = 0
3319  lsize = 0
3320  method_saved = 0
3321  grid_id_saved = ""
3322  endif
3323  return
3324  end if
3325 
3326  do g=2,size(xmap%grids(:))
3327  if (grid_id==xmap%grids(g)%id) &
3328  call error_mesg ('xgrid_mod', &
3329  'put_to_xgrid expects a 3D side 2 grid', fatal)
3330  end do
3331 
3332  call error_mesg ('xgrid_mod', 'put_to_xgrid: could not find grid id', fatal)
3333 
3334 end subroutine put_side1_to_xgrid
3335 
3336 !#######################################################################
3337 
3338 !> Scatters data to exchange grid
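!! <br>Example usage (a sketch; typically reached through the generic put_to_xgrid
!! interface, and the ice field name is illustrative):
!! @code{.F90}
!! call put_to_xgrid (Ice%t_surf(isc:iec,jsc:jec,:), 'OCN', x_temp, xmap_sfc)
!! @endcode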
3339 subroutine put_side2_to_xgrid(d, grid_id, x, xmap)
3340 real(r8_kind), dimension(:,:,:), intent(in) :: d !< data to send
3341 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3342 real(r8_kind), dimension(:), intent(inout) :: x !< xgrid data
3343 type (xmap_type), intent(inout) :: xmap !< exchange grid
3344 
3345  integer :: g
3346 
3347  if (grid_id==xmap%grids(1)%id) &
3348  call error_mesg ('xgrid_mod', &
3349  'put_side2_to_xgrid expects a 3D side 2 grid', fatal)
3350 
3351  do g=2,size(xmap%grids(:))
3352  if (grid_id==xmap%grids(g)%id) then
3353  call put_2_to_xgrid(d, xmap%grids(g), x, xmap)
3354  return;
3355  end if
3356  end do
3357 
3358  call error_mesg ('xgrid_mod', 'put_to_xgrid: could not find grid id', fatal)
3359 
3360 end subroutine put_side2_to_xgrid
3361 
3362 !#######################################################################
3363 
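!> Gathers data from the exchange grid onto a side 1 grid as an area-weighted sum,
!! normalized by the side 1 grid cell areas.
!! <br>Example usage (a sketch; typically reached through the generic get_from_xgrid
!! interface, and the flux array name is illustrative):
!! @code{.F90}
!! call get_from_xgrid (Atm%flux(is:ie,js:je), 'ATM', x_flux, xmap_sfc)
!! @endcode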
3364 subroutine get_side1_from_xgrid(d, grid_id, x, xmap, complete)
3365  real(r8_kind), dimension(:,:), intent(out) :: d !< received xgrid data
3366  character(len=3), intent(in) :: grid_id !< 3 character grid ID
3367  real(r8_kind), dimension(:), intent(in) :: x !< xgrid data
3368  type (xmap_type), intent(inout) :: xmap !< exchange grid
3369  logical, intent(in), optional :: complete
3370 
3371  logical :: is_complete, set_mismatch
3372  integer :: g
3373  character(len=2) :: text
3374  integer, save :: isize=0
3375  integer, save :: jsize=0
3376  integer, save :: lsize=0
3377  integer, save :: xsize=0
3378  character(len=3), save :: grid_id_saved=""
3379  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
3380  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind
3381 
3382  d = 0.0_r8_kind
3383  if (grid_id==xmap%grids(1)%id) then
3384  is_complete = .true.
3385  if(present(complete)) is_complete=complete
3386  lsize = lsize + 1
3387  if( lsize > max_fields ) then
3388  write( text,'(i2)' ) max_fields
3389  call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group get_side1_from_xgrid', fatal)
3390  endif
3391  d_addrs(lsize) = loc(d)
3392  x_addrs(lsize) = loc(x)
3393 
3394  if(lsize == 1) then
3395  isize = size(d,1)
3396  jsize = size(d,2)
3397  xsize = size(x(:))
3398  grid_id_saved = grid_id
3399  else
3400  set_mismatch = .false.
3401  set_mismatch = set_mismatch .OR. (isize /= size(d,1))
3402  set_mismatch = set_mismatch .OR. (jsize /= size(d,2))
3403  set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
3404  set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
3405  if(set_mismatch)then
3406  write( text,'(i2)' ) lsize
3407  call error_mesg ('xgrid_mod', 'Incompatible field at count '//text// &
3408  & ' for group get_side1_from_xgrid', fatal )
3409  endif
3410  endif
3411 
3412  if(is_complete) then
3413  if (make_exchange_reproduce) then
3414  call get_1_from_xgrid_repro(d_addrs, x_addrs, xmap, xsize, lsize)
3415  else
3416  call get_1_from_xgrid(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3417  end if
3418  d_addrs(1:lsize) = -9999
3419  x_addrs(1:lsize) = -9999
3420  isize = 0
3421  jsize = 0
3422  xsize = 0
3423  lsize = 0
3424  grid_id_saved = ""
3425  endif
3426  return;
3427  end if
3428 
3429  do g=2,size(xmap%grids(:))
3430  if (grid_id==xmap%grids(g)%id) &
3431  call error_mesg ('xgrid_mod', &
3432  'get_from_xgrid expects a 3D side 2 grid', fatal)
3433  end do
3434 
3435  call error_mesg ('xgrid_mod', 'get_from_xgrid: could not find grid id', fatal)
3436 
3437 end subroutine get_side1_from_xgrid
3438 
3439 !#######################################################################
3440 
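!> Gathers data from the exchange grid onto a side 2 grid as an area-weighted sum,
!! normalized by the side 2 grid cell areas.
!! <br>Example usage (a sketch; typically reached through the generic get_from_xgrid
!! interface, and the flux array name is illustrative):
!! @code{.F90}
!! call get_from_xgrid (Ice%flux(isc:iec,jsc:jec,:), 'OCN', x_flux, xmap_sfc)
!! @endcode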
3441 subroutine get_side2_from_xgrid(d, grid_id, x, xmap)
3442 real(r8_kind), dimension(:,:,:), intent(out) :: d !< received xgrid data
3443 character(len=3), intent(in) :: grid_id !< 3 character grid ID
3444 real(r8_kind), dimension(:), intent(in) :: x !< xgrid data
3445 type (xmap_type), intent(in) :: xmap !< exchange grid
3446 
3447  integer :: g
3448 
3449  if (grid_id==xmap%grids(1)%id) &
3450  call error_mesg ('xgrid_mod', &
3451  'get_from_xgrid expects a 2D side 1 grid', fatal)
3452 
3453  do g=2,size(xmap%grids(:))
3454  if (grid_id==xmap%grids(g)%id) then
3455  call get_2_from_xgrid(d, xmap%grids(g), x, xmap)
3456  return;
3457  end if
3458  end do
3459 
3460  call error_mesg ('xgrid_mod', 'get_from_xgrid: could not find grid id', fatal)
3461 
3462 end subroutine get_side2_from_xgrid
3463 
3464 !#######################################################################
3465 
3466 !> @brief Returns logical associating exchange grid cells with given side two grid.
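!! <br>Example usage (a sketch; flags the exchange cells that belong to the 'OCN' grid):
!! @code{.F90}
!! logical, allocatable :: is_ocn(:)
!! allocate( is_ocn(xgrid_count(xmap_sfc)) )
!! call some (xmap_sfc, is_ocn, 'OCN')
!! @endcode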
3467 subroutine some(xmap, some_arr, grid_id)
3468 type (xmap_type), intent(in) :: xmap
3469 character(len=3), optional, intent(in) :: grid_id
3470 logical, dimension(:), intent(out) :: some_arr !< logical associating exchange grid cells with given side 2 grid.
3471 
3472  integer :: g
3473 
3474  if (.not.present(grid_id)) then
3475 
3476  if(xmap%size > 0) then
3477  some_arr = .true.
3478  else
3479  some_arr = .false.
3480  end if
3481  return;
3482  end if
3483 
3484  if (grid_id==xmap%grids(1)%id) &
3485  call error_mesg ('xgrid_mod', 'some expects a side 2 grid id', fatal)
3486 
3487  do g=2,size(xmap%grids(:))
3488  if (grid_id==xmap%grids(g)%id) then
3489  some_arr = .false.
3490  some_arr(xmap%grids(g)%first:xmap%grids(g)%last) = .true.;
3491  return;
3492  end if
3493  end do
3494 
3495  call error_mesg ('xgrid_mod', 'some could not find grid id', fatal)
3496 
3497 end subroutine some
3498 
3499 !#######################################################################
3500 
3501 subroutine put_2_to_xgrid(d, grid, x, xmap)
3502 type (grid_type), intent(in) :: grid
3503 real(r8_kind), dimension(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, grid%km), intent(in) :: d
3504 real(r8_kind), dimension(:), intent(inout) :: x
3505 type (xmap_type), intent(in) :: xmap
3506 
3507  integer :: l
3508  call mpp_clock_begin(id_put_2_to_xgrid)
3509 
3510  do l=grid%first,grid%last
3511  x(l) = d(xmap%x2(l)%i,xmap%x2(l)%j,xmap%x2(l)%k)
3512  end do
3513 
3514  call mpp_clock_end(id_put_2_to_xgrid)
3515 end subroutine put_2_to_xgrid
3516 
3517 !#######################################################################
3518 
3519 subroutine get_2_from_xgrid(d, grid, x, xmap)
3520 type (grid_type), intent(in) :: grid
3521 real(r8_kind), dimension(grid%is_me:grid%ie_me, grid%js_me:grid%je_me, grid%km), intent(out) :: d
3522 real(r8_kind), dimension(:), intent(in) :: x
3523 type (xmap_type), intent(in) :: xmap
3524 
3525  integer :: l, k
3526 
3527  call mpp_clock_begin(id_get_2_from_xgrid)
3528 
3529  d = 0.0_r8_kind
3530  do l=grid%first_get,grid%last_get
3531  d(xmap%x2_get(l)%i,xmap%x2_get(l)%j,xmap%x2_get(l)%k) = &
3532  d(xmap%x2_get(l)%i,xmap%x2_get(l)%j,xmap%x2_get(l)%k) + xmap%x2_get(l)%area*x(xmap%x2_get(l)%pos)
3533  end do
3534  !
3535  ! normalize with side 2 grid cell areas
3536  !
3537  do k=1,size(d,3)
3538  d(:,:,k) = d(:,:,k) * grid%area_inv
3539  end do
3540 
3541  call mpp_clock_end(id_get_2_from_xgrid)
3542 
3543 end subroutine get_2_from_xgrid
3544 
3545 !#######################################################################
3546 
3547 subroutine put_1_to_xgrid_order_1(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3548  integer(i8_kind), dimension(:), intent(in) :: d_addrs
3549  integer(i8_kind), dimension(:), intent(in) :: x_addrs
3550  type (xmap_type), intent(inout) :: xmap
3551  integer, intent(in) :: isize, jsize, xsize, lsize
3552 
3553  integer :: i, j, p, buffer_pos, msgsize
3554  integer :: from_pe, to_pe, pos, n, l, count
3555  integer :: ibegin, istart, iend, start_pos
3556  type (comm_type), pointer, save :: comm =>null()
3557  real(r8_kind) :: recv_buffer(xmap%put1%recvsize*lsize)
3558  real(r8_kind) :: send_buffer(xmap%put1%sendsize*lsize)
3559  real(r8_kind) :: unpack_buffer(xmap%put1%recvsize)
3560 
3561  real(r8_kind), dimension(isize, jsize) :: d
3562  real(r8_kind), dimension(xsize) :: x
3563  pointer(ptr_d, d)
3564  pointer(ptr_x, x)
3565 
3566  call mpp_clock_begin(id_put_1_to_xgrid_order_1)
3567 
3568  !--- pre-post receiving
3569  comm => xmap%put1
3570  do p = 1, comm%nrecv
3571  msgsize = comm%recv(p)%count*lsize
3572  from_pe = comm%recv(p)%pe
3573  buffer_pos = comm%recv(p)%buffer_pos*lsize
3574  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = from_pe, block=.false., tag=comm_tag_7)
3575  enddo
3576 
3577  !--- send the data
3578  buffer_pos = 0
3579  do p = 1, comm%nsend
3580  msgsize = comm%send(p)%count*lsize
3581  to_pe = comm%send(p)%pe
3582  pos = buffer_pos
3583  do l = 1, lsize
3584  ptr_d = d_addrs(l)
3585  do n = 1, comm%send(p)%count
3586  pos = pos + 1
3587  i = comm%send(p)%i(n)
3588  j = comm%send(p)%j(n)
3589  send_buffer(pos) = d(i,j)
3590  enddo
3591  enddo
3592  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_7 )
3593  buffer_pos = buffer_pos + msgsize
3594  enddo
3595 
3596  call mpp_sync_self(check=event_recv)
3597 
3598  !--- unpack the buffer
3599  if( lsize == 1) then
3600  ptr_x = x_addrs(1)
3601  do l=1,xmap%size_put1
3602  x(l) = recv_buffer(xmap%x1_put(l)%pos)
3603  end do
3604  else
3605  start_pos = 0
3606 !$OMP parallel do default(none) shared(lsize,x_addrs,comm,recv_buffer,xmap) &
3607 !$OMP private(ptr_x,count,ibegin,istart,iend,pos,unpack_buffer)
3608  do l = 1, lsize
3609  ptr_x = x_addrs(l)
3610  do p = 1, comm%nrecv
3611  count = comm%recv(p)%count
3612  ibegin = comm%recv(p)%buffer_pos*lsize + 1
3613  istart = ibegin + (l-1)*count
3614  iend = istart + count - 1
3615  pos = comm%recv(p)%buffer_pos
3616  do n = istart, iend
3617  pos = pos + 1
3618  unpack_buffer(pos) = recv_buffer(n)
3619  enddo
3620  enddo
3621  do i=1,xmap%size_put1
3622  x(i) = unpack_buffer(xmap%x1_put(i)%pos)
3623  end do
3624  enddo
3625  endif
3626 
3627  call mpp_sync_self()
3628 
3629  call mpp_clock_end(id_put_1_to_xgrid_order_1)
3630 
3631 end subroutine put_1_to_xgrid_order_1
3632 
3633 !#######################################################################
3634 
3635 
3636 subroutine put_1_to_xgrid_order_2(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3637  integer(i8_kind), dimension(:), intent(in) :: d_addrs
3638  integer(i8_kind), dimension(:), intent(in) :: x_addrs
3639  type (xmap_type), intent(inout) :: xmap
3640  integer, intent(in) :: isize, jsize, xsize, lsize
3641 
3642  ! NOTE: halo size is assumed to be 1 in setup_xmap
3643  real(r8_kind), dimension(0:isize+1, 0:jsize+1, lsize) :: tmp
3644  real(r8_kind), dimension(isize, jsize, lsize) :: tmpx, tmpy
3645  real(r8_kind), dimension(isize, jsize, lsize) :: d_bar_max, d_bar_min
3646  real(r8_kind), dimension(isize, jsize, lsize) :: d_max, d_min
3647  real(r8_kind) :: d_bar
3648  integer :: i, is, ie, j, js, je, ii, jj
3649  integer :: p, l, isd, jsd
3650  type (grid_type), pointer, save :: grid1 =>null()
3651  type (comm_type), pointer, save :: comm =>null()
3652  integer :: buffer_pos, msgsize, from_pe, to_pe, pos, n
3653  integer :: ibegin, count, istart, iend
3654  real(r8_kind) :: recv_buffer(xmap%put1%recvsize*lsize*3)
3655  real(r8_kind) :: send_buffer(xmap%put1%sendsize*lsize*3)
3656  real(r8_kind) :: unpack_buffer(xmap%put1%recvsize*3)
3657  logical :: on_west_edge, on_east_edge, on_south_edge, on_north_edge
3658  real(r8_kind), dimension(isize, jsize) :: d
3659  real(r8_kind), dimension(xsize) :: x
3660  pointer(ptr_d, d)
3661  pointer(ptr_x, x)
3662 
3663  call mpp_clock_begin(id_put_1_to_xgrid_order_2)
3664  grid1 => xmap%grids(1)
3665 
3666  is = grid1%is_me; ie = grid1%ie_me
3667  js = grid1%js_me; je = grid1%je_me
3668  isd = grid1%isd_me
3669  jsd = grid1%jsd_me
3670 
3671 !$OMP parallel do default(none) shared(lsize,tmp,d_addrs,isize,jsize) private(ptr_d)
3672  do l = 1, lsize
3673  tmp(:,:,l) = large_number
3674  ptr_d = d_addrs(l)
3675  tmp(1:isize,1:jsize,l) = d(:,:)
3676  enddo
3677 
3678  if(grid1%is_latlon) then
3679  call mpp_update_domains(tmp,grid1%domain_with_halo)
3680 !$OMP parallel do default(none) shared(lsize,tmp,grid1,is,ie,js,je,isd,jsd,tmpx,tmpy)
3681  do l = 1, lsize
3682  tmpy(:,:,l) = grad_merid_latlon(tmp(:,:,l), grid1%lat, is, ie, js, je, isd, jsd)
3683  tmpx(:,:,l) = grad_zonal_latlon(tmp(:,:,l), grid1%lon, grid1%lat, is, ie, js, je, isd, jsd)
3684  enddo
3685  else
3686  call mpp_update_domains(tmp,grid1%domain)
3687  on_west_edge = (is==1)
3688  on_east_edge = (ie==grid1%im)
3689  on_south_edge = (js==1)
3690  on_north_edge = (je==grid1%jm)
3691 !$OMP parallel do default(none) shared(lsize,tmp,grid1,tmpx,tmpy, &
3692 !$OMP on_west_edge,on_east_edge,on_south_edge,on_north_edge)
3693  do l = 1, lsize
3694  call gradient_cubic(tmp(:,:,l), grid1%box%dx, grid1%box%dy, grid1%box%area, &
3695  grid1%box%edge_w, grid1%box%edge_e, grid1%box%edge_s, &
3696  grid1%box%edge_n, grid1%box%en1, grid1%box%en2, &
3697  grid1%box%vlon, grid1%box%vlat, tmpx(:,:,l), tmpy(:,:,l), &
3698  on_west_edge, on_east_edge, on_south_edge, on_north_edge)
3699  enddo
3700  end if
3701 
3702  !--- pre-post receiving
3703  buffer_pos = 0
3704  comm => xmap%put1
3705  do p = 1, comm%nrecv
3706  msgsize = comm%recv(p)%count*lsize
3707  buffer_pos = comm%recv(p)%buffer_pos*lsize
3708  if(.NOT. monotonic_exchange) then
3709  msgsize = msgsize*3
3710  buffer_pos = buffer_pos*3
3711  endif
3712  from_pe = comm%recv(p)%pe
3713  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = from_pe, block=.false., tag=comm_tag_8)
3714  enddo
3715 
3716  !--- compute d_bar_max and d_bar_min.
3717  if(monotonic_exchange) then
3718 !$OMP parallel do default(none) shared(lsize,isize,jsize,d_bar_max,d_bar_min,d_max,d_min,tmp)
3719  do l = 1, lsize
3720  do j = 1, jsize
3721  do i = 1, isize
3722  d_bar_max(i,j,l) = -large_number
3723  d_bar_min(i,j,l) = large_number
3724  d_max(i,j,l) = -large_number
3725  d_min(i,j,l) = large_number
3726  do jj = j-1, j+1
3727  do ii = i-1, i+1
3728  if(tmp(ii,jj,l) .NE. large_number) then
3729  if(tmp(ii,jj,l) > d_bar_max(i,j,l)) d_bar_max(i,j,l) = tmp(ii,jj,l)
3730  if(tmp(ii,jj,l) < d_bar_min(i,j,l)) d_bar_min(i,j,l) = tmp(ii,jj,l)
3731  endif
3732  enddo
3733  enddo
3734  enddo
3735  enddo
3736  enddo
3737  endif
3738 
3739  !--- send the data
3740  buffer_pos = 0
3741  if(monotonic_exchange) then
3742  pos = 0
3743  do p = 1, comm%nsend
3744  msgsize = comm%send(p)%count*lsize
3745  to_pe = comm%send(p)%pe
3746  do l = 1, lsize
3747  ptr_d = d_addrs(l)
3748  do n = 1, comm%send(p)%count
3749  pos = pos + 1
3750  i = comm%send(p)%i(n)
3751  j = comm%send(p)%j(n)
3752  send_buffer(pos) = d(i,j) + tmpy(i,j,l)*comm%send(p)%dj(n) + tmpx(i,j,l)*comm%send(p)%di(n)
3753  if(send_buffer(pos) > d_max(i,j,l)) d_max(i,j,l) = send_buffer(pos)
3754  if(send_buffer(pos) < d_min(i,j,l)) d_min(i,j,l) = send_buffer(pos)
3755  enddo
3756  enddo
3757  enddo
3758 
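  !--- rescale reconstructed values that overshoot the min/max of the surrounding
  !--- cell means so that the second-order put remains monotonic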
3759  do p = 1, comm%nsend
3760  msgsize = comm%send(p)%count*lsize
3761  to_pe = comm%send(p)%pe
3762  pos = buffer_pos
3763  do l = 1, lsize
3764  ptr_d = d_addrs(l)
3765  do n = 1, comm%send(p)%count
3766  pos = pos + 1
3767  i = comm%send(p)%i(n)
3768  j = comm%send(p)%j(n)
3769  d_bar = d(i,j)
3770  if( d_max(i,j,l) > d_bar_max(i,j,l) ) then
3771  send_buffer(pos) = d_bar + ((send_buffer(pos)-d_bar)/(d_max(i,j,l)-d_bar)) * (d_bar_max(i,j,l)-d_bar)
3772  else if( d_min(i,j,l) < d_bar_min(i,j,l) ) then
3773  send_buffer(pos) = d_bar + ((send_buffer(pos)-d_bar)/(d_min(i,j,l)-d_bar)) * (d_bar_min(i,j,l)-d_bar)
3774  endif
3775  enddo
3776  enddo
3777  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_8 )
3778  buffer_pos = buffer_pos + msgsize
3779  enddo
3780  else
3781  do p = 1, comm%nsend
3782  msgsize = comm%send(p)%count*lsize*3
3783  to_pe = comm%send(p)%pe
3784  pos = buffer_pos
3785  do l = 1, lsize
3786  ptr_d = d_addrs(l)
3787  do n = 1, comm%send(p)%count
3788  pos = pos + 3
3789  i = comm%send(p)%i(n)
3790  j = comm%send(p)%j(n)
3791  send_buffer(pos-2) = d(i,j)
3792  send_buffer(pos-1) = tmpy(i,j,l)
3793  send_buffer(pos ) = tmpx(i,j,l)
3794  enddo
3795  enddo
3796  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_8 )
3797  buffer_pos = buffer_pos + msgsize
3798  enddo
3799  endif
3800 
3801  call mpp_sync_self(check=event_recv)
3802 
3803  !--- unpack the buffer
3804  if(monotonic_exchange) then
3805  if( lsize == 1) then
3806  ptr_x = x_addrs(1)
3807  do l=1,xmap%size_put1
3808  pos = xmap%x1_put(l)%pos
3809  x(l) = recv_buffer(pos)
3810  end do
3811  else
3812  do l = 1, lsize
3813  ptr_x = x_addrs(l)
3814  pos = 0
3815  do p = 1, comm%nrecv
3816  count = comm%recv(p)%count
3817  ibegin = comm%recv(p)%buffer_pos*lsize + 1
3818  istart = ibegin + (l-1)*count
3819  iend = istart + count - 1
3820  pos = comm%recv(p)%buffer_pos
3821  do n = istart, iend
3822  pos = pos + 1
3823  unpack_buffer(pos) = recv_buffer(n)
3824  enddo
3825  enddo
3826  do i=1,xmap%size_put1
3827  pos = xmap%x1_put(i)%pos
3828  x(i) = unpack_buffer(pos)
3829  end do
3830  enddo
3831  endif
3832  else
3833  if( lsize == 1) then
3834  ptr_x = x_addrs(1)
3835 !$OMP parallel do default(none) shared(xmap,recv_buffer,ptr_x) private(pos)
3836  do l=1,xmap%size_put1
3837  pos = xmap%x1_put(l)%pos
3838  x(l) = recv_buffer(3*pos-2) + recv_buffer(3*pos-1)*xmap%x1_put(l)%dj + recv_buffer(3*pos)*xmap%x1_put(l)%di
3839  end do
3840  else
3841 !$OMP parallel do default(none) shared(lsize,comm,xmap,recv_buffer,x_addrs) &
3842 !$OMP private(ptr_x,pos,ibegin,istart,iend,count,unpack_buffer)
3843  do l = 1, lsize
3844  ptr_x = x_addrs(l)
3845  pos = 0
3846  ibegin = 1
3847  do p = 1, comm%nrecv
3848  count = comm%recv(p)%count*3
3849  ibegin = comm%recv(p)%buffer_pos*lsize*3 + 1
3850  istart = ibegin + (l-1)*count
3851  iend = istart + count - 1
3852  pos = comm%recv(p)%buffer_pos*3
3853  do n = istart, iend
3854  pos = pos + 1
3855  unpack_buffer(pos) = recv_buffer(n)
3856  enddo
3857  enddo
3858  do i=1,xmap%size_put1
3859  pos = xmap%x1_put(i)%pos
3860  x(i) = unpack_buffer(3*pos-2) + unpack_buffer(3*pos-1)*xmap%x1_put(i)%dj + unpack_buffer(3*pos) &
3861  & * xmap%x1_put(i)%di
3862  end do
3863  enddo
3864  endif
3865  endif
3866 
3867  call mpp_sync_self()
3868  call mpp_clock_end(id_put_1_to_xgrid_order_2)
3869 
3870 end subroutine put_1_to_xgrid_order_2
3871 
3872 !#######################################################################
3873 
3874 subroutine get_1_from_xgrid(d_addrs, x_addrs, xmap, isize, jsize, xsize, lsize)
3875  integer(i8_kind), dimension(:), intent(in) :: d_addrs
3876  integer(i8_kind), dimension(:), intent(in) :: x_addrs
3877  type (xmap_type), intent(inout) :: xmap
3878  integer, intent(in) :: isize, jsize, xsize, lsize
3879 
3880  real(r8_kind), dimension(xmap%size, lsize), target :: dg
3881  integer :: i, j, l, p, n, m
3882  integer :: msgsize, buffer_pos, pos
3883  integer :: istart, iend, count
3884  real(r8_kind) , pointer, save :: dgp =>null()
3885  type(grid_type) , pointer, save :: grid1 =>null()
3886  type(comm_type) , pointer, save :: comm =>null()
3887  type(overlap_type), pointer, save :: send => null()
3888  type(overlap_type), pointer, save :: recv => null()
3889  real(r8_kind) :: recv_buffer(xmap%get1%recvsize*lsize*3)
3890  real(r8_kind) :: send_buffer(xmap%get1%sendsize*lsize*3)
3891  real(r8_kind) :: d(isize,jsize)
3892  real(r8_kind), dimension(xsize) :: x
3893  pointer(ptr_d, d)
3894  pointer(ptr_x, x)
3895 
3896  call mpp_clock_begin(id_get_1_from_xgrid)
3897 
3898  comm => xmap%get1
3899  grid1 => xmap%grids(1)
3900 
3901  do p = 1, comm%nrecv
3902  recv => comm%recv(p)
3903  msgsize = recv%count*lsize
3904  buffer_pos = recv%buffer_pos*lsize
3905  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_9)
3906  enddo
3907 
3908  dg = 0.0_r8_kind;
3909 !$OMP parallel do default(none) shared(lsize,xmap,dg,x_addrs) private(dgp,ptr_x)
3910  do l = 1, lsize
3911  ptr_x = x_addrs(l)
3912  do i=1,xmap%size
3913  dgp => dg(xmap%x1(i)%pos,l)
3914  dgp = dgp + xmap%x1(i)%area*x(i)
3915  enddo
3916  enddo
3917 
3918 
3919  !--- send the data
3920  buffer_pos = 0
3921  istart = 1
3922  do p = 1, comm%nsend
3923  send => comm%send(p)
3924  msgsize = send%count*lsize
3925  pos = buffer_pos
3926  istart = send%buffer_pos+1
3927  iend = istart + send%count - 1
3928  do l = 1, lsize
3929  do n = istart, iend
3930  pos = pos + 1
3931  send_buffer(pos) = dg(n,l)
3932  enddo
3933  enddo
3934  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = send%pe, tag=comm_tag_9 )
3935  buffer_pos = buffer_pos + msgsize
3936  istart = iend + 1
3937  enddo
3938 
3939  call mpp_sync_self(check=event_recv)
3940 
3941  !--- unpack the buffer
3942  do l = 1, lsize
3943  ptr_d = d_addrs(l)
3944  d = 0.0_r8_kind
3945  enddo
3946  !--- To bitwise reproduce old results, first copy the data onto its own pe.
3947 
3948  do p = 1, comm%nrecv
3949  recv => comm%recv(p)
3950  count = recv%count
3951  buffer_pos = recv%buffer_pos*lsize
3952  if( recv%pe == xmap%me ) then
3953 !$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs,count) &
3954 !$OMP private(ptr_d,i,j,pos)
3955  do l = 1, lsize
3956  pos = buffer_pos + (l-1)*count
3957  ptr_d = d_addrs(l)
3958  do n = 1,count
3959  i = recv%i(n)
3960  j = recv%j(n)
3961  pos = pos + 1
3962  d(i,j) = recv_buffer(pos)
3963  enddo
3964  enddo
3965  exit
3966  endif
3967  enddo
3968 
3969  pos = 0
3970  do m = 1, comm%nrecv
3971  p = comm%unpack_ind(m)
3972  recv => comm%recv(p)
3973  if( recv%pe == xmap%me ) then
3974  cycle
3975  endif
3976  buffer_pos = recv%buffer_pos*lsize
3977 !$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs) &
3978 !$OMP private(ptr_d,i,j,pos)
3979  do l = 1, lsize
3980  pos = buffer_pos + (l-1)*recv%count
3981  ptr_d = d_addrs(l)
3982  do n = 1, recv%count
3983  i = recv%i(n)
3984  j = recv%j(n)
3985  pos = pos + 1
3986  d(i,j) = d(i,j) + recv_buffer(pos)
3987  enddo
3988  enddo
3989  enddo
3990 
3991  !
3992  ! normalize with side 1 grid cell areas
3993  !
3994 !$OMP parallel do default(none) shared(lsize,d_addrs,grid1) private(ptr_d)
3995  do l = 1, lsize
3996  ptr_d = d_addrs(l)
3997  d = d * grid1%area_inv
3998  enddo
3999  call mpp_sync_self()
4000  call mpp_clock_end(id_get_1_from_xgrid)
4001 
4002 end subroutine get_1_from_xgrid
4003 
4004 !#######################################################################
4005 
4006 subroutine get_1_from_xgrid_repro(d_addrs, x_addrs, xmap, xsize, lsize)
4007  integer(i8_kind), dimension(:), intent(in) :: d_addrs
4008  integer(i8_kind), dimension(:), intent(in) :: x_addrs
4009  type (xmap_type), intent(inout) :: xmap
4010  integer, intent(in) :: xsize, lsize
4011 
4012  integer :: g, i, j, k, p, l, n, l2, l3
4013  integer :: msgsize, buffer_pos, pos
4014  type (grid_type), pointer, save :: grid =>null()
4015  type(comm_type), pointer, save :: comm => null()
4016  type(overlap_type), pointer, save :: send => null()
4017  type(overlap_type), pointer, save :: recv => null()
4018  integer, dimension(0:xmap%npes-1) :: pl, ml
4019  real(r8_kind) :: recv_buffer(xmap%recv_count_repro_tot*lsize)
4020  real(r8_kind) :: send_buffer(xmap%send_count_repro_tot*lsize)
4021  real(r8_kind) :: d(xmap%grids(1)%is_me:xmap%grids(1)%ie_me, &
4022  xmap%grids(1)%js_me:xmap%grids(1)%je_me)
4023  real(r8_kind), dimension(xsize) :: x
4024  pointer(ptr_d, d)
4025  pointer(ptr_x, x)
4026 
4027  call mpp_clock_begin(id_get_1_from_xgrid_repro)
4028  comm => xmap%get1_repro
4029  !--- pre-post receiving
4030  do p = 1, comm%nrecv
4031  recv => comm%recv(p)
4032  msgsize = recv%count*lsize
4033  buffer_pos = recv%buffer_pos*lsize
4034  call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_10)
4035  n = recv%pe -xmap%root_pe
4036  pl(n) = buffer_pos
4037  ml(n) = recv%count
4038  enddo
4039 
4040  !pack the data
4041  send_buffer(:) = 0.0_r8_kind
4042 !$OMP parallel do default(none) shared(lsize,x_addrs,comm,xmap,send_buffer) &
4043 !$OMP private(ptr_x,i,j,g,l2,pos,send)
4044  do p = 1, comm%nsend
4045  pos = comm%send(p)%buffer_pos*lsize
4046  send => comm%send(p)
4047  do l = 1,lsize
4048  ptr_x = x_addrs(l)
4049  do n = 1, send%count
4050  i = send%i(n)
4051  j = send%j(n)
4052  g = send%g(n)
4053  l2 = send%xloc(n)
4054  pos = pos + 1
4055  do k =1, xmap%grids(g)%km
4056  if(xmap%grids(g)%frac_area(i,j,k)/=0.0_r8_kind) then
4057  l2 = l2+1
4058  send_buffer(pos) = send_buffer(pos) + xmap%x1(l2)%area *x(l2)
4059  endif
4060  enddo
4061  enddo
4062  enddo
4063  enddo
4064 
4065  do p =1, comm%nsend
4066  buffer_pos = comm%send(p)%buffer_pos*lsize
4067  msgsize = comm%send(p)%count*lsize
4068  call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe=comm%send(p)%pe, tag=comm_tag_10)
4069  enddo
4070 
4071  do l = 1, lsize
4072  ptr_d = d_addrs(l)
4073  d = 0
4074  enddo
4075 
4076  call mpp_sync_self(check=event_recv)
4077 
4078 !$OMP parallel do default(none) shared(lsize,d_addrs,xmap,recv_buffer,pl,ml) &
4079 !$OMP private(ptr_d,grid,i,j,p,pos)
4080  do l = 1, lsize
4081  ptr_d = d_addrs(l)
4082  do g=2,size(xmap%grids(:))
4083  grid => xmap%grids(g)
4084  do l3=1,grid%size_repro ! index into side1 grid's patterns
4085  i = grid%x_repro(l3)%i1
4086  j = grid%x_repro(l3)%j1
4087  p = grid%x_repro(l3)%pe-xmap%root_pe
4088  pos = pl(p) + (l-1)*ml(p) + grid%x_repro(l3)%recv_pos
4089  d(i,j) = d(i,j) + recv_buffer(pos)
4090  end do
4091  end do
4092  ! normalize with side 1 grid cell areas
4093  d = d * xmap%grids(1)%area_inv
4094  enddo
4095 
4096  call mpp_sync_self()
4097 
4098  call mpp_clock_end(id_get_1_from_xgrid_repro)
4099 
4100 end subroutine get_1_from_xgrid_repro
4101 
4102 !#######################################################################
4103 
4104 !> @brief conservation_check - returns three numbers which are the global sum of a
4105 !! variable (1) on its home model grid, (2) after interpolation to the other
4106 !! side grid(s), and (3) after re-interpolation back onto its home side grid(s).
4107 !! @return real(r8_kind) conservation_check_side1
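!! <br>Example usage (a sketch; typically reached through the generic
!! conservation_check interface, and the field name is illustrative):
!! @code{.F90}
!! sums = conservation_check (Atm%t_bot, 'ATM', xmap_sfc)
!! ! sums(1), sums(2) and sums(3) should agree for a conservative exchange
!! @endcode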
4108 function conservation_check_side1(d, grid_id, xmap,remap_method) ! this one for 1->2->1
4109 real(r8_kind), dimension(:,:), intent(in) :: d !< model data to check
4110 character(len=3), intent(in) :: grid_id !< 3 character grid id
4111 type (xmap_type), intent(inout) :: xmap !< exchange grid
4112 real(r8_kind), dimension(3) :: conservation_check_side1
4113 integer, intent(in), optional :: remap_method
4114 
4115 
4116  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4117  real(r8_kind), dimension(size(d,1),size(d,2)) :: d1
4118  real(r8_kind), dimension(:,:,:), allocatable :: d2
4119  integer :: g
4120  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4121 
4122  grid1 => xmap%grids(1)
4123  conservation_check_side1 = 0.0_r8_kind
4124  if(grid1%tile_me .NE. tile_nest) conservation_check_side1(1) = sum(grid1%area*d)
4125 ! if(grid1%tile_me .NE. tile_parent .OR. grid1%id .NE. "ATM") &
4126 ! conservation_check_side1(1) = sum(grid1%area*d)
4127 
4128  call put_to_xgrid (d, grid1%id, x_over, xmap, remap_method) ! put from side 1
4129  do g=2,size(xmap%grids(:))
4130  grid2 => xmap%grids(g)
4131  if(grid2%on_this_pe) then
4132  allocate (d2(grid2%is_me:grid2%ie_me, grid2%js_me:grid2%je_me, grid2%km) )
4133  endif
4134  call get_from_xgrid (d2, grid2%id, x_over, xmap) ! get onto side 2's
4135  if(grid2%on_this_pe) then
4136  conservation_check_side1(2) = conservation_check_side1(2) + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4137  endif
4138  call put_to_xgrid (d2, grid2%id, x_back, xmap) ! put from side 2's
4139  if(allocated(d2))deallocate (d2)
4140  end do
4141  call get_from_xgrid(d1, grid1%id, x_back, xmap) ! get onto side 1
4142  if(grid1%tile_me .NE. tile_nest) conservation_check_side1(3) = sum(grid1%area*d1)
4143 ! if(grid1%tile_me .NE. tile_parent .OR. grid1%id .NE. "ATM") &
4144 ! conservation_check_side1(3) = sum(grid1%area*d1)
4145  call mpp_sum(conservation_check_side1,3)
4146 
4147 end function conservation_check_side1
4148 
4149 !#######################################################################
4150 
4151 !> @brief conservation_check - returns three numbers which are the global sum of a
4152 !! variable (1) on its home model grid, (2) after interpolation to the other
4153 !! side grid(s), and (3) after re-interpolation back onto its home side grid(s).
4154 !! @return real(r8_kind) conservation_check_side2
4155 function conservation_check_side2(d, grid_id, xmap,remap_method) ! this one for 2->1->2
4156 real(r8_kind), dimension(:,:,:), intent(in) :: d !< model data to check
4157 character(len=3), intent(in) :: grid_id !< 3 character grid ID
4158 type (xmap_type), intent(inout) :: xmap !< exchange grid
4159 real(r8_kind), dimension(3) :: conservation_check_side2
4160 integer, intent(in), optional :: remap_method
4161 
4162 
4163  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4164  real(r8_kind), dimension(:,: ), allocatable :: d1
4165  real(r8_kind), dimension(:,:,:), allocatable :: d2
4166  integer :: g
4167  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4168 
4169  grid1 => xmap%grids(1)
4170  conservation_check_side2 = 0.0_r8_kind
4171  do g = 2,size(xmap%grids(:))
4172  grid2 => xmap%grids(g)
4173  if (grid_id==grid2%id) then
4174  if(grid2%on_this_pe) then
4175  conservation_check_side2(1) = sum( grid2%area * sum(grid2%frac_area*d,dim=3) )
4176  endif
4177  call put_to_xgrid(d, grid_id, x_over, xmap) ! put from this side 2
4178  else
4179  call put_to_xgrid(0.0_r8_kind * grid2%frac_area, grid2%id, x_over, xmap) ! zero rest
4180  end if
4181  end do
4182 
4183  allocate ( d1(size(grid1%area,1),size(grid1%area,2)) )
4184  call get_from_xgrid(d1, grid1%id, x_over, xmap) ! get onto side 1
4185  if(grid1%tile_me .NE. tile_nest)conservation_check_side2(2) = sum(grid1%area*d1)
4186  call put_to_xgrid(d1, grid1%id, x_back, xmap,remap_method) ! put from side 1
4187  deallocate ( d1 )
4188 
4189  conservation_check_side2(3) = 0.0_r8_kind;
4190  do g = 2,size(xmap%grids(:))
4191  grid2 => xmap%grids(g)
4192  if(grid2%on_this_pe) then
4193  allocate ( d2( size(grid2%frac_area, 1), size(grid2%frac_area, 2), &
4194  size(grid2%frac_area, 3) ) )
4195  endif
4196  call get_from_xgrid(d2, grid2%id, x_back, xmap) ! get onto side 2's
4197  conservation_check_side2(3) = conservation_check_side2(3) + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4198  if(allocated(d2) )deallocate ( d2 )
4199  end do
4200  call mpp_sum(conservation_check_side2, 3)
4201 
4202 end function conservation_check_side2
4204 
4205 !#######################################################################
4206 
4207 !> @brief conservation_check_ug - returns three numbers which are the global sum of a
4208 !! variable (1) on its home model grid, (2) after interpolation to the other
4209 !! side grid(s), and (3) after re-interpolation back onto its home side grid(s).
4210 !! @return real(r8_kind) conservation_check_ug_side1
4211 function conservation_check_ug_side1(d, grid_id, xmap,remap_method) ! this one for 1->2->1
4212 real(r8_kind), dimension(:,:), intent(in) :: d !< model data to check
4213 character(len=3), intent(in) :: grid_id !< 3 character grid ID
4214 type (xmap_type), intent(inout) :: xmap !< exchange grid
4215 real(r8_kind), dimension(3) :: conservation_check_ug_side1
4216 integer, intent(in), optional :: remap_method
4217 
4218  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4219  real(r8_kind), dimension(size(d,1),size(d,2)) :: d1
4220  real(r8_kind), dimension(:,:,:), allocatable :: d2
4221  real(r8_kind), dimension(: ), allocatable :: d_ug
4222  real(r8_kind), dimension(:,:), allocatable :: d2_ug
4223  integer :: g
4224  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4225 
4226  grid1 => xmap%grids(1)
4227  conservation_check_ug_side1 = 0.0_r8_kind
4228 
4229 
4230  if(grid1%is_ug) then
4231  allocate(d_ug(grid1%ls_me:grid1%le_me))
4232  call mpp_pass_sg_to_ug(grid1%ug_domain, d, d_ug)
4233  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(1) = sum(grid1%area(:,1)*d_ug)
4234  call put_to_xgrid_ug (d_ug, grid1%id, x_over, xmap) ! put from side 1
4235  else
4236  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(1) = sum(grid1%area*d)
4237  call put_to_xgrid (d, grid1%id, x_over, xmap, remap_method) ! put from side 1
4238  endif
4239  do g=2,size(xmap%grids(:))
4240  grid2 => xmap%grids(g)
4241  if(grid2%is_ug) then
4242  if(grid2%on_this_pe) then
4243  allocate (d2_ug(grid2%ls_me:grid2%le_me, grid2%km) )
4244  d2_ug = 0
4245  endif
4246  call get_from_xgrid_ug (d2_ug, grid2%id, x_over, xmap) ! get onto side 2's
4247  if(grid2%on_this_pe) then
4248  conservation_check_ug_side1(2) = conservation_check_ug_side1(2) &
4249  & + sum( grid2%area(:,1) * sum(grid2%frac_area(:,1,:)*d2_ug,dim=2) )
4250  endif
4251  call put_to_xgrid_ug (d2_ug, grid2%id, x_back, xmap) ! put from side 2's
4252  if(allocated(d2_ug))deallocate (d2_ug)
4253  else
4254  if(grid2%on_this_pe) then
4255  allocate (d2(grid2%is_me:grid2%ie_me, grid2%js_me:grid2%je_me, grid2%km) )
4256  endif
4257  call get_from_xgrid (d2, grid2%id, x_over, xmap) ! get onto side 2's
4258  if(grid2%on_this_pe) then
4259  conservation_check_ug_side1(2) = conservation_check_ug_side1(2) &
4260  & + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4261  endif
4262  call put_to_xgrid (d2, grid2%id, x_back, xmap) ! put from side 2's
4263  if(allocated(d2))deallocate (d2)
4264  endif
4265  end do
4266  if(grid1%is_ug) then
4267 ! call get_from_xgrid_ug(d_ug, grid1%id, x_back, xmap) ! get onto side 1
4268  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(3) = sum(grid1%area(:,1)*d_ug)
4269  else
4270  call get_from_xgrid(d1, grid1%id, x_back, xmap) ! get onto side 1
4271  if(grid1%tile_me .NE. tile_nest) conservation_check_ug_side1(3) = sum(grid1%area*d1)
4272  endif
4273  if(allocated(d_ug)) deallocate(d_ug)
4274  call mpp_sum(conservation_check_ug_side1,3)
4275 
4276 end function conservation_check_ug_side1
4277 
4278 !#######################################################################
4279 
4280 !> @brief conservation_check_ug - returns three numbers which are the global sum of a
4281 !! variable (1) on its home model grid, (2) after interpolation to the other
4282 !! side grid(s), and (3) after re-interpolation back onto its home side grid(s).
4283 !! @return real(r8_kind) conservation_check_ug_side2
4284 function conservation_check_ug_side2(d, grid_id, xmap,remap_method) ! this one for 2->1->2
4285 real(r8_kind), dimension(:,:,:), intent(in) :: d !< model data to check
4286 character(len=3), intent(in) :: grid_id !< 3 character grid ID
4287 type (xmap_type), intent(inout) :: xmap !< exchange grid
4288 real(r8_kind), dimension(3) :: conservation_check_ug_side2
4289 integer, intent(in), optional :: remap_method
4290 
4291 
4292  real(r8_kind), dimension(xmap%size) :: x_over, x_back
4293  real(r8_kind), dimension(:,: ), allocatable :: d1, d_ug
4294  real(r8_kind), dimension(:,:,:), allocatable :: d2
4295  integer :: g
4296  type (grid_type), pointer, save :: grid1 =>null(), grid2 =>null()
4297 
4298  grid1 => xmap%grids(1)
4299  conservation_check_ug_side2 = 0.0_r8_kind
4300  do g = 2,size(xmap%grids(:))
4301  grid2 => xmap%grids(g)
4302  if (grid_id==grid2%id) then
4303  if(grid2%on_this_pe) then
4304  if(grid2%is_ug) then
4305  allocate(d_ug(grid2%ls_me:grid2%le_me,grid2%km))
4306  call mpp_pass_sg_to_ug(grid2%ug_domain, d, d_ug)
4307  conservation_check_ug_side2(1) = sum( grid2%area(:,1) * sum(grid2%frac_area(:,1,:)*d_ug,dim=2) )
4308  else
4309  conservation_check_ug_side2(1) = sum( grid2%area(:,:) * sum(grid2%frac_area(:,:,:)*d,dim=3) )
4310  endif
4311  endif
4312  if(grid2%is_ug) then
4313  call put_to_xgrid_ug(d_ug, grid_id, x_over, xmap) ! put from this side 2
4314  else
4315  call put_to_xgrid(d, grid_id, x_over, xmap) ! put from this side 2
4316  endif
4317  if(allocated(d_ug)) deallocate(d_ug)
4318  else
4319  if(grid2%is_ug) then
4320  call put_to_xgrid_ug(0.0_r8_kind * grid2%frac_area(:,1,:), grid2%id, x_over, xmap) ! zero rest
4321  else
4322  call put_to_xgrid(0.0_r8_kind * grid2%frac_area, grid2%id, x_over, xmap) ! zero rest
4323  endif
4324  end if
4325  end do
4326 
4327  allocate ( d1(size(grid1%area,1),size(grid1%area,2)) )
4328  if(grid1%is_ug) then
4329  call get_from_xgrid_ug(d1(:,1), grid1%id, x_over, xmap) ! get onto side 1
4330  else
4331  call get_from_xgrid(d1, grid1%id, x_over, xmap) ! get onto side 1
4332  endif
4333  if(grid1%tile_me .NE. tile_nest)conservation_check_ug_side2(2) = sum(grid1%area*d1)
4334  if(grid1%is_ug) then
4335  call put_to_xgrid_ug(d1(:,1), grid1%id, x_back, xmap) ! put from side 1
4336  else
4337  call put_to_xgrid(d1, grid1%id, x_back, xmap,remap_method) ! put from side 1
4338  endif
4339  deallocate ( d1 )
4340 
4341  conservation_check_ug_side2(3) = 0.0_r8_kind;
4342  do g = 2,size(xmap%grids(:))
4343  grid2 => xmap%grids(g)
4344  if(grid2%on_this_pe) then
4345  allocate ( d2( size(grid2%frac_area, 1), size(grid2%frac_area, 2), &
4346  size(grid2%frac_area, 3) ) )
4347  endif
4348  if(grid2%is_ug) then
4349  call get_from_xgrid_ug(d2(:,1,:), grid2%id, x_back, xmap) ! get onto side 2's
4350  else
4351  call get_from_xgrid(d2, grid2%id, x_back, xmap) ! get onto side 2's
4352  endif
4353  conservation_check_ug_side2(3) = conservation_check_ug_side2(3) + sum( grid2%area * sum(grid2%frac_area*d2,dim=3) )
4354  if(allocated(d2) )deallocate ( d2 )
4355  end do
4356  call mpp_sum(conservation_check_ug_side2, 3)
4357 
4358 end function conservation_check_ug_side2
4360 
4361 
4362 !******************************************************************************
4363 !> @brief This routine is used to get the grid area of the component model with the given id.
4364 subroutine get_xmap_grid_area(id, xmap, area)
4365  character(len=3), intent(in) :: id
4366  type (xmap_type), intent(inout) :: xmap
4367  real(r8_kind), dimension(:,:), intent(out) :: area
4368  integer :: g
4369  logical :: found
4370 
4371  found = .false.
4372  do g = 1, size(xmap%grids(:))
4373  if (id==xmap%grids(g)%id ) then
4374  if(size(area,1) .NE. size(xmap%grids(g)%area,1) .OR. size(area,2) .NE. size(xmap%grids(g)%area,2) ) &
4375  call error_mesg("xgrid_mod", "size mismatch between area and xmap%grids(g)%area", fatal)
4376  area = xmap%grids(g)%area
4377  found = .true.
4378  exit
4379  end if
4380  end do
4381 
4382  if(.not. found) call error_mesg("xgrid_mod", id//" is not found in xmap%grids id", fatal)
4383 
4384 end subroutine get_xmap_grid_area
4385 
4386 !#######################################################################
4387 
4388 !> @brief This function is used to calculate the gradient along the zonal direction.
4389 !! A limit may need to be set on the gradient. The grid is assumed
4390 !! to be a regular lat-lon grid.
4391 !! @return real(r8_kind) grad_zonal_latlon
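!! In effect, the centered difference computed below (one-sided at the first and
!! last longitude index) is:
!! @code{.F90}
!! grad_zonal_latlon(i,j) = (d(i+1,j) - d(i-1,j)) / ((lon(i+1) - lon(i-1)) * cos(lat(j)))
!! @endcode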
4392 function grad_zonal_latlon(d, lon, lat, is, ie, js, je, isd, jsd)
4393 
4394  integer, intent(in) :: isd, jsd
4395  real(r8_kind), dimension(isd:,jsd:), intent(in) :: d
4396  real(r8_kind), dimension(:), intent(in) :: lon
4397  real(r8_kind), dimension(:), intent(in) :: lat
4398  integer, intent(in) :: is, ie, js, je
4399  real(r8_kind), dimension(is:ie,js:je) :: grad_zonal_latlon
4400  real(r8_kind) :: dx, costheta
4401  integer :: i, j, ip1, im1
4402 
4403  ! calculate the gradient of the data on each grid
4404  do i = is, ie
4405  if(i == 1) then
4406  ip1 = i+1; im1 = i
4407  else if(i==size(lon(:)) ) then
4408  ip1 = i; im1 = i-1
4409  else
4410  ip1 = i+1; im1 = i-1
4411  endif
4412  dx = lon(ip1) - lon(im1)
4413  if(abs(dx).lt.eps ) call error_mesg('xgrids_mod(grad_zonal_latlon)', 'Improper grid size in longitude', fatal)
4414  if(dx .gt. pi) dx = dx - 2.0_r8_kind* pi
4415  if(dx .lt. -pi) dx = dx + 2.0_r8_kind* pi
4416  do j = js, je
4417  costheta = cos(lat(j))
4418  if(abs(costheta) .lt. eps) call error_mesg('xgrids_mod(grad_zonal_latlon)', 'Improper latitude grid', fatal)
4419  grad_zonal_latlon(i,j) = (d(ip1,j)-d(im1,j))/(dx*costheta)
4420  enddo
4421  enddo
4422 
4423  return
4424 
4425 end function grad_zonal_latlon
4426 
4427 !#######################################################################
4428 
4429 !> @brief This function is used to calculate the gradient along the meridional direction.
4430 !! A limit may need to be set on the gradient. A regular lat-lon grid is assumed.
4431 !! @return grad_merid_latlon
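!! In effect, the centered difference computed below (one-sided at the first and
!! last latitude index) is:
!! @code{.F90}
!! grad_merid_latlon(i,j) = (d(i,j+1) - d(i,j-1)) / (lat(j+1) - lat(j-1))
!! @endcode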
function grad_merid_latlon(d, lat, is, ie, js, je, isd, jsd)
  integer, intent(in) :: isd, jsd
  real(r8_kind), dimension(isd:,jsd:), intent(in) :: d
  real(r8_kind), dimension(:), intent(in) :: lat
  integer, intent(in) :: is, ie, js, je
  real(r8_kind), dimension(is:ie,js:je) :: grad_merid_latlon
  real(r8_kind) :: dy
  integer :: i, j, jp1, jm1

  ! calculate the gradient of the data on each grid
  do j = js, je
    if(j == 1) then
      jp1 = j+1; jm1 = j
    else if(j == size(lat(:)) ) then
      jp1 = j; jm1 = j-1
    else
      jp1 = j+1; jm1 = j-1
    endif
    dy = lat(jp1) - lat(jm1)
    if(abs(dy).lt.eps) call error_mesg('xgrid_mod(grad_merid_latlon)', 'Improper grid size in latitude', fatal)

    do i = is, ie
      grad_merid_latlon(i,j) = (d(i,jp1) - d(i,jm1))/dy
    enddo
  enddo

  return
end function grad_merid_latlon

!#######################################################################
!> @brief Returns the local compute index range and the number of
!! partitions (km) of the grid with the given index in xmap.
subroutine get_index_range(xmap, grid_index, is, ie, js, je, km)

  type(xmap_type), intent(in) :: xmap
  integer, intent(in) :: grid_index
  integer, intent(out) :: is, ie, js, je, km

  is = xmap % grids(grid_index) % is_me
  ie = xmap % grids(grid_index) % ie_me
  js = xmap % grids(grid_index) % js_me
  je = xmap % grids(grid_index) % je_me
  km = xmap % grids(grid_index) % km

end subroutine get_index_range
!#######################################################################

!> @brief This version takes rank 3 data; it can be used to compute the flux
!! on any grid but the first, which is typically on the atmosphere side.
!! Note that "from" and "to" are optional: the stocks are subtracted and
!! added, respectively, only when the corresponding argument is present.
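!!
!! The quantity moved between the stocks is the time- and area-integrated
!! data (a sketch of the sum computed below; the cell areas are normalized
!! by the unit-sphere area, hence the 4*pi*radius**2 factor):
!! @code
!! dq = delta_t * 4*pi*radius**2 &
!!      * sum( area * sum(frac_area * stock_data3d, dim=3) )
!! @endcode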
subroutine stock_move_3d(from, to, grid_index, stock_data3d, xmap, &
                       & delta_t, from_side, to_side, radius, verbose, ier)
  use mpp_mod, only : mpp_sum
  use mpp_domains_mod, only : domain2d, mpp_redistribute, mpp_get_compute_domain

  type(stock_type), intent(inout), optional :: from, to
  integer, intent(in) :: grid_index !< grid index
  real(r8_kind), intent(in) :: stock_data3d(:,:,:) !< data array is 3d
  type(xmap_type), intent(in) :: xmap
  real(r8_kind), intent(in) :: delta_t
  integer, intent(in) :: from_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  integer, intent(in) :: to_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  real(r8_kind), intent(in) :: radius !< earth radius
  character(len=*), intent(in), optional :: verbose
  integer, intent(out) :: ier

  real(r8_kind) :: from_dq, to_dq

  ier = 0
  if(grid_index == 1) then
    ! data has rank 3 so grid index must be > 1
    ier = 1
    return
  endif

  if(.not. associated(xmap%grids) ) then
    ier = 2
    return
  endif

  from_dq = delta_t * 4.0_r8_kind * pi * radius**2 * sum( sum(xmap%grids(grid_index)%area * &
          & sum(xmap%grids(grid_index)%frac_area * stock_data3d, dim=3), dim=1))
  to_dq = from_dq

  ! update only if argument is present.
  if(present(to )) to % dq( to_side) = to % dq( to_side) + to_dq
  if(present(from)) from % dq(from_side) = from % dq(from_side) - from_dq

  if(present(verbose).and.debug_stocks) then
    call mpp_sum(from_dq)
    call mpp_sum(to_dq)
    from_dq = from_dq/(4.0_r8_kind*pi*radius**2)
    to_dq = to_dq /(4.0_r8_kind*pi*radius**2)
    if(mpp_pe()==mpp_root_pe()) then
      write(stocks_file,'(a,es19.12,a,es19.12,a)') verbose, from_dq, ' ', to_dq, ' [*/m^2]'
    endif
  endif

end subroutine stock_move_3d

!...................................................................
!> @brief This version takes rank 2 data; it can be used to compute the flux
!! on the atmosphere side. Note that "from" and "to" are optional: the
!! stocks are subtracted and added, respectively, only when the
!! corresponding argument is present.
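!!
!! An illustrative call (hypothetical stock_type variables atm_stock and
!! lnd_stock, a hypothetical side 1 field evap, and a hypothetical time step
!! dt; only the stocks actually passed in are updated):
!! @code
!! call stock_move_2d(from=atm_stock, to=lnd_stock, grid_index=1, &
!!                    stock_data2d=evap, xmap=xmap, delta_t=dt, &
!!                    from_side=ISTOCK_BOTTOM, to_side=ISTOCK_TOP, &
!!                    radius=radius, verbose='EVAP', ier=ier)
!! @endcode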
subroutine stock_move_2d(from, to, grid_index, stock_data2d, xmap, &
                       & delta_t, from_side, to_side, radius, verbose, ier)
  use mpp_mod, only : mpp_sum
  use mpp_domains_mod, only : domain2d, mpp_redistribute, mpp_get_compute_domain

  type(stock_type), intent(inout), optional :: from, to
  integer, optional, intent(in) :: grid_index !< grid index
  real(r8_kind), intent(in) :: stock_data2d(:,:) !< data array is 2d
  type(xmap_type), intent(in) :: xmap
  real(r8_kind), intent(in) :: delta_t
  integer, intent(in) :: from_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  integer, intent(in) :: to_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  real(r8_kind), intent(in) :: radius !< earth radius
  character(len=*), intent(in) :: verbose
  integer, intent(out) :: ier

  real(r8_kind) :: to_dq, from_dq

  ier = 0

  if(.not. associated(xmap%grids) ) then
    ier = 3
    return
  endif

  ! an absent optional argument must not be referenced, so the grid_index
  ! test is split in two (Fortran does not guarantee short-circuit
  ! evaluation of logical operators)
  if( present(grid_index) ) then
    if( grid_index /= 1 ) then
      ! only makes sense if grid_index == 1
      ier = 4
      return
    endif
  endif

  from_dq = delta_t * 4.0_r8_kind*pi*radius**2 * sum(sum(xmap%grids(1)%area * stock_data2d, dim=1))
  to_dq = from_dq

  ! update only if argument is present.
  if(present(to )) to % dq( to_side) = to % dq( to_side) + to_dq
  if(present(from)) from % dq(from_side) = from % dq(from_side) - from_dq

  if(debug_stocks) then
    call mpp_sum(from_dq)
    call mpp_sum(to_dq)
    from_dq = from_dq/(4.0_r8_kind*pi*radius**2)
    to_dq = to_dq /(4.0_r8_kind*pi*radius**2)
    if(mpp_pe()==mpp_root_pe()) then
      write(stocks_file,'(a,es19.12,a,es19.12,a)') verbose, from_dq, ' ', to_dq, ' [*/m^2]'
    endif
  endif

end subroutine stock_move_2d

!#######################################################################
!> @brief This version takes unstructured-grid data with a level dimension;
!! it can be used to compute the flux on any grid but the first, which is
!! typically on the atmosphere side. Note that "from" and "to" are optional:
!! the stocks are subtracted and added, respectively, only when the
!! corresponding argument is present.
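!!
!! The integral computed below is the unstructured analogue of the rank 3
!! case (a sketch; the level dimension is the second index of the data):
!! @code
!! dq = delta_t * 4*pi*radius**2 &
!!      * sum( area(:,1) * sum(frac_area(:,1,:) * stock_ug_data3d, dim=2) )
!! @endcode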
subroutine stock_move_ug_3d(from, to, grid_index, stock_ug_data3d, xmap, &
                          & delta_t, from_side, to_side, radius, verbose, ier)
  use mpp_mod, only : mpp_sum
  use mpp_domains_mod, only : domain2d, mpp_redistribute, mpp_get_compute_domain

  type(stock_type), intent(inout), optional :: from, to
  integer, intent(in) :: grid_index !< grid index
  real(r8_kind), intent(in) :: stock_ug_data3d(:,:) !< unstructured-grid data array, dimensioned (grid point, level)
  type(xmap_type), intent(in) :: xmap
  real(r8_kind), intent(in) :: delta_t
  integer, intent(in) :: from_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  integer, intent(in) :: to_side !< ISTOCK_TOP, ISTOCK_BOTTOM, or ISTOCK_SIDE
  real(r8_kind), intent(in) :: radius !< earth radius
  character(len=*), intent(in), optional :: verbose
  integer, intent(out) :: ier
  real(r8_kind), dimension(size(stock_ug_data3d,1),size(stock_ug_data3d,2)) :: tmp

  real(r8_kind) :: from_dq, to_dq

  ier = 0
  if(grid_index == 1) then
    ! data has a level dimension so the grid index must be > 1
    ier = 1
    return
  endif

  if(.not. associated(xmap%grids) ) then
    ier = 2
    return
  endif

  tmp = xmap%grids(grid_index)%frac_area(:,1,:) * stock_ug_data3d
  from_dq = delta_t * 4.0_r8_kind * pi * radius**2 * sum( xmap%grids(grid_index)%area(:,1) * &
          & sum(tmp, dim=2))
  to_dq = from_dq

  ! update only if argument is present.
  if(present(to )) to % dq( to_side) = to % dq( to_side) + to_dq
  if(present(from)) from % dq(from_side) = from % dq(from_side) - from_dq

  if(present(verbose).and.debug_stocks) then
    call mpp_sum(from_dq)
    call mpp_sum(to_dq)
    from_dq = from_dq/(4.0_r8_kind*pi*radius**2)
    to_dq = to_dq /(4.0_r8_kind*pi*radius**2)
    if(mpp_pe()==mpp_root_pe()) then
      write(stocks_file,'(a,es19.12,a,es19.12,a)') verbose, from_dq, ' ', to_dq, ' [*/m^2]'
    endif
  endif

end subroutine stock_move_ug_3d


!#######################################################################
!> @brief Computes the surface/time integral of a 2D array over the side 1 grid.
subroutine stock_integrate_2d(integrate_data2d, xmap, delta_t, radius, res, ier)
  use mpp_mod, only : mpp_sum

  real(r8_kind), intent(in) :: integrate_data2d(:,:) !< data array is 2d
  type(xmap_type), intent(in) :: xmap
  real(r8_kind), intent(in) :: delta_t
  real(r8_kind), intent(in) :: radius !< earth radius
  real(r8_kind), intent(out) :: res
  integer, intent(out) :: ier

  ier = 0
  res = 0.0_r8_kind

  if(.not. associated(xmap%grids) ) then
    ier = 6
    return
  endif

  res = delta_t * 4.0_r8_kind * pi * radius**2 * sum(sum(xmap%grids(1)%area * integrate_data2d, dim=1))

end subroutine stock_integrate_2d
!#######################################################################

!> @brief Prints a stock's change for a component and sends the flux-based
!! change, the component-based change, and their difference to the
!! diagnostics manager. Values are summed across the pelist and normalized
!! by the planetary area, so they are reported per square meter.
subroutine stock_print(stck, Time, comp_name, index, ref_value, radius, pelist)

  use mpp_mod, only : mpp_pe, mpp_root_pe, mpp_sum
  use time_manager_mod, only : time_type, get_time
  use diag_manager_mod, only : register_diag_field, send_data

  type(stock_type), intent(in) :: stck
  type(time_type), intent(in) :: Time
  character(len=*), intent(in) :: comp_name
  integer, intent(in) :: index !< to map stock element (water, heat, ..) to a name
  real(r8_kind), intent(in) :: ref_value !< the stock value returned by the component per PE
  real(r8_kind), intent(in) :: radius
  integer, intent(in), optional :: pelist(:)

  integer, parameter :: initID = -2 !< initial value for diag IDs. Must not be equal to the value
                                    !! that register_diag_field returns when it can't register the field --
                                    !! otherwise the registration is attempted every time this subroutine is called

  real(r8_kind) :: f_value, c_value, planet_area
  character(len=80) :: formatString
  integer :: iday, isec, hours
  integer :: diagID, compInd
  integer, dimension(NELEMS,4), save :: f_valueDiagID = initid
  integer, dimension(NELEMS,4), save :: c_valueDiagID = initid
  integer, dimension(NELEMS,4), save :: fmc_valueDiagID = initid

  real(r8_kind) :: diagField
  logical :: used
  character(len=30) :: field_name, units

  f_value = sum(stck % dq)
  c_value = ref_value - stck % q_start
  if(present(pelist)) then
    call mpp_sum(f_value, pelist=pelist)
    call mpp_sum(c_value, pelist=pelist)
  else
    call mpp_sum(f_value)
    call mpp_sum(c_value)
  endif

  if(mpp_pe() == mpp_root_pe()) then
    ! normalize to 1 earth m^2
    planet_area = 4.0_r8_kind * pi * radius**2
    f_value = f_value / planet_area
    c_value = c_value / planet_area

    compind = -1
    if(comp_name == 'ATM') compind = 1
    if(comp_name == 'LND') compind = 2
    if(comp_name == 'ICE') compind = 3
    if(comp_name == 'OCN') compind = 4
    if(compind < 0) call error_mesg('xgrid_mod', &
      & 'stock_print: unknown component name '//trim(comp_name), fatal)

    if(f_valuediagid(index,compind) == initid) then
      field_name = trim(comp_name) // trim(stock_names(index))
      field_name = trim(field_name) // 'StocksChange_Flux'
      units = trim(stock_units(index))
      f_valuediagid(index,compind) = register_diag_field('stock_print', field_name, time, &
                                                         units=units)
    endif

    if(c_valuediagid(index,compind) == initid) then
      field_name = trim(comp_name) // trim(stock_names(index))
      field_name = trim(field_name) // 'StocksChange_Comp'
      units = trim(stock_units(index))
      c_valuediagid(index,compind) = register_diag_field('stock_print', field_name, time, &
                                                         units=units)
    endif

    if(fmc_valuediagid(index,compind) == initid) then
      field_name = trim(comp_name) // trim(stock_names(index))
      field_name = trim(field_name) // 'StocksChange_Diff'
      units = trim(stock_units(index))
      fmc_valuediagid(index,compind) = register_diag_field('stock_print', field_name, time, &
                                                           units=units)
    endif

    diagid = f_valuediagid(index,compind)
    diagfield = f_value
    if (diagid > 0) used = send_data(diagid, diagfield, time=time)
    diagid = c_valuediagid(index,compind)
    diagfield = c_value
    if (diagid > 0) used = send_data(diagid, diagfield, time=time)
    diagid = fmc_valuediagid(index,compind)
    diagfield = f_value-c_value
    if (diagid > 0) used = send_data(diagid, diagfield, time=time)

    call get_time(time, isec, iday)
    hours = iday*24 + isec/3600
    formatstring = '(a,a,a,i16,2x,es22.15,2x,es22.15,2x,es22.15)'
    write(stocks_file,formatstring) trim(comp_name),stock_names(index),stock_units(index) &
          ,hours,f_value,c_value,f_value-c_value

  endif

end subroutine stock_print

!###############################################################################
!> @brief Checks whether the given 2D longitude/latitude arrays describe a
!! regular lat-lon grid; the result is reduced across PEs so that all PEs
!! return the same answer.
!> @return logical is_lat_lon
function is_lat_lon(lon, lat)
  real(r8_kind), dimension(:,:), intent(in) :: lon, lat
  logical :: is_lat_lon
  integer :: i, j, nlon, nlat, num

  is_lat_lon = .true.
  nlon = size(lon,1)
  nlat = size(lon,2)
  loop_lat: do j = 1, nlat
    do i = 2, nlon
      if(lat(i,j) .NE. lat(1,j)) then
        is_lat_lon = .false.
        exit loop_lat
      end if
    end do
  end do loop_lat

  if(is_lat_lon) then
    loop_lon: do i = 1, nlon
      do j = 2, nlat
        if(lon(i,j) .NE. lon(i,1)) then
          is_lat_lon = .false.
          exit loop_lon
        end if
      end do
    end do loop_lon
  end if

  num = 0
  if(is_lat_lon) num = 1
  call mpp_min(num)
  if(num == 1) then
    is_lat_lon = .true.
  else
    is_lat_lon = .false.
  end if

  return
end function is_lat_lon

!#######################################################################

!> @brief get_from_xgrid_ug for a side 1 (unstructured) grid: sums exchange
!! grid data onto the side 1 grid. Calls may be grouped, and the
!! communication deferred, via the optional "complete" flag.
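!!
!! An illustrative grouped call (hypothetical side 1 field arrays t_atm and
!! q_atm, with 'ATM' assumed to be the side 1 grid id; communication happens
!! once, on the completing call):
!! @code
!! call get_from_xgrid_ug(t_atm, 'ATM', x_t, xmap, complete=.false.)
!! call get_from_xgrid_ug(q_atm, 'ATM', x_q, xmap, complete=.true.)
!! @endcode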
subroutine get_side1_from_xgrid_ug(d, grid_id, x, xmap, complete)
  real(r8_kind), dimension(:), intent(out) :: d
  character(len=3), intent(in) :: grid_id
  real(r8_kind), dimension(:), intent(in) :: x
  type (xmap_type), intent(inout) :: xmap
  logical, intent(in), optional :: complete

  logical :: is_complete, set_mismatch
  integer :: g
  character(len=2) :: text
  integer, save :: isize=0
  integer, save :: lsize=0
  integer, save :: xsize=0
  character(len=3), save :: grid_id_saved=""
  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind

  d = 0.0_r8_kind
  if (grid_id==xmap%grids(1)%id) then
    is_complete = .true.
    if(present(complete)) is_complete=complete
    lsize = lsize + 1
    if( lsize > max_fields ) then
      write( text,'(i2)' ) max_fields
      call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group get_side1_from_xgrid_ug', fatal)
    endif
    d_addrs(lsize) = loc(d)
    x_addrs(lsize) = loc(x)

    if(lsize == 1) then
      isize = size(d(:))
      xsize = size(x(:))
      grid_id_saved = grid_id
    else
      set_mismatch = .false.
      set_mismatch = set_mismatch .OR. (isize /= size(d(:)))
      set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
      set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
      if(set_mismatch)then
        write( text,'(i2)' ) lsize
        call error_mesg ('xgrid_mod', 'Incompatible field at count '//text// &
                         & ' for group get_side1_from_xgrid_ug', fatal )
      endif
    endif

    if(is_complete) then
      if (make_exchange_reproduce) then
        call get_1_from_xgrid_ug_repro(d_addrs, x_addrs, xmap, xsize, lsize)
      else
        call get_1_from_xgrid_ug(d_addrs, x_addrs, xmap, isize, xsize, lsize)
      end if
      d_addrs(1:lsize) = -9999_i8_kind
      x_addrs(1:lsize) = -9999_i8_kind
      isize = 0
      xsize = 0
      lsize = 0
      grid_id_saved = ""
    endif
    return
  end if

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) &
      call error_mesg ('xgrid_mod', &
                       'get_from_xgrid_ug expects a 3D side 2 grid', fatal)
  end do

  call error_mesg ('xgrid_mod', 'get_from_xgrid_ug: could not find grid id', fatal)

end subroutine get_side1_from_xgrid_ug

!#######################################################################

!> @brief put_to_xgrid_ug for a side 1 (unstructured) grid: scatters side 1
!! data onto the exchange grid. Currently only first order remapping is
!! supported. Calls may be grouped, and the communication deferred, via the
!! optional "complete" flag.
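!!
!! An illustrative grouped call (hypothetical side 1 fields t_atm and q_atm,
!! with 'ATM' assumed to be the side 1 grid id):
!! @code
!! call put_to_xgrid_ug(t_atm, 'ATM', x_t, xmap, complete=.false.)
!! call put_to_xgrid_ug(q_atm, 'ATM', x_q, xmap, complete=.true.)
!! @endcode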
subroutine put_side1_to_xgrid_ug(d, grid_id, x, xmap, complete)
  real(r8_kind), dimension(:), intent(in) :: d !< input field on the side 1 grid
  character(len=3), intent(in) :: grid_id
  real(r8_kind), dimension(:), intent(inout) :: x
  type (xmap_type), intent(inout) :: xmap
  logical, intent(in), optional :: complete

  logical :: is_complete, set_mismatch
  integer :: g
  character(len=2) :: text
  integer, save :: dsize=0
  integer, save :: lsize=0
  integer, save :: xsize=0
  character(len=3), save :: grid_id_saved=""
  integer(i8_kind), dimension(MAX_FIELDS), save :: d_addrs = -9999_i8_kind
  integer(i8_kind), dimension(MAX_FIELDS), save :: x_addrs = -9999_i8_kind

  if (grid_id==xmap%grids(1)%id) then
    is_complete = .true.
    if(present(complete)) is_complete=complete
    lsize = lsize + 1
    if( lsize > max_fields ) then
      write( text,'(i2)' ) max_fields
      call error_mesg ('xgrid_mod', 'MAX_FIELDS='//trim(text)//' exceeded for group put_side1_to_xgrid_ug', fatal)
    endif
    d_addrs(lsize) = loc(d)
    x_addrs(lsize) = loc(x)

    if(lsize == 1) then
      dsize = size(d(:))
      xsize = size(x(:))
      grid_id_saved = grid_id
    else
      set_mismatch = .false.
      set_mismatch = set_mismatch .OR. (dsize /= size(d(:)))
      set_mismatch = set_mismatch .OR. (xsize /= size(x(:)))
      set_mismatch = set_mismatch .OR. (grid_id_saved /= grid_id)
      if(set_mismatch)then
        write( text,'(i2)' ) lsize
        call error_mesg ('xgrid_mod', 'Incompatible field at count '//text// &
                         & ' for group put_side1_to_xgrid_ug', fatal )
      endif
    endif

    if(is_complete) then
      call put_1_to_xgrid_ug_order_1(d_addrs, x_addrs, xmap, dsize, xsize, lsize)
      d_addrs(1:lsize) = -9999_i8_kind
      x_addrs(1:lsize) = -9999_i8_kind
      dsize = 0
      xsize = 0
      lsize = 0
      grid_id_saved = ""
    endif
    return
  end if

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) &
      call error_mesg ('xgrid_mod', &
                       'put_to_xgrid_ug expects a 2D side 2 grid', fatal)
  end do

  call error_mesg ('xgrid_mod', 'put_to_xgrid_ug: could not find grid id', fatal)

end subroutine put_side1_to_xgrid_ug

!#######################################################################

!> @brief put_to_xgrid_ug for a side 2 grid: scatters side 2 data onto the
!! exchange grid.
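!!
!! Illustrative call (hypothetical field d_lnd dimensioned
!! (local unstructured cells, partitions), with 'LND' assumed to be a
!! side 2 grid id):
!! @code
!! call put_to_xgrid_ug(d_lnd, 'LND', x, xmap)
!! @endcode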
subroutine put_side2_to_xgrid_ug(d, grid_id, x, xmap)
  real(r8_kind), dimension(:,:), intent(in) :: d
  character(len=3), intent(in) :: grid_id
  real(r8_kind), dimension(:), intent(inout) :: x
  type (xmap_type), intent(inout) :: xmap

  integer :: g

  if (grid_id==xmap%grids(1)%id) &
    call error_mesg ('xgrid_mod', &
                     'put_to_xgrid_ug expects a 2D side 1 grid', fatal)

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) then
      call put_2_to_xgrid_ug(d, xmap%grids(g), x, xmap)
      return
    end if
  end do

  call error_mesg ('xgrid_mod', 'put_to_xgrid_ug: could not find grid id', fatal)

end subroutine put_side2_to_xgrid_ug

!#######################################################################

!> @brief get_from_xgrid_ug for a side 2 grid: sums exchange grid data onto
!! the side 2 grid.
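!!
!! Illustrative call (hypothetical output field d_lnd dimensioned
!! (local unstructured cells, partitions), with 'LND' assumed to be a
!! side 2 grid id):
!! @code
!! call get_from_xgrid_ug(d_lnd, 'LND', x, xmap)
!! @endcode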
subroutine get_side2_from_xgrid_ug(d, grid_id, x, xmap)
  real(r8_kind), dimension(:,:), intent(out) :: d
  character(len=3), intent(in) :: grid_id
  real(r8_kind), dimension(:), intent(in) :: x
  type (xmap_type), intent(in) :: xmap

  integer :: g

  if (grid_id==xmap%grids(1)%id) &
    call error_mesg ('xgrid_mod', &
                     'get_from_xgrid_ug expects a 2D side 1 grid', fatal)

  do g=2,size(xmap%grids(:))
    if (grid_id==xmap%grids(g)%id) then
      call get_2_from_xgrid_ug(d, xmap%grids(g), x, xmap)
      return
    end if
  end do

  call error_mesg ('xgrid_mod', 'get_from_xgrid_ug: could not find grid id', fatal)

end subroutine get_side2_from_xgrid_ug

!#######################################################################

!> @brief Communication kernel behind put_side1_to_xgrid_ug: moves a group
!! of side 1 fields onto the exchange grid using first order remapping.
subroutine put_1_to_xgrid_ug_order_1(d_addrs, x_addrs, xmap, dsize, xsize, lsize)
  integer(i8_kind), dimension(:), intent(in) :: d_addrs
  integer(i8_kind), dimension(:), intent(in) :: x_addrs
  type (xmap_type), intent(inout) :: xmap
  integer, intent(in) :: dsize, xsize, lsize

  integer :: i, p, buffer_pos, msgsize
  integer :: from_pe, to_pe, pos, n, l, count
  integer :: ibegin, istart, iend, start_pos
  type (comm_type), pointer, save :: comm =>null()
  real(r8_kind) :: recv_buffer(xmap%put1%recvsize*lsize)
  real(r8_kind) :: send_buffer(xmap%put1%sendsize*lsize)
  real(r8_kind) :: unpack_buffer(xmap%put1%recvsize)

  real(r8_kind), dimension(dsize) :: d
  real(r8_kind), dimension(xsize) :: x
  pointer(ptr_d, d)
  pointer(ptr_x, x)
  integer :: lll

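  ! Communication pattern (descriptive note): non-blocking receives are
  ! pre-posted first; the data for all lsize fields in the group is then
  ! packed PE by PE into send_buffer and sent; once the receives complete,
  ! exchange-grid values are gathered from recv_buffer through the
  ! precomputed positions stored in xmap%x1_put.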
  call mpp_clock_begin(id_put_1_to_xgrid_order_1)

  !--- pre-post receiving
  comm => xmap%put1
  do p = 1, comm%nrecv
    msgsize = comm%recv(p)%count*lsize
    from_pe = comm%recv(p)%pe
    buffer_pos = comm%recv(p)%buffer_pos*lsize
    call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = from_pe, block=.false., tag=comm_tag_7)
  enddo

  !--- send the data
  buffer_pos = 0
  do p = 1, comm%nsend
    msgsize = comm%send(p)%count*lsize
    to_pe = comm%send(p)%pe
    pos = buffer_pos
    do l = 1, lsize
      ptr_d = d_addrs(l)
      do n = 1, comm%send(p)%count
        pos = pos + 1
        lll = comm%send(p)%i(n)
        send_buffer(pos) = d(lll)
      enddo
    enddo
    call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = to_pe, tag=comm_tag_7 )
    buffer_pos = buffer_pos + msgsize
  enddo

  call mpp_sync_self(check=event_recv)

  !--- unpack the buffer
  if( lsize == 1) then
    ptr_x = x_addrs(1)
    do l=1,xmap%size_put1
      x(l) = recv_buffer(xmap%x1_put(l)%pos)
    end do
  else
    start_pos = 0
!$OMP parallel do default(none) shared(lsize,x_addrs,comm,recv_buffer,xmap) &
!$OMP private(ptr_x,count,ibegin,istart,iend,pos,unpack_buffer)
    do l = 1, lsize
      ptr_x = x_addrs(l)
      do p = 1, comm%nrecv
        count = comm%recv(p)%count
        ibegin = comm%recv(p)%buffer_pos*lsize + 1
        istart = ibegin + (l-1)*count
        iend = istart + count - 1
        pos = comm%recv(p)%buffer_pos
        do n = istart, iend
          pos = pos + 1
          unpack_buffer(pos) = recv_buffer(n)
        enddo
      enddo
      do i=1,xmap%size_put1
        x(i) = unpack_buffer(xmap%x1_put(i)%pos)
      end do
    enddo
  endif

  call mpp_sync_self()

  call mpp_clock_end(id_put_1_to_xgrid_order_1)

end subroutine put_1_to_xgrid_ug_order_1

!#######################################################################

subroutine put_2_to_xgrid_ug(d, grid, x, xmap)
  type (grid_type), intent(in) :: grid
  real(r8_kind), dimension(grid%ls_me:grid%le_me, grid%km), intent(in) :: d
  real(r8_kind), dimension(:), intent(inout) :: x
  type (xmap_type), intent(in) :: xmap

  integer :: l

  call mpp_clock_begin(id_put_2_to_xgrid)

  do l=grid%first,grid%last
    x(l) = d(xmap%x2(l)%l,xmap%x2(l)%k)
  end do

  call mpp_clock_end(id_put_2_to_xgrid)
end subroutine put_2_to_xgrid_ug

!> @brief Communication kernel behind get_side1_from_xgrid_ug: sums exchange
!! grid data for a group of fields onto the side 1 grid.
subroutine get_1_from_xgrid_ug(d_addrs, x_addrs, xmap, isize, xsize, lsize)
  integer(i8_kind), dimension(:), intent(in) :: d_addrs
  integer(i8_kind), dimension(:), intent(in) :: x_addrs
  type (xmap_type), intent(inout) :: xmap
  integer, intent(in) :: isize, xsize, lsize

  real(r8_kind), target :: dg(xmap%size, lsize)
  integer :: i, j, l, p, n, m
  integer :: msgsize, buffer_pos, pos
  integer :: istart, iend, count
  real(r8_kind) , pointer, save :: dgp =>null()
  type (grid_type) , pointer, save :: grid1 =>null()
  type (comm_type) , pointer, save :: comm =>null()
  type(overlap_type), pointer, save :: send => null()
  type(overlap_type), pointer, save :: recv => null()
  real(r8_kind) :: recv_buffer(xmap%get1%recvsize*lsize*3)
  real(r8_kind) :: send_buffer(xmap%get1%sendsize*lsize*3)
  real(r8_kind) :: d(isize)
  real(r8_kind), dimension(xsize) :: x
  pointer(ptr_d, d)
  pointer(ptr_x, x)

  call mpp_clock_begin(id_get_1_from_xgrid)

  comm => xmap%get1
  grid1 => xmap%grids(1)

  do p = 1, comm%nrecv
    recv => comm%recv(p)
    msgsize = recv%count*lsize
    buffer_pos = recv%buffer_pos*lsize
    call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_9)
  enddo

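  ! accumulate the exchange-grid contributions of each field onto the side 1
  ! grid positions: every exchange cell i adds area*x into dg at x1(i)%pos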
  dg = 0.0_r8_kind
!$OMP parallel do default(none) shared(lsize,xmap,dg,x_addrs) private(dgp,ptr_x)
  do l = 1, lsize
    ptr_x = x_addrs(l)
    do i=1,xmap%size
      dgp => dg(xmap%x1(i)%pos,l)
      dgp = dgp + xmap%x1(i)%area*x(i)
    enddo
  enddo

  !--- send the data
  buffer_pos = 0
  istart = 1
  do p = 1, comm%nsend
    send => comm%send(p)
    msgsize = send%count*lsize
    pos = buffer_pos
    istart = send%buffer_pos+1
    iend = istart + send%count - 1
    do l = 1, lsize
      do n = istart, iend
        pos = pos + 1
        send_buffer(pos) = dg(n,l)
      enddo
    enddo
    call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe = send%pe, tag=comm_tag_9 )
    buffer_pos = buffer_pos + msgsize
    istart = iend + 1
  enddo

  call mpp_sync_self(check=event_recv)

  !--- unpack the buffer
  do l = 1, lsize
    ptr_d = d_addrs(l)
    d = 0.0_r8_kind
  enddo

  !--- To bitwise reproduce old results, first copy the data onto its own pe.
  do p = 1, comm%nrecv
    recv => comm%recv(p)
    count = recv%count
    buffer_pos = recv%buffer_pos*lsize
    if( recv%pe == xmap%me ) then
!$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs,count) &
!$OMP private(ptr_d,i,pos)
      do l = 1, lsize
        pos = buffer_pos + (l-1)*count
        ptr_d = d_addrs(l)
        do n = 1,count
          i = recv%i(n)
          pos = pos + 1
          d(i) = recv_buffer(pos)
        enddo
      enddo
      exit
    endif
  enddo

  pos = 0
  do m = 1, comm%nrecv
    p = comm%unpack_ind(m)
    recv => comm%recv(p)
    if( recv%pe == xmap%me ) then
      cycle
    endif
    buffer_pos = recv%buffer_pos*lsize
!$OMP parallel do default(none) shared(lsize,recv,recv_buffer,buffer_pos,d_addrs) &
!$OMP private(ptr_d,i,j,pos)
    do l = 1, lsize
      pos = buffer_pos + (l-1)*recv%count
      ptr_d = d_addrs(l)
      do n = 1, recv%count
        i = recv%i(n)
        pos = pos + 1
        d(i) = d(i) + recv_buffer(pos)
      enddo
    enddo
  enddo

  !
  ! normalize with side 1 grid cell areas
  !
!$OMP parallel do default(none) shared(lsize,d_addrs,grid1) private(ptr_d)
  do l = 1, lsize
    ptr_d = d_addrs(l)
    d = d * grid1%area_inv(:,1)
  enddo
  call mpp_sync_self()
  call mpp_clock_end(id_get_1_from_xgrid)

end subroutine get_1_from_xgrid_ug

!#######################################################################

!> @brief Reproducing variant of get_1_from_xgrid_ug: partial sums are
!! accumulated in a fixed, PE-count-independent order so that answers
!! reproduce when make_exchange_reproduce is set.
subroutine get_1_from_xgrid_ug_repro(d_addrs, x_addrs, xmap, xsize, lsize)
  integer(i8_kind), dimension(:), intent(in) :: d_addrs
  integer(i8_kind), dimension(:), intent(in) :: x_addrs
  type (xmap_type), intent(inout) :: xmap
  integer, intent(in) :: xsize, lsize

  integer :: g, i, j, k, p, l, n, l2, l3
  integer :: msgsize, buffer_pos, pos
  type (grid_type), pointer, save :: grid =>null()
  type(comm_type), pointer, save :: comm => null()
  type(overlap_type), pointer, save :: send => null()
  type(overlap_type), pointer, save :: recv => null()
  integer, dimension(0:xmap%npes-1) :: pl, ml
  real(r8_kind) :: recv_buffer(xmap%recv_count_repro_tot*lsize)
  real(r8_kind) :: send_buffer(xmap%send_count_repro_tot*lsize)
  real(r8_kind) :: d(xmap%grids(1)%ls_me:xmap%grids(1)%le_me)
  real(r8_kind), dimension(xsize) :: x
  pointer(ptr_d, d)
  pointer(ptr_x, x)

  call mpp_clock_begin(id_get_1_from_xgrid_repro)
  comm => xmap%get1_repro
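  ! each receiving PE records, per sender, where that sender's block starts
  ! (pl) and how long it is (ml); the fixed-order unpack below indexes the
  ! buffer through these tables, so the summation order does not depend on
  ! message arrival order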
  !--- pre-post receiving
  do p = 1, comm%nrecv
    recv => comm%recv(p)
    msgsize = recv%count*lsize
    buffer_pos = recv%buffer_pos*lsize
    call mpp_recv(recv_buffer(buffer_pos+1), glen=msgsize, from_pe = recv%pe, block=.false., tag=comm_tag_10)
    n = recv%pe - xmap%root_pe
    pl(n) = buffer_pos
    ml(n) = recv%count
  enddo

  !--- pack the data
  send_buffer(:) = 0.0_r8_kind
!$OMP parallel do default(none) shared(lsize,x_addrs,comm,xmap,send_buffer) &
!$OMP private(ptr_x,i,j,g,l2,pos,send)
  do p = 1, comm%nsend
    pos = comm%send(p)%buffer_pos*lsize
    send => comm%send(p)
    do l = 1,lsize
      ptr_x = x_addrs(l)
      do n = 1, send%count
        i = send%i(n)
        j = send%j(n)
        g = send%g(n)
        l2 = send%xloc(n)
        pos = pos + 1
        do k =1, xmap%grids(g)%km
          if(xmap%grids(g)%frac_area(i,j,k)/=0.0_r8_kind) then
            l2 = l2+1
            send_buffer(pos) = send_buffer(pos) + xmap%x1(l2)%area *x(l2)
          endif
        enddo
      enddo
    enddo
  enddo

  do p =1, comm%nsend
    buffer_pos = comm%send(p)%buffer_pos*lsize
    msgsize = comm%send(p)%count*lsize
    call mpp_send(send_buffer(buffer_pos+1), plen=msgsize, to_pe=comm%send(p)%pe, tag=comm_tag_10)
  enddo

  do l = 1, lsize
    ptr_d = d_addrs(l)
    d = 0.0_r8_kind
  enddo

  call mpp_sync_self(check=event_recv)

!$OMP parallel do default(none) shared(lsize,d_addrs,xmap,recv_buffer,pl,ml) &
!$OMP private(ptr_d,grid,i,j,p,pos)
  do l = 1, lsize
    ptr_d = d_addrs(l)
    do g=2,size(xmap%grids(:))
      grid => xmap%grids(g)
      do l3=1,grid%size_repro ! index into side1 grid's patterns
        i = grid%x_repro(l3)%l1
        p = grid%x_repro(l3)%pe-xmap%root_pe
        pos = pl(p) + (l-1)*ml(p) + grid%x_repro(l3)%recv_pos
        d(i) = d(i) + recv_buffer(pos)
      end do
    end do
    ! normalize with side 1 grid cell areas
    d = d * xmap%grids(1)%area_inv(:,1)
  enddo

  call mpp_sync_self()

  call mpp_clock_end(id_get_1_from_xgrid_repro)

end subroutine get_1_from_xgrid_ug_repro

!#######################################################################

subroutine get_2_from_xgrid_ug(d, grid, x, xmap)
  type (grid_type), intent(in) :: grid
  real(r8_kind), dimension(grid%ls_me:grid%le_me, grid%km), intent(out) :: d
  real(r8_kind), dimension(:), intent(in) :: x
  type (xmap_type), intent(in) :: xmap

  integer :: l, k

  call mpp_clock_begin(id_get_2_from_xgrid)

  d = 0.0_r8_kind
  do l=grid%first_get,grid%last_get
    d(xmap%x2_get(l)%l,xmap%x2_get(l)%k) = &
      d(xmap%x2_get(l)%l,xmap%x2_get(l)%k) + xmap%x2_get(l)%area*x(xmap%x2_get(l)%pos)
  end do
  !
  ! normalize with side 2 grid cell areas
  !
  do k=1,size(d,2)
    d(:,k) = d(:,k) * grid%area_inv(:,1)
  end do

  call mpp_clock_end(id_get_2_from_xgrid)

end subroutine get_2_from_xgrid_ug

!######################################################################
!> @brief Returns true if point (i,j) lies within this PE's own portion of
!! the grid (global index range for an unstructured grid, compute domain
!! otherwise).
!> @return logical in_box_me
logical function in_box_me(i, j, grid)
  integer, intent(in) :: i, j
  type (grid_type), intent(in) :: grid
  integer :: g

  if(grid%is_ug) then
    g = (j-1)*grid%ni + i
    in_box_me = (g>=grid%gs_me) .and. (g<=grid%ge_me)
  else
    in_box_me = (i>=grid%is_me) .and. (i<=grid%ie_me) .and. (j>=grid%js_me) .and. (j<=grid%je_me)
  endif

end function in_box_me

!######################################################################
!> @brief Returns true if point (i,j) lies within the portion of the grid
!! owned by PE p.
!> @return logical in_box_nbr
logical function in_box_nbr(i, j, grid, p)
  integer, intent(in) :: i, j, p
  type (grid_type), intent(in) :: grid
  integer :: g

  if(grid%is_ug) then
    g = (j-1)*grid%ni + i
    in_box_nbr = (g>=grid%gs(p)) .and. (g<=grid%ge(p))
  else
    in_box_nbr = (i>=grid%is(p)) .and. (i<=grid%ie(p)) .and. (j>=grid%js(p)) .and. (j<=grid%je(p))
  endif

end function in_box_nbr

end module xgrid_mod
!> @}
! close documentation grouping