comparison

BaseMetric

    Bases: ABC

    Instance attributes:
        references = np.squeeze(np.atleast_1d(reference_values))
        computed = np.squeeze(np.atleast_1d(computed_values))
        check = False

    Methods:
        __init__(reference_values, computed_values)
        report(file_path=None)   [abstract]
        one_line_report()   [abstract]
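The abstract interface above can be sketched as follows. `BaseMetric` is reconstructed from the attribute and method listing; `MaxAbsMetric` is a hypothetical subclass added purely for illustration and is not part of this module:

```python
import numpy as np
from abc import ABC, abstractmethod


class BaseMetric(ABC):
    # Reconstruction of the documented interface; attribute names follow
    # the listing above, everything else is an assumption.
    def __init__(self, reference_values, computed_values):
        self.references = np.squeeze(np.atleast_1d(reference_values))
        self.computed = np.squeeze(np.atleast_1d(computed_values))
        self.check = False  # subclasses set this after comparing

    @abstractmethod
    def report(self, file_path=None):
        ...

    @abstractmethod
    def one_line_report(self):
        ...


class MaxAbsMetric(BaseMetric):
    # Hypothetical subclass: passes when the maximum absolute
    # difference is within a fixed tolerance.
    def __init__(self, reference_values, computed_values, tol=1e-12):
        super().__init__(reference_values, computed_values)
        self.tol = tol
        diff = np.abs(self.references - self.computed)
        self.check = bool(np.all(diff <= tol))

    def one_line_report(self):
        return f"max abs diff check: {'PASS' if self.check else 'FAIL'}"

    def report(self, file_path=None):
        lines = [self.one_line_report()]
        if file_path is not None:
            with open(file_path, "w") as f:
                f.write("\n".join(lines))
        return lines
```

Concrete metrics such as `LegacyMetric` below follow this same pattern: compute a per-point success array in `__init__`, reduce it into `check`, and render it via the two report methods.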
LegacyMetric

    Bases: BaseMetric

    Legacy (AI2) metric used for the original FV3 port.

    This metric attempts to smooth error comparison around 0. It further
    deals with the close-to-0 breakdown of absolute error by allowing a
    near_zero threshold to be specified by hand.

    Instance attributes:
        eps = eps
        success = self._compute_errors(ignore_near_zero_errors, near_zero)
        check = np.all(self.success)

    Methods:
        __init__(reference_values, computed_values, eps, ignore_near_zero_errors, near_zero)
        one_line_report()
        report(file_path=None)
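A minimal sketch of the near_zero idea, assuming a relative-error comparison; the actual `_compute_errors` implementation is not shown in this reference and may differ:

```python
import numpy as np


def legacy_success(references, computed, eps, ignore_near_zero_errors, near_zero):
    # Sketch only: relative error is checked against eps in general;
    # where both values sit within `near_zero` of 0 (the regime where
    # the error measure breaks down) and ignore_near_zero_errors is
    # set, the point passes regardless.
    references = np.asarray(references, dtype=np.float64)
    computed = np.asarray(computed, dtype=np.float64)
    # Avoid dividing by zero when the reference itself is exactly 0.
    denom = np.where(references == 0.0, 1.0, np.abs(references))
    rel_err = np.abs(computed - references) / denom
    success = rel_err <= eps
    if ignore_near_zero_errors:
        both_near_zero = (np.abs(references) < near_zero) & (
            np.abs(computed) < near_zero
        )
        success = success | both_near_zero
    return success
```

With `ignore_near_zero_errors=True`, a point like reference `1e-13` vs computed `5e-13` passes (both below a `near_zero` of `1e-12`) even though its relative error is large; with the flag off, the same point fails.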
MultiModalFloatMetric

    Bases: BaseMetric

    Combination of absolute, relative & ULP comparison for floats.

    This metric combines well-known float comparisons to provide a robust
    32/64-bit float comparison in the presence of large accumulating
    floating-point errors.

    ULP is used to clear noise (ULP <= 1.0 passes). Absolute errors for large amplitude
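The ULP (units in the last place) criterion can be illustrated with a small helper; `ulp_difference` is a hypothetical name for illustration, not part of this module:

```python
import numpy as np


def ulp_difference(a, b):
    # Distance between a and b expressed in multiples of the spacing
    # (one ULP) of the reference value a. A sketch of the idea, not the
    # metric's actual implementation. np.spacing(x) returns the gap to
    # the next representable float, so values within one representable
    # step of the reference yield a ULP distance <= 1.0.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.abs(a - b) / np.spacing(np.abs(a))
```

Under the rule described above, a computed value within one representable step of the reference (`ULP <= 1.0`) is treated as noise and passes.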