If I had a dataset where the measurement uncertainty varies across measurement points, how can I take that into account?
From the execute docstring in OrdinaryKriging:
This is now the method that performs the main kriging calculation.
Note that currently measurements (i.e., z values) are considered
'exact'. This means that, when a specified coordinate for interpolation
is exactly the same as one of the data points, the variogram evaluated
at the point is forced to be zero. The diagonal of the kriging
matrix is likewise always forced to be zero. In forcing the variogram
evaluated at data points to be zero, we are effectively saying that
there is no variance at that point (no uncertainty,
so the value is 'exact').
In the future, the code may include an extra 'exact_values' boolean
flag that can be adjusted to specify whether to treat the measurements
as 'exact'. Setting the flag to false would indicate that the
variogram should not be forced to be zero at zero distance
(i.e., when evaluated at data points). Instead, the uncertainty in
the point would be equal to the nugget. This would mean that the
diagonal of the kriging matrix would be set to
the nugget instead of to zero.
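To make the "exact" versus nugget-on-the-diagonal distinction concrete, here is a minimal sketch (not PyKrige's actual implementation) of how an ordinary kriging system could be assembled with such an `exact_values` flag; the linear variogram and the `build_ok_matrix` helper are purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist


def linear_variogram(h, slope=1.0, nugget=0.0):
    """Illustrative linear variogram: gamma(h) = slope * h + nugget."""
    return slope * h + nugget


def build_ok_matrix(coords, nugget=0.0, exact_values=True):
    """Assemble the (n+1) x (n+1) ordinary kriging matrix.

    exact_values=True  -> diagonal forced to zero (current behaviour).
    exact_values=False -> diagonal set to the nugget (proposed behaviour).
    """
    n = coords.shape[0]
    d = cdist(coords, coords)                       # pairwise distances
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = linear_variogram(d, nugget=nugget)
    np.fill_diagonal(a[:n, :n], 0.0 if exact_values else nugget)
    a[n, :n] = 1.0                                  # unbiasedness constraint
    a[:n, n] = 1.0
    a[n, n] = 0.0
    return a
```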
Questions:
We may be able to set the diagonal of the kriging matrix to unequal values reflecting the measurement uncertainty of each point (a rough sketch of this idea follows below). What does this imply for the nugget parameter of the estimated variogram?
What happens in the case where we supply the variogram parameters but specify unequal values on the diagonal of the kriging matrix?
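Purely as an illustration of what the first question proposes (not something PyKrige currently supports), a per-point "diagonal nugget" would look roughly like this, where `error_var` is a hypothetical array of measurement-error variances, one per datum:

```python
import numpy as np
from scipy.spatial.distance import cdist


def build_ok_matrix_with_errors(coords, error_var, slope=1.0, nugget=0.0):
    """Like the sketch above, but with per-point error variances on the diagonal."""
    n = coords.shape[0]
    d = cdist(coords, coords)
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = slope * d + nugget                  # illustrative linear variogram
    np.fill_diagonal(a[:n, :n], error_var)          # heterogeneous "diagonal nugget"
    a[n, :n] = 1.0                                  # unbiasedness constraint
    a[:n, n] = 1.0
    a[n, n] = 0.0
    return a
```

How such a diagonal interacts with a nugget fitted from the empirical variogram (or with user-supplied variogram parameters) is exactly what the questions above ask.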
If I had a dataset where the measurement uncertainty varies across measurement points, how can I take that into account?
I am not sure, and haven't looked at the underlying math for some time. Hoping @bsmurphy would know the answer.
I imagine supporting user-provided (sample-dependent) weights could be a way to specify sample uncertainty, but again I'm not sure what the convention for this is in other kriging software.
I've been thinking about this, and to my knowledge taking into account heterogeneous data errors (or even a specific uniform error) isn't clearly defined in the usual math behind kriging. I don't think putting the error values on the diagonal of the kriging matrix is the right way to handle this (although I could be wrong). Also not sure how to implement data weighting, although I need to think about that more. Seems like some kind of Monte Carlo thing might in fact be the best way to go -- run a bunch of kriging realizations with data randomly perturbed within the error bars and then look at the spread of the resulting kriged values.
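A rough sketch of that Monte Carlo idea, assuming PyKrige's public OrdinaryKriging interface; the data, grid, and per-point 1-sigma errors (`z_err`) are made-up example inputs:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 20)
y = rng.uniform(0.0, 10.0, 20)
z = np.sin(x) + np.cos(y)
z_err = np.full_like(z, 0.1)              # per-point measurement uncertainty (1-sigma)

gridx = np.linspace(0.0, 10.0, 50)
gridy = np.linspace(0.0, 10.0, 50)

realizations = []
for _ in range(100):
    z_perturbed = z + rng.normal(0.0, z_err)          # perturb data within error bars
    ok = OrdinaryKriging(x, y, z_perturbed, variogram_model="linear")
    zgrid, _ = ok.execute("grid", gridx, gridy)
    realizations.append(zgrid)

realizations = np.asarray(realizations)
z_mean = realizations.mean(axis=0)        # kriged estimate averaged over realizations
z_spread = realizations.std(axis=0)       # spread attributable to measurement error
```

The spread map then shows how the stated measurement errors propagate into the kriged surface, on top of the kriging variance itself.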
I need to think about this more though, and do some research/reading. One way or another, handling uncertainty in measurements (as in #30, at least by implementing continuous part kriging to smooth out the nugget effect) would be great for v2.