The weights reported by the `get_empirical` method of the `SMCFilter` class will usually not sum to one. I think it would be nice to have the option to directly get the normalized weights (maybe that should also be the default) to comply with the literature (see for example An Introduction to Sequential Monte Carlo, page 130).
The weights are actually normalized in this line. However, the result is saved in a local variable instead of being written back to the state's weights. It seems that the values of the `log_weights` variable are only updated to lie in a range between 0 and 1 by this command.
What is the reason behind not directly normalizing the weights variable?
The easiest fix for the issue would be to add an optional argument to the `get_empirical` function that, if true, normalizes the weights, e.g.:
```python
def get_empirical(self, normalize_weights=True):
    """
    :param bool normalize_weights: If True, normalize the log weights
        before creating the empirical distribution.
    :returns: a marginal distribution over all state tensors.
    :rtype: a dictionary with keys which are latent variables and values
        which are :class:`~pyro.distributions.Empirical` objects.
    """
    if normalize_weights:
        # Normalize the log weights.
        log_weights = self.state._log_weights - self.state._log_weights.logsumexp(-1)
    else:
        log_weights = self.state._log_weights
    return {
        key: dist.Empirical(value, log_weights)
        for key, value in self.state.items()
    }
```
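To see what the `normalize_weights` branch does, here is a small self-contained sketch of the same logsumexp normalization using plain Python floats (the weight values are made up for illustration):

```python
import math

# Hypothetical unnormalized log weights, standing in for what
# SMCFilter keeps in state._log_weights (values are illustrative).
log_weights = [-1.2, -0.3, -2.5, -0.7]

# Same operation as in the snippet above: subtract the logsumexp.
lse = math.log(sum(math.exp(w) for w in log_weights))
normalized = [w - lse for w in log_weights]

# After normalization the weights sum to one in probability space.
total = sum(math.exp(w) for w in normalized)
print(round(total, 6))  # 1.0
```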
Our general stance, in statistics generally, is against unnecessary normalization. The logsumexp of the log_weights is a meaningful quantity, and can be used for a variety of tasks:
as a loss to backprop through
as an expert log weight in a mixture-of-experts model, e.g. combining an SMCFilter with a non-normalized Gaussian (as in our pyro.ops.Gaussian library or in Funsor)
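As a small illustration of why the unnormalized weights carry information (again with made-up weight values): the logsumexp of the raw log weights is an evidence-like quantity, whereas normalized log weights always logsumexp to zero, so normalizing in place would discard it.

```python
import math

def logsumexp(xs):
    # Numerically stable logsumexp over a list of floats.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Hypothetical unnormalized particle log weights.
log_weights = [-1.2, -0.3, -2.5, -0.7]

# This is the quantity the reply refers to: something one might
# backprop through, or use as an expert's log weight.
log_evidence = logsumexp(log_weights)

# Normalizing discards it: normalized weights logsumexp to zero.
normalized = [w - log_evidence for w in log_weights]
print(round(logsumexp(normalized), 6))
```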
Sorry, I misread. I do think it's fine to implement a .normalized_weights() method or property, as long as we preserve the original unnormalized weights.
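One way to implement such an accessor while preserving the original unnormalized weights might look like the following sketch. `SMCState` and its attribute names are illustrative stand-ins, not Pyro's actual classes:

```python
import math

class SMCState:
    """Minimal stand-in for SMCFilter's state (illustrative only)."""

    def __init__(self, log_weights):
        self._log_weights = list(log_weights)

    @property
    def log_weights(self):
        # The original, unnormalized log weights are preserved.
        return list(self._log_weights)

    @property
    def normalized_log_weights(self):
        # Normalized view computed on demand; does not mutate state.
        lse = math.log(sum(math.exp(w) for w in self._log_weights))
        return [w - lse for w in self._log_weights]

state = SMCState([-1.2, -0.3, -2.5, -0.7])
probs = [math.exp(w) for w in state.normalized_log_weights]
print(round(sum(probs), 6))  # 1.0
```

Because the normalized view is a property, callers get weights that sum to one without the stored weights ever being overwritten.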