We currently start with the smallest permutation size, try every permutation at that size, then progressively increase the size. This guarantees that the first permutation found under the error threshold has the smallest possible size.
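For reference, a minimal sketch of that strategy; `Permutation`, `measure_error`, and the surrounding names are hypothetical stand-ins, not the actual API:

```cpp
#include <optional>
#include <vector>

// Hypothetical stand-ins for the real types/functions.
struct Permutation { /* bit rates for rotation/translation/scale */ };
double measure_error(const Permutation& permutation);  // full error over every sample

// Linear search: walk the permutations from smallest to largest total size and
// return the first one whose error falls under the threshold. Because sizes are
// visited in increasing order, the result has the smallest size.
std::optional<Permutation> find_best_permutation_linear(
	const std::vector<Permutation>& permutations_sorted_by_size,
	double error_threshold)
{
	for (const Permutation& permutation : permutations_sorted_by_size)
	{
		if (measure_error(permutation) <= error_threshold)
			return permutation;
	}

	return std::nullopt;  // no permutation meets the threshold
}
```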
However, if the error decreases monotonically as size increases (more bits yield better precision), perhaps we could use binary search instead of a linear search, dramatically speeding up the optimization pass. To that end, each group of permutations with the same size could be treated as a bucket, and the buckets could be binary searched.
If the error is sometimes smaller with fewer bits, the above will not hold and binary search may yield a sub-optimal result. Depending on how close an approximation it yields, it might still be good enough. The full linear search could be kept for the highest compression level (or high and above).
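A sketch of the bucketed binary search under that monotonicity assumption, again with hypothetical names. Each bucket holds every permutation of one size, and a bucket "passes" if any permutation in it meets the threshold; the buckets then form a fail/.../pass sequence and we look for the first passing one:

```cpp
#include <cstddef>
#include <optional>
#include <vector>

struct Permutation { /* bit rates for rotation/translation/scale */ };
double measure_error(const Permutation& permutation);  // hypothetical, provided by the compressor

// One bucket per permutation size, buckets sorted by increasing size.
using Bucket = std::vector<Permutation>;

// Returns the lowest-error permutation in the bucket if any meets the threshold.
std::optional<Permutation> best_in_bucket(const Bucket& bucket, double error_threshold)
{
	std::optional<Permutation> best;
	double best_error = error_threshold;

	for (const Permutation& permutation : bucket)
	{
		const double error = measure_error(permutation);
		if (error <= best_error)
		{
			best = permutation;
			best_error = error;
		}
	}

	return best;
}

// Binary search over the buckets: assuming the error decreases monotonically
// with size, we search for the smallest size that still meets the threshold.
std::optional<Permutation> find_best_permutation_binary(
	const std::vector<Bucket>& buckets_sorted_by_size,
	double error_threshold)
{
	std::optional<Permutation> best;
	std::size_t lo = 0;
	std::size_t hi = buckets_sorted_by_size.size();

	while (lo < hi)
	{
		const std::size_t mid = lo + (hi - lo) / 2;
		if (auto candidate = best_in_bucket(buckets_sorted_by_size[mid], error_threshold))
		{
			best = candidate;  // this size passes, try smaller sizes
			hi = mid;
		}
		else
		{
			lo = mid + 1;      // this size fails, we need more bits
		}
	}

	return best;
}
```

This visits only O(log N) buckets instead of every bucket up to the answer, which is where the speedup would come from.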
As it turns out, once the error tapers off near its lowest value, adding more bits can sometimes cause the error to rise slightly due to floating point rounding/noise. As such, our data is nearly, but not quite, sorted.
Perhaps we can still leverage this in some other way. The error typically tapers off once extra bits add little accuracy, which is when the noise creeps in. For most joints, we would stop iterating and skip those permutations long before we reach them; this is only an issue when the error tapers off above our threshold.
We could perhaps obtain a decent guess of the error by starting in the middle and binary searching to find our best permutation guess. It might not be the true best due to the issue above, but once we have this permutation size and its error, we can perform the exhaustive search more quickly by discarding most permutations at the first sample: permutations that use fewer bits will generally yield a higher error. We could also cache which permutations we tested and their errors to avoid repeating work during the exhaustive search, as sketched below.
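A sketch of that refinement, assuming the overall error is the maximum error across samples (so a single sample already gives a lower bound). `permutation_id`, `measure_error_single_sample`, and `measure_error` are hypothetical placeholders:

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

struct Permutation { /* bit rates for rotation/translation/scale */ };
uint32_t permutation_id(const Permutation& permutation);             // hypothetical unique key
double measure_error_single_sample(const Permutation& permutation);  // error at the first sample only
double measure_error(const Permutation& permutation);                // full error over every sample

// Exhaustive pass seeded with the binary search guess: only permutations smaller
// than the guess need to be checked, and any permutation whose error at the first
// sample already exceeds the threshold can be rejected without evaluating the
// remaining samples. Errors measured during the binary search are cached so they
// are not measured twice.
std::optional<Permutation> refine_exhaustively(
	const std::vector<Permutation>& permutations_smaller_than_guess,  // sorted by increasing size
	double error_threshold,
	std::unordered_map<uint32_t, double>& error_cache)
{
	for (const Permutation& permutation : permutations_smaller_than_guess)
	{
		const uint32_t id = permutation_id(permutation);

		double error;
		if (auto it = error_cache.find(id); it != error_cache.end())
		{
			error = it->second;  // already measured during the binary search
		}
		else
		{
			// Cheap early-out: the full (max) error can only be as bad or worse
			// than the error at a single sample.
			if (measure_error_single_sample(permutation) > error_threshold)
				continue;

			error = measure_error(permutation);
			error_cache.emplace(id, error);
		}

		if (error <= error_threshold)
			return permutation;  // a smaller size also meets the threshold
	}

	return std::nullopt;  // the binary search guess was already the smallest
}
```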