
Showing that random-fu is x4 faster than mwc and thus monad-bayes #321

Closed

Conversation

@idontgetoutmuch
Member

ghc samplePerformance.hs 
[1 of 1] Compiling Main             ( samplePerformance.hs, samplePerformance.o )
Linking samplePerformance ...

./samplePerformance 
benchmarking dists/StdGen/stdNormal/single sample
time                 45.20 ns   (45.16 ns .. 45.25 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 45.28 ns   (45.22 ns .. 45.41 ns)
std dev              304.5 ps   (173.5 ps .. 545.5 ps)

benchmarking dists/StdGen/stdNormal/single sample MWC
time                 157.0 ns   (156.9 ns .. 157.2 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 157.0 ns   (156.9 ns .. 157.3 ns)
std dev              600.7 ps   (338.0 ps .. 990.2 ps)

benchmarking dists/StdGen/stdNormal/single sample monad bayes
time                 157.2 ns   (156.9 ns .. 157.6 ns)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 158.5 ns   (157.1 ns .. 163.6 ns)
std dev              8.239 ns   (1.448 ns .. 17.33 ns)
variance introduced by outliers: 71% (severely inflated)
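
(For context: the harness behind these numbers is not shown in the thread, but a minimal criterion sketch along the same lines could look like the following. The module layout, the StdGen-backed IOGenM for random-fu, and monad-bayes' sampleWith are assumptions on my part, not taken from samplePerformance.hs.)

-- Sketch of a criterion harness producing benchmarks like the above.
-- Assumes random-fu >= 0.3 (StatefulGen-based runRVar), mwc-random >= 0.15,
-- and monad-bayes >= 1.1; the real samplePerformance.hs may differ.
import Criterion.Main
import qualified Control.Monad.Bayes.Class as MB
import qualified Control.Monad.Bayes.Sampler.Strict as MB  -- sampleWith is assumed here
import qualified Data.Random as RF
import System.Random (mkStdGen)
import System.Random.Stateful (newIOGenM)
import qualified System.Random.MWC as MWC
import qualified System.Random.MWC.Distributions as MWCD

main :: IO ()
main = do
  stdGen <- newIOGenM (mkStdGen 42)  -- splitmix-backed StdGen, sampled via random-fu
  mwcGen <- MWC.createSystemRandom   -- MWC generator, used by mwc-random and monad-bayes
  defaultMain
    [ bgroup "dists/StdGen/stdNormal"
        [ bench "single sample" $
            nfIO (RF.runRVar RF.stdNormal stdGen :: IO Double)
        , bench "single sample MWC" $
            nfIO (MWCD.standard mwcGen :: IO Double)
        , bench "single sample monad bayes" $
            nfIO (MB.sampleWith (MB.normal 0 1) mwcGen :: IO Double)
        ]
    ]

Compiled with -O2, a harness of this shape should reproduce the relative gap above, even if the absolute timings differ.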

@idontgetoutmuch
Member Author

We may be comparing apples and oranges here; one thing I plan to try is replacing mwc-random with random-fu.

@idontgetoutmuch
Member Author

I'd also like to benchmark sampling in Haskell against other languages. But is a PR here the best place to do these experiments?

@turion
Collaborator

turion commented Oct 13, 2023

I'd also like to benchmark sampling in Haskell against other languages. But is a PR here the best place to do these experiments?

Yes, I guess it is, but in a separate PR (and a separate benchmarking suite).

# Make sure you have direnv >= 2.30
use flake --extra-experimental-features nix-command --extra-experimental-features flakes
Collaborator
Is that change intentional?

Comment on lines -1 to -14
(
  import
    (
      let
        lock = builtins.fromJSON (builtins.readFile ./flake.lock);
      in
        fetchTarball {
          url = "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
          sha256 = lock.nodes.flake-compat.locked.narHash;
        }
    )
    {src = ./.;}
)
.defaultNix
Collaborator
I guess this was also not intentional?

@idontgetoutmuch
Member Author

idontgetoutmuch commented Oct 13, 2023

We may be comparing apples and oranges here; one thing I plan to try is replacing mwc-random with random-fu.

@turion I just tried

newtype SamplerU g m a = SamplerU {runSamplerU :: ReaderT g m a} deriving (Functor, Applicative, Monad, MonadIO)

instance (StatefulGen g m, MonadReader g m) => MonadDistribution (SamplerU g m) where
  random = SamplerU undefined

  uniform a b = SamplerU undefined
  normal m s = undefined -- SamplerU (ReaderT $ (RF.sample (RF.normal m s)))
  gamma shape scale = SamplerU undefined
  beta a b = SamplerU undefined

  bernoulli p = undefined
  categorical ps = undefined
  geometric p = undefined

but I get type errors (if I uncomment the definition for normal). I have to give up for the weekend as we have visitors, but if you have some time, maybe you could take a butcher's.

EDIT: But this seems to work (at least it type checks):

  normal m s = SamplerU (ReaderT $ RF.runRVar $ RF.normal m s)
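
(A fuller version of that instance might look like the sketch below. It assumes random-fu >= 0.3, where runRVar accepts any StatefulGen, and relies on random being the only MonadDistribution member without a default; RF.stdUniform and RF.uniform are assumed exports from Data.Random, not code from this PR.)

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.Bayes.Class (MonadDistribution (..))
import Control.Monad.IO.Class (MonadIO)
import Control.Monad.Trans.Reader (ReaderT (..))
import qualified Data.Random as RF
import System.Random (mkStdGen)
import System.Random.Stateful (StatefulGen, newIOGenM)

-- A sampler that reads a StatefulGen from the environment and draws from
-- random-fu's RVar distributions via runRVar.
newtype SamplerU g m a = SamplerU {runSamplerU :: ReaderT g m a}
  deriving (Functor, Applicative, Monad, MonadIO)

instance StatefulGen g m => MonadDistribution (SamplerU g m) where
  -- 'random' is the only member without a default implementation.
  random = SamplerU (ReaderT $ RF.runRVar RF.stdUniform)
  uniform a b = SamplerU (ReaderT $ RF.runRVar $ RF.uniform a b)
  normal m s = SamplerU (ReaderT $ RF.runRVar $ RF.normal m s)
  -- gamma, beta, bernoulli, categorical, geometric, ... can be left to their
  -- defaults (built on 'random') or filled in with the same runRVar pattern.

-- Tiny usage example: draw one normal sample with a StdGen-backed generator.
main :: IO ()
main = do
  g <- newIOGenM (mkStdGen 42)
  x <- runReaderT (runSamplerU (normal 0 1)) g
  print x

With something like this in place, SamplerU could be benchmarked against the existing MWC-based sampler to see whether the 4x gap above carries over into monad-bayes itself.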

@idontgetoutmuch
Member Author

Replaced by #323
