
Weight Initialization, GNum class and NFData instances #69

Open

wants to merge 206 commits into base: master

Changes from 167 commits

Commits (206 total)
8a3e10d
Num instance for Gradients
schnecki Feb 14, 2018
344ef71
NMult class for scalar multiplication and some instances
schnecki Feb 14, 2018
e1bf57c
removed fromInteger/fromRational implementations as they make no sense
schnecki Feb 15, 2018
55aff8d
reexporting of sinusoid
schnecki Feb 15, 2018
bab5598
started BatchNorm layer
schnecki Feb 15, 2018
c6a7e0e
update BatchNorm
schnecki Feb 16, 2018
0d2eaca
Num and Fractional instance of Softmax
schnecki Feb 27, 2018
7c04133
Gradients fromInteger implementation
schnecki Feb 28, 2018
80be420
FullyConnected uniform initialization by nr of input nodes
schnecki Feb 28, 2018
3745a78
FullyConnected uniform initialization by nr of input nodes
schnecki Feb 28, 2018
55f6386
GNum class for calculating with networks and gradients
schnecki Mar 27, 2018
13b1529
added gFromRational
schnecki Mar 28, 2018
6c9242b
updated comments
schnecki Mar 28, 2018
92cfb18
Merge branch 'master' of https://github.com/schnecki/grenade
schnecki Mar 28, 2018
6c95f4f
added GNum inst. for Trivial,removed unnecessary code for Network
schnecki Mar 28, 2018
30b9424
typo
schnecki Mar 28, 2018
507baf0
rnf (Control.DeepSeq.NFData) instances for Layers, Network and Gradients
schnecki Apr 22, 2018
4bed0e1
weight init test app
schnecki Jun 5, 2018
a7912ac
weight initialization implemented
schnecki Jun 15, 2018
48c2b55
weight initialization implemented (comment update)
schnecki Jun 15, 2018
6015d37
moved to MWC random
schnecki Jun 18, 2018
aeff5fc
added missing GNum instances
schnecki Jun 18, 2018
07cb638
Merge branch 'master' into master
schnecki Jun 18, 2018
ad4bd69
removed unnecessary constraint in createRandom
schnecki Jun 27, 2018
bf211f3
NFData instance for Tapes
schnecki Nov 29, 2018
fa70b4b
Learning Parameters NFData instance
schnecki Dec 1, 2018
8c8b444
natVal imports and no max versions in cabal file
schnecki Jan 7, 2019
fee9cab
serialize instance for LearningParameters
schnecki Jun 17, 2019
c86360a
version for bytestring
schnecki Jun 21, 2019
47a2918
removed versions for grenade-examples
schnecki Jun 21, 2019
af5e31a
removed upper version numbers
schnecki Jun 25, 2019
09cc81e
Merge branch 'master' of https://github.com/schnecki/grenade
schnecki Jun 25, 2019
d3c8955
Merge branch 'master' of github.com:HuwCampbell/grenade into HEAD
schnecki Oct 12, 2019
e9da6f9
safe type of network using typeable (WIP)
schnecki Nov 27, 2019
7e90b0c
travis: Use ghc 8.6.5
erikd Jan 18, 2020
045640f
grenade.cabal: Add upper bound for typelits-witnesses
erikd Jan 18, 2020
2b5dc83
Bump upper bounds of dependencies
erikd Jan 18, 2020
3126857
Update .gitignore
erikd Jan 19, 2020
bf6ddad
Test.Grenade.Recurrent.Layers.LSTM: Fix compiler warning
erikd Jan 19, 2020
fed6eb2
Drop support for GHC < 8.0
erikd Jan 19, 2020
b70f6a8
Grenade.Core.Shape: Fix for ghc 8.8
erikd Jan 18, 2020
6d0b8c3
travis: Add ghc 8.8.2 and switch to cabal-3.0
erikd Jan 22, 2020
c793c75
Fix compiler warning
erikd Jan 23, 2020
14ec0de
examples: Use CPP to support multiple GHC versions
erikd Jan 26, 2020
a161f2d
Add stack.yaml
Nolrai Mar 29, 2020
8eaf943
Num instance for Gradients
schnecki Feb 14, 2018
9317354
NMult class for scalar multiplication and some instances
schnecki Feb 14, 2018
dce10f4
removed fromInteger/fromRational implementations as they make no sense
schnecki Feb 15, 2018
be9dfa3
reexporting of sinusoid
schnecki Feb 15, 2018
d7ff47c
started BatchNorm layer
schnecki Feb 15, 2018
65df503
update BatchNorm
schnecki Feb 16, 2018
724051e
Num and Fractional instance of Softmax
schnecki Feb 27, 2018
2d83c65
Gradients fromInteger implementation
schnecki Feb 28, 2018
dbca695
FullyConnected uniform initialization by nr of input nodes
schnecki Feb 28, 2018
88526b3
FullyConnected uniform initialization by nr of input nodes
schnecki Feb 28, 2018
e09d09d
GNum class for calculating with networks and gradients
schnecki Mar 27, 2018
3ff58b1
updated comments
schnecki Mar 28, 2018
738d3f7
added gFromRational
schnecki Mar 28, 2018
4a47df9
added GNum inst. for Trivial,removed unnecessary code for Network
schnecki Mar 28, 2018
0f9f316
typo
schnecki Mar 28, 2018
39b2c7a
rnf (Control.DeepSeq.NFData) instances for Layers, Network and Gradients
schnecki Apr 22, 2018
9747bc5
weight init test app
schnecki Jun 5, 2018
f21214d
weight initialization implemented
schnecki Jun 15, 2018
db16e28
weight initialization implemented (comment update)
schnecki Jun 15, 2018
a3272b8
moved to MWC random
schnecki Jun 18, 2018
3c5b169
added missing GNum instances
schnecki Jun 18, 2018
b47b9e1
removed unnecessary constraint in createRandom
schnecki Jun 27, 2018
b5187d9
NFData instance for Tapes
schnecki Nov 29, 2018
94d0359
Learning Parameters NFData instance
schnecki Dec 1, 2018
030b52e
natVal imports and no max versions in cabal file
schnecki Jan 7, 2019
c7d647d
serialize instance for LearningParameters
schnecki Jun 17, 2019
0767aa9
version for bytestring
schnecki Jun 21, 2019
3066581
safe type of network using typeable (WIP)
schnecki Nov 27, 2019
1d5e03a
improved readability
schnecki Apr 10, 2020
40d2fa6
rebase to latest grenade version
schnecki Apr 10, 2020
8bf8423
Merge branch 'master' of https://github.com/schnecki/grenade
schnecki Apr 10, 2020
87b7baa
renamed weight init example
schnecki Apr 10, 2020
9261d2d
removed scaling of Concat layers in GNum instance
schnecki Apr 10, 2020
4ef3678
Fixed NoStarIsType by using it in the package.yaml
schnecki Apr 10, 2020
406d165
removed GHC.Natural dependency
schnecki Apr 10, 2020
2a7a23f
using latest singleton and GHC.TypeLits.Witnesses libraries
schnecki Apr 10, 2020
51388e9
Dynamic Layer Module
schnecki Apr 13, 2020
2596987
changed travis for GHC >= 8.6
schnecki Apr 13, 2020
864585e
stack travis fiel
schnecki Apr 13, 2020
244cf63
update travis and readme
schnecki Apr 13, 2020
1cde620
travis added blas and lapack
schnecki Apr 13, 2020
e86db36
ToDynamicLayer, FromDynamicLayer implementations for (De-)convolution…
schnecki Apr 13, 2020
363b72d
Elu instance
schnecki Apr 13, 2020
da7286a
To/FromDynamicLayer activation functions
schnecki Apr 13, 2020
3f93d45
travis update
schnecki Apr 13, 2020
96b6c06
Dynamic layer generation and specification. Boot files for deserialis…
schnecki Apr 14, 2020
8995242
example for dynamic specifications
schnecki Apr 14, 2020
d016df8
removed unnecessary code
schnecki Apr 14, 2020
a4065f0
changed travis file
schnecki Apr 14, 2020
be8b24c
changed travis file
schnecki Apr 14, 2020
e94bdbb
code cleanup
schnecki Apr 14, 2020
4d3fc78
Remove use of RecordWildCards
erikd Apr 11, 2020
328913b
removed Werror from tests
schnecki Apr 14, 2020
87be036
fixed show instance of SpecNet and using typeRef for serialization
schnecki Apr 15, 2020
7cbd956
changed travis file to only use Werror on library
schnecki Apr 15, 2020
9b48da3
travis file update
schnecki Apr 15, 2020
327c791
travis file update
schnecki Apr 15, 2020
ce73bd2
travis file update
schnecki Apr 15, 2020
8d692de
New class to move momentum dependency in layers and introduced optimi…
schnecki Apr 15, 2020
759ec9f
fixed FullyConnected test
schnecki Apr 15, 2020
adbc113
added benchmark for feedforward
schnecki Apr 16, 2020
16cc03b
spec instance for reshape
schnecki Apr 16, 2020
ae09dbd
Adam implementation for feedforward ANNs
schnecki Apr 16, 2020
dfcb921
added some Strict Data Fields
schnecki Apr 16, 2020
aabd0e1
added default implementation for Adam
schnecki Apr 16, 2020
e6da085
Forcing in gan-mnist example
schnecki Apr 16, 2020
5712024
benchmarks for Adam optimizer
schnecki Apr 16, 2020
d6996aa
choose between Float and Double, more performant implementation for Adam
schnecki Apr 17, 2020
1dfdae8
typo
schnecki Apr 17, 2020
8d22035
set default to use Double precision
schnecki Apr 17, 2020
dc8765e
updated Readme, removed unnecessary commented code
schnecki Apr 17, 2020
3be6038
moved Werror to under debug flag
schnecki Apr 17, 2020
388d0c6
Serialize instance for Optimizer
schnecki Apr 17, 2020
f25e8eb
Serialize instance for Optimizer
schnecki Apr 17, 2020
6bd0ced
Serialize instance for Optimizer
schnecki Apr 17, 2020
dae0f07
Serialize instance for Optimizer
schnecki Apr 17, 2020
1bb49bd
Serialize instance for Optimizer
schnecki Apr 17, 2020
ffc1690
Serialize instance for Optimizer
schnecki Apr 17, 2020
14a2e2d
Serialize instance for Optimizer
schnecki Apr 17, 2020
37c9edc
Merge branch 'master' of github.com:schnecki/grenade
schnecki Apr 17, 2020
79f574c
singletons version
schnecki Apr 17, 2020
a2fa7da
serialize instance of opt for singletons < 2.6
schnecki Apr 17, 2020
3c7b607
NFData instance for Optimizer
schnecki Apr 17, 2020
542df00
travis update to ignore unused imports
schnecki Apr 17, 2020
e1d9c99
optimizer
schnecki Apr 17, 2020
e4cb167
optimizer concrete instances
schnecki Apr 17, 2020
0a0eecb
removed overlapping instances
schnecki Apr 17, 2020
c238495
removed unnecessary import
schnecki Apr 17, 2020
e9351e1
exporting instances
schnecki Apr 17, 2020
c865ac9
singleton instances added
schnecki Apr 17, 2020
28044dc
CPP directive for import
schnecki Apr 17, 2020
cded508
travis ubdate
schnecki Apr 17, 2020
692d2f9
Renamed user defined F (Double or Float) variable to RealNum
schnecki Apr 18, 2020
21dc228
changed travis to run with Werror on greande package only
schnecki Apr 18, 2020
f5d8078
set precision on prop_auto_diff to 4 after an error with travis
schnecki Apr 18, 2020
8fbd586
more constraints for SpecConcreteNetwork
schnecki Apr 18, 2020
d6bb0f8
more constraints for SpecConcreteNetwork
schnecki Apr 18, 2020
aaa617c
implemented dropout layer and fixed dimensions to 1 if not used
schnecki Apr 18, 2020
d7b6ab7
Dropout fix
schnecki Apr 19, 2020
b464018
fixed all GNum instances, adapted rnf for ListStore
schnecki Apr 20, 2020
d62515d
removed unnecessary import
schnecki Apr 20, 2020
5e9b24c
refactored DynamicNetwork in own subfolder
schnecki Apr 20, 2020
484ac7e
function to update network settings
schnecki Apr 20, 2020
21f9b7b
interface for building models
schnecki Apr 20, 2020
5c6eedb
fixed bug introduced in last commit
schnecki Apr 20, 2020
849319c
fixed reshape specification bug
schnecki Apr 21, 2020
82462cd
Typeable constraints for DynamicNetwork
schnecki Apr 22, 2020
1ee4947
Serializable gradients and NFData and Serialize for SpecNetwork
schnecki Apr 23, 2020
ed74a4a
added option to print/not print built network specification
schnecki Apr 23, 2020
d42edc3
ListStore updated rnf instances
schnecki Apr 25, 2020
86cc90c
implemented Gradient clipping
schnecki May 2, 2020
2fdaf4a
Weight Decay for Adam optimizer
schnecki May 2, 2020
e80ce7c
Weight Decay for Adam optimizer
schnecki May 2, 2020
c8e0bb9
added LeakyRelu activiation function
schnecki May 4, 2020
4ee5060
Added Gelu activation function
schnecki May 7, 2020
fc4d5c5
readme update
schnecki May 7, 2020
a4e84a0
fixed readme
schnecki May 7, 2020
4ed7225
fixed readme + weight initialization analysis report
schnecki May 9, 2020
f4ce7ef
added parallel strategies in Network.hs
schnecki May 18, 2020
486bb34
removed redundant constraints
schnecki May 18, 2020
1857786
Eq and Ord instances for Optimizer
schnecki Jun 2, 2020
ee06cdd
Createable network constraints
schnecki Jun 2, 2020
5bb80d4
using lts-16.20
schnecki Nov 6, 2020
3326d0b
update travis
schnecki Nov 6, 2020
215bdac
removed extra-deps
schnecki Nov 6, 2020
7617127
readded extra-deps
schnecki Nov 6, 2020
508492c
wip hablas/cblas
schnecki Nov 7, 2020
672d114
slicedRelu
schnecki Nov 9, 2020
82e7571
fast implementation for vector based shapes
schnecki Nov 10, 2020
4b02105
update for CBLAS implementation
schnecki Nov 11, 2020
ff0f362
direct calls to BLAS (not CBLAS)
schnecki Nov 12, 2020
09f3a46
updated travis settings
schnecki Nov 12, 2020
4bb325b
renamed CBLAS -> BLAS, travis Werror on latest only
schnecki Nov 12, 2020
8932b11
fixed test building problems
schnecki Nov 12, 2020
473f099
improvement of BLAS
schnecki Nov 12, 2020
0f9699e
inplace operations for all shapes
schnecki Nov 12, 2020
dfa149c
CUDA implementation of Adam optimizer
schnecki Nov 18, 2020
5524bac
using unsafeWith gves a speedup
schnecki Nov 18, 2020
17bc71b
update travis to install nvidia-cuda-toolkit
schnecki Nov 19, 2020
ece9327
eficient shape conversion functions
schnecki Nov 19, 2020
5f25319
travis update
schnecki Nov 19, 2020
de6f8d6
update travis cuda path
schnecki Nov 19, 2020
a644404
n2 implementation Num Shape.
schnecki Nov 19, 2020
ed86d91
wip new function: sumG
schnecki Nov 20, 2020
f7d6831
minor updates
schnecki Nov 20, 2020
ca697d2
fixed relu
schnecki Nov 30, 2020
3597fb0
minor updates
schnecki Dec 1, 2020
06c49ef
minor
schnecki Dec 9, 2020
9f73f44
updated serialise
schnecki Dec 10, 2020
a7f42fd
export of CUDA_HOME in .travis.yml
schnecki Dec 17, 2020
dad784f
switched to travis-ci.com
schnecki Dec 18, 2020
ca0e035
fixed bench.hs intialisation
schnecki Dec 30, 2020
a7f3ce4
leaky Tanh Layer
schnecki Jan 4, 2021
e477597
LeakyTanh layer for all shapes
schnecki Jan 5, 2021
22c47ca
bug fix specification LeakyTanh
schnecki Jan 5, 2021
940d999
sumVectors CPU function
schnecki Jan 9, 2021
7349e3c
added cbits files to package.yaml
schnecki Jan 11, 2021
a6ab424
removed zipWithVectorInPlaceSnd from GNum
schnecki Feb 10, 2021
b3e6d10
disable CUDA
schnecki Mar 15, 2021
027e9c1
update to new c2hs version (bug fix)
schnecki Oct 2, 2021
dd11474
GHC 9
schnecki Feb 16, 2023
16 changes: 15 additions & 1 deletion .gitignore
@@ -1,4 +1,18 @@
cabal.project.local
.cabal-sandbox/
cabal.sandbox.config
dist/

dist-newstyle/
.stack-work/
/.dir-locals.el
/TAGS
/stack.yaml.lock
grenade.cabal
/examples/TAGS
/.emacs.desktop.lock
/.emacs.desktop
/cbits/TAGS
/emacs.desktop
/examples/grenade-examples.cabal
/examples/main/mnist_test.csv
/examples/main/mnist_train.csv
103 changes: 82 additions & 21 deletions .travis.yml
@@ -1,31 +1,92 @@
# NB: don't set `language: haskell` here
# Adapted from https://github.com/commercialhaskell/stack
language: nix
sudo: false

# The following enables several GHC versions to be tested; often it's enough to test only against the last release in a major GHC version. Feel free to omit lines listing versions you don't need/want testing for.
env:
- CABALVER=1.24 GHCVER=8.0.2
- CABALVER=2.0 GHCVER=8.2.2
- CABALVER=2.0 GHCVER=8.4.4
- CABALVER=2.0 GHCVER=8.6.3
cache:
directories:
- $HOME/.ghc
- $HOME/.cabal
- $HOME/.stack
- $TRAVIS_BUILD_DIR/.stack-work

matrix:
fast_finish: true
include:
# Add build targets here
- env: BUILD=stack ARGS=""
compiler: ": #stack default"
addons: {apt: {packages: [ libblas3,liblapack3,liblapack-dev,libblas-dev,pkg-config]}}

- env: BUILD=stack ARGS="--resolver lts-15.8"
compiler: ": #stack 8.8.3"
addons: {apt: {packages: [ libblas3,liblapack3,libblas-dev,liblapack-dev,pkg-config]}}

- env: BUILD=stack ARGS="--resolver lts-15.3"
compiler: ": #stack 8.8.2"
addons: {apt: {packages: [ libblas3,liblapack3,libblas-dev,liblapack-dev,pkg-config]}}

- env: BUILD=stack ARGS="--resolver lts-14.27"
compiler: ": #stack 8.6.5"
addons: {apt: {packages: [ libblas3,liblapack3,libblas-dev,liblapack-dev,pkg-config]}}

- env: BUILD=stack ARGS="--resolver nightly"
compiler: ": #stack nightly"
addons: {apt: {packages: [ libblas3,liblapack3,libblas-dev,liblapack-dev,pkg-config]}}

allow_failures:
- env: BUILD=stack ARGS="--resolver nightly"
- env: BUILD=stack ARGS="--resolver lts-14.27"

# Note: the distinction between `before_install` and `install` is not important.
before_install:
- travis_retry sudo add-apt-repository -y ppa:hvr/ghc
- travis_retry sudo apt-get update
- travis_retry sudo apt-get install cabal-install-$CABALVER ghc-$GHCVER happy-1.19.5 alex-3.1.7 libblas-dev liblapack-dev
- export PATH=/opt/alex/3.1.7/bin:/opt/happy/1.19.5/bin:/opt/ghc/$GHCVER/bin:/opt/cabal/$CABALVER/bin:$HOME/.cabal/bin:$PATH
# Using compiler above sets CC to an invalid value, so unset it
- unset CC

# We want to always allow newer versions of packages when building on GHC HEAD
- CABALARGS=""
- if [ "x$GHCVER" = "xhead" ]; then CABALARGS=--allow-newer; fi

# Download and unpack the stack executable
- export PATH=/opt/ghc/$GHCVER/bin:/opt/cabal/$CABALVER/bin:$HOME/.local/bin:/opt/alex/$ALEXVER/bin:/opt/happy/$HAPPYVER/bin:$HOME/.cabal/bin:$PATH
- mkdir -p ~/.local/bin
- |
if [ `uname` = "Darwin" ]
then
travis_retry curl --insecure -L https://get.haskellstack.org/stable/osx-x86_64.tar.gz | tar xz --strip-components=1 --include '*/stack' -C ~/.local/bin
else
travis_retry curl -L https://get.haskellstack.org/stable/linux-x86_64.tar.gz | tar xz --wildcards --strip-components=1 -C ~/.local/bin '*/stack'
fi

# Use the more reliable S3 mirror of Hackage
mkdir -p $HOME/.cabal
echo 'remote-repo: hackage.haskell.org:http://hackage.fpcomplete.com/' > $HOME/.cabal/config
echo 'remote-repo-cache: $HOME/.cabal/packages' >> $HOME/.cabal/config

# Install blas and lapack
# - travis_retry sudo apt-get update
# - travis_retry sudo apt-get install libblas-dev liblapack-dev

install:
- cabal --version
- echo "$(ghc --version) [$(ghc --print-project-git-commit-id 2> /dev/null || echo '?')]"
- travis_retry cabal update
- cabal sandbox init
- cabal install --enable-tests --enable-benchmarks
- if [ -f configure.ac ]; then autoreconf -i; fi
- |
stack --no-terminal --install-ghc $ARGS test --bench --only-dependencies


notifications:
email: false
# script:
# - |
# PKG_CONFIG_PATH="${VIRTUAL_ENV}/lib/pkgconfig:${PKG_CONFIG_PATH}"
# echo $PKG_CONFIG_PATH
# export PKG_CONFIG_PATH
# stack --no-terminal $ARGS test --bench --no-run-benchmarks --haddock --no-haddock-deps

# Here starts the actual work to be performed for the package under test; any command which exits with a non-zero exit code causes the build to fail.
script:
- cabal build
- cabal test --show-details=streaming
- echo "$(ghc --version) [$(ghc --print-project-git-commit-id 2> /dev/null || echo '?')]"
- |
set -ex
# Run tests --ghc-options=-Werror
# Werror on grenade only!
stack build grenade --no-terminal $ARGS --ghc-options=-Werror
stack test --no-terminal $ARGS --ghc-options=-Wno-unused-imports
stack test --no-terminal $ARGS --ghc-options=-Wno-unused-imports
set +ex

87 changes: 78 additions & 9 deletions README.md
@@ -1,10 +1,80 @@
Grenade
=======

[![Build Status](https://api.travis-ci.org/HuwCampbell/grenade.svg?branch=master)](https://travis-ci.org/HuwCampbell/grenade)
[![Hackage page (downloads and API reference)][hackage-png]][hackage]
[![Hackage-Deps][hackage-deps-png]][hackage-deps]

[![Build Status](https://api.travis-ci.org/schnecki/grenade.svg?branch=master)](https://travis-ci.org/schnecki/grenade)
<!-- [![Hackage page (downloads and API reference)][hackage-png]][hackage] -->
<!-- [![Hackage-Deps][hackage-deps-png]][hackage-deps] -->

This is a fork of the original Grenade library found at https://github.com/HuwCampbell/grenade,
but includes additional features:

1. **Optimizer Support**. The code has been restructured so that optimizers other than SGD with
momentum and regularization can easily be added. Currently, *Adam* is also supported for
feedforward neural networks.
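
   The update rule behind Adam can be sketched in a few lines. This is an illustrative,
   self-contained sketch of the standard Adam step on a single scalar weight, not the library's
   actual implementation (the names `adamStep` and `Moments` are made up here, and the usual
   default hyper-parameters are assumed):

```haskell
-- First and second moment estimates carried between steps.
data Moments = Moments Double Double

-- One Adam step: time step t (starting at 1), current weight w,
-- running moments, and the gradient g observed in this step.
adamStep :: Int -> Double -> Moments -> Double -> (Double, Moments)
adamStep t w (Moments m0 v0) g = (w', Moments m1 v1)
  where
    (alpha, beta1, beta2, eps) = (0.001, 0.9, 0.999, 1e-8)
    m1   = beta1 * m0 + (1 - beta1) * g          -- first-moment (mean) estimate
    v1   = beta2 * v0 + (1 - beta2) * g * g      -- second-moment estimate
    mHat = m1 / (1 - beta1 ^ t)                  -- bias correction
    vHat = v1 / (1 - beta2 ^ t)
    w'   = w - alpha * mHat / (sqrt vHat + eps)  -- parameter update
```
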

2. **Weight Initialization**. The weights can be initialized in different ways. Currently
implemented: Uniform, HeEtAl and Xavier; the default is Uniform. See chapter 6 of this [seminar
report](docs/seminar_report_ANN_analysis_2018.pdf "Seminar Report ANN Analysis") for a small
evaluation of the implemented weight-initialization methods.
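
   For reference, these methods usually denote symmetric uniform distributions whose bounds
   depend on a layer's fan-in/fan-out. The sketch below gives the textbook constants; whether
   the fork uses exactly these is evaluated in the linked report, so treat them as illustrative:

```haskell
-- Half-width b of the symmetric interval [-b, b] each method samples
-- weights from, given the number of input (and output) nodes of a layer.
uniformBound, xavierBound, heEtAlBound :: Int -> Int -> Double
uniformBound nIn _    = 1 / sqrt (fromIntegral nIn)           -- scale by input nodes
xavierBound  nIn nOut = sqrt (6 / fromIntegral (nIn + nOut))  -- Glorot & Bengio
heEtAlBound  nIn _    = sqrt (6 / fromIntegral nIn)           -- He et al.
```
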

3. **Data Type Representation**: You can easily switch from `Double` to `Float` vectors and
matrices. Just provide the corresponding flag (`use-float`) to all affected packages when
compiling:

stack clean && stack build --flag=grenade-examples:use-float --flag=grenade:use-float && stack bench

Ensure you clean before changing the flags, as otherwise you might in the best case get a
compile error and in the worst case a Segmentation Fault!

Clearly, `Float` is less precise but more efficient in terms of both *time and memory*. For
small ANNs `Float` should be sufficient, as long as you keep the weight values small (which
you should always do). This feature relies on an [adapted
version](http://github.com/schnecki/hmatrix-float "github repository") of
[hmatrix](https://hackage.haskell.org/package/hmatrix-0.20.0.0 "stackage") that was tailored
specifically to this project.

4. **Runtime Networks**. Networks can be specified and built dynamically at runtime. This is not
only required when storing a network architecture to disk (e.g. in a DB) and reloading it, but
could also be a starting point for algorithms that adapt the network architecture to the
underlying problem. With this feature you can deserialize a network specification without
knowing its structure and then feed the deserialized weights into the net.

However, currently this works only for feedforward networks composed of fully-connected, dropout,
deconvolution and convolution layers plus all activation functions. Example (also see
`feedforward-netinit` in the examples folder):
```haskell
let spec :: SpecNet
spec = specFullyConnected 40 30 |=> specRelu1D 30 |=> specFullyConnected 30 20 |=> specNil1D 20
SpecConcreteNetwork1D1D (net0 :: Network layers shapes) <- networkFromSpecificationWith HeEtAl spec
```

However, beware! It is important to get the specification right, as otherwise the program will halt
abruptly. It is therefore best not to write specifications by hand, but to generate them with functions!

Or, probably better, use the simple builder interface:

```haskell
buildNetViaInterface :: IO SpecConcreteNetwork
buildNetViaInterface =
buildModel $
inputLayer1D 2 >>
fullyConnected 10 >> dropout 0.89 >> relu >>
fullyConnected 4 >> relu >>
networkLayer (
inputLayer1D 4 >> fullyConnected 10 >> relu >> fullyConnected 4 >> sinusoid) >>
fullyConnected 1 >> tanhLayer
```
5. **Gradient Clipping**. You can clip gradients using the function `clipByGlobalNorm`.
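
   The idea behind `clipByGlobalNorm` can be sketched on a plain list of gradient values. This
   illustrates the technique only; it is not the library's implementation, which operates on
   whole gradient structures rather than lists:

```haskell
-- Scale all gradients down uniformly so that their global L2 norm does
-- not exceed maxNorm; gradients below the threshold pass through unchanged.
clipByGlobalNorm' :: Double -> [Double] -> [Double]
clipByGlobalNorm' maxNorm gs
  | norm <= maxNorm || norm == 0 = gs
  | otherwise                    = map (* (maxNorm / norm)) gs
  where
    norm = sqrt (sum (map (^ (2 :: Int)) gs))
```
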

6. **More Activation Functions**. This branch supports `Dropout` (which is unimplemented in the
original code), `LeakyRelu` and `Gelu` activation functions.
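
Point-wise, the two new activation functions can be written as below. The Gelu here uses the
common tanh approximation from Hendrycks & Gimpel; the library's exact formula may differ, so
this is a sketch of the functions rather than the fork's code:

```haskell
-- Leaky ReLU with slope alpha on the negative side.
leakyRelu :: Double -> Double -> Double
leakyRelu alpha x = if x >= 0 then x else alpha * x

-- Gaussian Error Linear Unit, tanh approximation.
gelu :: Double -> Double
gelu x = 0.5 * x * (1 + tanh (sqrt (2 / pi) * (x + 0.044715 * x ** 3)))
```
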


Apart from the installation procedure, the following is the original description of Grenade:

Description
===========

```
First shalt thou take out the Holy Pin, then shalt thou count to three, no more, no less.
@@ -144,22 +214,21 @@ elu, tanh, and fully connected.

Build Instructions
------------------
Grenade is most easily built with the [mafia](https://github.com/ambiata/mafia)
script that is located in the repository. You will also need the `lapack` and
This version of Grenade is most easily built with [stack](https://docs.haskellstack.org). You
will also need the `lapack` and `blas` libraries and development tools. Once you have all that,
Grenade can be built using:

```
./mafia build
stack build
```

and the tests run using:

```
./mafia test
stack test
```

Grenade builds with ghc 7.10, 8.0, 8.2 and 8.4.
This version of Grenade builds with GHC 8.8.

Thanks
------
1 change: 0 additions & 1 deletion _config.yml

This file was deleted.

12 changes: 6 additions & 6 deletions bench/bench-lstm.hs
@@ -1,7 +1,7 @@
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE ScopedTypeVariables #-}
import Criterion.Main
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE ScopedTypeVariables #-}
import Criterion.Main

import Grenade
import Grenade.Recurrent
@@ -26,5 +26,5 @@ type R = Recurrent
type RecNet = RecurrentNetwork '[ R (LSTM 40 512), R (LSTM 512 40) ]
'[ 'D1 40, 'D1 512, 'D1 40 ]

lp :: LearningParameters
lp = LearningParameters 0.1 0 0
lp :: Optimizer 'SGD
lp = defOptimizer