-------------------------------------------------------------------------------
Matlab Environment for Deep Architecture Learning (MEDAL) - version 0.1
-------------------------------------------------------------------------------
Model Objects:
  mlnn.m  -- Multi-layer neural network
  mlcnn.m -- Multi-layer convolutional neural network
  rbm.m   -- Restricted Boltzmann machine (RBM)
  mcrbm.m -- Mean-covariance (3-way factored) RBM
  drbm.m  -- Dynamic/conditional RBM
  dbn.m   -- Deep belief network
  crbm.m  -- Convolutional RBM
  ae.m    -- Shallow autoencoder
  dae.m   -- Deep autoencoder
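The RBM-family objects above are trained with contrastive divergence (Hinton, 2002; see References below). Here is a minimal CD-1 weight update for a binary RBM, sketched in plain Python rather than the toolbox's MATLAB -- all names and dimensions are illustrative, not MEDAL's actual API (the real implementation lives in rbm.m):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    # Draw a binary unit that is on with probability p.
    return 1.0 if random.random() < p else 0.0

def cd1_update(W, a, b, v0, lr=0.1):
    """One CD-1 step for a binary RBM.

    W: n_visible x n_hidden weights; a, b: visible/hidden biases;
    v0: one binary training vector. Returns reconstruction error.
    """
    nv, nh = len(a), len(b)
    # Positive phase: p(h=1 | v0) and a sample from it.
    ph0 = [sigmoid(b[j] + sum(v0[i] * W[i][j] for i in range(nv)))
           for j in range(nh)]
    h0 = [sample(p) for p in ph0]
    # Negative phase: reconstruct v, then recompute p(h=1 | v1).
    pv1 = [sigmoid(a[i] + sum(h0[j] * W[i][j] for j in range(nh)))
           for i in range(nv)]
    v1 = [sample(p) for p in pv1]
    ph1 = [sigmoid(b[j] + sum(v1[i] * W[i][j] for i in range(nv)))
           for j in range(nh)]
    # Gradient ~ <v h>_data - <v h>_model, one-step approximation.
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    for i in range(nv):
        a[i] += lr * (v0[i] - v1[i])
    for j in range(nh):
        b[j] += lr * (ph0[j] - ph1[j])
    return sum((v0[i] - pv1[i]) ** 2 for i in range(nv))

# Tiny demo: 4 visible / 2 hidden units, one repeated training pattern.
W = [[random.gauss(0.0, 0.1) for _ in range(2)] for _ in range(4)]
a, b = [0.0] * 4, [0.0] * 2
for epoch in range(200):
    err = cd1_update(W, a, b, [1, 1, 0, 0])
print("final reconstruction error: %.3f" % err)
```

The same positive-phase / negative-phase structure underlies the conditional, mean-covariance, and convolutional RBM variants listed above; they differ in how the hidden activations are computed.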
-------------------------------------------------------------------------------
To begin, type:
  >> startLearning
in the medal directory.
To get an idea of how the model objects work, check out the demo script:
  >> deepLearningExamples('all')
These examples are by no means optimized; they are for getting familiar with
the code. If you have any questions or bug reports, send them my way.
-------------------------------------------------------------------------------
References:
*Neural networks / backpropagation:
Rumelhart, D. et al. "Learning representations by back-propagating errors."
Nature 323 (6088): 533-536. 1986.
*Restricted Boltzmann machines / contrastive divergence:
Hinton, G. E. "Training Products of Experts by Minimizing Contrastive
Divergence." Neural Computation 14 (8): 1771-1800. 2002.
*Deep belief networks:
Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H. "Greedy Layer-Wise
Training of Deep Networks." NIPS 2006.
*Deep & denoising autoencoders:
Hinton, G. E. and Salakhutdinov, R. R. "Reducing the dimensionality of data
with neural networks." Science 313 (5786): 504-507. 2006.
Vincent, P. et al. "Stacked denoising autoencoders: Learning useful
representations in a deep network with a local denoising criterion." Journal
of Machine Learning Research 11: 3371-3408. 2010.
*Mean-covariance / 3-way factored RBMs:
Ranzato, M. et al. "Modeling Pixel Means and Covariances Using Factorized
Third-Order Boltzmann Machines." CVPR 2010.
*Dynamic/conditional RBMs:
Taylor, G. et al. "Modeling Human Motion Using Binary Latent Variables."
NIPS 2006.
*Convolutional neural networks:
LeCun, Y. et al. "Gradient-based learning applied to document recognition."
Proceedings of the IEEE 86 (11): 2278-2324. 1998.
Krizhevsky, A. et al. "ImageNet Classification with Deep Convolutional
Neural Networks." NIPS 2012.
*Convolutional RBMs:
Lee, H. et al. "Convolutional deep belief networks for scalable unsupervised
learning of hierarchical representations." ICML 2009.
*Rectified linear units:
Nair, V. and Hinton, G. E. "Rectified Linear Units Improve Restricted
Boltzmann Machines." ICML 2010.
Glorot, X., Bordes, A. and Bengio, Y. "Deep sparse rectifier neural
networks." AISTATS 2011.
*Dropout regularization:
Hinton, G. E. et al. "Improving neural networks by preventing co-adaptation
of feature detectors." Technical Report, Univ. of Toronto, 2012.
*General:
Hinton, G. E. "A practical guide to training restricted Boltzmann machines."
Technical Report, Univ. of Toronto, 2010.
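The rectified-linear-unit and dropout papers cited above reduce to two small operations. A plain-Python sketch for readers new to them -- illustrative only, not MEDAL code; this uses the common "inverted dropout" scaling variant, which keeps expected activations unchanged so no rescaling is needed at test time:

```python
import random

random.seed(1)

def relu(x):
    # Rectified linear unit (Nair & Hinton, 2010): clamp negatives to zero.
    return [max(0.0, v) for v in x]

def dropout(x, p_drop=0.5, train=True):
    # Dropout (Hinton et al., 2012): zero each unit with probability p_drop
    # during training; survivors are scaled by 1/(1 - p_drop).
    if not train:
        return list(x)
    keep = 1.0 - p_drop
    return [v / keep if random.random() < keep else 0.0 for v in x]

h = relu([-1.0, 0.5, 2.0, -0.2])
print(h)            # -> [0.0, 0.5, 2.0, 0.0]
print(dropout(h))   # roughly half the units zeroed, survivors doubled
```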
-------------------------------------------------------------------------------