A 4th-year CSE Neural Networks Project.
AinShamsFlow (asf) is a Deep Learning Framework built by our team at Ain Shams University during December 2020 - January 2021.
The design and interface are heavily inspired by Keras and TensorFlow.
However, everything is implemented from scratch using only simple libraries such as NumPy and Matplotlib.
The design of all parts can be seen in the following UML diagram.
This is how the design structure is intended to work in our framework.
You can read more details in our documentation.
You can install the latest available version as follows:
$ pip install ainshamsflow
You can start using this project by importing the package as follows:
>>> import ainshamsflow as asf
Then you can start creating a model:
>>> model = asf.models.Sequential([
... asf.layers.Dense(300, activation="relu"),
... asf.layers.Dense(100, activation="relu"),
... asf.layers.Dense( 10, activation="softmax")
... ], input_shape=(784,), name="my_model")
>>> model.print_summary()
Then compile and train the model:
>>> model.compile(
... optimizer=asf.optimizers.SGD(lr=0.01),
... loss='sparsecategoricalcrossentropy',
... metrics=['accuracy']
... )
>>> history = model.fit(X_train, y_train, epochs=100)
Finally, you can evaluate, predict, and show training statistics:
>>> history.show()
>>> model.evaluate(X_valid, y_valid)
>>> y_pred = model.predict(X_test)
A more elaborate usage example can be found in main.py, or check out this demo notebook.
- Pierre Nabil
- Ahmed Taha
- Girgis Micheal
- Hazzem Mohammed
- Ibrahim Shoukry
- John Bahaa
- Michael Magdy
- Ziad Tarek
- A Data Module to read and process datasets.
    - Dataset
        - __init__()
        - __bool__()
        - __len__()
        - __iter__()
        - __next__()
        - apply()
        - numpy()
        - batch()
        - cardinality()
        - concatenate()
        - copy()
        - enumerate()
        - filter()
        - map()
        - range()
        - shuffle()
        - skip()
        - split()
        - take()
        - unbatch()
        - add_data()
        - add_targets()
        - normalize()
    - ImageDataGenerator
        - __init__()
        - flow_from_directory()
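As a rough illustration of how the data module above plugs into training, the following sketch wraps NumPy arrays in a Dataset and prepares batches. The module path, constructor arguments, and method chaining shown here are assumptions made for illustration; check the documentation for the actual interface.
>>> import numpy as np
>>> import ainshamsflow as asf
>>> X = np.random.rand(1000, 784)           # dummy features
>>> y = np.random.randint(0, 10, (1000,))   # dummy labels
>>> ds = asf.data.Dataset(x=X, y=y)         # assumed module path and constructor arguments
>>> ds = ds.normalize().shuffle().batch(32) # methods listed above; chaining is assumed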
- A NN Module to design different architectures.
    - Activation Functions
        - Linear
        - Sigmoid
        - Hard Sigmoid
        - Tanh
        - Hard Tanh
        - ReLU
        - LeakyReLU
        - ELU
        - SELU
        - Softmax
        - Softplus
        - Softsign
        - Swish
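To make the list above concrete, here is a minimal NumPy sketch of two of the listed activations, ReLU and Softmax, using their standard definitions (the math, not necessarily the framework's exact implementation):
>>> import numpy as np
>>> def relu(z):
...     return np.maximum(0, z)                             # element-wise max(0, z)
>>> def softmax(z):
...     e = np.exp(z - np.max(z, axis=-1, keepdims=True))   # shift by max for numerical stability
...     return e / np.sum(e, axis=-1, keepdims=True)        # normalize into probabilities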
    - Layers
        - DNN Layers:
            - Dense
            - BatchNorm
            - Dropout
        - CNN Layers 1D: (optional)
            - Conv
            - Pool (Avg and Max)
            - GlobalPool (Avg and Max)
            - Upsample
        - CNN Layers 2D:
            - Conv
            - Pool (Avg and Max)
            - FastConv
            - FastPool (Avg and Max)
            - GlobalPool (Avg and Max)
            - Upsample
        - CNN Layers 3D: (optional)
            - Conv
            - Pool (Avg and Max)
            - GlobalPool (Avg and Max)
            - Upsample
        - Other Extra Functionality:
            - Flatten
            - Activation
            - Reshape
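As a small sketch of how these layers slot into the Sequential interface shown earlier, the snippet below mixes Dense with the extra DNN layers; only Dense's signature appears in the usage example above, so the BatchNorm and Dropout constructor arguments here are assumptions:
>>> model = asf.models.Sequential([
...     asf.layers.Dense(128, activation="relu"),
...     asf.layers.BatchNorm(),                    # constructor arguments assumed
...     asf.layers.Dropout(0.3),                   # drop rate argument assumed
...     asf.layers.Dense(10, activation="softmax")
... ], input_shape=(784,), name="dnn_with_extras")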
    - Initializers
        - Constant
        - Uniform
        - Normal
        - Identity
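For reference, the listed initializers conventionally correspond to the following NumPy operations (a sketch of the standard definitions, not the framework's internal code):
>>> import numpy as np
>>> shape = (3, 3)
>>> constant = np.full(shape, 0.5)               # Constant: every weight set to one value
>>> uniform = np.random.uniform(-1, 1, shape)    # Uniform: weights drawn from a uniform range
>>> normal = np.random.normal(0, 1, shape)       # Normal: weights drawn from a Gaussian
>>> identity = np.eye(3)                         # Identity: identity matrix for square weights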
    - Losses
        - MSE (Mean Squared Error)
        - MAE (Mean Absolute Error)
        - MAPE (Mean Absolute Percentage Error)
        - BinaryCrossentropy
        - CategoricalCrossentropy
        - SparseCategoricalCrossentropy
        - HuberLoss
        - LogLossLinearActivation
        - LogLossSigmoidActivation
        - PerceptronCriterionLoss
        - SvmHingeLoss
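As a quick reference for a few of the losses above, here is a NumPy sketch of their standard definitions (again, the math rather than the framework's internal code):
>>> import numpy as np
>>> def mse(y_true, y_pred):
...     return np.mean((y_true - y_pred) ** 2)          # Mean Squared Error
>>> def mae(y_true, y_pred):
...     return np.mean(np.abs(y_true - y_pred))         # Mean Absolute Error
>>> def binary_crossentropy(y_true, y_pred, eps=1e-7):
...     y_pred = np.clip(y_pred, eps, 1 - eps)          # avoid log(0)
...     return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))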
    - Evaluation Metrics
        - Accuracy
        - TP (True Positives)
        - TN (True Negatives)
        - FP (False Positives)
        - FN (False Negatives)
        - Precision
        - Recall
        - F1Score
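The classification metrics above combine as follows (standard definitions, sketched in NumPy for binary labels; the sketch assumes at least one positive prediction and one positive label):
>>> import numpy as np
>>> def precision_recall_f1(y_true, y_pred):
...     tp = np.sum((y_pred == 1) & (y_true == 1))      # True Positives
...     fp = np.sum((y_pred == 1) & (y_true == 0))      # False Positives
...     fn = np.sum((y_pred == 0) & (y_true == 1))      # False Negatives
...     precision = tp / (tp + fp)
...     recall = tp / (tp + fn)
...     f1 = 2 * precision * recall / (precision + recall)
...     return precision, recall, f1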
    - Regularizers
        - L1
        - L2
        - L1_L2
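For completeness, the penalties these regularizers add to the loss are conventionally defined as follows (a sketch of the standard formulas):
>>> import numpy as np
>>> def l1_penalty(weights, lambd=0.01):
...     return lambd * np.sum(np.abs(weights))          # L1: sum of absolute weights
>>> def l2_penalty(weights, lambd=0.01):
...     return lambd * np.sum(weights ** 2)             # L2: sum of squared weights
>>> def l1_l2_penalty(weights, l1=0.01, l2=0.01):
...     return l1_penalty(weights, l1) + l2_penalty(weights, l2)   # L1_L2: both combined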
- Optimization Modules for training
    - SGD
    - Momentum
    - AdaGrad
    - RMSProp
    - AdaDelta
    - Adam
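As a rough illustration of what these optimizers do, here is the plain SGD update and one common formulation of the Momentum variant in NumPy (standard update rules; the framework's hyperparameter names may differ):
>>> import numpy as np
>>> def sgd_step(w, grad, lr=0.01):
...     return w - lr * grad                            # vanilla gradient descent step
>>> def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
...     velocity = beta * velocity + (1 - beta) * grad  # exponential moving average of gradients
...     return w - lr * velocity, velocity              # update weights and return new velocity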
- A Visualization Module to track the training and testing processes
    - History Class for showing training statistics
    - verbose parameter in training
    - live plotting of training statistics
- A utils module for reading and saving models
- Adding CUDA support
- Publish to PyPI
- Creating Documentation for the Project
This part can be found in the demo notebook mentioned above.
- Download and split a dataset (MNIST or CIFAR-10) into training, validation, and testing sets
- Construct an architecture (LeNet or AlexNet) and make sure all of its components are provided in the framework
- Train and test the model until a good accuracy is reached (the evaluation metrics also need to be implemented in the framework)
- Save the model into a compressed format
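A hedged sketch of that pipeline using only the interface shown earlier in this README (the data is assumed to already be loaded into NumPy arrays, and the layer sizes are illustrative rather than an exact LeNet/AlexNet reproduction):
>>> import ainshamsflow as asf
>>> # 1. Dataset assumed already downloaded and split into X_train/y_train, X_valid/y_valid, X_test/y_test
>>> # 2. Construct the architecture from the layers provided by the framework
>>> model = asf.models.Sequential([
...     asf.layers.Dense(300, activation="relu"),
...     asf.layers.Dense(100, activation="relu"),
...     asf.layers.Dense(10, activation="softmax")
... ], input_shape=(784,), name="mnist_model")
>>> model.compile(
...     optimizer=asf.optimizers.SGD(lr=0.01),
...     loss='sparsecategoricalcrossentropy',
...     metrics=['accuracy']
... )
>>> # 3. Train and evaluate until a good accuracy is reached
>>> history = model.fit(X_train, y_train, epochs=100)
>>> model.evaluate(X_valid, y_valid)
>>> y_pred = model.predict(X_test)
>>> # 4. Saving the trained model goes through the utils module mentioned above (API not shown here)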