The purpose of this project is to benchmark the performance of neural network architectures on tabular data. For this we used the OpenML AutoML benchmark and wrote new framework modules for it. This work is based on the paper "An Open Source AutoML Benchmark". The following architectures were benchmarked:
- SNN (Self-Normalizing Neural Networks) (paper: Self-Normalizing Neural Networks)
- NODE (Neural Oblivious Decision Ensembles) (paper: Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data)
- TabNet (paper: TabNet: Attentive Interpretable Tabular Learning)
We used the following implementations of the architectures:
- SNN - a naive architecture of our own; finding better SNN architectures remains an open task.
- NODE on PyTorch - https://github.com/manujosephv/pytorch_tabular
- TabNet on PyTorch - https://github.com/dreamquark-ai/tabnet (a minimal usage sketch follows this list)
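For orientation, here is a minimal sketch of how the TabNet implementation is driven with its default hyperparameters, as in our runs. The synthetic data, split sizes, and epoch count below are placeholders for illustration, not our benchmark setup:

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

# Placeholder data standing in for an OpenML dataset (not our benchmark data).
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20)).astype(np.float32)
y = rng.integers(0, 2, size=3000)

X_train, X_valid = X[:2400], X[2400:]
y_train, y_valid = y[:2400], y[2400:]

# Default hyperparameters, as used in our benchmark runs.
clf = TabNetClassifier()
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],  # validation set for early stopping
    max_epochs=5,                   # kept small just for this sketch
)
preds = clf.predict(X_valid)
```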
All architectures were wrapped in sklearn-style model classes for better compatibility with the benchmark. TabNet was run with its default hyperparameters. The SNN hyperparameters depend on the dataset structure: the number of neurons in each layer is proportional to the number of features in the dataset. A sketch of such a wrapper follows.
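As an illustration, here is a minimal sketch of an sklearn-style wrapper for the SNN, assuming a simple SELU MLP whose hidden width is `hidden_factor` times the number of features. The class name, depth, and `hidden_factor` are illustrative assumptions, not our exact code; the real wrappers live in the `frameworks` folder:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.base import BaseEstimator, ClassifierMixin


class SNNClassifier(BaseEstimator, ClassifierMixin):
    """Sketch of an sklearn-style wrapper around a SELU MLP.

    Layer widths scale with the number of input features, mirroring the
    sizing rule described above. `hidden_factor` and the two-hidden-layer
    depth are assumptions for illustration only.
    """

    def __init__(self, hidden_factor=2, epochs=30, lr=1e-3):
        self.hidden_factor = hidden_factor
        self.epochs = epochs
        self.lr = lr

    def fit(self, X, y):
        X = torch.as_tensor(np.asarray(X), dtype=torch.float32)
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        y_t = torch.as_tensor(y_idx, dtype=torch.long)

        n_features = X.shape[1]
        hidden = self.hidden_factor * n_features  # width proportional to features
        self.model_ = nn.Sequential(
            nn.Linear(n_features, hidden), nn.SELU(),
            nn.Linear(hidden, hidden), nn.SELU(),
            nn.Linear(hidden, len(self.classes_)),
        )
        opt = torch.optim.Adam(self.model_.parameters(), lr=self.lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(self.epochs):  # plain full-batch training loop
            opt.zero_grad()
            loss = loss_fn(self.model_(X), y_t)
            loss.backward()
            opt.step()
        return self

    def predict(self, X):
        X = torch.as_tensor(np.asarray(X), dtype=torch.float32)
        with torch.no_grad():
            idx = self.model_(X).argmax(dim=1).numpy()
        return self.classes_[idx]
```

Because the wrapper follows the sklearn estimator interface (`fit`/`predict`, parameters set in `__init__`), the benchmark can treat it like any other sklearn model.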
You can find the `.py` files for the OpenML AutoML benchmark in the `frameworks` folder of this repository. Put them into the benchmark's own `frameworks` folder and use them like the default benchmark frameworks. A sketch of the framework-module shape is shown below.
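Roughly, a framework module's `exec.py` for the OpenML AutoML benchmark looks like the sketch below. The helper names (`call_run`, `result`), the dataset attributes, and the config fields follow the pattern of the benchmark's built-in framework modules, but they vary between benchmark versions, so treat this as a hedged sketch rather than verbatim API. `LogisticRegression` is a stand-in for one of our wrapped models:

```python
# exec.py -- sketch of a framework module for the OpenML AutoML benchmark.
# Helper names, dataset attributes (X vs X_enc), and config fields are
# assumptions based on the benchmark's built-in modules; check your version.
from frameworks.shared.callee import call_run, result

from sklearn.linear_model import LogisticRegression  # stand-in for SNN/NODE/TabNet


def run(dataset, config):
    X_train, y_train = dataset.train.X, dataset.train.y
    X_test, y_test = dataset.test.X, dataset.test.y

    model = LogisticRegression()  # in our modules: one of the wrapped architectures
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)

    return result(
        output_file=config.output_predictions_file,
        predictions=predictions,
        truth=y_test,
    )


if __name__ == '__main__':
    call_run(run)
```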
You can also view the Google Colab Python notebooks in the `colab_notebooks` folder.
The table of results is in `results.csv`; it also includes the results from the original paper for comparison.
NODE achieves four top-1 results. TabNet performs well on datasets with a large number of samples, and optimizing its hyperparameters could yield better performance. We think it is fair to say that neural networks perform close to the best AutoML practices for tabular data.
Stepan Derevyanchenko, Anton Morozov
Radeev Nikita, Vasiliev Maxim, Kotlova Anna, Korolev Alexey, Sayk Nikita, Nazdryukhin Alexander, Minkevich Maria