Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, I have trained an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice, you'd train this classifier, then export it for use in your application. We'll be using this dataset of more than 8000 images and 102 flower categories.
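The classifier follows the usual transfer-learning recipe: start from a convolutional network pre-trained on ImageNet and train a new classification head on the flower images. The sketch below is a minimal illustration of that approach with torchvision, not the exact code in this repo; the directory layout (flowers/train), the VGG16 backbone, and the hyperparameters are assumptions.

```python
# Minimal transfer-learning sketch (illustrative, not the exact code in this repo).
# Assumes the flower images are arranged as flowers/train/<class>/*.jpg,
# the layout expected by torchvision's ImageFolder.
import torch
from torch import nn, optim
from torchvision import datasets, transforms, models

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_data = datasets.ImageFolder('flowers/train', transform=train_transforms)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)

# Start from a network pre-trained on ImageNet and freeze its feature extractor.
model = models.vgg16(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head with one that outputs the 102 flower classes.
model.classifier = nn.Sequential(
    nn.Linear(25088, 512),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(512, 102),
    nn.LogSoftmax(dim=1),
)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

# Train only the new head; a single epoch is shown for brevity.
model.train()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Save a checkpoint so the classifier can be reloaded later for prediction.
torch.save({'state_dict': model.state_dict(),
            'class_to_idx': train_data.class_to_idx}, 'checkpoint.pth')
```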
When you've completed this project, you'll have an application that can be trained on any set of labelled images. Here the network learns about flowers and ends up as a command-line application. But what you do with your new skills depends on your imagination and the effort you put into building a dataset.
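Concretely, a command-line workflow for a project like this usually looks like the lines below. These invocations are hypothetical (only predict.py is mentioned later in this README), so check each script's --help output for the actual arguments.

$ python train.py flowers                      # hypothetical: train on the flowers/ data directory
$ python predict.py image.jpg checkpoint.pth   # hypothetical: classify one image with a saved checkpoint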
All the dependencies and required libraries are listed in the file requirements.txt.
- Clone the repo
$ git clone https://github.com/haardikdharma10/Image-Classifier-using-PyTorch.git
- Change your directory to the cloned repo and create a Python virtual environment named 'testenv'
$ cd Image-Classifier-using-PyTorch
$ mkvirtualenv testenv
- Now, run the following command in your Terminal/Command Prompt to install the required libraries
$ pip3 install -r requirements.txt
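To confirm that PyTorch was installed correctly inside the environment, you can print its version (an optional sanity check, not part of the original setup steps)

$ python3 -c "import torch; print(torch.__version__)"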
Because the classifier is built on a deep convolutional neural network, training it on a typical laptop is impractical. To train the model on your local machine, you have three options:
- CUDA - If you have an NVIDIA GPU, you can install CUDA from here. With CUDA you will be able to train your model, though the process will still be time-consuming.
- Cloud Services - There are many cloud services that let you train your models, such as AWS and Google Cloud.
- Google Colab - Google Colab gives you free access to a Tesla K80 GPU for 12 hours at a time. Once the 12 hours have elapsed, you can simply reconnect and continue. The only limitation is that you have to upload the data to Google Drive, and if the dataset is massive you may run out of space.
However, once a model is trained, a normal CPU is enough to run predict.py, and you will get an answer within a few seconds.
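The usual PyTorch pattern, which a prediction script would typically follow, is to pick the GPU when CUDA is available and otherwise fall back to the CPU. The snippet below is a generic sketch of that pattern, not code copied from this repo; checkpoint.pth is the hypothetical file produced by the training sketch above.

```python
import torch

# Use the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Running inference on: {device}')

# Load a previously saved checkpoint onto whichever device is in use.
# 'checkpoint.pth' is the hypothetical file from the training sketch above.
checkpoint = torch.load('checkpoint.pth', map_location=device)
```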
- Haardik Dharma - Initial Work
- Udacity - Final project of the 'AI Programming with Python Nanodegree'
- https://www.udacity.com/course/ai-programming-python-nanodegree--nd089
- https://pytorch.org/tutorials/
This project is licensed under the MIT License - see the LICENSE.md file for details.