By Team NaN
- Table of Contents
- 🏅Team Members
- 😎Mentors
- 🗒️Description
- 📁 File Structure
- 👨‍💻 Tech-Stack
- ✅ Progress
- 🛠️ Project Setup and Prerequisites
- 🎲 Usage
- 📍 Applications
- 📈 Future Prospects
- 🎮 Demo
- Aditya Mhatre - GitHub Profile, Mail
- Chirag Shelar - GitHub Profile, Mail
- Sarrah Bastawala - GitHub Profile, Mail
- Kanak Meshram - GitHub Profile, Mail
- Ravi Maurya
- Shreyas Penkar
- Tanish
This project focuses on emulating human facial features and representing them as a 2D emoticon.
- Two pre-trained models are used to evaluate the frames captured from the live webcam stream.
- A frame is sent every second to the Python backend, which returns an integer; the JavaScript frontend decodes that integer to display the emoticon on the website.
- If the first pre-trained model detects a person, the frame is passed on to the second model, which identifies the emotion shown by the person and its intensity (a minimal sketch of this flow follows below).
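As a rough illustration of that two-stage flow, here is a minimal Python sketch. It uses the Haar cascade bundled in this repo as a stand-in for the YOLOv3 person check, and assumes the mini-XCEPTION weights expect 64×64 grayscale input; the function name `classify_frame` and the integer codes are illustrative, not the repo's actual API (see `detectObjectMy.py` and `functions.py` for the real implementation).

```python
# Illustrative two-stage pipeline: person/face detection, then emotion
# classification. The Haar cascade stands in for the YOLOv3 check here,
# and classify_frame / the integer codes are assumed names, not the
# repo's actual API.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

emotion_model = load_model("fer2013_mini_XCEPTION.102-0.66.hdf5", compile=False)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

def classify_frame(frame):
    """Return an integer emotion code for the frame, or -1 if no person is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return -1  # stage 1 found nothing; skip stage 2

    # Stage 2: classify the first detected face
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))  # mini-XCEPTION input size
    face = face.astype("float32") / 255.0                # normalize to [0, 1]
    face = face.reshape(1, 64, 64, 1)                    # (batch, H, W, channels)
    probs = emotion_model.predict(face, verbose=0)
    return int(np.argmax(probs))  # the frontend decodes this into an emoticon
```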
Visit our GitHub repo to clone the repository and to get updates on the progress and future prospects of the project.
MimiCog
┣ 📂static
┃ ┣ styles.css
┣ 📂templates
┃ ┣ index.html
┃ ┣ secondPageWebcam.html
┣ appFlask.py
┣ coco.txt
┣ detectObjectMy.py
┣ fer2013_mini_XCEPTION.102-0.66.hdf5
┣ functions.py
┣ haarcascade_frontalface_default.xml
┣ yolov3.cfg
Languages:
- Python
- HTML
- CSS
- JavaScript
Libraries and Frameworks:
- TensorFlow
- Flask
- OpenCV
- Keras
- PIL
- SocketIO
- Scikit-learn
Models:
- YOLOv3, pre-trained with Darknet on the ImageNet dataset.
- The emotion model is a fully convolutional neural network containing 4 residual depth-wise separable convolutions, where each convolution is followed by a batch normalization operation and a ReLU activation function (see the sketch below).
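For illustration, here is a hedged Keras sketch of one such residual block; the exact filter counts, strides, and layer ordering in the shipped model may differ.

```python
# Hedged sketch of one residual depth-wise separable convolution block,
# mini-XCEPTION style; filter counts and strides are illustrative.
from tensorflow.keras import layers

def residual_sep_conv_block(x, filters):
    # shortcut branch: 1x1 conv (stride 2) so shapes match after pooling
    residual = layers.Conv2D(filters, (1, 1), strides=(2, 2), padding="same")(x)
    residual = layers.BatchNormalization()(residual)

    # main branch: two depth-wise separable convolutions, each followed
    # by batch normalization and a ReLU activation
    x = layers.SeparableConv2D(filters, (3, 3), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.SeparableConv2D(filters, (3, 3), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding="same")(x)

    return layers.Add()([x, residual])  # residual connection
```

The description above says the model stacks four of these blocks; the actual trained weights are in fer2013_mini_XCEPTION.102-0.66.hdf5.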
- Object detection using YOLOv3 (in Python).
- Emotion detection using a pre-trained model (in Python).
- Creating a frontend that can capture a live webcam stream (locally).
- Sending video frames from the frontend to the backend for detection and receiving the detected output using Flask and SocketIO (a minimal sketch follows this list).
- Displaying an emoticon for the detected object/emotion on the website.
- Taking in a pre-recorded video as a stream instead of the live webcam.
- Refining the website.
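As a minimal sketch of that Flask + SocketIO frame exchange: this assumes the browser emits a "frame" event carrying a base64 data URL and listens for a "result" event. The event names and payload format are assumptions, not necessarily what appFlask.py actually uses.

```python
# Assumed event names ("frame", "result") and base64 data-URL payload;
# see appFlask.py for the project's real wiring.
import base64

import cv2
import numpy as np
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def classify_frame(frame):
    # stand-in for the two-stage pipeline sketched in the description above
    return -1

@socketio.on("frame")
def handle_frame(data_url):
    # strip the "data:image/jpeg;base64," prefix added by the browser
    _, encoded = data_url.split(",", 1)
    buf = np.frombuffer(base64.b64decode(encoded), dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    code = classify_frame(frame)
    socketio.emit("result", {"code": code})  # frontend maps the int to an emoticon

if __name__ == "__main__":
    socketio.run(app)  # serves on http://127.0.0.1:5000/ by default
```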
Clone this GitHub repository or download it. To clone, you must have Git installed.
Type this command in the Git Bash Terminal:
git clone https://github.com/Adi935/MimicCog
Run the following command to make sure you have the latest version of pip installed:
pip install --upgrade pip
First, you will have to create a virtual environment in conda. From this point on, run all commands in the Anaconda Prompt.
conda create --name conda-env python
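Then activate the environment so that subsequent installs go into it:
conda activate conda-env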
The following dependencies must be installed to run this project. The commands for installation are:
pip install numpy
pip install opencv-python
pip install tensorflow
pip install flask
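Depending on how appFlask.py imports SocketIO, you may also need pip install flask-socketio (the tech stack lists SocketIO). As an optional sanity check, you can confirm the core dependencies are installed in the same environment:

```python
# optional sanity check: print the installed versions of the core dependencies
from importlib.metadata import version

for pkg in ("numpy", "opencv-python", "tensorflow", "flask"):
    print(pkg, version(pkg))
```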
After the project has been set up, navigate to the directory where the repository was downloaded/cloned. Make sure you are still in the same virtual environment in the Anaconda Prompt.
Run the command
python appFlask.py
The Flask development server will start and print its URL to the console.
Go to your preferred browser and enter the URL displayed in your console to run the project (here it is http://127.0.0.1:5000/).
- Can be used to convey emotions through a 2D emoticon without revealing your face, for example in video calls, livestreams, etc.
- Simulate your surroundings and yourself in a 2D environment on your screen.
- Can be used in games, apps, etc. that rely on the emotions/expressions shown by the user.
- Make the frontend compatible as an extension for multiple existing browsers, websites, and applications.
- Simulate the entire captured scene as a 2D environment, instead of just one object.
- Detection of objects/emotions is shown in the console log.
- The relevant emoticon is displayed.
- Watch the project in action here! Drive Link with a demo video of the project.