This is my graduate final project, and it is only one half of the complete project.
It can detect emotions in a video file or a live stream; up to 7 emotions can be classified.
- First, collect as much image data as possible
- Normalize all images to the same size and convert them to grayscale
- Train the model on the preprocessed images
- Predict the outcome on a video or stream
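The normalization step above (same size, grayscale) can be sketched in plain Python. The real scripts most likely use OpenCV or Pillow for this; the function names, the BT.601 luminance weights, and the 48x48 target size below are all illustrative assumptions, not taken from the project code.

```python
def to_grayscale(pixel):
    """Convert an (R, G, B) tuple to a luminance value (ITU-R BT.601 weights)."""
    r, g, b = pixel
    return int(0.299 * r + 0.587 * g + 0.114 * b)

def resize_nearest(image, out_w, out_h):
    """Nearest-neighbour resize of an image stored as a list of rows."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

def normalize(rgb_image, size=48):
    """Grayscale, then resize to size x size (48x48 is a common choice
    for emotion datasets; the project's actual size is an assumption here)."""
    gray = [[to_grayscale(px) for px in row] for row in rgb_image]
    return resize_nearest(gray, size, size)
```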
- Execute `imageScraper.py` and type in one or more query strings (separated by spaces) to search on Google Chrome
- Execute `getUrls.js` to scan through the Google Images page and collect the URLs of all ORIGINAL (not compressed) images; the links are saved to a txt file
- Execute `imageDownloader.py` to download all images in parallel from that txt file; the output is stored in a directory
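A parallel downloader along these lines can be built on the standard library's thread pool. This is a sketch, not the actual `imageDownloader.py`: the `fetch` callable is injected so the example stays self-contained, and error collection is one reasonable design, not necessarily the project's.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_all(urls, fetch, max_workers=8):
    """Fetch every URL in parallel; return ({url: bytes}, [(url, error)]) so
    one bad link does not abort the whole batch."""
    results, failures = {}, []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        future_to_url = {pool.submit(fetch, u): u for u in urls}
        for fut in as_completed(future_to_url):
            url = future_to_url[fut]
            try:
                results[url] = fut.result()
            except Exception as exc:
                failures.append((url, exc))
    return results, failures
```

In practice `fetch` could be something like `lambda u: urllib.request.urlopen(u, timeout=10).read()`, with the bytes then written to the output directory.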
- Execute `faceCutter.py` to crop out the face region in parallel and save it as a grayscale image
- Execute `manualClassifier.py` to select a directory and sort its images into the 7 emotion categories
- Execute `dataAugmentation.py` to rotate, flip, and shear images, enlarging the dataset
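The flip/rotate/shear augmentations can be illustrated on an image stored as a list of rows; `dataAugmentation.py` presumably uses a real image library (OpenCV or PIL), so the integer shear and the set of variants returned here are illustrative only.

```python
def hflip(image):
    """Mirror the image horizontally."""
    return [row[::-1] for row in image]

def rotate90(image):
    """Rotate 90 degrees clockwise: last row becomes the first column."""
    return [list(row) for row in zip(*image[::-1])]

def shear_x(image, factor, fill=0):
    """Horizontal shear: shift row y right by int(factor * y) pixels,
    padding the gaps with `fill` so all rows stay the same width."""
    h = len(image)
    max_shift = int(factor * (h - 1))
    out = []
    for y, row in enumerate(image):
        shift = int(factor * y)
        out.append([fill] * shift + list(row) + [fill] * (max_shift - shift))
    return out

def augment(image):
    """Return the original plus flipped, rotated, and sheared variants."""
    return [image, hflip(image), rotate90(image), shear_x(image, 1.0)]
```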
- Execute `emotionTrain.py` to set up and train the model; the model's structure is defined in `emotionNetwork.py`
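The network in `emotionNetwork.py` is not shown in this README. Purely to illustrate the fit/predict cycle that `emotionTrain.py` runs, here is a minimal linear softmax classifier trained with SGD in plain Python; it is a stand-in, not the project's actual model.

```python
import math

def softmax(scores):
    """Numerically stable softmax over raw class scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def train(samples, labels, n_classes, lr=0.5, epochs=100):
    """Fit a linear classifier with plain SGD and cross-entropy loss."""
    n_feat = len(samples[0])
    weights = [[0.0] * n_feat for _ in range(n_classes)]
    biases = [0.0] * n_classes
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            probs = softmax([sum(w * f for w, f in zip(weights[c], x)) + biases[c]
                             for c in range(n_classes)])
            for c in range(n_classes):
                # gradient of cross-entropy w.r.t. the class score
                grad = probs[c] - (1.0 if c == y else 0.0)
                for j in range(n_feat):
                    weights[c][j] -= lr * grad * x[j]
                biases[c] -= lr * grad
    return weights, biases

def predict(weights, biases, x):
    """Return the index of the highest-scoring class."""
    scores = [sum(w * f for w, f in zip(weights[c], x)) + biases[c]
              for c in range(len(weights))]
    return max(range(len(scores)), key=scores.__getitem__)
```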
- Execute `realTimeEmotionDetection.py` for a stream such as a webcam; this program skips some frames to improve performance
- Execute `videoEmotionDetection.py` for video files; this program runs in parallel, with a slightly more involved structure that uses semaphores, threads, and locks
- Execute `estimate.py` to see the overall accuracy of the results
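The frame-skipping idea behind `realTimeEmotionDetection.py` can be sketched as follows. Here `frames` stands in for whatever the capture source (e.g. `cv2.VideoCapture`) would yield, and `handle` is the expensive classifier call; both names and the skip factor are assumptions.

```python
def process_stream(frames, handle, skip=3):
    """Run the expensive handler on every `skip`-th frame only,
    reusing the last result for the frames in between."""
    last_result = None
    results = []
    for i, frame in enumerate(frames):
        if i % skip == 0:
            last_result = handle(frame)  # e.g. run the emotion classifier
        results.append(last_result)      # skipped frames reuse the cached label
    return results
```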
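The README does not show how `videoEmotionDetection.py` actually arranges its semaphore, threads, and lock, so the following is just one conventional layout of those three primitives: a semaphore counting pending frames, worker threads draining a shared queue, and a lock guarding the shared state.

```python
import threading
from collections import deque

def parallel_process(items, work, n_workers=4):
    """Fan items out to worker threads; the semaphore holds one permit per
    queued item, and the lock guards the shared queue and result dict."""
    pending = deque(enumerate(items))
    available = threading.Semaphore(len(pending))
    lock = threading.Lock()
    results = {}

    def worker():
        # acquire without blocking: when no permits remain, the queue is drained
        while available.acquire(blocking=False):
            with lock:
                idx, item = pending.popleft()
            out = work(item)  # run outside the lock so workers overlap
            with lock:
                results[idx] = out

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # reassemble in original frame order
    return [results[i] for i in range(len(items))]
```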
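An overall-accuracy check like the one `estimate.py` performs amounts to comparing predicted labels with ground truth. The per-class breakdown and the label names below are illustrative additions, not necessarily what the script reports.

```python
def accuracy(predictions, labels):
    """Fraction of samples whose predicted emotion matches the true label."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def per_class_accuracy(predictions, labels, classes):
    """Accuracy broken down per emotion class (classes with no samples are omitted)."""
    report = {}
    for c in classes:
        idx = [i for i, t in enumerate(labels) if t == c]
        if idx:
            report[c] = sum(predictions[i] == c for i in idx) / len(idx)
    return report
```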