This Python project uses the device's webcam to detect a person's face and classify the emotion it is currently expressing.
It depends on the NumPy, TensorFlow, Matplotlib, and OpenCV libraries.
Install the libraries with the following commands:
pip install numpy
pip install opencv-python
pip install matplotlib
pip install tensorflow
(Note: TensorFlow does not install cleanly on Python 3.9; use Python 3.8 for this code.)
For the dataset, download the original pixel data in CSV format from https://www.kaggle.com/deadskull7/fer2013.
After downloading the dataset, store it in the same folder as the code.
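As a rough sketch of what reading this dataset involves: each row of fer2013.csv contains an integer emotion label and a string of 2304 space-separated pixel values that form a 48x48 grayscale image. The function name `load_fer2013` below is hypothetical, not part of emotions.py:

```python
import csv

import numpy as np


def load_fer2013(path="fer2013.csv"):
    """Parse the FER2013 CSV into image arrays and labels.

    Each row holds an emotion label (0-6) and a "pixels" column of
    2304 space-separated grayscale values forming a 48x48 image.
    """
    images, labels = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Turn the space-separated pixel string into a 48x48 array.
            pixels = np.array(row["pixels"].split(), dtype=np.uint8)
            images.append(pixels.reshape(48, 48))
            labels.append(int(row["emotion"]))
    return np.array(images), np.array(labels)
```
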
To train the model, navigate to the project directory in the command line and run:
python emotions.py --mode train
This trains the model, after which the prediction can be run. If you have already trained the model once, you can skip training and launch the display mode directly with:
python emotions.py --mode display
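The --mode flag could be handled with the standard argparse module; this is a hypothetical sketch, and the actual flag handling in emotions.py may differ:

```python
import argparse


def parse_mode(argv=None):
    # Hypothetical sketch of the --mode flag accepted by emotions.py.
    parser = argparse.ArgumentParser(description="Facial emotion recognition")
    parser.add_argument(
        "--mode",
        choices=["train", "display"],
        default="display",
        help="train the CNN, or run live webcam prediction",
    )
    return parser.parse_args(argv).mode
```
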
With a simple 4-layer CNN, the test accuracy reaches about 63% after 50 epochs.
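For illustration, a comparable CNN with four convolutional layers could be built in Keras as follows; the layer sizes and dropout rates here are assumptions, not the exact architecture in emotions.py:

```python
from tensorflow.keras import layers, models


def build_model(num_classes=7):
    # Hypothetical 4-conv-layer CNN for 48x48 grayscale face crops.
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),
        layers.Dropout(0.5),
        # Softmax over the seven emotion classes.
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```
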
The Haar cascade method detects faces in every frame of the webcam feed; each face region is resized to 48x48 pixels and passed as input to the CNN.
The network outputs softmax scores for seven emotion classes, and the emotion with the highest score is displayed on the screen.