Intelligent System Final Project

By Chan Elizabeth W, Gadiza Namira Putri Andjani, and Winson Wardana

  • The main program code is in main.py.
  • The code for training the models is in the two Jupyter notebooks, trainimgModel.ipynb and trainemotionModel.ipynb.
  • The haarcascade_frontalface_default.xml file is used to detect frontal faces. It is a stump-based, 24×24 discrete AdaBoost frontal face detector created by Rainer Lienhart.
  • The shape_predictor_68_face_landmarks.dat file is the pre-trained facial landmark detector from the dlib library, used to estimate the locations of the 68 (x, y)-coordinates that map to facial structures on the face. The indexing of the 68 coordinates is visualized in this image: https://www.pyimagesearch.com/wp-content/uploads/2017/04/facial_landmarks_68markup-1024×825.jpg
  • Eyefilter.py contains the code for extracting and storing the eye coordinates from the dlib facial landmark detector. It is used to locate the person's eyes for the filter (a minimal usage sketch follows this list).
  • The Additional Pre-trained model folder contains our additional trained models: one for emotion detection and one for image classification.
  • The Filter folder contains the images used as face-filter overlays, selected based on the detected emotion.
  • The ImageTest folder contains images taken from Google that we used to test the predictions of the image-classification model.
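For reference, below is a minimal sketch of how these pieces typically fit together: the Haar cascade proposes a face box, the dlib predictor returns the 68 landmarks inside it, and landmark indexes 36–47 give the eye positions used to place the filter. The file paths, the locate_eyes helper, and the webcam preview at the end are illustrative assumptions, not the repository's exact code (see main.py and Eyefilter.py for that).

```python
import cv2
import dlib
import numpy as np

# Paths as named in the project; adjust if the files live elsewhere.
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
landmark_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# In the 68-point dlib layout, points 36-41 outline the right eye and
# points 42-47 outline the left eye (from the subject's point of view).
RIGHT_EYE = list(range(36, 42))
LEFT_EYE = list(range(42, 48))

def locate_eyes(frame):
    """Return (face box, right-eye points, left-eye points) for each detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # Hand the Haar cascade's face box to dlib to predict the 68 landmarks.
        rect = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))
        shape = landmark_predictor(gray, rect)
        points = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
        results.append(((x, y, w, h), points[RIGHT_EYE], points[LEFT_EYE]))
    return results

# Example: mark the eye landmarks on a single webcam frame and save a preview.
if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        for _, right_eye, left_eye in locate_eyes(frame):
            for (px, py) in np.vstack([right_eye, left_eye]):
                cv2.circle(frame, (int(px), int(py)), 2, (0, 255, 0), -1)
        cv2.imwrite("eyes_preview.jpg", frame)
```

In the project itself, the eye positions returned this way are what determine where the emotion-dependent filter image is drawn on the frame.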

Source code: https://github.com/ChanElizabeth/IS-Final-Project
