Sunday, 25 March 2012
The expression classifiers work well for discerning when a person is smiling versus not smiling. For other emotions they perform relatively poorly. I believe this is due to a lack of data; it's much harder to find images of people with facial expressions other than smiling.
For discerning between smiling with teeth bared and an angry expression with teeth bared, it performs surprisingly well, compared with how poorly it performs at discerning between fear, surprise and disgust. Perhaps it's because these expressions overlap?
All of the networks use the raw pixel values of the images as inputs. Face images are scaled to 50x50, which gives all the classifiers an input size of 2500. Hidden layers are between 9 and 12 neurons in size, and the output layer consists of 2 neurons: one for a true value and one for a false value.
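To make the topology concrete, here is a minimal sketch of a forward pass through a network with this shape. This is a hypothetical illustration, not the actual code behind the classifiers: the weights are randomly initialised rather than trained, the hidden layer size (10) is just one value in the 9-12 range mentioned, and the sigmoid activation is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialised weights stand in for a trained network (illustration only).
W1 = rng.normal(scale=0.01, size=(2500, 10))  # 2500 pixel inputs -> 10 hidden neurons
b1 = np.zeros(10)
W2 = rng.normal(scale=0.01, size=(10, 2))     # hidden -> 2 outputs (true / false)
b2 = np.zeros(2)

def classify(face_50x50):
    """Forward pass: flatten a 50x50 face image and return the
    activations of the two output neurons (true, false)."""
    x = face_50x50.reshape(-1)        # 50x50 image -> 2500-element input vector
    h = sigmoid(x @ W1 + b1)          # hidden layer
    return sigmoid(h @ W2 + b2)       # output layer

face = rng.random((50, 50))           # dummy stand-in for a scaled face image
out = classify(face)
print(out.shape)                      # (2,)
```

The two-output layout means each classifier is a binary detector for one expression; the predicted class is simply whichever of the two output neurons activates more strongly.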