Facial expression is one of the most powerful, natural, and universal signals for human beings to convey their emotional states and intentions. Numerous studies have been conducted on automatic facial expression recognition (FER) because of its practical importance in sociable robotics, medical treatment, driver-fatigue surveillance, and many other human-computer interaction systems.
With the transition of facial expression recognition from laboratory-controlled to challenging in-the-wild conditions, and the recent success of deep learning techniques in various fields, deep neural networks have increasingly been leveraged to learn discriminative representations for automatic FER. To facilitate deep facial expression recognition, we constructed two real-world facial expression datasets collected from the Internet via crowdsourcing: RAF-DB, with basic and compound emotions, and RAF-ML, with blended emotions. We further provide the RAF-AU dataset, with action unit coding for blended facial expressions in the wild.
RAF-DB offers great diversity, large scale, and rich annotations, including: a large number of real-world images; a 7-dimensional expression distribution vector for each image; two different subsets, a single-label subset (basic emotions) and a two-tab subset (compound emotions); 5 accurate landmark locations, 37 automatic landmark locations, a bounding box, and race, age range, and gender attribute annotations per image; and baseline classifier outputs for basic and compound emotions.
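As a concrete illustration of how such per-image annotations might be consumed, the sketch below parses one whitespace-separated record containing an image name, a 7-dimensional expression distribution, and 5 landmark coordinates. The field order and file layout here are illustrative assumptions, not the actual RAF-DB distribution format.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class RafDbRecord:
    image_name: str
    distribution: np.ndarray  # 7-dimensional expression distribution vector
    landmarks5: np.ndarray    # 5 accurate landmark (x, y) locations


def parse_record(line: str) -> RafDbRecord:
    """Parse an assumed layout: <image_name> <p1..p7> <x1 y1 ... x5 y5>."""
    parts = line.split()
    name = parts[0]
    dist = np.array(parts[1:8], dtype=float)           # 7 probabilities
    lmk = np.array(parts[8:18], dtype=float).reshape(5, 2)  # 5 (x, y) pairs
    return RafDbRecord(name, dist, lmk)


record = parse_record(
    "train_00001.jpg 0.1 0.0 0.05 0.6 0.05 0.1 0.1 "
    "30 40 70 40 50 60 35 80 65 80"
)
print(record.distribution.argmax())  # index of the dominant basic emotion
```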
In RAF-ML, we provide 4908 real-world images with blended emotions, a 6-dimensional expression distribution vector for each image, 5 accurate landmark locations and 37 automatic landmark locations, and baseline classifier outputs for multi-label emotion recognition.
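One common way to use a 6-dimensional blended-emotion distribution for multi-label recognition is to threshold it into a set of active emotions. The sketch below shows this idea; the emotion ordering and the 0.2 threshold are illustrative assumptions, not values prescribed by RAF-ML.

```python
import numpy as np

# Assumed ordering of the six basic emotions in the distribution vector.
EMOTIONS = ["surprise", "fear", "disgust", "happiness", "sadness", "anger"]


def distribution_to_labels(dist, threshold=0.2):
    """Return the emotion names whose intensity reaches the threshold."""
    dist = np.asarray(dist, dtype=float)
    return [name for name, p in zip(EMOTIONS, dist) if p >= threshold]


print(distribution_to_labels([0.05, 0.0, 0.1, 0.45, 0.05, 0.35]))
# ['happiness', 'anger'] -> a blended expression with two active emotions
```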
The Real-world Affective Faces Action Unit Database (RAF-AU) extends RAF-ML with manual action unit coding. It combines a sign-based (i.e., AUs) and a judgement-based (i.e., perceived emotions) approach to annotating blended facial expressions in the wild.
During annotation, two experienced coders independently FACS-coded the face images and arbitrated any disagreements. They also carefully checked and discussed whether certain AUs appeared only as a consequence of other AUs. In RAF-AU, we provide 4601 real-world images annotated with 26 kinds of AUs, along with baseline outputs for action unit detection.
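For AU detection, a coding string such as "1+2+25" is typically converted into a multi-hot target vector, one entry per AU class. The sketch below shows this conversion under assumed conventions; the specific AU index list and label-string format are illustrative, and RAF-AU's own label files may differ.

```python
import numpy as np

# Illustrative subset of AU indices; not the exact 26-AU set coded in RAF-AU.
AU_INDICES = [1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 16, 17, 20, 23, 25, 26, 27]


def au_string_to_multihot(au_string: str) -> np.ndarray:
    """Map a coding string like '1+2+25' to a binary vector over AU_INDICES."""
    active = {int(a) for a in au_string.split("+") if a}
    return np.array(
        [1.0 if au in active else 0.0 for au in AU_INDICES], dtype=np.float32
    )


print(au_string_to_multihot("1+2+25"))
```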