Thursday, February 4, 2010

RaFD going online!

Finally, after nearly 2.5 years of work, we are happy to announce:

The Radboud Faces Database will be released on the 25th of February 2010!

We have finished the alignment and other post-processing steps for all images in the database (8,040 images in total). Currently, we are running internal tests of the website through which RaFD will be distributed. And on the 25th we will hold a mini-symposium at the Behavioural Science Institute to celebrate the official release of this long-running project.

For those interested, here are already some details about how to get access: on the final website (www.rafd.nl; it will also go live on the 25th), you can apply for access through a web interface, where you choose your login credentials yourself. We will check whether you are a member of an accredited university, and once access is granted, you can immediately download the database (or the parts you need) from the website.

We are very much looking forward to the 25th, and we hope that the Radboud Faces Database will fit your needs and inspire many new lines of research!

Oliver

Wednesday, November 11, 2009

RaFD paper accepted

Dear all,

We are proud to announce that the RaFD validation paper has been accepted by Cognition & Emotion. We are very excited about this. You can download the paper at Oliver Langner's personal website.

It will not be long now before we're ready to distribute RaFD to all of you. We have Bart van Delft, an excellent programmer who is studying Artificial Intelligence right here at the Radboud University Nijmegen, working on our website. We expect the website to be finished before Christmas. We hope to be done with the final post-processing of all camera angles soon, too.

We are looking back at a project that took three years of hard work by several BSI researchers; we should not let this go by silently.

Best,

Ron

Wednesday, September 30, 2009

RaFD near its final stage

Our last blog entry is a year old, but we have been working hard in the meantime to finalize the database. This is a general update on the project.
  1. We recently reserved the URL that will host the final dataset: www.rafd.nl. There is nothing to see there yet, but a few students from the artificial intelligence department of the Radboud University are about to build a cool site for it.
  2. The validation data have of course long been gathered and analyzed. The results look extremely good, with pretty high recognition rates. We hope to publish these data soon. A paper presenting RaFD and its validation is currently under review.
  3. The legal issues have largely been resolved. A terms-of-usage contract has been drawn up and translated into English. If you're interested, you can see it here. In short, these are the most important points:
    • RaFD will be usable for scientific research free of charge.
    • Each researcher needs to register and download his or her own copy.
    • RaFD can't be used for clinical treatment or other commercial purposes.
    • Researchers need to be working at an accredited university.
  4. The processing of most images is finished. As a small complication, we had to change our image alignment procedure (see the earlier post) a bit. Specifically, we had to include a size-fitting option for the pictures from the different camera angles. This was necessary because the zoom of the cameras was not perfectly equal across angles. A sketch of this extension follows below the list.
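To give a concrete idea of what this size fitting amounts to, here is a hedged sketch in Python: the three-parameter fit described in the alignment post further down this page (x- and y-translation plus rotation) simply gains a fourth, scale parameter. Our actual code does this in Matlab with fminsearch; the scipy/scikit-image version below is only an illustration, with images assumed to be 2-D float grayscale arrays, region a boolean mask, and all names ours.

    import numpy as np
    from scipy.optimize import fmin  # Nelder-Mead, like Matlab's fminsearch
    from skimage.transform import SimilarityTransform, warp

    def cost(params, template, target, region):
        """Negative correlation between template and transformed target."""
        dx, dy, angle, scale = params  # scale is the new, fourth parameter
        tform = SimilarityTransform(scale=scale, rotation=angle,
                                    translation=(dx, dy))
        moved = warp(target, tform.inverse)  # apply the similarity transform
        return -np.corrcoef(template[region].ravel(),
                            moved[region].ravel())[0, 1]

    # start at the identity transform: no shift, no rotation, scale 1
    # best = fmin(cost, [0.0, 0.0, 0.0, 1.0], args=(template, target, region))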

To all the researchers who have contacted us so far about using RaFD: we have not forgotten you! Once RaFD is in a usable state, which should be quite soon now, we will let you know.

The RaFD-team

Tuesday, September 23, 2008

Validation of RaFD running

Dear all,

I'd like to write briefly about the validation we're currently running on the RaFD images. The fact that we're already validating means that we have finished the rough selection of images. The rough selection was done by two FACS coders, Skyler Hawk and Vasily Klucharev, who viewed all frontal pictures and chose, for each model, gaze direction, and emotion, the two images with the most prototypical emotional expression. Our reduced set thus contains two images per model, emotion, gaze direction, and camera angle, summing to a total of 17,000+ images.

In the validation study we collect ratings on the 3,400+ frontal images. The data are gathered in the lab of the Behavioural Science Institute. In total, about 380 participants are rating subsets of the total image set on the following dimensions:
  • attractiveness, on a 5-point scale
  • emotion prototype, forced-choice between 9 alternatives (neutral, angry, disgusted, contemptuous, fearful, happy, surprised, sad, other)
  • intensity of emotion, on a 5-point scale
  • clarity of emotion, on a 5-point scale
  • genuineness of emotion, on a 5-point scale
  • valence of emotion, on a 5-point scale
For each image in the set, we plan to collect 20 ratings from different participants on each of the dimensions. So this short post is just to inform you about what you can expect from the validation of the image set.
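(For a sense of scale: 3,400 images times 20 ratings means 68,000 ratings per dimension; spread over roughly 380 participants, that comes down to somewhere around 180 images per participant.)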

Oliver

Thursday, September 4, 2008

Programming the validation

We received some emails from blog visitors asking which programs we use to develop this set. So far we have used Adobe Lightroom (which, in the end, we didn't need at all), GIMP, Matlab, and UFRaw.

For the validation we had the choice between several presentation programs, among them Inquisit, E-Prime, Presentation, and the Matlab Psychtoolbox. We decided to go with PsychoPy, a presentation library for Python, which makes the validation program platform-independent and open source. Convenient when you develop on OS X or Linux but need to run on Windows XP.

We needed an intelligent way of distributing the images across participants, so that everybody rates the same number of models, with the same number of different emotions and eye gazes. To that end, we first wrote a sequence generator in Python that generates the stimulus sequence each participant will see, before the validation even starts. Our PsychoPy script subsequently reads these sequences and shows the appropriate images according to someone's participant number. We had to program the functions to show Likert scales and open-ended questions ourselves, but hey, Python is fun. A toy sketch of the sequence-generator idea follows below.
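To make the idea concrete, here is a toy sketch of such a pre-generated assignment, with made-up counts and a simple round-robin deal; the real generator also balances models, emotions, and gazes within each participant, so don't take this as our actual script.

    import random

    EMOTIONS = ["happy", "angry", "sad", "contemptuous",
                "disgusted", "neutral", "fearful", "surprised"]
    GAZES = ["left", "frontal", "right"]
    MODELS = range(1, 58)  # hypothetical model IDs

    def make_sequences(n_participants, images_per_participant, seed=1):
        """Deal all images round-robin, so each is rated about equally often."""
        rng = random.Random(seed)  # fixed seed: sequences are reproducible
        images = [(m, e, g) for m in MODELS for e in EMOTIONS for g in GAZES]
        rng.shuffle(images)
        sequences = []
        for p in range(n_participants):
            start = (p * images_per_participant) % len(images)
            seq = [images[(start + i) % len(images)]
                   for i in range(images_per_participant)]
            rng.shuffle(seq)  # randomize presentation order per participant
            sequences.append(seq)
        return sequences

Generating everything up front keeps the presentation script simple: given a participant number, it just replays the stored sequence.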

On Monday we start data collection. After that, we will analyze the ratings and create the infrastructure to distribute the set. We'll keep you posted.

Ron

Tuesday, August 12, 2008

Update: cropping and color balancing

Just a short update. We have now automated the cropping and color balancing of all final centered frontal images. We are in the process of applying this to the high-quality pictures in a lossless format. After this we need to replace the backgrounds with pure white. That will give us the pictures in their definitive form, ready to be validated in September.
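As an aside, for readers curious what automated color balancing can look like in principle: below is a minimal gray-world sketch in Python/numpy, for an HxWx3 uint8 image. To be clear, this is not the code we use (our pipeline runs through the tools mentioned elsewhere on this blog); it only illustrates the general idea of computing per-channel gains automatically.

    import numpy as np

    def gray_world_balance(img):
        """Scale each RGB channel so its mean matches the overall mean."""
        img = img.astype(np.float64)
        channel_means = img.reshape(-1, 3).mean(axis=0)  # mean of R, G, B
        gain = channel_means.mean() / channel_means
        return np.clip(img * gain, 0, 255).astype(np.uint8)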

In parallel we will process the four other angles, which do not need to be validated by participants.

We will have news on the availability of RaFD for BSI researchers and researchers from other institutions very soon. Stay tuned.

Thursday, July 31, 2008

Creation of RaFD presented at the 12th European Conference on Facial Expression

Job van der Schalk explains the creation of the RaFD to conference attendees

Skyler Hawk and Job van der Schalk just presented the creation of the Radboud Faces Database at the 12th European Conference on Facial Expression in Geneva, Switzerland. It was a really exciting event, and thanks to all the attendees for showing their interest!

Tuesday, July 29, 2008

Small comment: Lightroom and overly intelligent cameras

Dear all,

Just a small sidestep on current work: we are busy defining the cropping parameters for all images in Lightroom, in order to finally implement the optimal alignment parameters mentioned in the previous post.

In doing so, I stumbled upon an interesting issue: when you specify in the Lightroom files that you want to crop, e.g., the top of the image, what actually gets cropped can differ from image to image. Digging a bit deeper, I found a "Camera Orientation" setting in the settings files, and a page about intelligent camera orientation sensors on the web.

So cameras nowadays know how they were oriented when a photo was taken. And obviously Lightroom reads this orientation parameter to present all pictures upright. Nevertheless, it still defines 'top' for cropping relative to the top of the camera housing. Now, is that clever, or what?
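To make the pitfall concrete: digital cameras store this orientation in the EXIF "Orientation" tag (tag 274), and viewers rotate the image on the fly while the pixel data stays in sensor coordinates. Below is a small Python sketch using Pillow (not our Lightroom workflow, just an illustration, with the function name ours) that bakes the rotation into the pixels, so that 'top' afterwards really is the visual top.

    from PIL import Image

    ORIENTATION_TAG = 274  # standard EXIF tag for camera orientation

    def upright(path):
        """Rotate the pixel data so that the visual top is the array top."""
        img = Image.open(path)
        exif = img._getexif() or {}  # EXIF dict (JPEGs; may be None)
        orientation = exif.get(ORIENTATION_TAG, 1)
        if orientation == 3:
            img = img.rotate(180, expand=True)
        elif orientation == 6:
            img = img.rotate(270, expand=True)  # needs 90 deg clockwise
        elif orientation == 8:
            img = img.rotate(90, expand=True)   # needs 90 deg counterclockwise
        return img  # cropping the 'top' now crops the visual top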

Thursday, July 24, 2008

About aligning landmarks of the RaFD faces

Dear all,

In this post I will briefly describe how we solved the problem of aligning the landmarks of all face images for the RaFD. Making a database with really good alignment of eyes, nose, etc. seemed essential to us, as for our own research we regularly had to align the images of other databases by hand.

The problem
Even though we paid close attention to the positioning of our models during the photoshoot, the images throughout the session show considerable variation in positioning:


The solution
As you can imagine, manually aligning 17,000+ pictures is not really an option (and not very objective either). So we opted for an automated procedure and came up with a rather simple solution. We use Matlab's general-purpose nonlinear optimization function fminsearch to maximize the correlation between a template and a target image. The optimization varies two translational parameters (x- and y-direction) plus one rotational parameter. At each iteration, the correlation between template and target is computed over a specific area of the images; fminsearch then varies the parameters until a convergence criterion is met. A sketch of this procedure follows below.
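For the technically inclined, here is a minimal sketch of this idea, transposed from our Matlab code into Python with scipy, whose fmin implements the same Nelder-Mead simplex algorithm as fminsearch. Treat it as an illustration, not our actual code: images are assumed to be 2-D grayscale arrays, region a boolean mask over the pixels used for matching, and all names are ours.

    import numpy as np
    from scipy.optimize import fmin       # Nelder-Mead, like fminsearch
    from scipy.ndimage import rotate, shift

    def negative_correlation(params, template, target, region):
        """Cost: minus the correlation between template and moved target."""
        dx, dy, angle = params
        moved = shift(rotate(target, angle, reshape=False), (dy, dx))
        return -np.corrcoef(template[region].ravel(),
                            moved[region].ravel())[0, 1]

    def align(template, target, region):
        """Find the translation and rotation that best match the template."""
        start = np.zeros(3)  # begin at the identity transform
        return fmin(negative_correlation, start,
                    args=(template, target, region))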

To see it working, look at this:

Wednesday, July 23, 2008

Creating standardized expressions for the RaFD

Target Face Action Units

There are lots of ways one can express the same emotion through the face, and there are, of course, huge individual differences based on physiology, prior personal experience, and culture, to name just a few factors. However, one of the main goals of the RaFD is to produce a standardized set of facial expressions in which people use more or less the same facial muscles to convey the emotion. This has obvious benefits for psychological research: in facial mimicry studies, for example, this standardization means you can be sure that participants are mimicking something that is actually there in every picture shown to them. To my knowledge, no other existing database of static images has attempted to provide standardized expressions for such a large number of models.

When developing the faces used for the RaFD, Job van der Schalk and I based our target expressions on prototypes defined by Paul Ekman and Wallace Friesen, and attempted to elicit these expressions through a variation of the Directed Facial Action Task (DFAT) [1]. Our knowledge of the Facial Action Coding System (FACS) was invaluable to this process.

Below are the target codes (in terms of FACS Action Units) that we attempted to elicit from the models. AUs in boldface are "core AUs" that we considered absolutely essential to capture before moving on to the next emotion; these were the main requisites for selecting the pictures used in pilot tests.

Anger: 4CDE + 5CDE + 7 + 17 + 23/24
Contempt: Unilateral 14
Disgust: 9 + 10 + 25
Fear: 1 + 2 + 4 + 5DE + 20 + 25
Happiness: 6 + 12CDE + 25
Sadness: 1 + 4 + 15ABC + 17
Surprise: 1 + 2 + 5AB + 26

References

1. Ekman, P. (2007). The directed facial action task. In J. A. Coan & J. J. B. Allen (Eds.), Handbook of Emotion Elicitation and Assessment (pp. 47-53). Oxford University Press.

Coaching Models in the Photo Session

We typically had about 45 minutes to collect all of the desired photos from each model. This amounted to about 6 minutes per emotion, on average. Some poses were accomplished relatively quickly (such as joy), while more complex expressions (i.e., fear and sadness) sometimes took up as much as half of the total session time. There were at least three individuals working with the models during the shoots: 1) a FACS trainer, who coached models on the AUs and took photos with a remote control when the expression was satisfactory; 2) a posture checker, sitting in front of 5 computer monitors, making sure that the model's head and body were correctly in frame from every angle; and 3) a posture coach, standing behind the FACS trainer, who relayed information from the posture checker to the model. This allowed the FACS trainer to concentrate solely on the face without also worrying about giving posture instructions.

Posture Checker (Oliver Langner) and FACS Trainer (Skyler Hawk)
Photograph by Bert Meelberg

FACS Trainer (Skyler Hawk) and Posture Coach (Gijs Bijlstra)
Photograph by Bert Meelberg

Job van der Schalk and I have shared interests and backgrounds in the dramatic arts, including acting and directing, which were essential to our jobs as FACS coaches. Not only did we have to elicit the correct AUs from the models, but it was important to stay energetic, give constructive feedback, and keep them in good spirits. It's important to note that this whole process could be quite exhausting for the models, both physically and mentally. We tried to keep things going smoothly by giving short breaks when needed. When models had trouble making certain AU combinations during the shoot, we provided further on-site instruction with the use of a hand-held mirror.

Sometimes, the exhaustion that can come with exercising the facial muscles in this way required us to move on to another expression that used completely different sets of AUs, coming back to the former expression as time allowed. This was especially true if models didn't already use these AUs in their own expressions of certain emotions. We found that 1+4 and 1+2+4 combinations required the most sustained effort, although lower-face AUs such as 15 and 20 also tended to "fade out" without continued feedback to keep things at the desired strength.

Training Models

Model Training and Rehearsal

Models typically received a photo training manual at least 24 hours in advance. This manual contained both written instructions and sample pictures drawn from two existing databases (the JACFEE [1] and Karolinska [2] sets). We requested that models practice for at least an hour prior to their appointment, whenever possible.

Before beginning the actual photo session, we made sure that models were able to make the requested expressions by leading them through warm-up and refinement exercises. This rehearsal session typically lasted 30 minutes. It allowed us to assess the initial capabilities of the models and to get an idea of which expressions would need more time and effort during the photo shoot. It also allowed us to try several different "tricks" that we developed, beyond the DFAT instructions, to elicit the target AUs. Some of these tricks worked better than others, but which ones were best really depended on the individual. Some people responded better to imagery-based tasks, for example, while others were more helped by physiology-based instruction. If you have further comments/questions about our coaching/training procedures, be sure to add a comment.

References

1. Biehl, M., Matsumoto, D., Ekman, P., Hearn, V., Heider, K., Kudoh, T., & Ton, V. (1997). Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE): Reliability data and cross-national differences. Journal of Nonverbal Behavior, 21, 3-21.

2. Goeleven, E., De Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska Directed Emotional Faces: A validation study. Cognition & Emotion, 22, 1094-1118.

Tuesday, July 22, 2008

Short description of the database set

Here I'll just give a short overview of the RaFD image set.

In total, the set currently contains 72 models:
  • 21 Dutch male adults
  • 21 Dutch female adults
  • 5 Dutch male kids
  • 6 Dutch female kids
  • 19 Moroccan male adults
Each model was trained by a FACS coder to show each of the following emotional expressions:
  • happy
  • angry
  • sad
  • contemptuous
  • disgusted
  • neutral
  • fearful
  • surprised
Each emotion was shown with three different gaze directions (without changing head orientation):
  • looking left
  • looking frontal
  • looking right
Further, each picture was taken from five different camera angles simultaneously:


Just to give you an impression of the scope of the database:
  • we took each photo several times, amounting to a total of more than 50,000 images
  • as each image has a resolution between 10 and 15 Mpx, all images amount to nearly 1 TB of data
  • after selecting the two best pictures for each emotion, gaze direction, and model, the final dataset will still contain 17,520 images
So I hope this gives a good impression of what RaFD contains.

Monday, July 21, 2008

Hello World!

The development of RaFD is financed by the Behavioural Science Institute of the Radboud University Nijmegen. As a start, I'll mention the team involved in developing the Radboud Faces Database (alphabetical order):
We would like to acknowledge Job van der Schalk for his work as FACS coder during the photoshoot and Bert Meelberg as photographer. We further thank Vasily Klucharev for helping to choose the best emotional expressions.

If you would like more information about the database, you can send an email to facedb@bsi.ru.nl. And of course check out this blog regularly. We'll report soon on the current status of RaFD and the expected release date.