Tuesday, September 23, 2008

Validation of RaFD running

Dear all,

I'd like to write briefly about the validation we're currently running on the RaFD images. The fact that we're already validating implies that we have finished the rough selection of images. The rough selection was done by two FACS coders, Skyler Hawk and Vasily Klucharev. After viewing all frontal pictures, the FACS coders chose, for each model, gaze direction, and emotion, the two images with the most prototypical emotional expression. Our reduced set thus contains two images per model, emotion, gaze direction, and camera angle, summing to a total of 17,000+ images.

In the validation study we collect ratings on the 3,400+ frontal images. The data are gathered in the lab of the Behavioural Science Institute. In total, about 380 participants are rating subsets of the total image set on the following dimensions:
  • attractiveness, on a 5-point scale
  • emotion prototype, forced-choice between 9 alternatives (neutral, angry, disgusted, contemptuous, fearful, happy, surprised, sad, other)
  • intensity of emotion, on a 5-point scale
  • clarity of emotion, on a 5-point scale
  • genuineness of emotion, on a 5-point scale
  • valence of emotion, on a 5-point scale
For each image in the set, we plan to collect 20 ratings from different participants on each of the dimensions. This short post is just to let you know what you can expect from the validation of the image set.
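For those curious about the workload these numbers imply, here is a quick back-of-envelope calculation in Python. The figures come straight from this post; the assumption that the load divides evenly across participants is ours, for illustration only:

```python
# Back-of-envelope check of the rating workload.
# Numbers are taken from the post; the even split across
# participants is an illustrative assumption.
n_images = 3400          # frontal images to validate (at least)
ratings_per_image = 20   # target ratings per image, per dimension
n_participants = 380     # approximate participant pool

total_ratings = n_images * ratings_per_image        # per dimension
images_per_participant = total_ratings / n_participants

print(total_ratings)                  # 68000 rating events per dimension
print(round(images_per_participant))  # roughly 179 images per participant
```

So each participant ends up rating a subset of a couple of hundred images, which is why the subsets have to be distributed carefully (more on that in the post below from September 4).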

Oliver

Thursday, September 4, 2008

Programming the validation

We received some emails from blog visitors asking which programs we use for the development of this set. So far we have used Adobe Lightroom (which, in the end, we didn't need at all), the GIMP, Matlab, and UFRaw.

For the validation we had the choice between several presentation programs, among them Inquisit, E-Prime, Presentation, and the Matlab Psychtoolbox. We actually decided to go with PsychoPy, a presentation library for Python, which makes the validation program platform independent and open source. Convenient when you develop on OS X or Linux but need to run on Windows XP.

We needed some intelligent way of distributing the images across participants, so that everybody rates the same number of models, with the same number of different emotions and eye gazes. We therefore first wrote a sequence generator in Python that generates, before the validation even starts, the sequence of stimuli each participant will see. Our PsychoPy script then reads these sequences and shows the appropriate images according to the participant number. We had to program the functions for showing Likert scales and open-ended questions ourselves, but hey, Python is fun.
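To give an idea of what such a generator can look like: below is a minimal sketch in the same spirit. All names, the toy set sizes, and the Latin-square-style rotation over emotions are our illustrative assumptions, not the actual RaFD script:

```python
import random

# Toy stimulus set (the real set is much larger, of course).
MODELS = ["model%02d" % m for m in range(1, 5)]
EMOTIONS = ["neutral", "angry", "happy", "sad"]
GAZES = ["left", "frontal", "right"]

def generate_sequence(participant_id, rng_seed=0):
    """Return a shuffled list of (model, emotion, gaze) trials.

    Rotating the emotion list by participant number (a Latin-square-style
    scheme) ensures that, across participants, each model is seen with
    each emotion equally often.
    """
    rng = random.Random(rng_seed + participant_id)  # reproducible per participant
    trials = []
    for i, model in enumerate(MODELS):
        emotion = EMOTIONS[(participant_id + i) % len(EMOTIONS)]
        for gaze in GAZES:
            trials.append((model, emotion, gaze))
    rng.shuffle(trials)  # randomize presentation order within the session
    return trials

# One precomputed sequence per participant; the display script
# would later read these and present the matching image files.
seq = generate_sequence(participant_id=7)
```

Precomputing the sequences, rather than randomizing at runtime, has the nice side effect that a crashed session can be restarted with exactly the same trial list.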

On Monday we start data collection. After that, we'll analyze the ratings and create the infrastructure to distribute the set. We'll keep you posted.

Ron