Read about the latest research published in Psychological Science, a journal of the Association for Psychological Science.
Erik Van der Burg, Ed Awh, and Christian N. L. Olivers
Recent research has suggested that only three to four visual events can be processed at a time, but does this processing limit also apply to audiovisual events? Participants viewed black-and-white discs placed in a circle around a fixation point. A randomly determined number of the discs then reversed color, and the reversal was accompanied by an auditory tone. A red circle then appeared on one of the discs, and participants had to indicate whether that target disc had reversed color. The researchers found that task performance dropped dramatically when more than one visual event (a color reversal) was paired with the sound. A model of the results indicated that only one visual event could be reliably linked to a sound at a time, suggesting that audiovisual orienting has a different capacity limit than visual selection.
Frederick Verbruggen, Christopher D. Chambers, and Gordon D. Logan
The stop-signal task measures an individual’s ability to inhibit a response. Researchers are often interested in the covert latency of the stop process (the stop-signal reaction time, or SSRT), which can be estimated using either the integration method or the mean method. The two methods have generally been assumed to be equally reliable. In this study, the researchers simulated stop-signal-task data, creating reaction-time distributions with a positive skew and distributions reflecting gradual slowing of responses over the course of the task. They found that the mean method overestimated SSRT in positively skewed distributions and underestimated SSRT when responses slowed, whereas the integration method remained fairly accurate in both cases. The authors suggest that the mean method be abandoned in favor of the integration method, which appears to be more accurate under these conditions.
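As a rough illustration of the two estimators being compared, here is a minimal sketch in Python. The function names and inputs are illustrative, not from the study: the mean method subtracts the mean stop-signal delay (SSD) from the mean go reaction time, while the integration method instead takes the go RT at the quantile corresponding to the probability of responding on stop trials, then subtracts the mean SSD.

```python
import numpy as np

def ssrt_mean(go_rts, mean_ssd):
    """Mean method: SSRT = mean go RT - mean stop-signal delay."""
    return np.mean(go_rts) - mean_ssd

def ssrt_integration(go_rts, p_respond, mean_ssd):
    """Integration method: take the go RT at the quantile equal to
    p(respond | stop signal), then subtract the mean stop-signal delay."""
    sorted_rts = np.sort(go_rts)
    # nth fastest go RT, where n = p_respond * number of go trials
    n = int(np.ceil(p_respond * len(sorted_rts)))
    nth_rt = sorted_rts[max(n - 1, 0)]
    return nth_rt - mean_ssd
```

Because the integration method reads a point from the observed RT distribution rather than relying on its mean, it is less sensitive to skew and gradual slowing, which is consistent with the pattern the simulations revealed.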
Lynne M. Reder, Lindsay W. Victoria, Anna Manelis, Joyce M. Oates, Janine M. Dutcher, Jordan T. Bates, Shaun Cook, Howard A. Aizenstein, Joseph Quinlan, and Ferenc Gyulai
Why do we have better memory for the known than for the unknown? Participants took part in an encoding session in which they were shown photographs of famous or unknown people superimposed over pictures of well-known locations. Some locations were paired with 12 faces (high fan) and others with only 3 faces (low fan). Participants’ memory for the previously presented faces was then tested. Participants were better at recognizing famous faces — but not unknown faces — when they appeared at test against the same background as at encoding, and this advantage was greatest for faces originally paired with low-fan backgrounds. The results suggest that it is easier to associate context with faces that have a pre-existing long-term memory representation than with faces that do not.