TNA project : Study of audiovisual integration in virtual reality



Acronym : 28-Study of audiovisual integration in virtual reality-Ágoston

Project Lead : Török Ágoston From : Doctoral School of Psychology, Eötvös Loránd University; Institute of Cognitive Neuroscience and Psychology, RCNS HAS, Research Group for Developmental Psychophysiology

Dates : from 3rd September 2012 to 21st September 2012

Description :



Motivation and objectives :
Motivation: Our research group is interested in the cognitive dynamics of audiovisual integration. We would like to extend our research to VR environments, since they are both realistic and easily manipulated for experimental purposes. To gain access to high-level visualization facilities and to design state-of-the-art experiments, we would like to collaborate with researchers in the field of VR technologies, and we found VISIONAIR to be a great opportunity to do so. Our experiments could give a better understanding of multimodal integration and acoustic space perception in realistic VR situations.

Objectives: Virtual reality environments nowadays are highly realistic, but research has mainly focused on visualization. The user experience could be greatly enhanced, however, by multimodal environments which contain audio information in addition to visual information. Moreover, using real and virtual audio sources simultaneously would create a whole new level of augmented reality. Until now, realistic audio simulation has been delivered through binaural headphones placed in the ears, which is uncomfortable, difficult to use and cannot be mixed with real sound sources. In our research we seek to answer the question of how virtual sound sources could be created with surround systems such that the brain cannot distinguish them from real ones. This way we could create an audio installation that is ecologically valid, comfortable, and opens new areas for navigation and multimodal integration research.

Nevertheless, the study of audiovisual integration is highly relevant to visualization as well, because in VR environments we experience a paradoxical situation. On the one hand, no matter how realistic the scene is, we know it is virtual; on the other hand, even modest virtual scenes are capable of evoking genuine feelings, e.g. vertigo induced by simply rocking the horizon.
It is still an open question at which level of cognitive processing we perceive virtual reality as real, and at which level our brain knows that it is not. Audiovisual integration is believed to operate at a perceptual level, thus phenomena such as ventriloquism could give us better insight into the extent to which virtual reality is reality, and the extent to which it is virtual.

Teams :
The Research Group of Developmental Psychophysiology was founded by Valéria Csépe in January 2000. Our main research method is using EEG to measure event-related brain potentials (ERPs), i.e. the time-locked, synchronized electrical activity of the human brain elicited by various stimuli. Our primary research topics include the processing of auditory events, language processing, reading and number processing. These topics are studied in both adults and infants, and several other methods of cognitive psychophysiology, experimental and cognitive psychology, and cognitive neuropsychology are applied.

Dates :
starting date : 03 September, 2012
ending date : 21 September, 2012

Facilities descriptions :
http://visionair-browser.g-scop.grenoble-inp.fr/visionair/Browser/Catalogs/CRVM.FR.html

Recordings & Results :
In our study the question to be answered is: does the brain differentiate between real and virtual sound sources in an audiovisual experimental situation? We would like to design event-related potential (ERP) experiments where either (1) spatially located sounds are presented alone or (2) sounds and visual stimuli are presented together. Sounds are delivered through real sound sources (loudspeakers) and visual stimuli appear where the speakers are. During the experiment the speakers sometimes remain silent and the sounds are instead delivered through a surround system simulating the real sound sources (deviants). With the ERP method we can show whether the cognitive system detects the deviant stimuli or not. Before recording brain activity we have to run pilot studies to see how the surround system should be adjusted, and to see whether this phenomenon is reflected behaviourally in a VR environment.
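The standard/deviant logic described above is an oddball design, and the trial sequence for such a design can be sketched as follows. This is only an illustration: the trial count, the 15% deviant rate and the no-consecutive-deviants constraint are our assumptions for the sketch, not parameters reported in the project.

```python
import random

def make_oddball_sequence(n_trials=300, p_deviant=0.15, seed=0):
    """Pseudo-random trial sequence for an oddball design.

    'standard' = sound delivered by a real loudspeaker;
    'deviant'  = the same location simulated by the surround system.
    The block starts with a standard and no two deviants occur in a
    row, so each deviant is preceded by at least one standard.
    """
    rng = random.Random(seed)
    n_deviants = int(n_trials * p_deviant)
    seq = ['standard'] * n_trials
    # Candidate deviant slots: skip trial 0 so the block opens with a standard.
    positions = list(range(1, n_trials))
    rng.shuffle(positions)
    chosen = set()
    for pos in positions:
        if len(chosen) == n_deviants:
            break
        # Accept the slot only if its neighbours are not already deviants.
        if (pos - 1) not in chosen and (pos + 1) not in chosen:
            chosen.add(pos)
    for pos in chosen:
        seq[pos] = 'deviant'
    return seq

seq = make_oddball_sequence()
print(seq.count('deviant'))  # 45 deviants out of 300 trials
```

Fixing the random seed makes the sequence reproducible across pilot sessions, which is useful when the same stimulus order has to be replayed for different participants or calibration runs.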

Conclusions :
We designed two auditory-visual cross-modal (ventriloquism) experiments, where noise bursts and light blobs were presented synchronously but with spatial offsets. We presented sounds in two ways: using free-field sounds and using a stereo speaker set. Participants were asked to localize the direction of the sound sources. In the first experiment visual stimuli were displaced vertically relative to the sounds; in the second experiment we used horizontal offsets. We found that in both experiments sounds were mislocalized in the direction of the visual stimuli in each condition (ventriloquism effect), but this effect was more consistent across sound positions and was stronger when visual stimuli were displaced vertically. Moreover, we found that the ventriloquism effect is strongest for centrally presented sounds. The analyses revealed only a slight variation between the different sound presentation modes. We interpret our results from the viewpoint of multimodal interface design. These findings draw attention to the importance of the cognitive features of multimodal perception in the design of virtual-environment setups, and may help to open new ways to more realistic surround-based multimodal virtual reality simulations.
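The mislocalization shift reported above is commonly quantified as the fraction of the audio-visual spatial offset captured by the localization responses. A minimal sketch of that measure follows; the function name and the example numbers are hypothetical, not data from these experiments.

```python
def ventriloquism_bias(responses, sound_pos, visual_pos):
    """Fraction of the audio-visual offset captured by localization.

    responses  -- perceived sound positions (e.g. degrees of azimuth)
    sound_pos  -- true position of the sound source
    visual_pos -- position of the visual stimulus

    Returns 0.0 for no visual capture and 1.0 when responses sit, on
    average, exactly at the visual stimulus.
    """
    if visual_pos == sound_pos:
        raise ValueError("no audio-visual offset: bias is undefined")
    mean_response = sum(responses) / len(responses)
    return (mean_response - sound_pos) / (visual_pos - sound_pos)

# Hypothetical trial: sound at 0 deg, light blob at +10 deg,
# three localization responses shifted toward the blob.
print(ventriloquism_bias([3.0, 5.0, 4.0], 0.0, 10.0))  # 0.4
```

Computing this bias separately per sound position and per presentation mode (free field vs. stereo) is what allows the comparisons described above, e.g. a stronger effect for centrally presented sounds.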




Few images :

SAIVR_Overview.JPG SAIVR_Passation.JPG



Visionair logo

VISIONAIR / Grenoble INP / 46 avenue Felix Viallet / F-38 031 Grenoble cedex 1 / FRANCE
Project funded by the European Commission under grant agreement 262044