Venkataraman Sundareswaran
Rockwell Automation
Publication
Featured research published by Venkataraman Sundareswaran.
Journal of the Acoustical Society of America | 2006
Pavel Zahorik; Philbert Bangayan; Venkataraman Sundareswaran; Kenneth Wang; Clement Tam
The efficacy of a sound localization training procedure that provided listeners with auditory, visual, and proprioceptive/vestibular feedback as to the correct sound-source position was evaluated using a virtual auditory display that used nonindividualized head-related transfer functions (HRTFs). Under these degraded stimulus conditions, in which the monaural spectral cues to sound-source direction were inappropriate, localization accuracy was initially poor with frequent front-back reversals (source localized to the incorrect front-back hemifield) for five of six listeners. Short periods of training (two 30-min sessions) were found to significantly reduce the rate of front-back reversal responses for four of five listeners that showed high initial reversal rates. Reversal rates remained unchanged for all listeners in a control group that did not participate in the training procedure. Because analyses of the HRTFs used in the display demonstrated a simple and robust front-back cue related to energy in the 3-7-kHz bandwidth, it is suggested that the reductions observed in reversal rates following the training procedure resulted from improved processing of this front-back cue, which is perhaps a form of rapid perceptual recalibration. Reversal rate reductions were found to generalize to untrained source locations, and persisted at least 4 months following the training procedure.
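The front-back cue the authors point to is simply the energy an HRTF carries in the 3-7 kHz band. As a rough illustration only (not the paper's analysis pipeline), the sketch below computes such a band-energy metric from an HRTF impulse response with NumPy; the impulse responses, sample rate, and the idea of differencing front and back energies are placeholder assumptions.

```python
# Sketch: band-energy metric for an HRTF, assuming a 44.1 kHz impulse response.
# The impulse responses here are synthetic placeholders, not data from the study.
import numpy as np

def band_energy_db(hrir, fs, f_lo=3000.0, f_hi=7000.0):
    """Return the energy (in dB) of an HRTF magnitude response within [f_lo, f_hi]."""
    spectrum = np.fft.rfft(hrir)
    freqs = np.fft.rfftfreq(len(hrir), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    energy = np.sum(np.abs(spectrum[band]) ** 2)
    return 10.0 * np.log10(energy + 1e-12)

if __name__ == "__main__":
    fs = 44100
    rng = np.random.default_rng(0)
    # Placeholder impulse responses standing in for front and back HRTFs.
    hrir_front = rng.standard_normal(256) * np.exp(-np.arange(256) / 32.0)
    hrir_back = rng.standard_normal(256) * np.exp(-np.arange(256) / 32.0)
    # A hypothetical front-back discriminant: the difference in 3-7 kHz energy.
    cue = band_energy_db(hrir_front, fs) - band_energy_db(hrir_back, fs)
    print(f"3-7 kHz energy difference (front minus back): {cue:.2f} dB")
```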
international symposium on mixed and augmented reality | 2003
Venkataraman Sundareswaran; Kenneth Wang; Steven Chen; Reinhold Behringer; Joshua McGee; Clement Tam; Pavel Zahorik
Augmented reality (AR) presentations may be visual or auditory. Auditory presentation has the potential to provide hands-free and visually non-obstructing cues. Recently, we have developed a 3D audio wearable system that can be used to provide alerts and informational cues to a mobile user in such a manner as to appear to emanate from specific locations in the user's environment. In order to study registration errors in 3D audio AR presentations, we conducted a perceptual training experiment in which visual and auditory cues were presented to observers. The results of this experiment suggest that perceived registration errors may be reduced through head movement and through training presentations that include both visual and auditory cues.
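The abstract describes rendering alerts so that they appear to emanate from specific locations around the user. The system itself used HRTF-based 3D audio, whose filters are not given here; as a stand-in, the sketch below spatializes a mono alert tone with crude interaural time and level differences. The head radius, gains, and tone are illustrative assumptions.

```python
# Sketch: crude azimuth spatialization via interaural time and level differences.
# This is an illustrative approximation, not the HRTF-based rendering used in the system.
import numpy as np

def spatialize(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Return a stereo signal with simple ITD/ILD cues for the given azimuth.

    Positive azimuth places the source to the listener's right.
    """
    az = np.radians(azimuth_deg)
    itd = head_radius / c * (abs(az) + np.sin(abs(az)))  # Woodworth ITD approximation
    delay_samples = int(round(itd * fs))
    # Simple level difference: attenuate the far ear by up to ~6 dB.
    near_gain, far_gain = 1.0, 10 ** (-6.0 * abs(np.sin(az)) / 20.0)
    delayed = np.concatenate([np.zeros(delay_samples), mono])
    padded = np.concatenate([mono, np.zeros(delay_samples)])
    if azimuth_deg >= 0:   # source on the right: left ear is delayed and attenuated
        left, right = far_gain * delayed, near_gain * padded
    else:
        left, right = near_gain * padded, far_gain * delayed
    return np.stack([left, right], axis=1)

if __name__ == "__main__":
    fs = 44100
    t = np.arange(int(0.5 * fs)) / fs
    beep = 0.5 * np.sin(2 * np.pi * 1000 * t)       # 1 kHz alert tone
    stereo = spatialize(beep, fs, azimuth_deg=60)    # place the alert to the right
    print(stereo.shape)
```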
international conference on multimedia computing and systems | 1999
Reinhold Behringer; Steven Chen; Venkataraman Sundareswaran; Kenneth Wang; Marius S. Vassiliou
Routine maintenance and error diagnostics of technical devices can be greatly enhanced by applying multimedia technology. The Rockwell Science Center is developing a system that can display maintenance instructions or diagnostic results for a device directly in the view of the user by utilizing augmented reality and multimedia techniques. The system can overlay 3D rendered objects, animations, and text annotations onto the live video image of a known object, captured by a movable camera. The status of device components can be queried by the user through a speech recognition system. The response is given as an animation of the relevant device module, overlaid onto the real object in the user's view, and/or as auditory cues using spatialized 3D audio. The position of the user/camera relative to the device is tracked by a computer-vision-based tracking system. The diagnostics system also allows the user to leave spoken annotations attached to device modules for other users to retrieve. The system is implemented on a distributed network of PCs, utilizing standard commercial off-the-shelf (COTS) components.
Computers & Graphics | 1999
Reinhold Behringer; Steven Chen; Venkataraman Sundareswaran; Kenneth Wang; Marius S. Vassiliou
Augmented reality (AR), combining virtual environments with the perception of the real world, can be used to provide instructions for routine maintenance and error diagnostics of technical devices. The Rockwell Science Center (RSC) is developing a system that utilizes AR techniques to provide “X-ray vision” into real objects. The system can overlay 3D rendered objects, animations, and text annotations onto the video image of a known object. An automated speech recognition system allows the user to query the status of device components. The response is given as an animated rendition of a CAD model and/or as auditory cues using 3D audio. This diagnostics system also allows the user to leave spoken annotations attached to device modules as ASCII text. The position of the user/camera relative to the device is tracked by a computer-vision-based tracking system using fiducial markers. The system is implemented on a distributed network of PCs, utilizing standard commercial off-the-shelf components (COTS).
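The tracking component relies on fiducial markers to recover the camera pose relative to the device. The original system predates today's marker libraries, so the sketch below only illustrates the general idea using OpenCV's ArUco module (4.7+ API, or opencv-contrib-python) and a planar pose solve; the marker size and camera intrinsics are placeholder values.

```python
# Sketch: fiducial-marker pose tracking with OpenCV's ArUco module (4.7+ API).
# Illustrative only; the original system did not use ArUco markers.
import numpy as np
import cv2

MARKER_SIZE = 0.05  # marker side length in metres (assumed)

# Placeholder intrinsics; a real system would use calibrated values.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def marker_pose(frame):
    """Return (rvec, tvec) of the first detected marker, or None if none is found."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None or len(corners) == 0:
        return None
    # 3D corner coordinates of the marker in its own frame (top-left first).
    half = MARKER_SIZE / 2.0
    obj_points = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(obj_points, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
    print(marker_pose(frame))  # blank frame contains no marker -> None
```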
Enhanced and synthetic vision 2000. Conference | 2000
Reinhold Behringer; Clement Tam; Joshua McGee; Venkataraman Sundareswaran; Marius S. Vassiliou
Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that convey vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to it. Such Augmented Reality (AR) techniques can be employed in bad-weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions that would normally require Instrument Flight Rules (IFR). These systems could readily be implemented on head-up displays (HUDs). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images that are overlaid with registered information; orientation of the camera is obtained from an inclinometer and a magnetometer, and position is acquired from GPS. In a possible airborne implementation, the on-board attitude information can be used to obtain correct registration. If visibility is sufficient, computer vision modules can fine-tune the registration by matching visual cues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc-second digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (at ground level), the system has been implemented on a wearable computer.
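Registration here means projecting world-referenced symbology into the live video using the camera position from GPS and its orientation from the inclinometer and magnetometer. The sketch below shows one way such a projection could look with a simple pinhole model in a local east-north-up frame; the coordinates, focal length, and attitude values are illustrative assumptions, not taken from the demonstrator.

```python
# Sketch: registering world-referenced symbology onto a video frame, assuming the
# camera position comes from GPS (converted to a local east-north-up frame) and its
# orientation comes from a magnetometer (heading) and inclinometer (pitch, roll).
import numpy as np

def world_to_camera(heading, pitch, roll):
    """Rows (right, down, forward) of the world-to-camera rotation, ENU world frame.

    heading: radians from north toward east; pitch: positive nose-up;
    roll: positive rotates the camera about its forward axis.
    """
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    forward = np.array([sh * cp, ch * cp, sp])
    right = np.array([ch, -sh, 0.0])
    up = np.cross(right, forward)
    cr, sr = np.cos(roll), np.sin(roll)
    right_r = cr * right + sr * up
    up_r = -sr * right + cr * up
    return np.vstack([right_r, -up_r, forward])

def project(point_enu, cam_enu, heading, pitch, roll, f=800.0, cx=320.0, cy=240.0):
    """Project an ENU world point to pixel coordinates; None if behind the camera."""
    p_cam = world_to_camera(heading, pitch, roll) @ (point_enu - cam_enu)
    if p_cam[2] <= 0.0:
        return None
    return (f * p_cam[0] / p_cam[2] + cx, f * p_cam[1] / p_cam[2] + cy)

if __name__ == "__main__":
    runway_threshold = np.array([1500.0, 4000.0, 0.0])   # hypothetical ENU point (m)
    camera_position = np.array([0.0, 0.0, 300.0])        # aircraft position from GPS
    print(project(runway_threshold, camera_position,
                  heading=np.radians(20.0), pitch=np.radians(-4.0), roll=0.0))
```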
Journal of the Acoustical Society of America | 2001
Pavel Zahorik; Clement Tam; Kenneth Wang; Philbert Bangayan; Venkataraman Sundareswaran
Current low‐cost 3‐D sound displays do not use individualized head‐related transfer functions (HRTFs) to render acoustic space. As a result, sound source localization accuracy is often degraded when compared to the accuracy using real sources, or to higher quality displays using individualized HRTFs. Here, a way to improve accuracy was examined in which listeners were provided with paired auditory and visual feedback as to the correct sound source location. Sound localization accuracy was assessed for six listeners, using a large number of virtual sound sources sampled from a spherical grid surrounding the listener, before, during, and after feedback training. Feedback training markedly improved localization accuracy compared to a control group of five listeners that did not receive training. The largest improvements in accuracy resulted from listeners’ enhanced abilities to distinguish sources in front from sources behind. Further, these improvements were not transient short‐term effects, but lasted at l...
IFAC Proceedings Volumes | 2004
Reinhold Behringer; B. Gregory; Venkataraman Sundareswaran; R. Addison; R. Elsley; W. Guthmiller; J. de Marchi; R. Daily; David M. Bevly; C. Reinhart
The Sci Autonics vehicles in the DARPA Grand Challenge employ a combination of Lidar and Radar sensors for long look-ahead distances and a suite of ultrasonic and optical sensors for short-range obstacle detection. A pinhole camera is used to detect visual path boundaries. The vehicle is a 4-wheel-drive ruggedized all-terrain vehicle (ATV). A differential GPS in conjunction with inertial sensors provides input to the low-level vehicle control to keep the vehicle on course between a series of closely spaced waypoints.
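The low-level control task described is keeping the vehicle on course between closely spaced waypoints using the fused GPS/inertial estimate. The sketch below is a minimal illustration of that idea with a proportional heading controller and a waypoint-switching radius; the gains, radius, and waypoints are hypothetical, and this is not the team's actual controller.

```python
# Sketch: simple waypoint-following steering, assuming a fused GPS/INS estimate of
# position (east, north) and heading. Gains and thresholds are illustrative.
import math

WAYPOINT_RADIUS = 3.0   # metres: switch to the next waypoint when this close
STEER_GAIN = 1.2        # proportional gain on heading error
MAX_STEER = math.radians(25.0)

def steering_command(pos, heading, waypoints, idx):
    """Return (steer_angle, idx) steering toward waypoints[idx]; heading in radians from north."""
    east, north = pos
    we, wn = waypoints[idx]
    # Advance to the next waypoint once the current one is reached.
    if math.hypot(we - east, wn - north) < WAYPOINT_RADIUS and idx + 1 < len(waypoints):
        idx += 1
        we, wn = waypoints[idx]
    desired = math.atan2(we - east, wn - north)                        # bearing to waypoint
    error = (desired - heading + math.pi) % (2 * math.pi) - math.pi    # wrap to [-pi, pi]
    steer = max(-MAX_STEER, min(MAX_STEER, STEER_GAIN * error))
    return steer, idx

if __name__ == "__main__":
    route = [(10.0, 50.0), (40.0, 120.0), (90.0, 180.0)]   # hypothetical ENU waypoints
    steer, idx = steering_command(pos=(0.0, 0.0), heading=math.radians(5.0),
                                  waypoints=route, idx=0)
    print(f"steer {math.degrees(steer):.1f} deg toward waypoint {idx}")
```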
World Aviation Congress & Exposition | 1999
Venkataraman Sundareswaran; Reinhold Behringer; Steven Chen; Kenneth Wang
The human-computer interface (HCI) of applications for the maintenance of complex machinery such as aircraft can be enhanced by exploiting recent developments in HCI technology. We have developed a multimodal HCI demonstration system for maintenance applications, incorporating Augmented Reality (AR), speech recognition, and 3-dimensional audio technologies. The Augmented Reality interface is based on an original dynamic tracking approach that provides rapid updates of the scene with graphical overlays. We enhance this interface with speech recognition to control the system and to add annotations through dictation-based text entry. A combination of 3-D audio, graphic animations, and text displays is used to communicate information to the user.
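One way to picture the multimodal flow is a small dispatcher that routes recognized utterances to the AR overlay, the 3D audio cue, or the annotation store. The sketch below is a hypothetical illustration of that routing; the command grammar, handler names, and stubbed outputs are assumptions, since the demonstrator's actual interfaces are not described here.

```python
# Sketch: routing recognized speech commands to the system's output modalities.
# Speech recognition, AR rendering, and 3D audio are stubbed with print statements.
from dataclasses import dataclass, field

@dataclass
class MaintenanceHCI:
    annotations: dict = field(default_factory=dict)   # module name -> dictated text

    def show_overlay(self, module):
        print(f"[AR] highlighting '{module}' with a registered graphical overlay")

    def play_spatial_cue(self, module):
        print(f"[3D audio] playing a cue spatialized at the location of '{module}'")

    def annotate(self, module, text):
        self.annotations[module] = text
        print(f"[annotation] attached to '{module}': {text}")

    def handle_utterance(self, utterance):
        words = utterance.lower().split()
        if not words:
            return
        if words[:2] == ["show", "status"] and len(words) > 2:   # e.g. "show status pump"
            module = words[2]
            self.show_overlay(module)
            self.play_spatial_cue(module)
        elif words[0] == "annotate" and len(words) > 2:          # e.g. "annotate pump check seal"
            self.annotate(words[1], " ".join(words[2:]))
        else:
            print(f"[speech] unrecognized command: {utterance}")

if __name__ == "__main__":
    hci = MaintenanceHCI()
    hci.handle_utterance("show status pump")
    hci.handle_utterance("annotate pump check the seal next service")
```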
Archive | 2000
Venkataraman Sundareswaran; Reinhold Behringer
IWAR '98: Proceedings of the International Workshop on Augmented Reality: Placing Artificial Objects in Real Scenes | 1999
Venkataraman Sundareswaran; Reinhold Behringer