Publication


Featured research published by Adriana Olmos.


Perception | 2004

A Biologically Inspired Algorithm for the Recovery of Shading and Reflectance Images

Adriana Olmos; Frederick A. A. Kingdom

We present an algorithm for separating the shading and reflectance images of photographed natural scenes. The algorithm exploits the constraint that in natural scenes chromatic and luminance variations that are co-aligned mainly arise from changes in surface reflectance, whereas near-pure luminance variations mainly arise from shading and shadows. The novel aspect of the algorithm is the initial separation of the image into luminance and chromatic image planes that correspond to the luminance, red–green, and blue–yellow channels of the primate visual system. The red–green and blue–yellow image planes are analysed to provide a map of the changes in surface reflectance, which is then used to separate the reflectance from shading changes in both the luminance and chromatic image planes. The final reflectance image is obtained by reconstructing the chromatic and luminance-reflectance-change maps, while the shading image is obtained by subtracting the reconstructed luminance-reflectance image from the original luminance image. A number of image examples are included to illustrate the successes and limitations of the algorithm.
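
The co-alignment test at the heart of the algorithm lends itself to a compact illustration. The sketch below is a toy reconstruction of that single step, not the published algorithm: the opponent-plane weights, the gradient operator, and the chromatic threshold are all assumptions, and the reintegration of the edited gradient fields (e.g., via a Poisson solver) is omitted.

```python
import numpy as np

def reflectance_change_map(rgb, chroma_thresh=0.02):
    """Flag image gradients as reflectance changes where chromatic and
    luminance variation are co-aligned; near-pure luminance gradients
    are left to the shading image. Toy sketch only: the opponent-plane
    weights, gradient operator, and threshold are assumptions.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g) / 2.0        # luminance plane
    rg = r - g                 # red-green opponent plane
    by = (r + g) / 2.0 - b     # blue-yellow opponent plane

    def grad_mag(plane):
        gy, gx = np.gradient(plane)
        return np.hypot(gx, gy)

    chroma = np.maximum(grad_mag(rg), grad_mag(by))
    # Co-aligned chromatic and luminance variation -> reflectance change.
    return np.where(chroma > chroma_thresh, grad_mag(lum), 0.0)
```

In the full method, luminance gradients not flagged by this map are attributed to shading; reconstructing the edited planes yields the reflectance image, and subtracting its luminance from the original gives the shading image.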


Journal of Vision | 2016

Does spatial invariance result from insensitivity to change?

Frederick A. A. Kingdom; David J. Field; Adriana Olmos

One of the fundamental unanswered questions in visual science concerns how the visual system attains a high degree of invariance (e.g., position invariance, size invariance) while maintaining high selectivity. Although a variety of theories have been proposed, most are distinguished by the degree to which information is maintained or discarded. To test whether information is maintained or discarded, we compared the ability of the human visual system to detect a variety of wide-field changes to natural images. The changes range from simple affine transforms and intensity changes common to our visual experience to random changes as represented by the addition of white noise. When sensitivity was measured in terms of the Euclidean distance (L2 norm) between image pairs, we found that observers were an order of magnitude less sensitive to the geometric transformations than to added noise. A control experiment ruled out the possibility that the sensitivity difference was caused by the statistical properties of the image differences these transformations create. We argue that this remarkable difference in sensitivity relates to the processes used by the visual system to build invariant relationships, and it leads to the unusual result that observers are least sensitive to those transformations most commonly experienced in the natural world.
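
The metric itself is simple to state. The snippet below, a minimal sketch rather than the study's code, computes the Euclidean (L2-norm) distance between image pairs and scales white noise so that a small shift and the added noise sit at the same distance from the original image; the stimulus sizes and values are hypothetical.

```python
import numpy as np

def l2_distance(a, b):
    """Euclidean (L2-norm) distance between two equal-sized images."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return np.sqrt((d * d).sum())

# Toy stimuli: a 2-pixel shift versus white noise scaled to the same
# L2 distance from the original image.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, 2, axis=1)                  # affine-like transform
noise = rng.standard_normal(img.shape)
noise *= l2_distance(img, shifted) / np.sqrt((noise ** 2).sum())
assert np.isclose(l2_distance(img, shifted), l2_distance(img, img + noise))
```

Under this equality, the study reports observers detecting the added noise roughly an order of magnitude more readily than the geometric transformation.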


Human Factors in Computing Systems | 2013

Listen to it yourself!: evaluating usability of What's Around Me? for the blind

Sabrina A. Panëels; Adriana Olmos; Jeffrey R. Blum; Jeremy R. Cooperstock

Although multiple GPS-based navigation applications exist for the visually impaired, these are typically poorly suited for in-situ exploration, require cumbersome hardware, lack support for widely accessible geographic databases, or do not take advantage of advanced functionality such as spatialized audio rendering. These shortcomings led to our development of a novel spatial awareness application that leverages the capabilities of a smartphone coupled with worldwide geographic databases and spatialized audio rendering to convey surrounding points of interest. This paper describes the usability evaluation of our system through a task-based study and a longer-term deployment, each conducted with six blind users in real settings. The findings highlight the importance of testing in ecologically valid contexts over sufficient periods to face real-world challenges, including balancing quality versus quantity for audio information, overcoming limitations imposed by sensor accuracy and quality of database information, and paying appropriate design attention to physical interaction with the device.


Journal on Multimodal User Interfaces | 2012

Eyes-Free Environmental Awareness for Navigation

Dalia El-Shimy; Florian Grond; Adriana Olmos; Jeremy R. Cooperstock

We consider the challenge of delivering location-based information through rich audio representations of the environment, and the associated opportunities that such an approach offers to support navigation tasks. This challenge is addressed by In-Situ Audio Services, or ISAS, a system intended primarily for use by the blind and visually impaired communities. It employs spatialized audio rendering to convey the relevant content, which may include information about the immediate surroundings, such as restaurants, cultural sites, public transportation locations, and other points of interest. Information is aggregated mostly from online data resources, converted using text-to-speech technology, and “displayed”, either as speech or as more abstract audio icons, through a location-aware mobile device or smartphone. This approach suits not only the specific constraints of the target population but is equally useful for general mobile users whose visual attention is otherwise occupied with navigation. We designed and conducted an experiment to evaluate two techniques for delivering spatialized audio content to users via interactive auditory maps: the shockwave mode and the radar mode. While neither mode proved significantly better than the other, subjects proved competent at navigating the maps using these rendering strategies and reacted positively to the system, demonstrating that spatial audio can be an effective technique for conveying location-based information. The results of this experiment and its implications for our project are described here.
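
To make the spatialization step concrete, here is a minimal sketch of the geometry such a renderer needs: the bearing of a point of interest relative to the user's heading, which determines where in the auditory scene the speech or audio icon is panned. The function name, signature, and equirectangular approximation are illustrative assumptions, not the ISAS implementation.

```python
import math

def poi_azimuth(user_lat, user_lon, heading_deg, poi_lat, poi_lon):
    """Bearing of a point of interest relative to the user's heading,
    i.e., the angle at which a spatialized renderer would pan the
    corresponding speech or audio icon. Hypothetical helper using an
    equirectangular approximation; not the ISAS implementation."""
    north = math.radians(poi_lat - user_lat)
    east = math.radians(poi_lon - user_lon) * math.cos(math.radians(user_lat))
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0  # [-180, 180)

# A POI due east of a user facing north is panned hard right (+90 degrees).
print(round(poi_azimuth(45.5, -73.6, 0.0, 45.5, -73.59)))  # -> 90
```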


Computer Music Journal | 2012

A high-fidelity orchestra simulator for individual musicians' practice

Adriana Olmos; Nicolas Bouillot; Trevor Knight; Nordhal Mabire; Josh Redel; Jeremy R. Cooperstock

We developed the Open Orchestra system to provide individual musicians with a high-fidelity experience of ensemble rehearsal or performance, combined with the convenience and flexibility of solo study. This builds on the theme of an immersive orchestral simulator that also supports the pedagogical objective of instructor feedback on individual recordings, as needed to improve one's performance. Unlike previous systems intended for musical rehearsal, Open Orchestra attempts to offer both an auditory and a visual representation of the rest of the orchestra, spatially rendered from the perspective of the practicing musician. We review the objectives and architecture of our system, describe the functions of our digital music stand, discuss the challenges of generating the media content needed for this system, and describe provisions for offering feedback during rehearsal.


Proceedings of the Second ACM International Workshop on Multimedia Technologies for Distance Learning | 2010

Multiple angle viewer for remote medical training

Adriana Olmos; Kevin Lachapelle; Jeremy R. Cooperstock

We present the design of an interface for a camera array that will enable mentoring and monitoring of dissections and surgical procedures for medical instructors and students. While considerable research has investigated the recording and broadcasting of surgical procedures and dissection sessions for medical instruction, little work has been reported on the integration of an interface able to display multiple viewpoints within a medical context. The interface presented here allows a designated individual, the instructor, to provide the best viewing point to observe and execute a procedure, and simultaneously, offers the remote viewer the freedom to change viewpoints.


Leonardo Music Journal | 2013

Making Sculptures Audible through Participatory Sound Design

Florian Grond; Adriana Olmos; Jeremy R. Cooperstock

A research group explores rendering sculptural forms as sound using echolocation and the participation of members of the visually impaired community.


International Conference on Distributed Smart Cameras | 2011

End-user viewpoint control of live video from a medical camera array

Jeffrey R. Blum; Haijian Sun; Adriana Olmos; Jeremy R. Cooperstock

The design and implementation of a camera array for real-time streaming of medical video across a high-speed research network is described. Live video output from the array, composed of 17 Gigabit Ethernet cameras, must be delivered with low latency, simultaneously, to many students at geographically disparate locations. The students require dynamic control over their individual viewpoints, choosing not only among physical camera positions but, potentially, a real-time interpolated view. The technology used to implement the system, the rationale for its selection, scalability issues, and potential future improvements, such as recording and offline playback, are discussed.
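
As an illustration of the viewpoint-selection problem the paper describes, the sketch below picks the physical cameras nearest a requested virtual viewpoint, from whose streams an interpolated view would be blended. This is a hypothetical helper under a simplifying assumption (a linear array parameterized by position), not the system's code.

```python
def nearest_cameras(camera_positions, viewpoint, k=2):
    """Indices of the k cameras closest to a requested viewpoint along a
    linear array; an interpolated view would blend these streams.
    Hypothetical helper, not the paper's implementation."""
    order = sorted(range(len(camera_positions)),
                   key=lambda i: abs(camera_positions[i] - viewpoint))
    return sorted(order[:k])

# Example: 17 cameras at unit spacing, virtual viewpoint between 7 and 8.
print(nearest_cameras(list(range(17)), 7.4))  # -> [7, 8]
```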


Archive | 2004

McGill Calibrated Colour Image Database

Adriana Olmos; Frederick A. A. Kingdom


Archive | 2005

Automatic Non-Photorealistic Rendering through Soft-Shading Removal: A Colour-Vision Approach

Adriana Olmos; Frederick A. A. Kingdom
