Publication


Featured research published by Nabil Ouerhani.


International Conference on Pattern Recognition | 2000

Computing visual attention from scene depth

Nabil Ouerhani; Heinz Hügli

Visual attention is the ability to rapidly detect the interesting parts of a given scene. Inspired by biological vision, the principle of visual attention is used with a similar goal in computer vision. Several previous works deal with the computation of visual attention from images provided by standard video cameras, but little attention has been devoted so far to scene depth as a source for visual attention. The investigation presented in this paper aims at extending the visual attention model to the scene depth component. The first part of the paper is devoted to the integration of depth into the computational model built around conspicuity and saliency maps. The second part is devoted to experimental work in which results of visual attention, obtained from the extended model for various 3D scenes, are presented. The results speak for the usefulness of the enhanced computational model.
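
As an illustration of how a depth cue can enter such a conspicuity/saliency pipeline, the following Python sketch (NumPy/SciPy) treats depth like any other feature map: a difference-of-Gaussians center-surround operator yields a conspicuity map per cue, and the maps are averaged into a saliency map. The operator choice and the equal cue weighting are simplifying assumptions, not the paper's exact model.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def conspicuity(feature_map, center_sigma=2, surround_sigma=8):
        # Center-surround operator: difference of Gaussians, rectified and
        # normalized so that each cue contributes on a comparable scale.
        center = gaussian_filter(feature_map.astype(float), center_sigma)
        surround = gaussian_filter(feature_map.astype(float), surround_sigma)
        c = np.abs(center - surround)
        return c / (c.max() + 1e-9)

    def saliency(intensity, depth):
        # Integrate the depth cue exactly like an image cue: average the
        # per-cue conspicuity maps into a single saliency map.
        return 0.5 * (conspicuity(intensity) + conspicuity(depth))

    # Toy 64x64 scene: a bright patch and a nearby (small-range) object.
    intensity = np.zeros((64, 64)); intensity[20:28, 20:28] = 1.0
    depth = np.full((64, 64), 5.0); depth[40:48, 40:48] = 1.0
    s = saliency(intensity, depth)
    print(np.unravel_index(s.argmax(), s.shape))  # most salient location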


Computer Vision and Image Understanding | 2005

Assessing the contribution of color in visual attention

Timothée Jost; Nabil Ouerhani; Roman von Wartburg; René Martin Müri; Heinz Hügli

Visual attention is the ability of a vision system, be it biological or artificial, to rapidly detect potentially relevant parts of a visual scene, on which higher level vision tasks, such as object recognition, can focus. The saliency-based model of visual attention represents one of the main attempts to simulate this visual mechanism on computers. Though biologically inspired, this model has only been partially assessed in comparison with human behavior. This paper presents an in-depth analysis of the model by assessing the contribution of different cues to visual attention. The methodology consists in comparing the computational saliency map with human eye movement patterns. The paper reports the results of a quantitative comparison of human visual attention, derived from fixation patterns, with visual attention as modeled by different versions of the computer model. More specifically, a one-cue gray-level model is compared to a two-cue color model. The experiments, conducted with over 40 images of different nature and involving 20 human subjects, assess the quantitative contribution of chromatic features to visual attention.
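
The comparison of a computational saliency map against recorded fixations can be made concrete with a chance-normalized score, in the spirit of the normalized scanpath saliency measure; whether this matches the exact score used in the paper is not implied. A minimal sketch with hypothetical fixation data:

    import numpy as np

    def fixation_score(saliency_map, fixations):
        # Z-score the map, then average the values at fixated pixels:
        # 0 means chance level, positive values mean fixations land on
        # above-average saliency.
        z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-9)
        return float(np.mean([z[y, x] for (y, x) in fixations]))

    # Hypothetical data: a random "saliency map" and five fixations.
    rng = np.random.default_rng(0)
    sal = rng.random((48, 64))
    fixations = [(10, 12), (10, 13), (30, 50), (11, 12), (29, 49)]
    print(fixation_score(sal, fixations))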


International Conference on Image Analysis and Processing | 2001

Adaptive color image compression based on visual attention

Nabil Ouerhani; Javier Bracamonte; Heinz Hügli; Michael Ansorge; Fausto Pellandini

This paper reports an adaptive still color image compression method which automatically selects regions of interest (ROI) and reconstructs them with higher quality than the rest of the input image. The ROI are generated on-the-fly with a purely data-driven technique based on visual attention. Inspired by biological vision, the multi-cue visual attention algorithm detects the most visually salient regions of an image. Thus, when operating in systems with low bit rate constraints, the adaptive coding scheme favors the allocation of a higher number of bits to those image regions that are more conspicuous to the human visual system. The compressed image files produced by this adaptive method are fully compatible with the JPEG standard, which favors their widespread utilization.
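
The paper's coder adapts bit allocation per block in the DCT coefficient domain; as a rough, JPEG-compatible stand-in, the sketch below simply low-pass filters the non-salient background before standard JPEG encoding, so the encoder spends fewer bits there. The file names and the threshold are illustrative assumptions.

    import numpy as np
    from PIL import Image, ImageFilter

    def roi_compress(img, saliency, out_path, quality=40, threshold=0.5):
        # Blur the background so the standard JPEG encoder allocates fewer
        # bits to it; the salient regions keep full detail. The output is
        # a plain JPEG file, readable by any standard decoder.
        blurred = img.filter(ImageFilter.GaussianBlur(radius=3))
        mask = Image.fromarray(((saliency > threshold) * 255).astype(np.uint8))
        mask = mask.resize(img.size)
        Image.composite(img, blurred, mask).save(out_path, "JPEG", quality=quality)

    # Hypothetical usage: mark a rectangle as the salient region.
    img = Image.open("scene.png").convert("RGB")
    sal = np.zeros((img.height, img.width)); sal[100:200, 100:200] = 1.0
    roi_compress(img, sal, "scene_roi.jpg")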


Real-time Imaging | 2003

Real-time visual attention on a massively parallel SIMD architecture

Nabil Ouerhani; Heinz Hügli

Visual attention is the ability to rapidly detect the visually salient parts of a given scene, on which higher level vision tasks, such as object recognition, can focus. Found in biological vision, this mechanism represents a fundamental tool for computer vision. This paper reports the first real-time implementation of the complete visual attention mechanism on a compact and low-power architecture. Specifically, the saliency-based model of visual attention was implemented on a highly parallel single instruction, multiple data (SIMD) architecture called ProtoEye. Conceived for general-purpose low-level image processing, ProtoEye consists of a 2D array of mixed analog-digital processing elements. To reach real-time performance, the operations required for the visual attention computation were optimally distributed over the analog and digital parts. The currently available prototype runs at a frequency of 14 images/s and operates on 64 × 64 gray-level images. Extensive testing and run-time analysis of the system highlight the strengths of the architecture.


Lecture Notes in Computer Science | 2003

MAPS: multiscale attention-based presegmentation of color images

Nabil Ouerhani; Heinz Hügli

This paper reports a novel Multiscale Attention-based Pre-Segmentation method (MAPS) which is built around the multi-feature, multiscale, saliency-based model of visual attention. From the saliency map provided by the attention algorithm, MAPS first derives the spatial locations of the salient regions that will be considered further in the segmentation process. Then, the salient scale and the salient feature of each salient region are determined by exploring the scale and feature spaces computed by the model of attention. A first and rough multiscale segmentation of the salient regions is performed at the corresponding salient scale. This innovative but still incomplete presegmentation procedure is followed by a refined segmentation that operates on the salient feature at full resolution.
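
The first MAPS step, deriving spot locations from the saliency map, can be sketched as local-maxima detection; the subsequent scale/feature selection and the actual segmentation are omitted, and the window size and threshold below are arbitrary assumptions.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def salient_spots(saliency_map, size=9, rel_threshold=0.6):
        # A pixel is a spot if it is the maximum of its size x size
        # neighbourhood and sufficiently strong relative to the global peak.
        is_peak = saliency_map == maximum_filter(saliency_map, size=size)
        strong = saliency_map > rel_threshold * saliency_map.max()
        ys, xs = np.nonzero(is_peak & strong)
        return list(zip(ys.tolist(), xs.tolist()))

    # Toy map with two salient spots of different strength.
    sal = np.zeros((32, 32)); sal[8, 8] = 1.0; sal[20, 25] = 0.8
    print(salient_spots(sal))  # [(8, 8), (20, 25)]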


International Conference on Pattern Recognition | 2006

Robot Navigation by Panoramic Vision and Attention-Guided Features

Alexandre Bur; Adriana Tapus; Nabil Ouerhani; Roland Siegwart; Heinz Hügli

In vision-based robot navigation, panoramic vision emerges as a very attractive candidate for solving the localization task. Unfortunately, current systems rely on specific feature selection processes that do not cover the requirements of general purpose robots. In order to fulfil the new requirements of robot versatility and robustness to environmental changes, we propose in this paper to perform the feature selection of a panoramic vision system by means of the saliency-based model of visual attention, a model known for its universality. The first part of the paper describes a localization system combining panoramic vision and visual attention. The second part presents a series of indoor localization experiments using panoramic vision and attention-guided feature detection. The results show the feasibility of the approach and illustrate some of its capabilities.


Joint Pattern Recognition Symposium | 2006

Linear vs. nonlinear feature combination for saliency computation: a comparison with human vision

Nabil Ouerhani; Alexandre Bur; Heinz Hügli

At the heart of the computer model of visual attention, an interest or saliency map is derived from an input image in a process that encompasses several data combination steps. While several combination strategies are possible and the choice of method influences the final saliency substantially, there is a real need for a performance comparison for the purpose of model improvement. This paper presents contributing work in which model performances are measured by comparing saliency maps with human eye fixations. Four combination methods are compared in experiments involving the viewing of 40 images by 20 observers. Similarity is evaluated qualitatively by visual tests and quantitatively by use of a similarity score. With similarity scores some 100% higher, non-linear combinations outperform linear methods. The comparison with human vision thus shows the superiority of non-linear over linear combination schemes and speaks for their preferred use in computer models.
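
The difference between the compared schemes can be illustrated with two of the simplest variants: a plain linear average versus a content-weighted non-linear combination in the spirit of Itti and Koch's peak-promotion weighting. This is a sketch of the general idea, not a reproduction of the four methods compared in the paper.

    import numpy as np

    def combine_linear(maps):
        # Linear scheme: average the peak-normalized conspicuity maps.
        return np.mean([m / (m.max() + 1e-9) for m in maps], axis=0)

    def combine_nonlinear(maps):
        # Non-linear scheme: weight each map by (max - mean)^2, which
        # promotes maps with one dominant peak and suppresses maps with
        # many comparable peaks.
        out = np.zeros_like(maps[0], dtype=float)
        for m in maps:
            n = m / (m.max() + 1e-9)
            out += (n.max() - n.mean()) ** 2 * n
        return out / (out.max() + 1e-9)

    # A map with one dominant peak vs. a map of uniform noise: the
    # non-linear scheme lets the peaked map dominate the result.
    rng = np.random.default_rng(1)
    peaked = np.zeros((32, 32)); peaked[16, 16] = 1.0
    noisy = rng.random((32, 32))
    print(combine_nonlinear([peaked, noisy])[16, 16] >
          combine_linear([peaked, noisy])[16, 16])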


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2005

Model performance for visual attention in real 3D color scenes

Heinz Hügli; Timothée Jost; Nabil Ouerhani

Visual attention is the ability of a vision system, be it biological or artificial, to rapidly detect potentially relevant parts of a visual scene. The saliency-based model of visual attention is widely used to simulate this visual mechanism on computers. Though biologically inspired, this model has been only partially assessed in comparison with human behavior. The research described in this paper aims at assessing its performance in the case of natural scenes, i.e. real 3D color scenes. The evaluation is based on the comparison of computer saliency maps with human visual attention derived from fixation patterns recorded while subjects are looking at the scenes. The paper presents a number of experiments involving natural scenes and computer models differing in their capacity to deal with color and depth. The results point to a large range of scene-specific performance variations and provide typical quantitative performance values for models of different complexity.


Joint Pattern Recognition Symposium | 2002

A Real Time Implementation of the Saliency-Based Model of Visual Attention on a SIMD Architecture

Nabil Ouerhani; Heinz Hügli; Pierre-Yves Burgi; Pierre-François Ruedi

Visual attention is the ability to rapidly detect the visually salient parts of a given scene. Inspired by biological vision, the saliency-based algorithm efficiently models the visual attention process. Due to its complexity, the saliency-based model of visual attention needs, for a real-time implementation, more computation resources than are available in conventional processors. This work reports a real-time implementation of this attention model on a highly parallel Single Instruction Multiple Data (SIMD) architecture called ProtoEye. Tailored for low-level image processing, ProtoEye consists of a 2D array of mixed analog-digital processing elements (PE). The operations required for the visual attention computation are optimally distributed over the analog and digital parts. The analog diffusion network is used to implement the spatial-filtering-based transformations, such as the conspicuity operator and the competitive normalization of conspicuity maps, whereas the digital part of ProtoEye allows the implementation of logical and arithmetical operations, for instance the integration of the normalized conspicuity maps into the final saliency map. Using 64×64 gray-level images, the attention process implemented on ProtoEye operates in real time, running at a frequency of 14 images per second.


Computational Intelligence in Robotics and Automation | 2005

Robot self-localization using visual attention

Nabil Ouerhani; Heinz Hügli

This paper presents a robot self-localization method based on visual attention. This method takes advantage of the saliency-based model of attention to automatically learn configurations of salient visual landmarks along a robot path. During navigation, the visual attention algorithms detect a set of conspicuous visual features which are compared with the learned landmark configurations in order to determine the robot position on the navigation path. More specifically, the multi-cue attention model detects the most salient visual features, which are potential candidates for landmarks. These features are then characterized by a visual descriptor vector computed from various visual cues and at different scales. By tracking the detected features over time, our landmark selection procedure automatically evaluates their robustness and retains only the most robust features as landmarks. Further, the selected landmarks are organized into a topological map that is used for self-localization during the navigation phase. The self-localization method is based on matching between the currently detected visual feature configuration and the configurations of the learned landmarks. Indeed, the matching procedure yields a probabilistic measure of the whereabouts of the robot. Thanks to the multi-featured input of the attention model, our method is potentially able to deal with a wide range of navigation environments.
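
The matching step, comparing the currently observed feature descriptors with the stored landmark configurations of each map node, can be sketched as nearest-neighbour voting. The descriptor dimensionality, distance threshold, and map layout below are illustrative assumptions, not the paper's actual parameters.

    import numpy as np

    def localize(observed, landmark_map, max_dist=0.5):
        # For each topological map node, count how many observed
        # descriptors find a stored landmark descriptor within max_dist,
        # then normalize the counts into a probability-like distribution.
        votes = {}
        for node, descriptors in landmark_map.items():
            hits = sum(
                np.linalg.norm(descriptors - f, axis=1).min() < max_dist
                for f in observed
            )
            votes[node] = hits
        total = sum(votes.values()) or 1
        return {node: v / total for node, v in votes.items()}

    # Hypothetical 3-D descriptors for two map nodes and one observation.
    landmark_map = {
        "corridor": np.array([[0.1, 0.2, 0.9], [0.8, 0.1, 0.3]]),
        "office":   np.array([[0.9, 0.9, 0.1]]),
    }
    observed = [np.array([0.12, 0.18, 0.88])]
    print(localize(observed, landmark_map))  # highest score for "corridor"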

Collaboration


Dive into Nabil Ouerhani's collaboration.

Top Co-Authors

Heinz Hügli, University of Neuchâtel
Alexandre Bur, École Polytechnique Fédérale de Lausanne
Marco Aeberli, Applied Science Private University
Michael Muller, Applied Science Private University
Nuria Pazos, Applied Science Private University
Timothée Jost, University of Neuchâtel
Neculai Archip, Brigham and Women's Hospital