Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alexandre Bur is active.

Publication


Featured research published by Alexandre Bur.


International Conference on Pattern Recognition | 2006

Robot Navigation by Panoramic Vision and Attention Guided Features

Alexandre Bur; Adriana Tapus; Nabil Ouerhani; Roland Siegwart; Heinz Hügli

In visual-based robot navigation, panoramic vision emerges as a very attractive candidate for solving the localization task. Unfortunately, current systems rely on specific feature selection processes that do not cover the requirements of general-purpose robots. In order to fulfil the new requirements of robot versatility and robustness to environmental changes, we propose in this paper to perform the feature selection of a panoramic vision system by means of the saliency-based model of visual attention, a model known for its universality. The first part of the paper describes a localization system combining panoramic vision and visual attention. The second part presents a series of indoor localization experiments using panoramic vision and attention-guided feature detection. The results show the feasibility of the approach and illustrate some of its capabilities.
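
As an illustration of attention-guided feature selection, the sketch below derives a crude intensity-only saliency map by center-surround filtering and keeps its strongest local maxima as landmark candidates. This is a minimal stand-in assuming NumPy/SciPy, not the authors' implementation; the paper's saliency model also uses color and orientation channels and operates on panoramic images.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def saliency_map(gray, center_sigma=1.0, surround_sigma=8.0):
        # Center-surround contrast on intensity: a crude stand-in for the
        # multi-channel saliency-based model of visual attention.
        center = gaussian_filter(gray.astype(float), center_sigma)
        surround = gaussian_filter(gray.astype(float), surround_sigma)
        sal = np.abs(center - surround)
        return sal / (sal.max() + 1e-9)

    def attention_guided_features(gray, n_features=10, nms_size=15):
        # Keep the n_features strongest local maxima of the saliency map
        # as landmark candidates for the localization task.
        sal = saliency_map(gray)
        peaks = (sal == maximum_filter(sal, size=nms_size)) & (sal > 0.1)
        ys, xs = np.nonzero(peaks)
        order = np.argsort(sal[ys, xs])[::-1][:n_features]
        return list(zip(ys[order], xs[order]))  # (row, col) of salient spots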


Joint Pattern Recognition Symposium | 2006

Linear vs. nonlinear feature combination for saliency computation: a comparison with human vision

Nabil Ouerhani; Alexandre Bur; Heinz Hügli

At the heart of the computer model of visual attention, an interest or saliency map is derived from an input image in a process that encompasses several data combination steps. While several combination strategies are possible and the choice of method influences the final saliency substantially, there is a real need for a performance comparison for the purpose of model improvement. This paper presents contributing work in which model performances are measured by comparing saliency maps with human eye fixations. Four combination methods are compared in experiments involving the viewing of 40 images by 20 observers. Similarity is evaluated qualitatively by visual tests and quantitatively by use of a similarity score. With similarity scores about 100% higher, nonlinear combinations outperform linear methods. The comparison with human vision thus shows the superiority of nonlinear over linear combination schemes and speaks for their preferred use in computer models.
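
The contrast between the two families of combination schemes can be made concrete in a few lines. In this hypothetical sketch the linear scheme is a normalized sum of conspicuity maps, the nonlinear one raises each map to a power before summing (promoting maps with few strong peaks), and the similarity score is a plain Pearson correlation with a human fixation density map; the four methods and the score actually compared in the paper differ in detail.

    import numpy as np

    def _norm(m):
        # Rescale a map to [0, 1].
        m = m - m.min()
        return m / (m.max() + 1e-9)

    def combine_linear(maps):
        # Linear scheme: plain normalized sum of the conspicuity maps.
        return _norm(sum(_norm(m) for m in maps))

    def combine_nonlinear(maps, exponent=2.0):
        # Nonlinear scheme (illustrative): exponentiation before summing
        # favors maps with strong isolated peaks over uniformly active ones.
        return _norm(sum(_norm(m) ** exponent for m in maps))

    def similarity_score(saliency, fixation_density):
        # Pearson correlation between model saliency and human fixations.
        return np.corrcoef(saliency.ravel(), fixation_density.ravel())[0, 1]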


Computer Vision and Image Understanding | 2010

Dynamic visual attention on the sphere

Iva Bogdanova; Alexandre Bur; Heinz Hügli; Pierre-André Farine

In this paper we present a computational model of dynamic visual attention on the sphere which combines static (intensity, chromaticity, orientation) and motion features in order to detect salient locations in omnidirectional image sequences while working directly in spherical coordinates. We build the motion pyramid on the sphere by applying block matching and varying the block size. The spherical motion conspicuity map is obtained by fusing together the spherical motion magnitude and phase conspicuities. Furthermore, we combine this map with the static spherical saliency map in order to obtain the dynamic saliency map on the sphere. Detection of the spots of attention based on the dynamic saliency map on the sphere is applied to a sequence of real spherical images. The effect of using only the spherical motion magnitude or phase for defining the spots of attention on the sphere is examined as well. Finally, we test spherical versus Euclidean spot detection on the omnidirectional image sequence.
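
The motion estimation step can be sketched as classical exhaustive block matching; repeating it with several block sizes yields the motion pyramid. The version below works on an ordinary planar image purely for illustration, whereas the paper applies the equivalent matching directly on the spherical pixel grid.

    import numpy as np

    def block_matching(prev, curr, block=16, search=4):
        # Sum-of-absolute-differences block matching between two frames.
        # Returns per-block motion magnitude and phase maps.
        h, w = prev.shape
        mag = np.zeros((h // block, w // block))
        phase = np.zeros_like(mag)
        for by in range(h // block):
            for bx in range(w // block):
                y0, x0 = by * block, bx * block
                ref = prev[y0:y0 + block, x0:x0 + block]
                best, best_dy, best_dx = np.inf, 0, 0
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y1, x1 = y0 + dy, x0 + dx
                        if 0 <= y1 <= h - block and 0 <= x1 <= w - block:
                            sad = np.abs(ref - curr[y1:y1 + block, x1:x1 + block]).sum()
                            if sad < best:
                                best, best_dy, best_dx = sad, dy, dx
                mag[by, bx] = np.hypot(best_dx, best_dy)
                phase[by, bx] = np.arctan2(best_dy, best_dx)
        return mag, phase

    # Varying the block size gives the levels of the motion pyramid:
    # pyramid = [block_matching(f0, f1, block=b) for b in (8, 16, 32)]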


Electronic Imaging | 2007

Motion Integration in Visual Attention Models for Predicting Simple Dynamic Scenes

Alexandre Bur; Pascal Wurtz; René Martin Müri; Heinz Hügli

Visual attention models mimic the ability of a visual system to detect potentially relevant parts of a scene. This process of attentional selection is a prerequisite for higher-level tasks such as object recognition. Given the high relevance of temporal aspects in human visual attention, dynamic information as well as static information must be considered in computer models of visual attention. While some models have been proposed that extend the classical static model to motion, a comparison of the performance of models integrating motion in different manners is still not available. In this article, we present a comparative study of various visual attention models combining both static and dynamic features. The considered models are compared by measuring their respective performance with respect to the eye movement patterns of human subjects. Simple synthetic video sequences, containing static and moving objects, are used to assess the models' suitability. Qualitative and quantitative results provide a ranking of the different models.
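
The models under comparison differ mainly in how the static and motion saliency maps are integrated. Below is a hypothetical sketch of three generic integration rules (stand-ins, not the paper's definitions); each resulting map can then be scored against human eye movement patterns to produce the ranking.

    import numpy as np

    def _norm(m):
        # Rescale a map to [0, 1].
        m = m - m.min()
        return m / (m.max() + 1e-9)

    def integrate(static_map, motion_map, mode="max"):
        # Three generic ways of integrating motion into a static model.
        s, m = _norm(static_map), _norm(motion_map)
        if mode == "mean":
            return (s + m) / 2           # equal-weight average
        if mode == "max":
            return np.maximum(s, m)      # winner-take-all per location
        if mode == "motion-priority":
            return _norm(m + 0.5 * s)    # motion dominates, statics modulate
        raise ValueError(mode)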


Human Vision and Electronic Imaging Conference | 2008

Dynamic visual attention: motion direction versus motion magnitude

Alexandre Bur; Pascal Wurtz; René Martin Müri; Heinz Hügli

Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired from fixed and moving camera perspectives), showing the advantages and drawbacks of each method as well as its preferred domain of application.
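
The difference between the two motion representations can be shown in a few lines. In this sketch (assumed definitions, not the paper's), a region is conspicuous when its motion deviates from the scene average: with the magnitude representation only speed differences count, whereas with the vector representation an object moving at the average speed but in a different direction still stands out.

    import numpy as np

    def magnitude_conspicuity(mag):
        # Motion as speed magnitude: salient where the speed deviates
        # from the mean speed of the scene.
        return np.abs(mag - mag.mean())

    def vector_conspicuity(mag, phase):
        # Motion as full speed vector: salient where (vx, vy) deviates
        # from the mean scene motion, so direction outliers also pop out.
        vx, vy = mag * np.cos(phase), mag * np.sin(phase)
        return np.hypot(vx - vx.mean(), vy - vy.mean())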


Picture Coding Symposium | 2009

Dynamic attentive system for omnidirectional video

Iva Bogdanova; Alexandre Bur; Pierre-André Farine

In this paper, we propose a dynamic attentive system for detecting the most salient regions of interest in omnidirectional video. The spot selection is based on computer modeling of dynamic visual attention. In order to operate on video sequences, the process encompasses the multiscale contrast detection of static and motion information, as well as the fusion of this information into a scalar map called the saliency map. The processing is performed in spherical geometry. While the static contribution collected in the static saliency map relies on our previous work, we propose a novel motion model based on a block matching algorithm computed on the sphere. A spherical motion field pyramid is first estimated from two consecutive omnidirectional images by varying the block size; it constitutes the input of the motion model. Then, the motion saliency map is obtained by applying a multiscale motion contrast detection method in order to highlight the most salient motion regions. Finally, both static and motion saliency maps are integrated into a spherical dynamic saliency map. To illustrate the concept, the proposed attentive system is applied to real omnidirectional video sequences.
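
The multiscale motion contrast step, and the final integration into a dynamic saliency map, might look as follows in planar coordinates. This is a simplified sketch assuming NumPy/SciPy; the paper performs the equivalent operations in spherical geometry, and its exact fusion rule is not reproduced here.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_motion_contrast(motion_mag, levels=3):
        # Center-surround contrast of the motion-magnitude map at several
        # scales: each level compares the map against a wider surround.
        acc = np.zeros_like(motion_mag, dtype=float)
        for lv in range(levels):
            surround = gaussian_filter(motion_mag, sigma=2.0 * (lv + 1))
            acc += np.abs(motion_mag - surround)
        return acc / (acc.max() + 1e-9)

    def dynamic_saliency(static_sal, motion_mag):
        # Integrate static saliency and multiscale motion contrast
        # (simple averaged fusion for illustration).
        return 0.5 * static_sal + 0.5 * multiscale_motion_contrast(motion_mag)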


Proceedings of the European Conference on Mobile Robots | 2005

Visual Attention-Based Robot Self-Localization

Nabil Ouerhani; Alexandre Bur


Lecture Notes in Computer Science | 2007

Optimal Cue Combination for Saliency Computation: A Comparison with Human Vision

Alexandre Bur; Heinz Hügli


Pattern Recognition | 2006

Linear vs. Nonlinear Feature Combination for Saliency Computation: A Comparison with Human Vision

Nabil Ouerhani; Alexandre Bur; Heinz Hügli


Image and Vision Computing New Zealand | 2007

Adaptive visual attention model

Heinz Hügli; Alexandre Bur

Collaboration


Dive into Alexandre Bur's collaborations.

Top Co-Authors

Heinz Hügli
University of Neuchâtel

Nabil Ouerhani
University of Neuchâtel

Iva Bogdanova
École Polytechnique Fédérale de Lausanne

Pierre-André Farine
École Polytechnique Fédérale de Lausanne

Timothée Jost
University of Neuchâtel

Roland Siegwart
École Polytechnique Fédérale de Lausanne

Adriana Tapus
University of Southern California