Publication


Featured research published by Alexandre R. J. François.


Computer Vision and Image Understanding | 2007

Robust real-time vision for a personal service robot

Gérard G. Medioni; Alexandre R. J. François; Matheen Siddiqui; Kwangsu Kim; Ho-Sub Yoon

We address visual perception for personal service robotic systems in the home. We start by identifying the main functional modules and their relationships. This includes self-localization, long-range people detection and tracking, and short-range human interaction. We then discuss various vision-based tasks within each of these modules, along with our implementations of these tasks. Typical results are shown. Finally, Stevi v.1, a demonstration computer vision subsystem that performs real-time people detection and tracking from stereo color video, illustrates our modular and scalable approach to integrating modules into working systems.
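
As a rough illustration of the modular decomposition described above (not the authors' Stevi implementation), the sketch below shows a perception loop in which a detection module feeds a tracking module; the frame source and the detect_people / update_tracks helpers are hypothetical placeholders.

```python
# Illustrative sketch only: a modular perception loop in the spirit of the
# paper's decomposition. The frame source and the detect_people /
# update_tracks helpers are hypothetical placeholders, not the Stevi system.

def run_perception_loop(stereo_frames, detect_people, update_tracks):
    """Consume a stream of stereo frames, detect people, and maintain tracks."""
    tracks = {}                                       # track_id -> track state
    for left, right, depth in stereo_frames:          # stereo color + depth stream
        detections = detect_people(left, depth)       # people detection module
        tracks = update_tracks(tracks, detections)    # data association + filtering
        yield tracks                                  # hand results to interaction modules
```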


International Conference on Software Engineering | 2004

A hybrid architectural style for distributed parallel processing of generic data streams

Alexandre R. J. François

Immersive, interactive applications grouped under the concept of Immersipresence require on-line processing and mixing of multimedia data streams and structures. One critical issue seldom addressed is the integration of different solutions to technical challenges, developed independently in separate fields, into working systems that operate under hard performance constraints. In order to realize the Immersipresence vision, a consistent, generic approach to system integration is needed, one that is adapted to the constraints of research development. This paper introduces SAI, a new software architecture model for designing, analyzing and implementing applications performing distributed, asynchronous parallel processing of generic data streams. SAI provides a universal framework for the distributed implementation of algorithms and their easy integration into complex systems that exhibit desirable software engineering qualities such as efficiency, scalability, extensibility, reusability and interoperability. The SAI architectural style and its properties are described. The use of SAI and of its supporting open-source middleware (MFSM) is illustrated with integrated, distributed interactive systems.
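
A minimal sketch of the node-and-stream idea described in the abstract, under the assumption of a much simplified model: independent processing cells run concurrently and pass data pulses downstream. The Cell/pulse names are illustrative and do not reflect the actual SAI architectural style or the MFSM API.

```python
# Illustrative sketch of asynchronous stream processing: independent cells
# run concurrently and pass data "pulses" downstream. Cell and Pulse are
# placeholder names, not the actual SAI/MFSM API.

import queue
import threading

class Cell(threading.Thread):
    """A processing node: reads pulses from an input queue, applies a
    user-supplied function, and forwards the result downstream."""
    def __init__(self, process, downstream=None):
        super().__init__(daemon=True)
        self.process = process
        self.inbox = queue.Queue()
        self.downstream = downstream

    def run(self):
        while True:
            pulse = self.inbox.get()
            if pulse is None:                  # sentinel: end of stream
                if self.downstream:
                    self.downstream.inbox.put(None)
                break
            result = self.process(pulse)
            if self.downstream:
                self.downstream.inbox.put(result)

# Usage: a two-stage pipeline processing a stream of integers.
sink = Cell(process=print)
stage = Cell(process=lambda x: x * x, downstream=sink)
sink.start(); stage.start()
for x in range(5):
    stage.inbox.put(x)
stage.inbox.put(None)
sink.join()
```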


International Conference on Computer Vision Systems | 2001

A Modular Software Architecture for Real-Time Video Processing

Alexandre R. J. François; Gérard G. Medioni

An increasing number of computer vision applications require on-line processing of data streams, preferably in real-time. This trend is fueled by the mainstream availability of low-cost imaging devices and the steady increase in computing power. To meet these requirements, applications should manipulate data streams in concurrent processing environments, taking into consideration scheduling, planning and synchronization issues. These can be solved in specialized systems using ad hoc designs and implementations that sacrifice flexibility and generality for performance. Instead, we propose a generic, extensible, modular software architecture. The cornerstone of this architecture is the Flow Scheduling Framework (FSF), an extensible set of classes that provide basic synchronization functionality and control mechanisms to develop data-stream processing components. Applications are built in a data-flow programming model, as the specification of data streams flowing through processing nodes, where they can undergo various manipulations. We describe the details of the FSF data and processing model that supports stream synchronization in a concurrent processing framework. We demonstrate the power of our architecture for video processing with a real-time video stream segmentation application. We also show dramatic throughput improvement over sequential execution models with a port of the pyramidal Lucas-Kanade feature tracker demonstration application from the Intel Open Computer Vision library.
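
As a loose modern analogue of the data-flow model described above (a sketch, not the FSF API), the stages below run frame capture and pyramidal Lucas-Kanade tracking concurrently, connected by a bounded queue, using OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK.

```python
# Illustrative data-flow-style pipeline (not the FSF/MFSM API): a capture
# stage and a pyramidal Lucas-Kanade tracking stage run concurrently and
# communicate through a bounded queue.

import queue
import threading

import cv2

frames = queue.Queue(maxsize=4)   # bounded queue provides back-pressure

def capture_stage(source=0):
    cap = cv2.VideoCapture(source)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.put(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    frames.put(None)              # end-of-stream sentinel

def tracking_stage():
    prev = frames.get()
    if prev is None:
        return
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
    while pts is not None and len(pts) > 0:
        frame = frames.get()
        if frame is None:
            break
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)   # keep tracked points
        prev = frame

threading.Thread(target=capture_stage, daemon=True).start()
tracking_stage()
```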


International Conference on Pattern Recognition | 2002

Reconstructing mirror symmetric scenes from a single view using 2-view stereo geometry

Alexandre R. J. François; Gérard G. Medioni; Roman Waupotitsch

We address the problem of 3D reconstruction from a single perspective view of a mirror symmetric scene. We establish the fundamental result that it is geometrically equivalent to observing the scene with two cameras, the cameras being symmetrical with respect to the unknown 3D symmetry plane. All traditional tools of classical 2-view stereo can then be applied, and the concepts of fundamental/essential matrix, epipolar geometry, rectification and disparity hold. However, the problems are greatly simplified here, as the rectification process and the computation of epipolar geometry can be easily performed from the original view only. If the camera is calibrated, we show how to synthesize the symmetric image generated by the same physical camera. A Euclidean reconstruction of the scene can then be computed from the resulting stereo pair. To validate this novel formulation, we have processed many real images, and show examples of 3D reconstruction.
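
A compact restatement of the equivalence in standard multi-view notation (a sketch using textbook stereo relations, not equations lifted from the paper; the plane parameters n and d are the assumed unknowns):

```latex
% Let \Pi be the unknown symmetry plane with unit normal n and offset d,
% so a scene point X and its mirror image X' are related by the reflection
\[
  X' \;=\; \bigl(I - 2\,n n^{\top}\bigr) X \;+\; 2 d\, n .
\]
% Observing X' with the real camera is geometrically equivalent to
% observing X with a virtual camera that is the reflection of the real
% camera across \Pi. The images x and x' of a symmetric point pair, both
% measured in the single available photograph, therefore satisfy the
% standard epipolar constraint for some fundamental matrix F:
\[
  x'^{\top} F \, x \;=\; 0 ,
\]
% so rectification, disparity, and (with a calibrated camera) Euclidean
% reconstruction carry over from ordinary 2-view stereo.
```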


Image and Vision Computing | 2003

Mirror symmetry ⇒ 2-view stereo geometry

Alexandre R. J. François; Gérard G. Medioni; Roman Waupotitsch

We address the problem of 3D reconstruction from a single perspective view of a mirror symmetric scene. We establish the fundamental result that it is geometrically equivalent to observing the scene with two cameras, the cameras being symmetrical with respect to the unknown 3D symmetry plane. All traditional tools of classical 2-view stereo can then be applied, and the concepts of fundamental/essential matrix, epipolar geometry, rectification and disparity hold. However, the problems are greatly simplified here, as the rectification process and the computation of epipolar geometry can be easily performed from the original view only. If the camera is calibrated, we show how to synthesize the symmetric image generated by the same physical camera. A Euclidean reconstruction of the scene can then be computed from the resulting stereo pair. To validate this novel formulation, we have processed many real images, and show examples of 3D reconstruction.


International Conference on Pattern Recognition | 2000

3D structures for generic object recognition

Gérard G. Medioni; Alexandre R. J. François

We discuss the issues and challenges of generic object recognition. We argue that high-level, volumetric part-based descriptions are essential in the process of recognizing objects that might never have been observed before, and for which no exact geometric model is available. We discuss the representation scheme and its relationships to the three main tasks to solve: extracting descriptions from real images, under a wide variety of viewing conditions; learning new objects by storing their description in a database; and recognizing objects by matching their description to that of similar previously observed objects.


International Conference on Multimedia and Expo | 2003

A handheld mirror simulation

Alexandre R. J. François; Eun-Young Elaine Kang

We present the design and construction of a handheld mirror simulation device. The perception of the world reflected through a mirror depends on the viewer's position with respect to the mirror and the 3-D geometry of the world. In order to simulate a real mirror on a computer screen, images of the observed world, consistent with the viewer's position, must be synthesized and displayed in real-time. Our system is built around an LCD screen manipulated by the user, a single camera fixed on the screen, and a tracking device. The continuous input video stream and tracker data are used to synthesize, in real-time, a continuous video stream displayed on the LCD screen. The synthesized video stream is a close approximation of what the user would see on the screen surface if it were a real mirror. Our system provides a generic interface for applications involving rich, first-person interaction, such as the virtual daguerreotype.
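
A minimal sketch of the geometric core, under the assumption that the tracker provides the screen plane and the viewer's eye position: the mirror image is what a virtual camera placed at the reflection of the eye across the screen plane would see. The helper below is illustrative, not the authors' code.

```python
# Illustrative sketch: compute the virtual viewpoint for a simulated mirror.
# The screen plane (a point and unit normal from the tracker) and the
# viewer's eye position are assumed known; names are placeholders.

import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect p across the plane through plane_point with normal plane_normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

# Each frame, the simulation renders the scene as seen from the reflection
# of the tracked eye position across the screen plane.
eye = np.array([0.2, 0.0, 0.5])            # tracked viewer position (metres)
screen_center = np.array([0.0, 0.0, 0.0])
screen_normal = np.array([0.0, 0.0, 1.0])
virtual_eye = reflect_point(eye, screen_center, screen_normal)
print(virtual_eye)   # [0.2, 0.0, -0.5]: the viewpoint "behind" the mirror
```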


Image and Vision Computing | 2001

Interactive 3D model extraction from a single image

Alexandre R. J. François; Gérard G. Medioni

We present a system at the junction between Computer Vision and Computer Graphics, to produce a three-dimensional (3D) model of an object as observed in a single image, with a minimum of high-level interaction from a user. The input to our system is a single image. First, the user points, coarsely, at image features (edges) that are subsequently automatically and reproducibly extracted in real-time. The user then performs a high-level labeling of the curves (e.g. limb edge, cross-section) and specifies relations between edges (e.g. symmetry, surface or part). NURBS are used as the working representation of image edges. The objects described by the user-specified, qualitative relationships are then reconstructed either as a set of connected parts modeled as Generalized Cylinders, or as a set of 3D surfaces for 3D bilateral symmetric objects. In both cases, the texture is also extracted from the image. Our system runs in real-time on a PC.
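
The working representation for image edges is NURBS; as a rough illustration of fitting a smooth parametric curve to a few coarsely clicked edge points, the sketch below uses SciPy's B-spline routines (plain B-splines rather than full NURBS, and not the paper's implementation).

```python
# Illustrative only: fit a smooth parametric B-spline to a handful of
# coarsely clicked 2-D edge points, in the spirit of the paper's NURBS
# working representation.

import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical user clicks along an image edge (pixel coordinates).
clicks = np.array([[10, 12], [40, 30], [80, 35], [120, 28], [150, 10]], dtype=float)

# Fit a parametric spline through the clicked points (s controls smoothing).
tck, u = splprep([clicks[:, 0], clicks[:, 1]], s=5.0)

# Resample the curve densely for display or further processing.
u_fine = np.linspace(0, 1, 200)
x_fine, y_fine = splev(u_fine, tck)
```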


New Interfaces for Musical Expression | 2007

Visual feedback in performer-machine interaction for musical improvisation

Alexandre R. J. François; Elaine Chew; Dennis Thurmond

This paper describes the design of Mimi, a multi-modal interactive musical improvisation system that explores the potential and powerful impact of visual feedback in performer-machine interaction. Mimi is a performer-centric tool designed for use in performance and teaching. Its key and novel component is its visual interface, designed to provide the performer with instantaneous and continuous information on the state of the system. For human improvisation, in which context and planning are paramount, the relevant state of the system extends to the near future and recent past. Mimi's visual interface allows for a peculiar blend of raw reflex typically associated with improvisation, and preparation and timing more closely affiliated with score-based reading. Mimi is not only an effective improvisation partner; it has also proven itself to be an invaluable platform through which to interrogate the mental models necessary for successful improvisation.
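
A minimal sketch of the kind of state such a visual interface exposes, assuming a simple event timeline split into a recent-past window and a planned near-future window; the structure and field names are assumptions for illustration, not Mimi's actual design.

```python
# Illustrative sketch: a timeline that keeps the recent past and the
# machine's planned near future visible to the performer. Field names and
# window sizes are assumptions, not Mimi's internals.

from collections import deque
from dataclasses import dataclass

@dataclass
class NoteEvent:
    time: float   # seconds
    pitch: int    # MIDI note number

class ImprovTimeline:
    def __init__(self, past_window=8.0, future_window=8.0):
        self.past = deque()      # events already played
        self.future = deque()    # planned events, assumed kept in time order
        self.past_window = past_window
        self.future_window = future_window

    def advance(self, now):
        """Move due events from the future to the past and trim old ones."""
        while self.future and self.future[0].time <= now:
            self.past.append(self.future.popleft())
        while self.past and self.past[0].time < now - self.past_window:
            self.past.popleft()

    def visible(self, now):
        """Events the visual interface should currently display."""
        upcoming = [e for e in self.future if e.time <= now + self.future_window]
        return list(self.past), upcoming
```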


European Conference on Computer Vision | 1996

Generic Shape Learning and Recognition

Alexandre R. J. François; Gérard G. Medioni

We address the problem of generic shape recognition, in which exact models are not available. We propose an original approach, in which learning and recognition are intimately linked, as recognition is based on previous observation.

Collaboration


Dive into Alexandre R. J. François's collaborations.

Top Co-Authors

Elaine Chew (Queen Mary University of London)
Gérard G. Medioni (University of Southern California)
Jie Liu (University of Southern California)
Dennis Thurmond (University of Southern California)
Eun-Young Elaine Kang (University of Southern California)
Isaac Schankler (University of Southern California)
Roman Waupotitsch (University of Southern California)
Aaron Yang (University of Southern California)
Alexander A. Sawchuk (University of Southern California)