Madjid Maidi
University of Évry Val d'Essonne
Publication
Featured research published by Madjid Maidi.
machine vision applications | 2010
Madjid Maidi; Jean-Yves Didier; Fakhreddine Ababsa; Malik Mallem
Vision-based tracking systems are widely used in augmented reality (AR) applications. Their registration can be very accurate, with no delay between the real and virtual scenes. However, vision-based tracking often suffers from limited range, errors, heavy processing time, and erroneous behavior due to numerical instability, so robust methods are required to overcome these problems. In this paper, we survey classic vision-based pose computation and present a method that offers increased robustness and accuracy in the context of real-time AR tracking. We aim to determine the performance of four pose estimation methods in terms of errors and execution time. We developed a hybrid approach that combines an iterative method based on the extended Kalman filter (EKF) with an analytical method that solves directly for the pose parameters. The direct method initializes the pose parameters of the EKF algorithm, which then optimizes these parameters. The pose estimation methods were evaluated using a series of tests and an experimental protocol. The analysis of results shows that our hybrid algorithm improves the stability, convergence and accuracy of the pose parameters.
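The direct-initialization-plus-Kalman-refinement scheme can be illustrated with a toy scalar example (a sketch under assumed values, not the authors' implementation): a closed-form estimate from the first few measurements seeds a Kalman filter, which then refines the parameter as more observations arrive.

```python
import numpy as np

# Toy illustration of the hybrid scheme: a direct closed-form estimate seeds
# an iterative Kalman refinement of a single "pose" parameter.
rng = np.random.default_rng(0)
true_param = 1.5                                     # hypothetical pose parameter
meas = true_param + 0.1 * rng.standard_normal(50)    # noisy observations

# Direct method: one-shot analytical estimate from the first few measurements.
x = meas[:5].mean()     # initial state
P = 0.1 ** 2            # initial estimate variance
R = 0.1 ** 2            # measurement noise variance

# Kalman refinement: each new measurement tightens the estimate.
for z in meas[5:]:
    K = P / (P + R)     # Kalman gain (static state, unit observation model)
    x = x + K * (z - x) # state update
    P = (1 - K) * P     # variance update

print(abs(x - true_param) < 0.1, P < 0.001)   # True True
```

In the papers' setting the state is the full 6-DOF pose and the direct method is an analytical pose solver, but the structure (analytical seed, iterative refinement) is the same.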
Virtual Reality | 2011
Mahmoud Haydar; David Roussel; Madjid Maidi; Samir Otmane; Malik Mallem
This paper addresses the preservation of cultural heritage using virtual reality (VR) and augmented reality (AR) technologies in a cultural context. While VR/AR technologies are discussed broadly, attention is focused on 3D visualization and 3D interaction modalities, illustrated through three demonstrators: two VR demonstrators (immersive and semi-immersive) and an AR demonstrator including tangible user interfaces. To show the benefits of VR and AR technologies for studying and preserving cultural heritage, we investigated visualization of, and interaction with, reconstructed underwater archaeological sites. The idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in how they present information and exploit interaction modalities. The visualization and interaction techniques developed through these demonstrators are the result of an ongoing dialogue between archaeological requirements and the technological solutions developed.
2009 IEEE Symposium on Computational Intelligence for Multimedia Signal and Vision Processing | 2009
Madjid Maidi; Fakhreddine Ababsa; Malik Mallem
This paper describes a multimodal tracking system for resolving occlusions in augmented reality applications. The first module of the proposed architecture is a vision-based system that identifies and tracks visible targets. When targets are partially occluded by scene elements, a second module takes over from the vision-based module and tracks feature points using a robust algorithm. Finally, a multi-sensor tracking approach handles total occlusion of targets and maintains registration even when no markers are visible. Experimental results and extensive evaluations demonstrate the efficiency and robustness of the proposed multimodal approach to tracking and occlusion handling in augmented reality.
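The three-module fallback described in the abstract can be sketched as a simple priority dispatch (module names and return values are assumed for illustration, not taken from the paper):

```python
def select_pose(marker_pose, feature_pose, sensor_pose):
    """Pick the best available pose estimate, in decreasing order of accuracy.

    Each argument is one module's estimate for the current frame, or None if
    that module failed; sensor_pose is assumed to be always available.
    """
    if marker_pose is not None:        # targets visible: vision-based module
        return marker_pose, "markers"
    if feature_pose is not None:       # partial occlusion: feature-point module
        return feature_pose, "features"
    return sensor_pose, "sensors"      # total occlusion: multi-sensor module

# All targets occluded: the system falls back to the sensor estimate.
print(select_pose(None, None, "inertial_pose"))   # ('inertial_pose', 'sensors')
```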
international conference on virtual reality | 2008
Mahmoud Haydar; Madjid Maidi; David Roussel; Malik Mallem; Pierre Drap; Kim Bale; Paul Chapman
This paper describes ongoing developments in photogrammetry and mixed reality for the Venus European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The idea behind using mixed reality techniques is to offer archaeologists and the general public new insights into the reconstructed sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in how they present information. General-public activities emphasize the visual and auditory realism of the reconstruction, while archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which leads to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process as well as on both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain high-quality terrain reproduction. The second concerns the development of the virtual and augmented reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness of both the artifacts that were found and the process by which they were discovered, by recreating the dive from ship to seabed.
Eurasip Journal on Image and Video Processing | 2010
Madjid Maidi; Fakhreddine Ababsa; Malik Mallem
In Augmented Reality applications, human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene, which requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach combining a direct method with the Kalman filter. An important factor in providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method with an optical flow approach that tracks visible points and maintains the virtual overlay when targets are not identified. Our real-time algorithm is tested from different camera viewpoints under various image conditions and proves to be accurate and robust.
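The occlusion fallback can be sketched as follows (a schematic, not the authors' code; the displacement is assumed to come from a point tracker such as pyramidal Lucas-Kanade, and all names are illustrative):

```python
# When the fiducial detector fails, previously detected points are propagated
# by an optical-flow displacement so the virtual overlay is not lost.

def update_overlay(detected_corners, flow_displacement, state):
    """Return the tracked target corners for the current frame.

    detected_corners: list of (x, y) from the fiducial detector, or None.
    flow_displacement: (dx, dy) estimated by a point tracker.
    state: dict holding the last known corners between frames.
    """
    if detected_corners is not None:          # target identified: trust the detector
        state["corners"] = detected_corners
    elif state.get("corners") is not None:    # occlusion: propagate by optical flow
        dx, dy = flow_displacement
        state["corners"] = [(x + dx, y + dy) for x, y in state["corners"]]
    return state.get("corners")

# Usage: the marker is seen once, then occluded for two frames.
state = {}
print(update_overlay([(0, 0), (10, 0)], (0, 0), state))  # [(0, 0), (10, 0)]
print(update_overlay(None, (1, 2), state))               # [(1, 2), (11, 2)]
print(update_overlay(None, (1, 2), state))               # [(2, 4), (12, 4)]
```

A real implementation would propagate each corner by its own flow vector; a single shared displacement keeps the sketch short.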
trans. computational science | 2013
Madjid Maidi; Malik Mallem; Laredj Benchikh; Samir Otmane
In the automotive industry, industrial robots are widely used in production lines for tasks such as welding, painting or assembly. Their use requires users to master both manipulation and robot control. Recently, new tools have been developed to realize fast and accurate trajectories in many production sectors, using either a real vehicle prototype or a generalized design within a virtual simulation platform. However, several issues arise in these cases: the delay between the design of a vehicle and its production is often long, and the virtual model is a non-realistic rendering of the real robot and vehicle, which can introduce localization inaccuracies when performing trajectories. Our work is part of the TRI project (Teleteaching Industrial Robots), which aims to build a demonstrator showing the interaction of industrial robots with virtual components and allowing users to be trained to perform their tasks successfully on a virtual representation of a production entity.
Archive | 2008
Fakhreddine Ababsa; Madjid Maidi; Jean-Yves Didier; Malik Mallem
Augmented Reality Systems (ARS) attempt to enhance humans’ perception of their indoors and outdoors working and living environments and understanding of tasks that they need to carry out. The enhancement is effected by complementing the human senses with virtual input. For example, when the human visual sense is enhanced, an ARS allows virtual objects to be superimposed on a real world by projecting the virtual objects onto real objects. This provides the human user of the ARS with additional information that he/she could not perceive with his/her senses. In order to receive the virtual input and sense the world around them augmented with real time computer-generated features, users of an ARS need to wear special equipment, such as head-mounted devices or wearable computing gears. Tracking technologies are very important in an ARS and, in fact, constitute one challenging research and development topic. Tracking technologies involve both hardware and software issues, but in this chapter we focus on tracking computation. Tracking computation refers to the problem of estimating the position and orientation of the ARS user’s viewpoint, assuming the user to carry a wearable camera. Tracking computation is crucial in order to display the composed images properly and maintain correct registration of real and virtual worlds. This tracking problem has recently become a highly active area of research in ARS. Indeed, in recent years, several approaches to vision-based tracking using a wearable camera have been proposed, that can be classified into two main categories, namely “marker-based tracking” and “marker-less tracking.” In this chapter, we provide a concise introduction to vision-based tracking for mobile ARS and present an overview of the most popular approaches recently developed in this research area. We also present several practical examples illustrating how to conceive and to evaluate such systems.
INTELLIGENT SYSTEMS AND AUTOMATION: 2nd Mediterranean Conference on Intelligent Systems and Automation (CISA’09) | 2009
Mahmoud Haydar; Madjid Maidi; David Roussel; Malik Mallem
Navigation in virtual environments is a complex task that imposes a high cognitive load on the user: it consists in maintaining knowledge of the user's current position and orientation while he moves through the space. In this paper, we present a novel approach for navigation in 3D virtual environments. The method is based on the principle of skiing, and the idea is to give the user full control of his navigation speed and rotation using his two hands. This technique enables user-steered exploration by determining the direction and speed of motion from the positions of the user's hands. A speed-control module lets the user easily adjust the speed using the angle between the hands. The direction of motion is given by the axis orthogonal to the segment joining the two hands. A user study shows the efficiency of the method in performing exploration tasks in complex, large-scale 3D environments. Furthermore, we proposed an experimental protocol to prove ...
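The control law described in the abstract can be sketched as follows (the gain and the angle convention are assumptions for illustration, not taken from the paper): the motion direction is orthogonal to the inter-hand segment, and speed grows with the tilt angle of that segment.

```python
import math

def skiing_control(left, right, gain=1.0):
    """Map two hand positions (top view) to a motion direction and speed.

    Direction: unit vector orthogonal to the segment joining the hands.
    Speed: proportional to the tilt angle of that segment (assumed mapping).
    """
    sx, sy = right[0] - left[0], right[1] - left[1]   # inter-hand segment
    norm = math.hypot(sx, sy)
    if norm == 0.0:
        return (0.0, 0.0), 0.0                        # degenerate: hands coincide
    direction = (-sy / norm, sx / norm)               # 90-degree rotation of the segment
    tilt = math.atan2(sy, sx)                         # angle between the hands (assumed convention)
    return direction, gain * abs(tilt)

# Hands level and a metre apart: forward direction, zero speed.
direction, speed = skiing_control((-1.0, 0.0), (1.0, 0.0))
print(direction[1], speed)   # 1.0 0.0
```

Tilting one hand forward rotates the direction vector and raises the speed, which matches the two-handed steering metaphor the abstract describes.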
international conference on informatics in control, automation and robotics | 2018
Madjid Maidi; Fakhreddine Ababsa; Malik Mallem
virtual systems and multimedia | 2007
Frederic Alcala; A. Alcocer; F. Alves; Kim Bale; J. Bateman; Andrea Caiti; M. Casenove; Jean-Christophe Chambelland; Giuseppe Chapman; Olivier Curé; Pierre Drap; Audrey Durand; K. Edmundson; L. Gambella; Pamela Gambogi; Frédéric Gauch; Klaus Hanke; Mahmoud Haydar; Julien Hué; Robert Jeansoulin; Stuart Jeffrey; Luc Long; Vanessa Loureiro; Madjid Maidi; Odile Papini; G. Pachoud; Antonio Pascoal; Julian D. Richards; David Roussel; David Scaradozzi