
Publications


Featured research published by Marielle Mokhtari.


IEEE Virtual Reality Conference | 2011

Bimanual gestural interface for virtual environments

Julien-Charles Lévesque; Denis Laurendeau; Marielle Mokhtari

In this paper, a 3D bimanual gestural interface using data gloves is presented. We build upon past contributions on gestural interfaces and bimanual interactions to create an efficient and intuitive gestural interface that can be used in immersive environments. The proposed interface uses the hands in an asymmetric style, with the left hand providing the mode of interaction and the right hand acting at a finer level of detail.


Lecture Notes in Computer Science | 2001

Generic Multi-scale Segmentation and Curve Approximation Method

Marielle Mokhtari; Robert Bergevin

We propose a new, complete method to extract significant description(s) of planar curves as sequences of constant-curvature segments. The method is based (i) on a multi-scale segmentation and curve approximation algorithm, defined by two grouping processes (polygonal and constant-curvature approximations), which leads to a multi-scale covering of the curve, and (ii) on an intra- and inter-scale classification of this covering, guided by heuristically defined qualitative labels, which yields the pairs (scale, list of constant-curvature segments) that best describe the shape of the curve. Experiments show that the proposed method provides salient segmentation and approximation results that respect shape description and recognition criteria.
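The paper's own two-process, multi-scale algorithm is not reproduced in this abstract. As a rough illustration of the underlying idea only, the sketch below estimates discrete curvature from triples of consecutive points (Menger curvature of the circumscribed circle) and greedily groups consecutive points of similar curvature into constant-curvature segments; the function names, the greedy grouping, and the `tol` threshold are assumptions for the example, not the authors' method.

```python
import math

def discrete_curvature(p0, p1, p2):
    # Menger curvature: k = 4 * triangle_area / (|p0p1| * |p1p2| * |p0p2|).
    # Collinear points give zero area, hence zero curvature.
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p0, p2)
    if a * b * c == 0:
        return 0.0
    # Cross product magnitude equals twice the triangle area.
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return 2.0 * area2 / (a * b * c)

def constant_curvature_segments(points, tol=0.05):
    # Greedily group consecutive interior points whose discrete curvature
    # stays within `tol` of the running segment mean.
    ks = [discrete_curvature(points[i - 1], points[i], points[i + 1])
          for i in range(1, len(points) - 1)]
    segments, start, mean_k, n = [], 0, ks[0], 1
    for i in range(1, len(ks)):
        if abs(ks[i] - mean_k) <= tol:
            n += 1
            mean_k += (ks[i] - mean_k) / n  # incremental mean update
        else:
            segments.append((start, i, mean_k))
            start, mean_k, n = i, ks[i], 1
    segments.append((start, len(ks), mean_k))
    return segments
```

For a curve made of a straight run followed by a unit-circle arc, this returns one near-zero-curvature segment, a short transition, and one segment of curvature close to 1. The paper's multi-scale covering and inter-scale classification go well beyond this single-scale greedy pass.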


International Conference on Virtual, Augmented and Mixed Reality | 2013

An Asymmetric Bimanual Gestural Interface for Immersive Virtual Environments

Julien-Charles Lévesque; Denis Laurendeau; Marielle Mokhtari

In this paper, a 3D bimanual gestural interface using data gloves is presented. We build upon past contributions on gestural interfaces and bimanual interactions to create an efficient and intuitive gestural interface that can be used in a wide variety of immersive virtual environments. Based on real world bimanual interactions, the proposed interface uses the hands in an asymmetric style, with the left hand providing the mode of interaction and the right hand acting on a finer level of detail. To validate the efficiency of this interface design, a comparative study between the proposed two-handed interface and a one-handed variant was conducted on a group of right-handed users. The results of the experiment support the bimanual interface as more efficient than the unimanual one. It is expected that this interface and the conclusions drawn from the experiments will be useful as a guide for efficient design of future bimanual gestural interfaces.
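The abstract does not give the interface's gesture vocabulary. Purely to illustrate the asymmetric division of labour it describes (left hand selects the coarse mode, right hand performs the fine-grained action within that mode), here is a minimal dispatch sketch; all gesture and action names are made up for the example and are not from the paper.

```python
# Hypothetical gesture-to-mode mapping for the non-dominant (left) hand.
MODES = {
    "fist": "navigate",
    "point": "select",
    "flat": "manipulate",
}

# Hypothetical (mode, right-hand gesture) -> fine-grained action table.
ACTIONS = {
    ("navigate", "point"): "fly toward pointed direction",
    ("select", "pinch"): "pick object under cursor",
    ("manipulate", "grab"): "rotate/translate held object",
}

def interpret(left_gesture, right_gesture):
    # Left hand sets the interaction mode; the right hand's gesture is
    # only meaningful within the currently active mode.
    mode = MODES.get(left_gesture)
    if mode is None:
        return None
    return ACTIONS.get((mode, right_gesture))
```

The key design point is that the same right-hand gesture can mean different things in different modes, which keeps the right hand's gesture set small while the left hand provides context.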


IEEE Virtual Reality Conference | 2009

A Software Architecture for Sharing Distributed Virtual Worlds

Frédéric Drolet; Marielle Mokhtari; François Bernier; Denis Laurendeau

This paper presents a generic software architecture developed to allow users located at different physical locations to share the same virtual environment and to interact with each other and the environment in a coherent and transparent manner.


International Conference on Virtual, Augmented and Mixed Reality | 2013

Making Sense of Large Datasets in the Context of Complex Situation Understanding

Marielle Mokhtari; Eric Boivin; Denis Laurendeau

This paper presents prototype exploration tools (combining visualization and human-computer interaction aspects) developed for immersive displays in the context of the IMAGE project. IMAGE supports collaboration between users (i.e., experts, specialists, decision-makers…) toward a common understanding of complex situations, using a human-guided feedback loop that involves cutting-edge techniques for knowledge representation, scenario scripting, simulation, and exploration of large datasets.


IEEE Virtual Reality Conference | 2011

IMAGE — Complex situation understanding: An immersive concept development

Marielle Mokhtari; Eric Boivin; Denis Laurendeau; Sylvain Comtois; Denis Ouellet; Julien-Charles Lévesque; Étienne Ouellet

This paper presents an immersive, human-centric virtual work cell for dynamically analyzing complex situations. The environment is supported by a custom open architecture, is composed of objects of complementary natures reflecting the levels of human understanding, and is controlled by an intuitive 3D bimanual gestural interface using data gloves.


Visualization in Biomedical Computing 1994 | 1994

Feature detection on 3D images of dental imprints

Marielle Mokhtari; Denis Laurendeau

A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of the feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure that tracks the position of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
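The abstract's idea of tracking local minima across several scales and keeping only those that persist can be sketched in one dimension. The following toy example (not the paper's algorithm, which works on 3D imprint profiles) smooths a signal with a repeated 3-tap moving average as a stand-in for scale-space filtering, extracts strict local minima at four scales, and keeps the finest-scale minima that have a nearby counterpart at every coarser scale; the scale and window parameters are assumptions for the illustration.

```python
def smooth(signal, passes):
    # Repeated 3-tap moving average, endpoints held fixed, as a simple
    # stand-in for scale-space smoothing.
    s = list(signal)
    for _ in range(passes):
        s = [s[0]] + [(s[i - 1] + s[i] + s[i + 1]) / 3.0
                      for i in range(1, len(s) - 1)] + [s[-1]]
    return s

def local_minima(signal):
    # Indices of strict interior local minima.
    return [i for i in range(1, len(signal) - 1)
            if signal[i] < signal[i - 1] and signal[i] < signal[i + 1]]

def persistent_minima(signal, scales=(0, 1, 2, 4), window=2):
    # Keep finest-scale minima that have a minimum within `window`
    # samples at every coarser scale (simplified coarse-to-fine tracking).
    minima_per_scale = [local_minima(smooth(signal, p)) for p in scales]
    result = []
    for m in minima_per_scale[0]:
        if all(any(abs(m - c) <= window for c in coarse)
               for coarse in minima_per_scale[1:]):
            result.append(m)
    return result
```

A deep valley survives all four scales, while a shallow noise dip is smoothed away after one pass and is therefore rejected, which is the intuition behind producing a final map of only the stable minima.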


Visual Analytics Science and Technology | 2010

Visual tools for dynamic analysis of complex situations

Marielle Mokhtari; Eric Boivin; Denis Laurendeau; Maxime Girardin

This paper presents an interactive interface synchronized with a simulation framework for exploring complex scenarios. The interface exploits visual analysis to facilitate the understanding of complex situations by human users.


Proceedings of SPIE | 2015

Collaborative interactive visualization: exploratory concept

Marielle Mokhtari; Valérie Lavigne; Frédéric Drolet

Dealing with an ever-increasing amount of data is a challenge that military intelligence analysts, or teams of analysts, face day to day. Increased individual and collective comprehension comes through collaboration between people: the better the collaboration, the better the comprehension. Nowadays, various technologies support and enhance collaboration by allowing people to connect and collaborate in settings as varied as mobile devices, networked computers, display walls, and tabletop surfaces, to name just a few. A powerful collaboration system includes traditional and multimodal visualization features to achieve effective human communication. Interactive visualization strengthens collaboration because this approach is conducive to incrementally building a mental assessment of the data's meaning. The purpose of this paper is to present an overview of the envisioned collaboration architecture and the interactive visualization concepts underlying the Sensemaking Support System prototype, developed to support analysts in the context of the Joint Intelligence Collection and Analysis Capability project at DRDC Valcartier. It presents the current version of the architecture, discusses future capabilities to help analysts accomplish their tasks, and recommends collaboration and visualization technologies that allow going a step further, both as individuals and as a team.


Proceedings of SPIE | 1995

Localization of significant 3D objects in 2D images for generic vision tasks

Marielle Mokhtari; Robert Bergevin

Computer vision experiments are seldom linked to practical applications; rather, they deal with typical laboratory setups under controlled conditions. For instance, most object-recognition experiments are based on specific models used under restrictive constraints. Our work proposes a general framework for rapidly locating significant 3D objects in 2D static images of medium to high complexity, as a prerequisite step to recognition and interpretation when no a priori knowledge of the contents of the scene is assumed. In this paper, a definition of generic objects is proposed, covering the structures that are implied in the image. Under this framework, it must be possible to locate generic objects in any image fed to the system and to assign a significance figure to each one. The most significant structure in a given image becomes the focus of interest of the system, determining subsequent tasks (such as subsequent robot moves, image acquisitions, and processing). A survey of existing strategies for locating 3D objects in 2D images is first presented, and our approach is defined relative to these strategies. Perceptual grouping paradigms leading to the structural organization of the components of an image are at the core of our approach.

Collaboration


Dive into Marielle Mokhtari's collaborations.

Top Co-Authors

Eric Boivin
Defence Research and Development Canada

Valérie Lavigne
Defence Research and Development Canada

Frédéric Drolet
Defence Research and Development Canada