Marius D. Cordea
University of Ottawa
Publications
Featured research published by Marius D. Cordea.
Instrumentation and Measurement Technology Conference | 2000
Marius D. Cordea; Emil M. Petriu; Nicolas D. Georganas; Dorina C. Petriu; Thomas E. Whalen
This paper discusses a 2½-D tracking method allowing real-time recovery of the three-dimensional (3-D) position and orientation of a head moving in its image plane. The described method uses a two-dimensional (2-D) elliptical head model, region- and edge-based matching algorithms, and a linear Kalman filter estimator. The resulting motion tracking system works in a realistic environment without makeup on the face, with an uncalibrated camera, normal lighting conditions, and an unknown background.
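The linear Kalman filter named in this abstract can be sketched as a standard predict/update loop over a tracked image-plane position. The constant-velocity motion model, matrices, and noise values below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Minimal linear Kalman filter sketch for 2-D image-plane tracking,
# assuming a constant-velocity motion model. State: [x, y, vx, vy].
dt = 1.0  # frame interval (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is measured
Q = np.eye(4) * 1e-3                        # process noise covariance
R = np.eye(2) * 1e-1                        # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle given a position measurement z = [x, y]."""
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement.
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a point moving diagonally at one pixel per frame.
x, P = np.zeros(4), np.eye(4)
for t in range(1, 20):
    x, P = kalman_step(x, P, np.array([t, t], dtype=float))
```

After a few frames the velocity estimate locks onto the true motion, which is what makes the filter useful for predicting the head's position in the next frame before matching.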
Instrumentation and Measurement Technology Conference | 2001
Marius D. Cordea; Dorina C. Petriu; Emil M. Petriu; Nicolas D. Georganas; Thomas E. Whalen
This paper discusses a 3D tracking method allowing real-time recovery of the 3D position and orientation of a moving head. The described method uses a 3D wireframe model of the head, a 2D feature-based matching algorithm, and an extended Kalman filter estimator. The resulting motion tracking system works in a realistic environment without makeup on the face, with an uncalibrated camera, and unknown lighting conditions and background.
IEEE Transactions on Instrumentation and Measurement | 2008
Marius D. Cordea; Emil M. Petriu; Dorina C. Petriu
This paper describes a novel 3-D model-based tracking algorithm allowing the real-time recovery of 3-D position, orientation, and facial expressions of a moving head. The method uses a 3-D anthropometric muscle-based active appearance model (3-D AMB AAM), a feature-based matching algorithm, and an extended Kalman filter (EKF) pose and expression estimator. Our model is an extension of the classical 2-D AAM and uses a generic 3-D wireframe model of the face based on two sets of controls: the anatomically motivated muscle actuators to model facial expressions and the statistically based anthropometrical controls to model different facial types.
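The extended Kalman filter (EKF) named above handles the fact that projecting a 3-D head pose into image measurements is nonlinear: the measurement function is linearized around the current estimate at every step. The following is a toy sketch with a one-angle rotation state and a hypothetical point-feature measurement; none of the values come from the paper.

```python
import numpy as np

# Illustrative EKF step: linear motion model, nonlinear measurement.
# State: [theta, omega] (a head yaw angle and its rate); measurement:
# the 2-D image position of a feature at radius r on the head.
dt, r = 1.0, 1.0
F = np.array([[1, dt], [0, 1]], dtype=float)  # constant angular velocity
Q = np.eye(2) * 1e-4
R = np.eye(2) * 1e-2

def h(x):
    """Nonlinear measurement: feature position on a circle of radius r."""
    return r * np.array([np.cos(x[0]), np.sin(x[0])])

def H_jac(x):
    """Jacobian of h with respect to the state (linearization point)."""
    return r * np.array([[-np.sin(x[0]), 0.0],
                         [ np.cos(x[0]), 0.0]])

def ekf_step(x, P, z):
    # Predict with the linear motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update by linearizing h around the prediction.
    Hj = H_jac(x)
    S = Hj @ P @ Hj.T + R
    K = P @ Hj.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(2) - K @ Hj) @ P
    return x, P

# Track a head rotating at 0.1 rad/frame from noiseless measurements.
x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(1, 40):
    x, P = ekf_step(x, P, h(np.array([0.1 * t, 0.0])))
```

The paper's estimator stacks pose and expression parameters into one such state vector; the mechanics of the predict/linearize/update cycle are the same.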
International Journal of Advanced Media and Communication | 2009
Qing Chen; Marius D. Cordea; Emil M. Petriu; Annamária R. Várkonyi-Kóczy; Thomas E. Whalen
The paper discusses two body-language Human Computer Interaction (HCI) modalities, namely facial expressions and hand gestures, for healthcare and smart environment applications. This is an expanded version of a paper presented at the 3rd IEEE International Workshop on Medical Measurements and Applications, 9-10 May 2008, Ottawa, ON, Canada.
IEEE Transactions on Instrumentation and Measurement | 2006
Marius D. Cordea; Emil M. Petriu
This paper describes a novel method for modeling the shape and appearance of human faces in three dimensions using a constrained three-dimensional (3-D) active appearance model (AAM). Our algorithm is an extension of the classical two-dimensional (2-D) AAM. The method uses a generic 3-D wireframe model of the face, based on two sets of controls: anatomically motivated muscle actuators to model facial expressions and statistically based anthropometrical controls to model different facial-types. The 3-D anthropometric-muscle-based model (AMBM) of the face allows representing a facial image in terms of a controlled model-parameter set, hence, providing a natural and constrained basis for face segmentation and analysis. The generated face models are consequently simpler and less memory intensive compared to the classical appearance-based models. The proposed method allows for accurate fitting results by constraining solutions to be valid instances of a face model. Extensive image-segmentation experiments have demonstrated the accuracy of the proposed algorithm against the classical AAM.
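The statistical half of an AAM-style model can be illustrated with a linear PCA shape model: any face shape is the mean shape plus a weighted sum of learned modes, so segmentation reduces to searching a small parameter vector. This sketch uses synthetic data and plain PCA; the paper's model is 3-D, anthropometric, and muscle-based, which this does not reproduce.

```python
import numpy as np

# Hypothetical sketch: a linear shape model learned with PCA, so any
# instance is mean + params @ modes. All data here is synthetic.
rng = np.random.default_rng(0)
n_samples, n_points = 50, 8
mean_shape = rng.normal(size=n_points)
# Training shapes = mean plus variation along two latent directions.
basis = rng.normal(size=(2, n_points))
coeffs = rng.normal(size=(n_samples, 2))
shapes = mean_shape + coeffs @ basis

# PCA via SVD of the centered data matrix.
data_mean = shapes.mean(axis=0)
centered = shapes - data_mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
modes = Vt[:2]                      # keep the two dominant modes

def project(shape):
    """Shape -> low-dimensional model parameters."""
    return modes @ (shape - data_mean)

def reconstruct(params):
    """Model parameters -> shape instance of the model."""
    return data_mean + params @ modes

# Round trip: a training shape is reproduced from just two parameters.
err = np.abs(reconstruct(project(shapes[0])) - shapes[0]).max()
```

Constraining every fitted solution to lie in the span of the modes is what the abstract means by "valid instances of a face model": arbitrary, implausible shapes simply cannot be expressed in the parameter set.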
Cyberpsychology, Behavior, and Social Networking | 2003
Thomas E. Whalen; Dorina C. Petriu; Lucy Yang; Emil M. Petriu; Marius D. Cordea
Avatars, representations of people in virtual environments, are subject to human control. However, for most applications, it is impractical for a person to directly control each joint in a complex avatar. Rather, people must be allowed to specify complex behaviours with simple instructions and the avatar permitted to select the correct movements in sequence to execute the instruction. This requires a variety of technologies that are currently available. Human behaviour must be captured and stored so that it can be retrieved at a later time for use by the avatar. This has been done successfully with a variety of haptic interfaces, with visual observation of human head movements, and with verbal behaviour in natural language applications. The behaviour must be broken into atomic actions that can be sequenced with a regular grammar, and an appropriate grammar developed. Finally, a user interface must be developed so that a person can deliver instructions to the avatar.
ACM Multimedia | 2001
Michel D. Bondy; Nicolas D. Georganas; Emil M. Petriu; Dorina C. Petriu; Marius D. Cordea; Thomas E. Whalen
In this paper, we describe an experimental performance-driven animation system for an avatar face using model-based video coding and audio-track driven lip animation.
Instrumentation and Measurement Technology Conference | 1999
H.J.W. Spoelder; Emil M. Petriu; Thom E. Whalen; Dorina C. Petriu; Marius D. Cordea
This paper presents development aspects of a semi-autonomous animated avatar for real-time interactive virtual environment applications.
IEEE International Workshop on Medical Measurements and Applications | 2008
Qing Chen; Marius D. Cordea; Emil M. Petriu; Thomas E. Whalen; Imre J. Rudas; Annamária R. Várkonyi-Kóczy
The paper discusses two body-language human-computer interaction modalities, namely the hand gesture and facial expression, for intelligent space applications such as elderly care and smart home applications.
Virtual Environments, Human-Computer Interfaces and Measurement Systems | 2004
Marius D. Cordea; Emil M. Petriu; Thomas E. Whalen
This paper describes a novel method for modeling the shape and appearance of human faces in 3D using a constrained 3D active appearance model (AAM). The method uses a generic 3D wireframe model of the face, based on two sets of controls: the anatomically motivated muscle actuators to model facial expressions and statistically based anthropometrical controls to model different facial types (3D-anthropometric-muscle-based-model, 3D-AMBM). This allows explaining a facial image in terms of a controlled model parameter set, hence providing a natural and constrained basis for face segmentation and analysis. The generated face models are consequently simpler and less memory intensive compared to the classical appearance based models. Additionally, our method achieves accurate fitting results by constraining solutions to be valid instances of a face model.