Chris Joslin
Carleton University
Publications
Featured research published by Chris Joslin.
Instrumentation and Measurement Technology Conference | 2005
Chris Joslin; Ayman El-Sawah; Qing Chen; Nicolas D. Georganas
In this paper we introduce our method for enabling dynamic recognition of hand gestures. Like much other research on gesture recognition, we use a camera to track motion and interpret it in terms of meaningful gestures; however, we emphasise tracking the fingers as well as the hand in order to cover a much wider range of gestures. Recognition proceeds in three key stages, with a fourth in development. The first stage processes the visual information from the camera and identifies the key regions and elements (such as the hand and fingers). This classified information is passed to a 2D-to-3D module that transforms it into full 3D space, applying it to a calibrated hand model using inverse projection matrices and inverse kinematics. We then simplify this model into posture curvature information and apply it to a hidden Markov model (HMM), which is used to identify and differentiate between gestures, even ones using the same finger combinations. We briefly discuss our current work on applying context awareness to this scenario, used in combination with the HMM to assign a different semantic to each gesture. This is especially useful given the large overlap in semantics appropriated to hand gestures.
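As a rough illustration of the final classification stage only, the sketch below scores a sequence of quantised posture-curvature symbols against one discrete HMM per gesture and picks the most likely gesture. The model parameters, symbol alphabet, and gesture names are hypothetical; the paper's feature extraction and training are not shown.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Score an observation sequence under a discrete HMM
    using the forward algorithm (in log space for stability)."""
    alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        # log-sum-exp over previous states for each next state
        alpha = np.logaddexp.reduce(
            alpha[:, None] + np.log(trans), axis=0
        ) + np.log(emit[:, o])
    return np.logaddexp.reduce(alpha)

# Hypothetical 2-state, 3-symbol models for two gestures.
models = {
    "swipe": (np.array([0.9, 0.1]),
              np.array([[0.7, 0.3], [0.2, 0.8]]),
              np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])),
    "grab":  (np.array([0.5, 0.5]),
              np.array([[0.5, 0.5], [0.5, 0.5]]),
              np.array([[0.1, 0.8, 0.1], [0.3, 0.4, 0.3]])),
}

obs = [0, 0, 2, 2]  # quantised posture-curvature symbols
best = max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))
print("recognised gesture:", best)
```

Running separate per-gesture models like this is what lets the recogniser differentiate gestures that share the same finger combinations: the temporal structure, not just the posture, decides the score.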
Canadian Conference on Computer and Robot Vision | 2007
Ayman El-Sawah; Chris Joslin; Nicolas D. Georganas; Emil M. Petriu
In this paper we present a framework for 3D hand tracking and dynamic gesture recognition using a single camera. Hand tracking is performed in a two-step process: we first generate 3D hand posture hypotheses using geometric and kinematic inverse transformations, and then validate the hypotheses by projecting the postures onto the image plane and comparing the projected model with the ground truth using a probabilistic observation model. Dynamic gesture recognition is performed using a dynamic Bayesian network model. The framework uses elements of soft computing to resolve the ambiguity inherent in vision-based tracking: the hand tracking module produces a fuzzy hand posture output, and the gesture recognition module feeds back potential posture hypotheses.
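A minimal sketch of the generate-project-validate loop the abstract describes, on a toy joint-angle model. The projection function, observation model, and all numbers are placeholders, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(posture):
    """Placeholder camera projection: map joint angles to a
    2-D image-plane feature (here just a nonlinear function)."""
    return np.stack([np.cos(posture).sum(axis=-1),
                     np.sin(posture).sum(axis=-1)], axis=-1)

def observation_likelihood(projected, observed, sigma=0.5):
    """Probabilistic observation model: Gaussian score of the
    distance between the projected model and the image evidence."""
    d2 = ((projected - observed) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

# 1. Generate posture hypotheses (e.g. from inverse kinematics).
hypotheses = rng.uniform(0, np.pi / 2, size=(200, 3))  # 3 joint angles

# 2. Validate: project each hypothesis onto the image plane and
#    weight it against the observed image features.
observed = project(np.array([0.4, 0.9, 0.2]))  # synthetic ground truth
weights = observation_likelihood(project(hypotheses), observed)
weights /= weights.sum()

# 3. A fuzzy posture output: the weighted mean over hypotheses,
#    which a recognition stage could refine via feedback.
fuzzy_posture = weights @ hypotheses
print("estimated joint angles:", np.round(fuzzy_posture, 2))
```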
Computer Communications | 2003
Chris Joslin; Igor S. Pandzic; Nadia Magnenat Thalmann
Networked Collaborative Virtual Environment (NCVE) systems allow multiple geographically distant users to share the same 3D virtual environment over a network. The paper presents an overview of developments in the NCVE field in the past decade, introducing research challenges and solutions for such systems and briefly presenting the systems that brought major developments to the field. As a case study, we present VPARK, a new-generation NCVE system.
IEEE Computer Graphics and Applications | 2001
Chris Joslin; Tom Molet; Nadia Magnenat Thalmann; Joaquim Esmerado; Daniel Thalmann; Ian J. Palmer; Nicholas Chilton; Rae A. Earnshaw
We present a networked virtual environment (NVE) system and an attraction-building system, based on Windows NT, that enable users to introduce their own scenario-based applications into a shared virtual environment. Our goal was to develop and integrate several modules into a system capable of animating realistic virtual humans in real time. This includes modeling and representing virtual humans with high realism and simulating human face and body movements in real time. Realism becomes quite important in NVEs, where communication among participants is crucial for their sense of presence.
Computers & Graphics | 2004
Thomas Di Giacomo; Chris Joslin; Stephane Garchery; HyungSeok Kim; Nadia Magnenat-Thalmann
While level-of-detail (LoD) methods for the representation of 3D models are efficient and established tools to manage the trade-off between rendering speed and quality, LoD for animation has not yet been intensively studied by the community, and virtual human animation in particular has received little attention. Animation, a major step towards immersive and credible virtual environments, involves heavy computation and therefore needs its complexity controlled if it is to be embedded into real-time systems. Such control becomes even more critical with the emergence of powerful new mobile devices and their increasing use for cyberworlds. With the help of suitable middleware solutions, executables are becoming more and more multi-platform. However, the adaptation of content for various network and terminal capabilities, as well as for different user preferences, is still a key feature that needs to be investigated. It would enable the adoption of the "Multiple Target Devices, Single Content" concept for virtual environments and would, in theory, make such virtual worlds possible under any conditions without the need for multiple versions of the content. It is on this issue that we focus, with a particular emphasis on 3D objects and animation. This paper presents theoretical and practical methods for adapting a virtual human's representation and animation stream, both for skeleton-based body animation and for deformation-based facial animation. We also discuss practical details of the integration of our methods into the MPEG-21 and MPEG-4 architectures.
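A toy sketch of the kind of complexity control the abstract argues for: updating only as many skeleton joints per frame as a target device's budget allows. The joint names, priorities, costs, and budgets are invented for illustration and do not come from the paper.

```python
# Hypothetical animation-LoD selector: update the highest-priority
# skeleton joints first, stopping at a per-device cost budget.
JOINTS = [
    # (name, priority, relative update cost)
    ("spine",      10, 3),
    ("head",        9, 2),
    ("l_shoulder",  8, 2),
    ("r_shoulder",  8, 2),
    ("l_hand",      4, 4),
    ("r_hand",      4, 4),
    ("l_toes",      1, 1),
    ("r_toes",      1, 1),
]

def select_joints(budget: int) -> list[str]:
    """Greedy LoD selection under a per-frame cost budget."""
    chosen, spent = [], 0
    for name, _prio, cost in sorted(JOINTS, key=lambda j: -j[1]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print("desktop :", select_joints(budget=19))  # full skeleton
print("mobile  :", select_joints(budget=7))   # coarse skeleton
```

The same single source skeleton serves both devices; only the selection changes, which is the essence of the single-content, multiple-device idea.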
The Visual Computer | 2006
HyungSeok Kim; Chris Joslin; Thomas Di Giacomo; Stephane Garchery; Nadia Magnenat-Thalmann
The goal of this research was the creation of an adaptation mechanism for the delivery of three-dimensional content. The adaptation of content for various network and terminal capabilities, as well as for different user preferences, is a key feature that needs to be investigated. Current state-of-the-art research on adaptation shows promising results for specific tasks and limited types of content, but is still not well suited to massive heterogeneous environments. In this research, we present a method for transmitting adapted three-dimensional content to multiple target devices. This paper presents theoretical and practical methods for adapting three-dimensional content, including shapes and animation. We also discuss practical details of the integration of our methods into the MPEG-21 and MPEG-4 architectures.
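To make the adaptation decision concrete, here is a small sketch that picks one of several pre-encoded versions of a 3D asset from a terminal's triangle budget and the available bandwidth. The version table and thresholds are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Version:
    name: str
    triangles: int     # geometric complexity
    bitrate_kbps: int  # stream size requirement

# Hypothetical pre-encoded versions of one 3D asset,
# ordered from richest to poorest.
VERSIONS = [
    Version("high",   50_000, 2_000),
    Version("medium", 12_000,   600),
    Version("low",     2_500,   150),
]

def adapt(max_triangles: int, bandwidth_kbps: int) -> Version:
    """Choose the richest version the terminal and network can handle."""
    for v in VERSIONS:
        if v.triangles <= max_triangles and v.bitrate_kbps <= bandwidth_kbps:
            return v
    return VERSIONS[-1]  # fall back to the poorest version

print(adapt(max_triangles=15_000, bandwidth_kbps=800).name)  # -> medium
```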
Virtual Reality Software and Technology | 2003
Chris Joslin; Nadia Magnenat-Thalmann
Sound rendering requires that many different aspects be considered simultaneously, especially when rendering a real-time virtual environment. In 3D sound rendering, much as for graphics, one of the major influencing factors is the number of reflective polygons in a scene. With the increasing capability of common graphics cards this number can now be very high, especially when scene designers produce an optimised scene using tools such as Polygon Cruncher or Rational Reducer. In addition, programs such as Lightscape™ [20], which produces realistic lighting using per-vertex shading, increase the number of polygons in a scene by several factors. We therefore propose a robust pre-processing method that dramatically reduces the number of polygons in a scene to a level suitable for real-time sound rendering. The method can also be combined with other methods (e.g. scene partitioning) for even lower CPU usage.
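One simple way such a reduction could work (not necessarily the paper's algorithm) is to cluster triangles with nearly identical normals, since acoustically they reflect sound almost the same way and can be treated as one reflector. The angle threshold and clustering scheme below are illustrative assumptions.

```python
import numpy as np

def reduce_for_acoustics(vertices, faces, angle_deg=15.0):
    """Cluster triangles whose normals agree within roughly
    `angle_deg`; each cluster can then stand in for many small
    polygons as a single acoustic reflector."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    # Per-face unit normals.
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # Quantise normals onto a coarse grid so that faces with
    # similar orientation share a cluster key.
    step = np.radians(angle_deg)
    keys = np.round(np.arcsin(np.clip(n, -1, 1)) / step).astype(int)
    clusters = {}
    for i, key in enumerate(map(tuple, keys)):
        clusters.setdefault(key, []).append(i)
    return list(clusters.values())

# Two coplanar triangles forming a quad collapse to one cluster.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(reduce_for_acoustics(verts, tris))  # -> [[0, 1]]
```

This directly targets the per-vertex-shaded meshes mentioned above, where lighting tools multiply polygon counts without adding acoustically meaningful geometry.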
IEEE Virtual Reality Conference | 2001
Sumedha Kshirsagar; Chris Joslin; Won-Sook Lee; Nadia Magnenat-Thalmann
We present our system for personalized face and speech communication over the Internet. The overall system consists of three parts: the cloning of real human faces to use as representative avatars; the networked virtual environment system performing the basic tasks of network and device management; and the speech system, which includes a text-to-speech engine and a real-time engine that extracts phonemes from natural speech. The combination of these three elements allows real humans, represented by their virtual counterparts, to communicate with each other even when they are geographically remote. In addition, all elements use MPEG-4 as a common communication and animation standard and were designed and tested on the Windows operating system (OS). The paper presents the main aim of the work, the methodology, and the resulting communication system.
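To illustrate how extracted phonemes can drive MPEG-4 facial animation, the sketch below maps a few phonemes to MPEG-4 viseme indices. The partial table follows the commonly cited MPEG-4 viseme groupings, but it is an assumption here, not the paper's implementation, and a production system would also interpolate between visemes.

```python
# Partial phoneme -> MPEG-4 viseme index mapping (viseme 0 is
# the neutral face). Only a subset of the standard groupings
# is shown; the table is illustrative.
PHONEME_TO_VISEME = {
    "p": 1, "b": 1, "m": 1,   # bilabials
    "f": 2, "v": 2,           # labiodentals
    "t": 4, "d": 4,
    "k": 5, "g": 5,
    "s": 7, "z": 7,
    "n": 8, "l": 8,
    "r": 9,
    "A": 10, "e": 11, "I": 12,
    "sil": 0,                 # silence -> neutral face
}

def phonemes_to_visemes(phonemes):
    """Convert a phoneme stream (e.g. from real-time speech
    analysis or a TTS engine) into a viseme stream that the
    face animation module can render."""
    return [PHONEME_TO_VISEME.get(p, 0) for p in phonemes]

print(phonemes_to_visemes(["sil", "m", "e", "t", "A", "sil"]))
```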
IEEE International Workshop on Haptic Audio Visual Environments and their Applications | 2005
Qing Chen; Ayman El-Sawah; Chris Joslin; Nicolas D. Georganas
A dynamic gesture interface for virtual environments based on hidden Markov models (HMMs) is introduced in this paper. The HMMs are employed to represent continuous dynamic gestures, and their parameters are learned from training data collected with a CyberGlove. To avoid the gesture-spotting problem, we use the standard deviation of the angle variation for each finger joint to describe the dynamic characteristics of the gestures. A prototype that uses three different dynamic gestures to control the rotation direction of a 3D cube is implemented to test the effectiveness of the proposed method.
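The spotting criterion the abstract mentions can be pictured as a sliding-window test: a segment is treated as an active gesture only while the per-joint angle standard deviation exceeds a rest-noise threshold. The window length and threshold below are invented for illustration.

```python
import numpy as np

def is_gesturing(joint_angles, window=10, threshold=0.05):
    """Flag frames where the hand is actively gesturing: the
    standard deviation of each joint angle over a sliding window
    must exceed a rest-noise threshold for at least one joint.

    joint_angles: array of shape (frames, joints), in radians.
    """
    angles = np.asarray(joint_angles)
    active = np.zeros(len(angles), dtype=bool)
    for t in range(window, len(angles)):
        stds = angles[t - window:t].std(axis=0)
        active[t] = bool((stds > threshold).any())
    return active

# Synthetic glove data: rest, then a finger flexion, then rest.
t = np.linspace(0, 3, 90)
angles = np.zeros((90, 5))
angles[30:60, 2] = np.sin(np.pi * t[30:60])  # middle finger moves
print(is_gesturing(angles).astype(int))
```

Frames flagged as active can then be passed to the HMMs, so the recogniser never has to score the resting hand.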
Virtual Reality Software and Technology | 2000
Chris Joslin; Tom Molet; Nadia Magnenat-Thalmann
In this paper we present our networked virtual environment (NVE) system, called W-VLNET (Windows Virtual Life Network), which has been developed on the Windows NT operating system (OS). The paper emphasizes the real-time aspects of this NVE system, the advanced interactivity it provides, and its ability to transfer data across the Internet so that geographically distant users can collaborate with each other. Techniques for communication, scene management, facial and body animation, and general user interaction are detailed. The use of VRML97 and MPEG-4 SNHC is overviewed to stress the compatibility of the system with other similar virtual reality systems. The software provides realistic virtual actors as well as sets of applicable high-level actions in real time. Related issues in obtaining actor models and animating them in real time are presented. We also introduce a case study showing how the system can be used.
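As a rough illustration of the communication layer such an NVE needs (not W-VLNET's actual protocol), here is a minimal sketch that packs an avatar's position and one facial-animation value into a datagram and sends it to a session peer. The packet layout, field choices, and addresses are invented.

```python
import socket
import struct

# Hypothetical packet layout: user id, x/y/z position, and one
# facial-animation parameter, packed as little-endian binary.
PACKET_FMT = "<I3ff"

def send_state(sock, peer, user_id, pos, fap):
    payload = struct.pack(PACKET_FMT, user_id, *pos, fap)
    sock.sendto(payload, peer)

def recv_state(sock):
    data, _addr = sock.recvfrom(struct.calcsize(PACKET_FMT))
    user_id, x, y, z, fap = struct.unpack(PACKET_FMT, data)
    return user_id, (x, y, z), fap

# Loopback demonstration of one state update.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # OS-assigned port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_state(tx, rx.getsockname(), user_id=7, pos=(1.0, 0.0, 2.5), fap=0.3)
print(recv_state(rx))
```

Keeping state updates this small is what makes per-frame avatar synchronisation feasible for geographically distant collaborators.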