Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher Conly is active.

Publication


Featured research published by Christopher Conly.


Pervasive Technologies Related to Assistive Environments | 2015

A survey on vision-based fall detection

Zhong Zhang; Christopher Conly; Vassilis Athitsos

Falls are a major cause of fatal injury for the elderly population. To improve the quality of living for seniors, a wide range of monitoring systems with fall detection functionality have been proposed over recent years. This article is a survey of systems and algorithms which aim at automatically detecting cases where a human falls and may have been injured. Existing fall detection methods can be categorized as using sensors, or being exclusively vision-based. This literature review focuses on vision-based methods.


Pervasive Technologies Related to Assistive Environments | 2014

Sign language recognition using dynamic time warping and hand shape distance based on histogram of oriented gradient features

Pat Jangyodsuk; Christopher Conly; Vassilis Athitsos

Recognizing sign language is a very challenging task in computer vision. One of the more popular approaches, Dynamic Time Warping (DTW), utilizes hand trajectory information to compare a query sign with those in a database of examples. In this work, we conducted an American Sign Language (ASL) recognition experiment on Kinect sign data using DTW for sign trajectory similarity and Histogram of Oriented Gradient (HoG) [5] features for hand shape representation. Our results show an improvement over the original work of [14], achieving 82% accuracy in ranking the correct sign within the top 10 matches. In addition to this improvement in sign recognition accuracy, we propose a simple RGB-D alignment tool that can help roughly approximate the alignment parameters between the color (RGB) and depth frames.
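As a rough illustration of the approach described above, the sketch below combines a DTW distance over hand trajectories with a HoG-based hand-shape distance. It is a minimal sketch under stated assumptions (per-frame 2D hand positions and precomputed per-frame HoG descriptors as numpy arrays), not the authors' implementation; all function names and the weighting are illustrative.

```python
# Illustrative sketch (not the authors' code): combine a DTW distance over
# hand trajectories with a hand-shape distance computed from HoG descriptors.
# Feature extraction is assumed to be done elsewhere.
import numpy as np

def dtw_distance(query, candidate):
    """Dynamic Time Warping distance between two sequences,
    each an (n_frames, d) numpy array of per-frame feature vectors."""
    n, m = len(query), len(candidate)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(query[i - 1] - candidate[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def hand_shape_distance(query_hog, candidate_hog):
    """Mean Euclidean distance between per-frame HoG descriptors after
    resampling both signs to a common length (a simplifying assumption)."""
    k = min(len(query_hog), len(candidate_hog))
    q = query_hog[np.linspace(0, len(query_hog) - 1, k).astype(int)]
    c = candidate_hog[np.linspace(0, len(candidate_hog) - 1, k).astype(int)]
    return float(np.mean(np.linalg.norm(q - c, axis=1)))

def sign_distance(query, candidate, w_shape=0.5):
    """Weighted combination of trajectory and hand-shape distances;
    each sign is a dict with 'trajectory' and 'hog' arrays."""
    return (dtw_distance(query["trajectory"], candidate["trajectory"])
            + w_shape * hand_shape_distance(query["hog"], candidate["hog"]))
```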


Pervasive Technologies Related to Assistive Environments | 2013

Toward a 3D body part detection video dataset and hand tracking benchmark

Christopher Conly; Paul Doliotis; Pat Jangyodsuk; Rommel Alonzo; Vassilis Athitsos

The purpose of this paper is twofold. First, we introduce our Microsoft Kinect-based video dataset of American Sign Language (ASL) signs designed for body part detection and tracking research. This dataset allows researchers to experiment with using more than 2-dimensional (2D) color video information in gesture recognition projects, as it gives them access to scene depth information. Not only can this make it easier to locate body parts like hands, but without this additional information, two completely different gestures that share a similar 2D trajectory projection can be difficult to distinguish from one another. Second, as an accurate hand locator is a critical element in any automated gesture or sign language recognition tool, this paper assesses the efficacy of one popular open source user skeleton tracker by examining its performance on random signs from the above dataset. We compare the hand positions as determined by the skeleton tracker to ground truth positions, which come from manual hand annotations of each video frame. The purpose of this study is to establish a benchmark for the assessment of more advanced detection and tracking methods that utilize scene depth data. For illustrative purposes, we compare the results of one of the methods previously developed in our lab for detecting a single hand to this benchmark.
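A minimal sketch of the kind of benchmark comparison described above, assuming tracker output and manual annotations are available as per-frame 2D hand positions; the metrics and the pixel threshold are illustrative, not values from the paper.

```python
# Illustrative sketch: per-frame Euclidean error between tracker-reported
# hand positions and manually annotated ground truth, plus the fraction of
# frames within a pixel threshold. The threshold is an assumption.
import numpy as np

def hand_tracking_error(tracked, ground_truth, threshold_px=20.0):
    """tracked, ground_truth: (n_frames, 2) arrays of hand positions.
    Returns the mean error and the fraction of frames under the threshold."""
    errors = np.linalg.norm(tracked - ground_truth, axis=1)
    return {
        "mean_error_px": float(errors.mean()),
        "within_threshold": float((errors <= threshold_px).mean()),
    }
```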


Conference on Computers and Accessibility | 2012

3D point of gaze estimation using head-mounted RGB-D cameras

Christopher McMurrough; Christopher Conly; Vassilis Athitsos; Fillia Makedon

This paper presents a low-cost, wearable headset for 3D Point of Gaze (PoG) estimation in assistive applications. The device consists of an eye tracking camera and forward facing RGB-D scene camera which, together, provide an estimate of the user gaze vector and its intersection with a 3D point in space. The resulting system is able to compute the 3D PoG in real-time using inexpensive and readily available hardware components.
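The sketch below illustrates one plausible way to compute a 3D PoG from a gaze ray and a depth image: back-project the depth image of the forward-facing RGB-D camera into a point cloud and take the scene point closest to the gaze ray. It assumes pinhole intrinsics for the scene camera and a gaze ray already expressed in the scene-camera frame; it is not the authors' implementation.

```python
# Illustrative sketch (an assumption about the pipeline, not the authors'
# code): back-project the depth image into a point cloud, then take the
# scene point with the smallest distance to the gaze ray as the 3D PoG.
import numpy as np

def backproject_depth(depth_m, fx, fy, cx, cy):
    """Convert a depth image (meters) into an (N, 3) point cloud using
    pinhole intrinsics of the scene camera."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    valid = z > 0
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def point_of_gaze(points, gaze_origin, gaze_dir):
    """Return the scene point with the smallest perpendicular distance
    to the gaze ray (origin + t * dir, t >= 0)."""
    d = gaze_dir / np.linalg.norm(gaze_dir)
    rel = points - gaze_origin
    t = np.clip(rel @ d, 0.0, None)   # projection onto the ray, in front of the user
    closest = gaze_origin + np.outer(t, d)
    dist = np.linalg.norm(points - closest, axis=1)
    return points[np.argmin(dist)]
```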


International Symposium on Visual Computing | 2014

Evaluating depth-based computer vision methods for fall detection under occlusions

Zhong Zhang; Christopher Conly; Vassilis Athitsos

Falls are one of the major risks for seniors living alone at home. Fall detection has been widely studied in the computer vision community, especially since the advent of affordable depth sensing technology like the Kinect. Most existing methods assume that the whole fall process is visible to the camera. This is not always the case, however, since the end of the fall can be completely occluded by an object such as a bed. For a system to be usable in real life, the occlusion problem must be addressed. To quantify the challenges and assess performance on this problem, we present an occluded fall detection benchmark dataset containing 60 falls in which the end of the fall is completely occluded. We also evaluate four existing fall detection methods that use a single depth camera [1–4] on this benchmark dataset.


Pervasive Technologies Related to Assistive Environments | 2014

Hand detection on sign language videos

Zhong Zhang; Christopher Conly; Vassilis Athitsos

For gesture and sign language recognition, hand shape and hand motion are the primary sources of information that differentiate one sign from another. Building an efficient and reliable hand detector is therefore an important step in recognizing signs and gestures. In this paper we evaluate three hand detection methods on three sign language data sets: a skin and motion detector [1], hand detection using multiple proposals [12], and the chains model [9].


Proceedings of the 2nd International Workshop on Sensor-based Activity Recognition and Interaction | 2015

A review and quantitative comparison of methods for Kinect calibration

Wei Xiang; Christopher Conly; Christopher McMurrough; Vassilis Athitsos

To utilize the full potential of RGB-D devices, calibration must be performed to determine the intrinsic and extrinsic parameters of the color and depth sensors and to reduce lens and depth distortion. After doing so, the depth pixels can be mapped to color pixels and both data streams can be simultaneously utilized. This work presents an overview and quantitative comparison of RGB-D calibration techniques and examines how the resolution and number of images affect calibration.
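The sketch below illustrates what a completed calibration makes possible: projecting a depth pixel into 3D with the depth camera intrinsics, transforming it into the color camera frame with the extrinsics, and re-projecting with the color intrinsics. All parameter values are placeholders; this is not code from any of the compared calibration toolkits.

```python
# Illustrative sketch of the depth-to-color mapping enabled by calibration.
# K_depth, K_color are 3x3 intrinsic matrices; R, t are the depth-to-color
# rotation and translation. All values are assumed to come from calibration.
import numpy as np

def depth_pixel_to_color_pixel(u, v, depth_m, K_depth, K_color, R, t):
    """Map one depth pixel (u, v) with depth in meters to color image
    coordinates."""
    # Back-project to a 3D point in the depth camera frame.
    fx, fy = K_depth[0, 0], K_depth[1, 1]
    cx, cy = K_depth[0, 2], K_depth[1, 2]
    p_depth = np.array([(u - cx) * depth_m / fx,
                        (v - cy) * depth_m / fy,
                        depth_m])
    # Transform into the color camera frame and project with its intrinsics.
    p_color = R @ p_depth + t
    uvw = K_color @ p_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```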


Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2012

Multi-modal object of interest detection using eye gaze and RGB-D cameras

Christopher McMurrough; Jonathan Rich; Christopher Conly; Vassilis Athitsos; Fillia Makedon

This paper presents a low-cost, wearable headset for mobile 3D Point of Gaze (PoG) estimation in assistive applications. The device consists of an eye tracking camera and forward facing RGB-D scene camera which are able to provide an estimate of the user gaze vector and its intersection with a 3D point in space. A computational approach that considers object 3D information and visual appearance together with the visual gaze interactions of the user is also given to demonstrate the utility of the device. The resulting system is able to identify, in real-time, known objects within a scene that intersect with the user gaze vector.
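A minimal sketch of gaze-based object selection in the spirit of the description above: among known objects with 3D point sets in the scene-camera frame, pick the one whose points lie closest to the gaze ray. The interface and the distance threshold are assumptions, not the authors' method.

```python
# Illustrative sketch: select the known object nearest the gaze ray, or
# report no intersection if nothing is within a distance threshold.
import numpy as np

def ray_point_distances(points, origin, direction):
    """Perpendicular distances from an (N, 3) point set to a ray."""
    d = direction / np.linalg.norm(direction)
    rel = points - origin
    t = np.clip(rel @ d, 0.0, None)
    return np.linalg.norm(rel - np.outer(t, d), axis=1)

def object_of_interest(objects, gaze_origin, gaze_dir, max_dist=0.05):
    """objects: dict mapping object name -> (N, 3) points in the scene
    frame. Returns the name of the object nearest the gaze ray, or None
    if no object comes within max_dist meters."""
    best_name, best_dist = None, np.inf
    for name, pts in objects.items():
        dist = ray_point_distances(pts, gaze_origin, gaze_dir).min()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_dist else None
```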


Pervasive Technologies Related to Assistive Environments | 2015

An integrated RGB-D system for looking up the meaning of signs

Christopher Conly; Zhong Zhang; Vassilis Athitsos

Users of written languages have the ability to quickly and easily look up the meaning of an unknown word. Those who use sign languages, however, lack this advantage, and it can be a challenge to find the meaning of an unknown sign. While some sign-to-written language dictionaries do exist, they are cumbersome and slow to use. We present an improved American Sign Language video dictionary system that allows a user to perform an unknown sign in front of a sensor and quickly retrieve a ranked list of similar signs with a video example of each. Earlier variants of the system required the use of a separate piece of software to record the query sign, as well as user intervention to provide bounding boxes for the hands and face in the first frame of the sign. The system presented here integrates all functionality into one piece of software and automates head and hand detection with the use of an RGB-D sensor, eliminating some of the shortcomings of the previous system, while improving match accuracy and shortening the time required to perform a query.
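A minimal sketch of the lookup step described above, assuming each dictionary entry stores precomputed features and an example video path, and that a sign distance function (for example, a DTW-based one) is available; the names and data structure are illustrative, not the system's actual interface.

```python
# Illustrative sketch of ranked dictionary lookup: score every entry
# against the query sign and return the closest matches with their videos.
def lookup_sign(query_features, dictionary, distance_fn, top_k=10):
    """dictionary: list of dicts with 'gloss', 'features', 'video_path'.
    distance_fn: a pairwise sign dissimilarity such as a DTW-based one.
    Returns the top_k entries sorted by increasing distance."""
    scored = [(distance_fn(query_features, entry["features"]), entry)
              for entry in dictionary]
    scored.sort(key=lambda pair: pair[0])
    return [{"gloss": e["gloss"], "video": e["video_path"], "distance": s}
            for s, e in scored[:top_k]]
```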


International Conference on Pattern Recognition | 2016

Leveraging intra-class variations to improve large vocabulary gesture recognition

Christopher Conly; Alex Dillhoff; Vassilis Athitsos

Large vocabulary gesture recognition using a training set of limited size is a challenging problem in computer vision. With few examples per gesture class, researchers often employ exemplar-based methods such as Dynamic Time Warping (DTW). This paper makes two contributions in the area of exemplar-based gesture recognition: 1) it introduces Multiple-Pass DTW (MP-DTW), a method in which scores from multiple DTW passes focusing on different gesture properties are combined, and 2) it introduces a new set of features modeling intra-class variation of several gesture properties that can be used in conjunction with MP-DTW or DTW. We demonstrate that these techniques provide substantial improvement over DTW in both user-dependent and user-independent experiments on American Sign Language (ASL) datasets, even when using noisy data generated by RGB-D skeleton detectors. We further show that using these techniques in a large vocabulary system with a limited training set provides significantly better results compared to Long Short-Term Memory (LSTM) network and Hidden Markov Model (HMM) approaches.
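A minimal sketch of a multiple-pass DTW combination in the spirit of MP-DTW: DTW is run separately over several per-frame feature streams and the normalized scores are combined. The specific gesture properties, normalization, and weights here are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of combining scores from multiple DTW passes, each
# run over a different per-frame feature stream (e.g. hand position,
# velocity, hand shape). Normalization and weights are simplifications.
import numpy as np

def mp_dtw_score(query_streams, candidate_streams, dtw_fn, weights=None):
    """query_streams, candidate_streams: dicts mapping property name ->
    (n_frames, d) feature array. dtw_fn: a pairwise DTW distance such as
    dtw_distance above. Returns a single combined dissimilarity score."""
    names = sorted(query_streams)
    weights = weights or {name: 1.0 for name in names}
    total = 0.0
    for name in names:
        q, c = query_streams[name], candidate_streams[name]
        score = dtw_fn(q, c)
        # Normalize by sequence length so streams with different frame
        # counts contribute comparably (a simplifying assumption).
        score /= (len(q) + len(c))
        total += weights[name] * score
    return total
```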

Collaboration


Dive into Christopher Conly's collaborations.

Top Co-Authors

Vassilis Athitsos, University of Texas at Arlington
Christopher McMurrough, University of Texas at Arlington
Fillia Makedon, University of Texas at Arlington
Alex Dillhoff, University of Texas at Arlington
Pat Jangyodsuk, University of Texas at Arlington
Himanshu Pahwa, University of Texas at Arlington
Jonathan Rich, University of Texas at Arlington
K. R. Rao, University of Texas at Arlington
Paul Doliotis, University of Texas at Arlington