Publications


Featured research published by Dirk Colbry.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Matching 2.5D face scans to 3D models

Xiaoguang Lu; Anil K. Jain; Dirk Colbry

The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and the subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations, and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 independent 2.5D test scans acquired under different poses and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
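
A minimal sketch of the weighted sum rule used to fuse the two matching components, assuming min-max normalized scores and an illustrative weight (the abstract does not specify the exact normalization or weight):

```python
import numpy as np

def fuse_scores(surface_scores, appearance_scores, w_surface=0.5):
    """Combine surface-matching (ICP) and appearance-based scores with a
    weighted sum. Scores are min-max normalized so the two modalities are
    on a comparable scale; the weight is illustrative, not from the paper."""
    def normalize(s):
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)

    return w_surface * normalize(surface_scores) + (1.0 - w_surface) * normalize(appearance_scores)

# Example: fused scores for three gallery candidates; the highest wins.
best = int(np.argmax(fuse_scores([0.2, 0.8, 0.5], [0.4, 0.9, 0.1])))
```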


Robotics and Autonomous Systems | 2003

Autominder: an intelligent cognitive orthotic system for people with memory impairment

Martha E. Pollack; Laura E. Brown; Dirk Colbry; Colleen E. McCarthy; Cheryl Orosz; Bart Peintner; Sailesh Ramakrishnan; Ioannis Tsamardinos

The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project).
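
As an illustration only (Autominder's actual decision making relies on plan modeling and reasoning techniques not reproduced here), a toy sketch of the core question of whether a reminder is warranted for a planned activity that has not yet been observed:

```python
from datetime import datetime, timedelta

def should_remind(scheduled_at, observed_done, now, grace=timedelta(minutes=15)):
    """Toy decision rule: remind only if the activity is overdue and has not
    been observed as completed. Autominder itself reasons over a model of the
    client's daily plan and observations of its execution."""
    return (not observed_done) and now >= scheduled_at + grace

# Example: medication scheduled for 09:00, not observed by 09:20 -> remind.
remind = should_remind(datetime(2024, 1, 1, 9, 0), False, datetime(2024, 1, 1, 9, 20))
```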


International Conference on Pattern Recognition | 2004

Three-dimensional model based face recognition

Xiaoguang Lu; Dirk Colbry; Anil K. Jain

The performance of face recognition systems that use two-dimensional (2D) images is dependent on consistent conditions such as lighting, pose and facial expression. We are developing a multi-view face recognition system that utilizes three-dimensional (3D) information about the face to make the system more robust to these variations. This work describes a procedure for constructing a database of 3D face models and matching this database to 2.5D face scans which are captured from different views, using coordinate system invariant properties of the facial surface. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. A robust similarity metric is defined for matching, based on an iterative closest point (ICP) registration process. Results are given for matching a database of 18 3D face models with 113 2.5D face scans.
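
The registration at the heart of the matching is the standard iterative closest point loop: find the closest model point for each scan point, estimate the rigid transform that aligns the pairs, apply it, and repeat. A bare-bones sketch, assuming point clouds as N x 3 NumPy arrays and using SciPy's k-d tree for the closest-point search (generic ICP, not the authors' exact implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(scan, model, iterations=20):
    """Rigidly align `scan` (N x 3) to `model` (M x 3) with basic ICP."""
    src = scan.copy()
    tree = cKDTree(model)
    for _ in range(iterations):
        # 1. Closest-point correspondences.
        _, idx = tree.query(src)
        dst = model[idx]
        # 2. Best rigid transform for these pairs (Kabsch / SVD).
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        # 3. Apply the transform and iterate.
        src = src @ R.T + t
    # Root-mean-square closest-point distance as a simple match score.
    d, _ = tree.query(src)
    return src, float(np.sqrt((d ** 2).mean()))
```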


Computer Vision and Pattern Recognition | 2005

Detection of Anchor Points for 3D Face Verification

Dirk Colbry; George C. Stockman; Anil K. Jain

This paper outlines methods to detect key anchor points in 3D face scanner data. These anchor points can be used to estimate the pose and then match the test image to a 3D face model. We present two algorithms for detecting face anchor points in the context of face verification: one for frontal images and one for arbitrary pose. We achieve 99% success in finding anchor points in frontal images and 86% success in scans with large variations in pose and changes in expression. These results demonstrate the challenges in 3D face recognition under arbitrary pose and expression. We are currently working on robust fitting algorithms to localize the anchor points more precisely for arbitrary pose images.
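
As a hypothetical illustration of the frontal case only (the paper's detector uses shape information beyond this), the most prominent anchor point, the nose tip, can be approximated in a frontal 2.5D scan as the valid point closest to the sensor:

```python
import numpy as np

def nose_tip_candidate(depth, valid_mask):
    """Pick a nose-tip candidate from a frontal 2.5D depth image.

    `depth` holds distance to the sensor (smaller = closer) and `valid_mask`
    marks pixels with real measurements. This crude closest-point heuristic
    is illustrative only; robust anchor-point detection also examines the
    local surface shape around each candidate.
    """
    masked = np.where(valid_mask, depth, np.inf)
    row, col = np.unravel_index(np.argmin(masked), masked.shape)
    return row, col
```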


Lecture Notes in Computer Science | 2004

Matching 2.5D Scans for Face Recognition

Xiaoguang Lu; Dirk Colbry; Anil K. Jain

The performance of face recognition systems that use two-dimensional images is dependent on consistent conditions such as lighting, pose, and facial appearance. We are developing a face recognition system that uses three-dimensional depth information to make the system more robust to these variations. We have developed a face matching system that automatically correlates points in three dimensions between two 2.5D range images of different views. A hybrid Iterative Closest Point (ICP) scheme is proposed to integrate two classical ICP algorithms for fine registration of the two scans. A robust similarity metric is defined for matching purposes. Results are provided on a preliminary database of 10 subjects (one training image per subject) containing frontal face images with neutral expression, together with a test database of 63 scans that vary in pose, expression, and lighting.
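
After registration, matching needs a similarity measure that tolerates residual noise and outliers such as hair. Purely as an illustrative stand-in for the paper's robust metric, a trimmed mean of the closest-point residuals:

```python
import numpy as np
from scipy.spatial import cKDTree

def trimmed_match_score(aligned_scan, model, keep=0.9):
    """Mean of the smallest `keep` fraction of closest-point distances
    between an ICP-aligned scan (N x 3) and a model (M x 3).

    Lower scores mean a better match; trimming discards the largest
    residuals, which typically come from outliers such as hair or noise.
    """
    d, _ = cKDTree(model).query(aligned_scan)
    d = np.sort(d)
    return float(d[: max(1, int(keep * d.size))].mean())
```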


Ambient Intelligence | 2009

Recognition of hand movements using wearable accelerometers

Narayanan Chatapuram Krishnan; Colin Juillard; Dirk Colbry; Sethuraman Panchanathan

Accelerometer-based activity recognition systems have typically focused on recognizing simple ambulatory activities of daily life, such as walking, sitting, standing, and climbing stairs. In this work, we developed and evaluated algorithms for detecting and recognizing short-duration hand movements (lift to mouth, scoop, stir, pour, unscrew cap). These actions are part of the larger and more complex Instrumental Activities of Daily Living (IADLs) of making a drink and drinking. We collected data using small wireless tri-axial accelerometers worn simultaneously on different parts of the hand. Acceleration data for training were collected from 5 subjects, who also performed the two IADLs without being given specific instructions on how to complete them. Feature vectors (mean, variance, correlation, spectral entropy, and spectral energy) were calculated and tested on three classifiers (AdaBoost, HMM, k-NN). AdaBoost showed the best performance, with an overall accuracy of 86% for detecting each of these hand actions. The results show that although some actions are recognized well by a generalized classifier trained on subject-independent data, other actions require some amount of subject-specific training. We also observed an improvement in the performance of the system when multiple accelerometers placed on the right hand were used.
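
A sketch of the per-window feature extraction listed in the abstract (mean, variance, axis correlation, spectral entropy, and spectral energy), assuming one window of tri-axial samples as an N x 3 array; the window length and classifier settings from the paper are not reproduced here:

```python
import numpy as np

def window_features(window):
    """Compute statistical and spectral features for one N x 3 window of
    tri-axial accelerometer samples, returning a flat feature vector."""
    feats = [window.mean(axis=0), window.var(axis=0)]
    # Pairwise correlation between the three axes (x-y, x-z, y-z).
    corr = np.corrcoef(window.T)
    feats.append(corr[np.triu_indices(3, k=1)])
    # Per-axis spectral energy and spectral entropy from the FFT magnitude.
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    energy = spectrum.sum(axis=0)
    p = spectrum / np.maximum(energy, 1e-12)
    entropy = -(p * np.log2(np.maximum(p, 1e-12))).sum(axis=0)
    feats.extend([energy, entropy])
    return np.concatenate(feats)
```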


IEEE International Workshop on Haptic Audio Visual Environments and Games | 2008

Using a haptic belt to convey non-verbal communication cues during social interactions to individuals who are blind

Troy L. McDaniel; Sreekar Krishna; Vineeth Nallure Balasubramanian; Dirk Colbry; Sethuraman Panchanathan

Good social skills are important and provide for a healthy, successful life; however, individuals with visual impairments are at a disadvantage when interacting with sighted peers due to inaccessible non-verbal cues. This paper presents a haptic (vibrotactile) belt to assist individuals who are blind or visually impaired by communicating non-verbal cues during social interactions. We focus on non-verbal communication pertaining to the relative location of the communicators with respect to the user in terms of direction and distance. Results from two experiments show that the haptic belt is effective in using vibration location and duration to communicate the relative direction and distance, respectively, of an individual in the user's visual field.
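
A minimal sketch of the mapping described above, with vibration location encoding direction and vibration duration encoding distance, assuming a belt with eight evenly spaced tactors and illustrative duration thresholds not taken from the paper:

```python
def belt_cue(bearing_deg, distance_m, num_tactors=8):
    """Map a communicator's relative direction and distance to a tactor
    index and a vibration duration (seconds).

    Tactor 0 is assumed to sit at the front of the belt, with indices
    increasing clockwise; the duration thresholds are illustrative only.
    """
    tactor = int(round((bearing_deg % 360) / (360 / num_tactors))) % num_tactors
    if distance_m < 1.2:
        duration = 0.5   # close
    elif distance_m < 3.6:
        duration = 1.0   # conversational range
    else:
        duration = 2.0   # far
    return tactor, duration

# Example: someone slightly to the right at about 2 m.
tactor, duration = belt_cue(30.0, 2.0)
```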


Human Factors in Computing Systems | 2009

Using tactile rhythm to convey interpersonal distances to individuals who are blind

Troy L. McDaniel; Sreekar Krishna; Dirk Colbry; Sethuraman Panchanathan

This paper presents a scheme for using tactile rhythms to convey interpersonal distance to individuals who are blind or visually impaired, with the goal of providing access to non-verbal cues during social interactions. A preliminary experiment revealed that subjects could identify the proposed tactile rhythms and found them intuitive for the given application. Future work aims to improve recognition results and increase the number of interpersonal distances conveyed by incorporating temporal change information into the proposed methodology.
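
Purely as an illustrative sketch (the specific rhythms used in the study are not reproduced here), interpersonal distance bands can be mapped to short on/off vibration patterns so that the rhythm, rather than the vibration location, carries the information:

```python
# Hypothetical rhythm patterns: (on, off) durations in seconds.
RHYTHMS = {
    "intimate": [(0.1, 0.1)] * 4,   # rapid pulses
    "personal": [(0.3, 0.2)] * 2,   # medium pulses
    "social":   [(0.8, 0.0)],       # one long pulse
}

def rhythm_for_distance(distance_m):
    """Pick a tactile rhythm for an interpersonal distance; the distance
    bands here are illustrative, loosely following proxemic zones."""
    if distance_m < 0.45:
        return RHYTHMS["intimate"]
    if distance_m < 1.2:
        return RHYTHMS["personal"]
    return RHYTHMS["social"]
```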


Real-Time Systems Symposium | 2012

A High-Fidelity Temperature Distribution Forecasting System for Data Centers

Jinzhu Chen; Rui Tan; Yu Wang; Guoliang Xing; Xiaorui Wang; Xiaodong Wang; Bill Punch; Dirk Colbry

Data centers have become a critical computing infrastructure in the era of cloud computing. Temperature monitoring and forecasting are essential for preventing overheating-induced server shutdowns and improving a data center's energy efficiency. This paper presents a novel cyber-physical approach for temperature forecasting in data centers, which integrates Computational Fluid Dynamics (CFD) modeling, in situ wireless sensing, and real-time data-driven prediction. To ensure forecasting fidelity, we leverage the realistic physical thermodynamic models of CFD to generate transient temperature distributions and calibrate them using sensor feedback. Both the simulated temperature distribution and the sensor measurements are then used to train a real-time prediction algorithm. As a result, our approach significantly reduces the computational complexity of online temperature modeling and prediction, which enables a portable, noninvasive thermal monitoring solution that does not rely on the infrastructure of the monitored data center. We extensively evaluated our system on a rack of 15 servers and a testbed of five racks and 229 servers in a production data center. Our results show that our system can predict the temperature evolution of servers with highly dynamic workloads to an average error of 0.52°C, over forecast horizons of up to 10 minutes.
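
A toy sketch of the data-driven prediction step, fitting a linear model that maps a CFD-simulated temperature and the current sensor reading to a future sensor reading; the deployed system's prediction algorithm and calibration loop are considerably richer than this:

```python
import numpy as np

def fit_forecaster(cfd_temps, sensor_temps, sensor_future):
    """Least-squares fit of: future sensor temperature ~ CFD temperature,
    current sensor temperature, and an intercept. The feature set is
    illustrative, not the paper's."""
    cfd = np.asarray(cfd_temps, dtype=float)
    cur = np.asarray(sensor_temps, dtype=float)
    X = np.column_stack([cfd, cur, np.ones(cfd.size)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(sensor_future, dtype=float), rcond=None)
    return coef

def forecast(coef, cfd_temp, sensor_temp):
    """Predict a future sensor temperature from one CFD and one sensor value."""
    return coef[0] * cfd_temp + coef[1] * sensor_temp + coef[2]
```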


Computer Vision and Pattern Recognition | 2007

Canonical Face Depth Map: A Robust 3D Representation for Face Verification

Dirk Colbry; George C. Stockman

The canonical face depth map (CFDM) is a standardized representation for storing and manipulating 3D data from human faces. Our algorithm automates the process of transforming a 3D face scan into its canonical representation, eliminating the need for hand-labeled anchor points. The presented algorithm is designed to be a robust, fully automatic preprocessor for any 3D face recognition algorithm. The experimental results presented here demonstrate that our CFDM is robust to noise and occlusion, and we show that using such a canonical representation can improve the efficiency of face recognition algorithms and reduce memory requirements. Producing the CFDM takes, on average, 0.85 seconds for 320 × 240 pixel scans and 3.8 seconds for 640 × 480 pixel scans (using a dual AMD Opteron 275 at 2.2 GHz, with 2 MB cache and 1 GB of RAM). The CFDM enables both 2D and 3D image processing methods, such as convolution and PCA, to be readily used for feature localization and face recognition.
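
Once a scan has been transformed into the canonical pose, producing the depth map amounts to resampling the scattered (x, y, z) points onto a regular (x, y) grid. A sketch of that resampling step, with SciPy's scattered-data interpolation standing in for whatever resampling the paper actually uses:

```python
import numpy as np
from scipy.interpolate import griddata

def depth_map(points, width=320, height=240):
    """Resample canonically posed 3D face points (N x 3) onto a regular
    (x, y) grid, producing a 2D depth image with NaN where no data exist."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    gx = np.linspace(x.min(), x.max(), width)
    gy = np.linspace(y.min(), y.max(), height)
    grid_x, grid_y = np.meshgrid(gx, gy)
    return griddata((x, y), z, (grid_x, grid_y), method="linear")
```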

Collaboration


Dive into Dirk Colbry's collaborations.

Top Co-Authors

Wolfgang Bauer (Michigan State University)
Irina Sagert (Michigan State University)
Terrance Strother (Los Alamos National Laboratory)
Alec Staber (Michigan State University)
Anil K. Jain (Michigan State University)
Jim Howell (Michigan State University)