Publication


Featured research published by Edison Thomaz.


ubiquitous computing | 2015

A practical approach for recognizing eating moments with wrist-mounted inertial sensing

Edison Thomaz; Irfan A. Essa; Gregory D. Abowd

Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% Precision, 88.8% Recall) and 71.3% (65.2% Precision, 78.6% Recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
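
As a rough illustration of the kind of pipeline the abstract describes, the sketch below windows a 3-axis accelerometer stream, computes simple per-axis statistics, and trains an off-the-shelf classifier. The window size, feature set, and random-forest choice are assumptions for illustration, not the paper's exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, win=150, step=75):
    """Slide a fixed window over an (N, 3) accelerometer stream and
    compute simple per-axis statistics for each window."""
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        feats.append(np.concatenate([
            w.mean(axis=0),                            # mean per axis
            w.std(axis=0),                             # spread per axis
            np.abs(np.diff(w, axis=0)).mean(axis=0),   # "jerkiness" per axis
        ]))
    return np.asarray(feats)

# Hypothetical lab-collected stream with eating / not-eating window labels.
X_train = window_features(np.random.randn(10_000, 3))
y_train = np.random.randint(0, 2, size=len(X_train))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
```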


human factors in computing systems | 2015

Barriers and Negative Nudges: Exploring Challenges in Food Journaling

Felicia Cordeiro; Daniel A. Epstein; Edison Thomaz; Elizabeth Bales; Arvind Krishnaa Jagannathan; Gregory D. Abowd; James Fogarty

Although food journaling is understood to be both important and difficult, little work has empirically documented the specific challenges people experience with food journals. We identify key challenges in a qualitative study combining a survey of 141 current and lapsed food journalers with analysis of 5,526 posts in community forums for three mobile food journals. Analyzing themes in this data, we find and discuss barriers to reliable food entry, negative nudges caused by current techniques, and challenges with social features. Our results motivate research exploring a wider range of approaches to food journal design and technology.


international symposium on wearable computers | 2015

Predicting daily activities from egocentric images using deep learning

Daniel Castro; Steven Hickson; Vinay Bettadapura; Edison Thomaz; Gregory D. Abowd; Henrik I. Christensen; Irfan A. Essa

We present a method to analyze images taken from a passive egocentric wearable camera, along with contextual information such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.
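
A minimal sketch of the late-fusion idea, assuming the CNN's softmax outputs are already available: concatenate the per-image class probabilities with cyclically encoded time-of-day and day-of-week features, then train a second-stage classifier on top. The logistic-regression fuser and the encoding are illustrative assumptions, not the paper's exact ensemble.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

N_CLASSES = 19

def context_features(hour, weekday):
    """Cyclically encode time context so 23:00 sits near 00:00."""
    return np.array([
        np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
        np.sin(2 * np.pi * weekday / 7), np.cos(2 * np.pi * weekday / 7),
    ])

# Hypothetical CNN softmax outputs for 500 images, plus their context.
cnn_probs = np.random.dirichlet(np.ones(N_CLASSES), size=500)
ctx = np.stack([context_features(h % 24, h % 7) for h in range(500)])
labels = np.random.randint(0, N_CLASSES, size=500)

fused = np.hstack([cnn_probs, ctx])   # late fusion: concatenate evidence
fuser = LogisticRegression(max_iter=1000).fit(fused, labels)
```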


workshop on applications of computer vision | 2015

Leveraging Context to Support Automated Food Recognition in Restaurants

Vinay Bettadapura; Edison Thomaz; Aman Parnami; Gregory D. Abowd; Irfan A. Essa

The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used to automate food journaling. We propose to leverage the context of where the picture was taken, together with information about the restaurant available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai).
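
The core contextual trick can be sketched in a few lines: use the photo's location to look up the restaurant's menu and restrict the visual classifier's candidate labels to dishes on that menu. The menu data and score source below are hypothetical stand-ins, not the paper's actual databases.

```python
import numpy as np

MENUS = {  # hypothetical menus scraped from online databases
    "Thai Palace": ["pad thai", "green curry", "tom yum"],
    "Casa Mia": ["margherita pizza", "lasagna", "tiramisu"],
}
ALL_DISHES = sorted({d for menu in MENUS.values() for d in menu})

def classify_with_context(image_scores, restaurant):
    """image_scores: dict dish -> visual classifier score for one photo.
    Context prunes the label space to the restaurant's own menu."""
    candidates = MENUS[restaurant]
    return max(candidates, key=lambda d: image_scores.get(d, 0.0))

scores = {d: s for d, s in zip(ALL_DISHES, np.random.rand(len(ALL_DISHES)))}
print(classify_with_context(scores, "Thai Palace"))
```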


Proceedings of the 4th International SenseCam & Pervasive Imaging Conference | 2013

Feasibility of identifying eating moments from first-person images leveraging human computation

Edison Thomaz; Aman Parnami; Irfan A. Essa; Gregory D. Abowd

There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual's eating behavior. Currently used dietary assessment and journaling approaches have several limitations; they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach where we leverage human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both in terms of automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, where 17,575 images were collected in total, our method was able to recognize eating moments with 89.68% accuracy.
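
A sketch of the aggregation step in a human-computation pipeline of this kind: each image is shown to several workers, and their eating / not-eating labels are combined by majority vote. The worker count and vote threshold are illustrative assumptions, not the paper's exact protocol.

```python
from collections import Counter

def aggregate(worker_labels, min_votes=2):
    """worker_labels: list of 'eating' / 'not_eating' strings for one image.
    Returns the majority label, or 'uncertain' if support is too weak."""
    label, votes = Counter(worker_labels).most_common(1)[0]
    return label if votes >= min_votes else "uncertain"

print(aggregate(["eating", "eating", "not_eating"]))  # -> eating
print(aggregate(["eating", "not_eating"]))            # tie -> uncertain
```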


ubiquitous computing | 2013

Technological approaches for addressing privacy concerns when recognizing eating behaviors with wearable cameras

Edison Thomaz; Aman Parnami; Jonathan Bidwell; Irfan A. Essa; Gregory D. Abowd

First-person point-of-view (FPPOV) images taken by wearable cameras can be used to better understand people's eating habits. Human computation is a way to provide effective analysis of FPPOV images in cases where algorithmic approaches currently fail. However, privacy is a serious concern. We provide a framework, the privacy-saliency matrix, for understanding the balance between the eating information in an image and its potential privacy concerns. Using data gathered by 5 participants wearing a lanyard-mounted smartphone, we show how the framework can be used to quantitatively assess the effectiveness of four automated techniques (face detection, image cropping, location filtering and motion filtering) at reducing the privacy-infringing content of images while still maintaining evidence of eating behaviors throughout the day.
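
The matrix itself can be sketched as a simple 2x2 policy over two boolean detectors, one for eating evidence (saliency) and one for privacy-sensitive content. The detectors and action names below are hypothetical placeholders, not the paper's terminology.

```python
def privacy_saliency_cell(shows_eating: bool, privacy_sensitive: bool) -> str:
    """Map an image into one cell of a hypothetical privacy-saliency matrix."""
    if shows_eating and not privacy_sensitive:
        return "keep"        # useful for dietary analysis and safe to retain
    if shows_eating and privacy_sensitive:
        return "transform"   # e.g. crop or blur faces, then keep the evidence
    if not shows_eating and privacy_sensitive:
        return "discard"     # privacy risk with no dietary value
    return "ignore"          # safe but uninformative

print(privacy_saliency_cell(shows_eating=True, privacy_sensitive=True))
```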


intelligent user interfaces | 2015

Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study

Edison Thomaz; Cheng Zhang; Irfan A. Essa; Gregory D. Abowd

Dietary self-monitoring has been shown to be an effective method for weight-loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities in contrast to systems for automated dietary assessment based on specialized sensors.
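
A minimal sketch of audio-based eating detection under stated assumptions: frame the wrist-worn microphone signal, compute coarse spectral-band energies per frame, and train an off-the-shelf classifier. The sample rate, frame length, features, and classifier are all illustrative choices, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SR = 16_000  # assumed sample rate

def frame_features(audio, frame=SR, hop=SR // 2):
    """One-second frames with 50% overlap; 8 coarse spectral-band energies."""
    feats = []
    for start in range(0, len(audio) - frame + 1, hop):
        spec = np.abs(np.fft.rfft(audio[start:start + frame]))
        bands = np.array_split(spec, 8)
        feats.append([np.log1p(b.mean()) for b in bands])
    return np.asarray(feats)

X = frame_features(np.random.randn(SR * 60))   # hypothetical 1-minute clip
y = np.random.randint(0, 2, size=len(X))       # eating / ambient frame labels
RandomForestClassifier(n_estimators=100).fit(X, y)
```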


international conference on multimodal interfaces | 2015

Detecting Mastication: A Wearable Approach

Abdelkareem Bedri; Apoorva Verlekar; Edison Thomaz; Valerie Avva; Thad Starner

We explore using the Outer Ear Interface (OEI) to recognize eating activities. OEI contains a 3D gyroscope and a set of proximity sensors encapsulated in an off-the-shelf earpiece to monitor jaw movement by measuring ear canal deformation. In a laboratory setting with 20 participants, OEI could distinguish eating from other activities, such as walking, talking, and silently reading, with over 90% accuracy (user independent). In a second study, six subjects wore the system for 6 hours each while performing their normal daily activities. OEI correctly classified five-minute segments of time as eating or non-eating with 93% accuracy (user dependent).
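
One plausible way to produce the five-minute decisions is to aggregate finer-grained window predictions by voting, as sketched below; the window length and voting fraction are assumptions, not OEI's actual rule.

```python
import numpy as np

def segment_labels(window_preds, windows_per_segment=60, min_frac=0.3):
    """window_preds: 0/1 eating predictions, e.g. one per 5-second window.
    A segment is labeled eating if enough of its windows voted eating."""
    segs = []
    for i in range(0, len(window_preds), windows_per_segment):
        chunk = window_preds[i:i + windows_per_segment]
        segs.append(int(np.mean(chunk) >= min_frac))
    return segs

preds = np.random.randint(0, 2, size=600)  # hypothetical window predictions
print(segment_labels(preds))               # ten 5-minute segment labels
```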


international symposium on wearable computers | 2015

A wearable system for detecting eating activities with proximity sensors in the outer ear

Abdelkareem Bedri; Apoorva Verlekar; Edison Thomaz; Valerie Avva; Thad Starner

This paper presents an approach for automatically detecting eating activities by measuring deformations in the ear canal walls due to mastication activity. These deformations are measured with three infrared proximity sensors encapsulated in an off-the-shelf earpiece. To evaluate our method, we conducted a user study in a lab setting where 20 participants were asked to perform eating and non-eating activities. A user dependent analysis demonstrated that eating could be detected with 95.3% accuracy. This result indicates that proximity sensing offers an alternative to acoustic and inertial sensing in eating detection while providing benefits in terms of privacy and robustness to noise.
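
Since chewing produces rhythmic ear-canal deformations, a toy detector can detrend the proximity signal, count peaks, and flag windows whose peak rate falls in a chewing-like band. All rates and thresholds below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.signal import find_peaks

def is_chewing(proximity, fs=20, lo_hz=0.8, hi_hz=2.5):
    """Flag a proximity-sensor window whose peak rate looks like chewing."""
    # Remove slow drift with a 1-second moving average.
    x = proximity - np.convolve(proximity, np.ones(fs) / fs, mode="same")
    peaks, _ = find_peaks(x, distance=int(fs / hi_hz))
    rate = len(peaks) / (len(proximity) / fs)   # peaks per second
    return lo_hz <= rate <= hi_hz

# Synthetic 10-second signal with 1.5 Hz "chews" at a 20 Hz sample rate.
sig = np.sin(2 * np.pi * 1.5 * np.arange(0, 10, 1 / 20))
print(is_chewing(sig))  # -> True
```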


ubiquitous computing | 2012

Recognizing water-based activities in the home through infrastructure-mediated sensing

Edison Thomaz; Vinay Bettadapura; Gabriel Reyes; Megha Sandesh; Grant Schindler; Thomas Plötz; Gregory D. Abowd; Irfan A. Essa

Activity recognition in the home has long been recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity monitoring system remains a challenge. Striking a balance between cost, privacy, ease of installation and scalability continues to be an elusive goal. In this paper, we explore infrastructure-mediated sensing combined with a vector space model learning approach as the basis of an activity recognition system for the home. We examine the performance of our single-sensor water-based system in recognizing eleven high-level activities in the kitchen and bathroom, such as cooking and shaving. Results from two studies show that our system can estimate activities with an overall accuracy of 82.69% for one individual and 70.11% for a group of 23 participants. As far as we know, our work is the first to employ infrastructure-mediated sensing for inferring high-level human activities in a home setting.
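
The vector-space-model analogy can be sketched by treating each activity instance as a "document" of discretized sensor events and comparing instances with TF-IDF cosine similarity, as in text retrieval. The event vocabulary below is hypothetical, not the paper's actual fixture signatures.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical event strings derived from water fixture signatures.
instances = [
    "kitchen_hot_on kitchen_hot_off kitchen_cold_on kitchen_cold_off",  # cooking
    "bath_hot_on bath_hot_off bath_hot_on bath_hot_off",                # shaving
    "kitchen_cold_on kitchen_cold_off",                                 # drinking
]
labels = ["cooking", "shaving", "drinking"]

vec = TfidfVectorizer()
X = vec.fit_transform(instances)

query = vec.transform(["kitchen_hot_on kitchen_hot_off"])
sims = cosine_similarity(query, X)[0]
print(labels[sims.argmax()])   # nearest activity by cosine similarity
```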

Collaboration


Dive into Edison Thomaz's collaborations.

Top Co-Authors

Gregory D. Abowd (Georgia Institute of Technology)
Irfan A. Essa (Georgia Institute of Technology)
Aman Parnami (Georgia Institute of Technology)
Abdelkareem Bedri (Georgia Institute of Technology)
Vinay Bettadapura (Georgia Institute of Technology)
Apoorva Verlekar (Georgia Institute of Technology)
Cheng Zhang (Georgia Institute of Technology)
Gabriel Reyes (Georgia Institute of Technology)