Publications

Featured research published by Paulo Lopez-Meyer.


Physiological Measurement | 2008

Non-invasive monitoring of chewing and swallowing for objective quantification of ingestive behavior

Edward Sazonov; Stephanie Schuckers; Paulo Lopez-Meyer; Oleksandr Makeyev; Nadezhda Sazonova; Edward L. Melanson; Michael R. Neuman

A methodology for studying ingestive behavior by non-invasive monitoring of swallowing (deglutition) and chewing (mastication) has been developed. The target application for the developed methodology is to study the behavioral patterns of food consumption and to produce volumetric and weight estimates of energy intake. Monitoring is non-invasive, based on detecting swallowing with a sound sensor located over the laryngopharynx or a bone-conduction microphone, and detecting chewing through a below-the-ear strain sensor. The proposed sensors may be implemented in a wearable monitoring device, thus enabling monitoring of ingestive behavior in free-living individuals. In this paper, the goals in the development of this methodology are two-fold. First, a system comprising sensors and related hardware and software for multi-modal data capture is designed for data collection in a controlled environment. Second, a protocol is developed for manual scoring of chewing and swallowing for use as a gold standard. The multi-modal data capture was tested by measuring chewing and swallowing in 21 volunteers during periods of food intake and quiet sitting (no food intake). Video footage and sensor signals were manually scored by trained raters. An inter-rater reliability study for three raters, conducted on a sample set of five subjects, resulted in high average intra-class correlation coefficients: 0.996 for bites, 0.988 for chews, and 0.98 for swallows. The collected sensor signals and the resulting manual scores will be used in future research as a gold standard for further assessment of sensor design, development of automatic pattern-recognition routines, and study of the relationship between swallowing/chewing and ingestive behavior.
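The inter-rater agreement above is reported as intra-class correlation coefficients, but the abstract does not state which ICC variant was used. As a hedged illustration only, here is a minimal numpy sketch of a one-way random-effects ICC(1,1) computed from a subjects-by-raters score matrix; the example scores are made up:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n_subjects x k_raters matrix."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # between-subject and within-subject mean squares (one-way ANOVA)
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# three raters scoring chew counts for three subjects (invented numbers)
scores = np.array([[101.0, 101.0, 101.0],
                   [ 87.0,  88.0,  87.0],
                   [ 60.0,  60.0,  61.0]])
icc = icc_oneway(scores)
```

Near-identical columns (raters agreeing) drive the within-subject mean square toward zero and the ICC toward 1, matching the high coefficients reported above.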


IEEE Transactions on Biomedical Engineering | 2010

Automatic Detection of Swallowing Events by Acoustical Means for Applications of Monitoring of Ingestive Behavior

Edward Sazonov; Oleksandr Makeyev; Stephanie Schuckers; Paulo Lopez-Meyer; Edward L. Melanson; Michael R. Neuman

Our understanding of the etiology of obesity and overweight is incomplete due to the lack of objective and accurate methods for monitoring of ingestive behavior (MIB) in the free-living population. Our research has shown that the frequency of swallowing may serve as a predictor for detecting food intake, differentiating liquids and solids, and estimating ingested mass. This paper proposes and compares two methods of acoustical swallowing detection from sounds contaminated by motion artifacts, speech, and external noise. Methods based on the mel-scale Fourier spectrum, wavelet packets, and support vector machines are studied, considering the effects of epoch size, level of decomposition, and lagging on classification accuracy. The methodology was tested on a large dataset (64.5 h with a total of 9966 swallows) collected from 20 human subjects with various degrees of adiposity. The average weighted epoch-recognition accuracy for intravisit individual models was 96.8%, which resulted in 84.7% average weighted accuracy in detection of swallowing events. These results suggest high efficiency of the proposed methodology in separating swallowing sounds from artifacts that originate from respiration, intrinsic speech, head movements, food ingestion, and ambient noise. The recognition accuracy was not related to body mass index, suggesting that the methodology is suitable for obese individuals.
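The mel-scale Fourier spectrum features mentioned above can be sketched as a generic triangular mel filterbank applied to a power-spectrum frame. This is not the paper's exact configuration; the sampling rate, frame length, and filter count below are illustrative choices:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrum(frame, sr, n_filters=26):
    """Log energies of triangular mel-spaced filters over one signal frame."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # filter edge frequencies, equally spaced on the mel scale
    edges = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2.0), n_filters + 2))
    feats = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = np.clip((freqs - lo) / (mid - lo), 0.0, 1.0)
        falling = np.clip((hi - freqs) / (hi - mid), 0.0, 1.0)
        feats[i] = np.log(power @ np.minimum(rising, falling) + 1e-10)
    return feats

# example: one 64 ms frame of a 440 Hz tone sampled at 8 kHz (illustrative)
t = np.arange(512) / 8000.0
features = mel_spectrum(np.sin(2 * np.pi * 440.0 * t), sr=8000)
```

Feature vectors like this, computed per epoch, are what a support vector machine classifier would consume in the kind of pipeline the abstract describes.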


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Automatic Detection of Temporal Gait Parameters in Poststroke Individuals

Paulo Lopez-Meyer; George D. Fulk; Edward Sazonov

Approximately one-third of people who recover from a stroke require some form of assistance to walk. Repetitive task-oriented rehabilitation interventions have been shown to improve motor control and function in people with stroke. Our long-term goal is to design and test an intensive task-oriented intervention that will utilize the two primary components of constraint-induced movement therapy: massed, task-oriented training and behavioral methods to increase use of the affected limb in the real world. The technological component of the intervention is based on a wearable footwear-based sensor system that monitors relative activity levels, functional utilization, and gait parameters of the affected and unaffected lower extremities. The purpose of this study is to describe a methodology to automatically identify temporal gait parameters of poststroke individuals, to be used in assessment of functional utilization of the affected lower extremity as part of behavior-enhancing feedback. An algorithm accounting for intersubject variability is capable of achieving estimation error in the range of 2.6-18.6%, producing comparable results for healthy and poststroke subjects. The proposed methodology is based on inexpensive and user-friendly technology that will enable research and clinical applications for rehabilitation of people who have experienced a stroke.
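The abstract does not spell out the detection algorithm, so as a hedged sketch of how temporal gait parameters can be derived from a footwear pressure signal, the following extracts stance and swing durations from threshold crossings; the sampling rate and threshold are hypothetical, not the study's:

```python
import numpy as np

def stance_swing_times(pressure, fs, threshold):
    """Stance/swing durations (s) from a heel pressure signal via thresholding."""
    contact = pressure > threshold
    edges = np.diff(contact.astype(int))
    strikes = np.where(edges == 1)[0] + 1   # heel-strike sample indices
    offs = np.where(edges == -1)[0] + 1     # toe-off sample indices
    stance = [(offs[offs > hs][0] - hs) / fs
              for hs in strikes if np.any(offs > hs)]
    swing = [(strikes[strikes > to][0] - to) / fs
             for to in offs if np.any(strikes > to)]
    return stance, swing

# synthetic gait: 0.5 s contact, 0.5 s swing, repeated at 100 Hz (illustrative)
signal = np.tile(np.r_[np.ones(50), np.zeros(50)], 3)
stance, swing = stance_swing_times(signal, fs=100, threshold=0.5)
```

A real system would additionally have to handle the intersubject variability the abstract mentions, e.g. by adapting the threshold per subject.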


Obesity | 2009

Toward Objective Monitoring of Ingestive Behavior in Free-living Population

Edward Sazonov; Stephanie Schuckers; Paulo Lopez-Meyer; Oleksandr Makeyev; Edward L. Melanson; Michael R. Neuman; James O. Hill

Understanding of eating behaviors associated with obesity requires objective and accurate monitoring of food intake patterns. Accurate methods are available for measuring total energy expenditure and its components in free-living populations, but methods for measuring food intake in free-living people are far less accurate and involve self-reporting or subjective monitoring. We suggest that chews and swallows can be used for objective monitoring of ingestive behavior. This hypothesis was verified in a human study involving 20 subjects. Chews and swallows were captured during periods of quiet resting, talking, and meals of varying size. The counts of chews and swallows along with other derived metrics were used to build prediction models for detection of food intake, differentiation between liquids and solids, and for estimation of the mass of ingested food. The proposed prediction models were able to detect periods of food intake with >95% accuracy and a fine time resolution of 30 s, differentiate solid foods from liquids with >91% accuracy, and predict mass of ingested food with >91% accuracy for solids and >83% accuracy for liquids. In earlier publications, we have shown that chews and swallows can be captured by noninvasive sensors that could be developed into a wearable device. Thus, the proposed methodology could lead to the development of an innovative new way of assessing human eating behavior in free-living conditions.
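The mass-prediction models above are built from counts of chews and swallows. As a rough illustration of how such a model might look, here is an ordinary-least-squares sketch on synthetic counts; the per-chew and per-swallow coefficients are invented for the demo and are not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic epochs: columns are chew count and swallow count
counts = rng.uniform(0, 50, size=(100, 2))
true_coef = np.array([0.3, 2.0])          # made-up grams per chew / per swallow
mass = counts @ true_coef + rng.normal(0.0, 0.5, size=100)

# ordinary least squares with an intercept term
design = np.c_[counts, np.ones(len(counts))]
coef, *_ = np.linalg.lstsq(design, mass, rcond=None)
predicted = design @ coef
```

With enough epochs, the fitted coefficients recover the underlying per-chew and per-swallow contributions despite the measurement noise.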


Biomedical Signal Processing and Control | 2012

Automatic food intake detection based on swallowing sounds.

Oleksandr Makeyev; Paulo Lopez-Meyer; Stephanie Schuckers; Walter G. Besio; Edward Sazonov

This paper presents a novel, fully automatic food intake detection methodology, an important step toward objective monitoring of ingestive behavior. The aim of such monitoring is to improve our understanding of eating behaviors associated with obesity and eating disorders. The proposed methodology consists of two stages. First, acoustic detection of swallowing instances based on mel-scale Fourier spectrum features and classification using support vector machines is performed. Principal component analysis and a smoothing algorithm are used to improve swallowing detection accuracy. Second, the frequency of swallowing is used as a predictor for detection of food intake episodes. The proposed methodology was tested on data collected from 12 subjects with various degrees of adiposity. Average accuracies of >80% and >75% were obtained for intra-subject and inter-subject models, respectively, with a temporal resolution of 30 s. Results obtained on 44.1 hours of data with a total of 7305 swallows show that detection accuracies are comparable for obese and lean subjects. They also suggest the feasibility of food intake detection based on swallowing sounds and the potential of the proposed methodology for automatic monitoring of ingestive behavior. Based on a wearable, non-invasive acoustic sensor, the proposed methodology may potentially be used in free-living conditions.
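The second stage uses swallowing frequency per 30 s epoch as the intake predictor. A minimal sketch, assuming a simple per-epoch swallow-count threshold (the threshold value here is illustrative, not the paper's fitted decision rule):

```python
import numpy as np

def intake_epochs(swallow_times, total_s, epoch_s=30.0, min_swallows=3):
    """Flag epochs whose swallow count reaches an (illustrative) threshold."""
    n_epochs = int(np.ceil(total_s / epoch_s))
    epoch_idx = (np.asarray(swallow_times, dtype=float) // epoch_s).astype(int)
    counts = np.bincount(epoch_idx, minlength=n_epochs)
    return counts >= min_swallows

# swallow timestamps in seconds over a 90 s recording (made-up data)
flags = intake_epochs([1, 2, 3, 4, 35, 65, 66, 67], total_s=90)
```

The epochs with frequent swallows are flagged as food intake; isolated spontaneous swallows fall below the threshold.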


Annals of Biomedical Engineering | 2010

Detection of Food Intake from Swallowing Sequences by Supervised and Unsupervised Methods

Paulo Lopez-Meyer; Oleksandr Makeyev; Stephanie Schuckers; Edward L. Melanson; Michael R. Neuman; Edward Sazonov

Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake in free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows, thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for the previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means), with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible with an unsupervised system that relies on the information provided by swallowing alone.
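The unsupervised individual models are based on K-means. Below is a small self-contained two-cluster K-means sketch over swallow feature vectors; the farthest-point initialization is a choice made here for determinism, not necessarily the paper's:

```python
import numpy as np

def kmeans_two(X, iters=50):
    """Two-cluster K-means; farthest-point init keeps the demo deterministic."""
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(axis=1))]
    centers = np.stack([c0, c1]).astype(float)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# two well-separated synthetic "swallow feature" clouds (made-up data)
rng = np.random.default_rng(1)
X = np.r_[rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))]
labels, centers = kmeans_two(X)
```

Because each subject's model is fit on that subject's own swallows, this kind of clustering self-adapts to individual traits, which is the advantage the abstract reports over the group SVM.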


Journal of Neurologic Physical Therapy | 2012

Identifying activity levels and steps of people with stroke using a novel shoe-based sensor.

George D. Fulk; S. Ryan Edgar; Rebecca Bierwirth; Phil Hart; Paulo Lopez-Meyer; Edward Sazonov

Background/Purpose: Advances in sensor technologies provide a method to accurately assess activity levels of people with stroke in their community. This information could be used to determine the effectiveness of rehabilitation interventions as well as to provide behavior-enhancing feedback. The purpose of this study was to assess the accuracy of a novel shoe-based sensor system (SmartShoe) in identifying different functional postures and steps in people with stroke. The SmartShoe system consists of five force-sensitive resistors built into a flexible insole and an accelerometer on the back of the shoe. Pressure and acceleration data are sent via Bluetooth to a smartphone. Methods: Participants with stroke wore the SmartShoe while they performed activities of daily living (ADLs) in sitting, standing, and walking positions. Data from four participants were used to develop a multilayer perceptron artificial neural network (ANN) to identify sitting, standing, and walking. A signal-processing algorithm used data from the pressure sensors to estimate the number of steps taken while walking. The accuracy, precision, and recall of the ANN for identifying the three functional postures were calculated with data from a different set of participants. Agreement between steps identified by the SmartShoe and actual steps taken was analyzed by the Bland-Altman method. Results: The SmartShoe was able to accurately identify sitting, standing, and walking. Accuracy, precision, and recall were all greater than 95%. The mean difference between steps identified by the SmartShoe and actual steps was less than one step. Discussion: The SmartShoe was able to accurately identify different functional postures of people with stroke, using a unique combination of pressure and acceleration data, as they performed different ADLs. There was a strong level of agreement between actual steps taken and steps identified by the SmartShoe. Further study is needed to determine whether the SmartShoe could be used to provide valid information on activity levels of people with stroke while they go about their daily lives in their home and community.
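Agreement between SmartShoe step counts and actual steps was analyzed with the Bland-Altman method, which reduces to a bias (mean difference) and 95% limits of agreement. A minimal numpy sketch on made-up step counts:

```python
import numpy as np

def bland_altman(measured, reference):
    """Mean difference (bias) and 95% limits of agreement."""
    diff = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# SmartShoe step counts vs. hand-counted steps (invented numbers)
bias, limits = bland_altman([120, 98, 143, 110], [121, 98, 142, 111])
```

A bias near zero with tight limits of agreement is what "mean difference of less than one step" corresponds to in the results above.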


International Conference on Biometrics: Theory, Applications and Systems | 2010

Quality in face and iris research ensemble (Q-FIRE)

Peter A. Johnson; Paulo Lopez-Meyer; Nadezhda Sazonova; Fang Hua; Stephanie Schuckers

Identification of individuals using biometric information has found great success in many security and law enforcement applications. To date, most research in the field has focused on ideal conditions, and most available databases are constructed under these ideal conditions. There has been a growing interest in perfecting these technologies at a distance and in less than ideal conditions, i.e., low lighting, out-of-focus blur, off angles, etc. This paper presents a dataset consisting of face and iris videos obtained at distances of 5 to 25 feet and in conditions of varying quality. The purpose of this database is to set a standard for quality measurement in face and iris data and to provide a means for analyzing biometric systems in less than ideal conditions. The structure of the dataset, as well as a quantified metric for quality measurement based on a 25-subject subset of the dataset, is presented.


International Conference on Biometrics | 2012

Impact of out-of-focus blur on face recognition performance based on modular transfer function

Fang Hua; Peter A. Johnson; Nadezhda Sazonova; Paulo Lopez-Meyer; Stephanie Schuckers

It is well recognized that face recognition performance is impacted by image quality. As face recognition is increasingly used in semi-cooperative or unconstrained applications, quantifying the impact of degraded image quality can provide the basis for improving recognition performance. This study uses a range of real out-of-focus blur obtained by controlled changes of the focal plane across face video sequences during acquisition from the Q-FIRE dataset. The modulation transfer function (MTF) method for measuring sharpness is presented and compared with other sharpness measurements, using a co-located optical chart as a reference. Face recognition performance is then examined at eleven sharpness levels based on the MTF quality metrics. Experimental results show that the MTF quality metrics better quantify a range of blur compared to the optical chart and offer a useful range of interest for face recognition performance. This paper demonstrates the applicability of an image blur quality metric as auxiliary information to supplement face recognition systems through the analysis of a unique database.
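The study's sharpness measure is an MTF derived from a co-located optical chart, which cannot be reproduced without the chart imagery. As a simpler stand-in, the sketch below quantifies blur from an image alone via the fraction of spectral power above a normalized spatial-frequency cutoff; this is a generic sharpness proxy, not the paper's metric:

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Fraction of spectral power above a normalized spatial frequency."""
    power = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    radius = np.hypot(*np.meshgrid(fx, fy))
    return power[radius > cutoff].sum() / power.sum()

# sharp random texture vs. the same texture after a 3x3 box blur
rng = np.random.default_rng(0)
sharp = rng.normal(size=(32, 32))
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
```

Defocus blur suppresses high spatial frequencies, so the ratio decreases monotonically with blur level, which is the behavior an MTF-based metric captures more precisely.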


IEEE Transactions on Biomedical Engineering | 2013

Monitoring of Cigarette Smoking Using Wearable Sensors and Support Vector Machines

Paulo Lopez-Meyer; Stephen T. Tiffany; Yogendra Patil; Edward Sazonov

Cigarette smoking is a serious risk factor for cancer and cardiovascular and pulmonary diseases. Current methods of monitoring cigarette smoking habits rely on various forms of self-report that are prone to errors and underreporting. This paper presents a first step in the development of a methodology for accurate and objective assessment of smoking using noninvasive wearable sensors (Personal Automatic Cigarette Tracker - PACT) by demonstrating the feasibility of automatic recognition of smoke inhalations from signals arising from continuous monitoring of breathing and hand-to-mouth gestures by support vector machine classifiers. The performance of subject-dependent (individually calibrated) models was compared to the performance of subject-independent (group) classification models. The models were trained and validated on a dataset collected from 20 subjects performing 12 different activities representative of everyday living (total duration 19.5 h, or 21411 breath cycles). Precision and recall were used as the accuracy metrics. Group models obtained an average precision of 87% and an average recall of 80%. Individual models achieved 90% average precision and recall, indicating a significant presence of individual traits in signal patterns. These results suggest the feasibility of monitoring cigarette smoking by means of a wearable and noninvasive sensor system in free-living conditions.
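Precision and recall, the accuracy metrics used above, reduce to simple counts over the classifier's per-breath-cycle labels. A minimal sketch on toy labels:

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = smoke inhalation)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    precision = tp / max(y_pred.sum(), 1)
    recall = tp / max(y_true.sum(), 1)
    return float(precision), float(recall)

# toy breath-cycle labels: 1 = cycle classified as a smoke inhalation
p, r = precision_recall([1, 1, 0, 0, 1], [1, 0, 1, 0, 1])
```

Precision penalizes false alarms (ordinary breaths flagged as inhalations), while recall penalizes missed inhalations, so reporting both characterizes the trade-off the paper's group and individual models make.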

Collaboration


Top co-authors of Paulo Lopez-Meyer.

Oleksandr Makeyev

University of Rhode Island

Michael R. Neuman

Case Western Reserve University

Stephen T. Tiffany

State University of New York System
