Yaofeng Yue
University of Pittsburgh
Publications
Featured research published by Yaofeng Yue.
Public Health Nutrition | 2014
Wenyan Jia; Hsin-Chen Chen; Yaofeng Yue; Zhaoxin Li; John D. Fernstrom; Yicheng Bai; Chengliu Li; Mingui Sun
OBJECTIVE: Accurate estimation of food portion size is of paramount importance in dietary studies. We have developed a small, chest-worn electronic device called eButton which automatically takes pictures of consumed foods for objective dietary assessment. From the acquired pictures, the food portion size can be calculated semi-automatically with the help of computer software. The aim of the present study is to evaluate the accuracy of the food portion sizes (volumes) calculated from eButton pictures.
DESIGN: Participants wore an eButton during their lunch. The volume of food in each eButton picture was calculated using software. For comparison, three raters estimated the food volume by viewing the same picture. The actual volume was determined by physical measurement using seed displacement.
SETTING: Dining room and offices in a research laboratory.
SUBJECTS: Seven lab member volunteers.
RESULTS: Images of 100 food samples (fifty Western and fifty Asian foods) were collected and each food volume was estimated from these images using software. The mean relative error between the estimated and actual volumes over all samples was -2·8 % (95 % CI -6·8 %, 1·2 %) with an SD of 20·4 %. For eighty-five samples, the food volumes determined by computer differed by no more than 30 % from the results of actual physical measurements. When the volume estimates by the computer and the raters were compared, the computer estimates showed much less bias and variability.
CONCLUSIONS: From the same eButton pictures, the computer-based method provides more objective and accurate estimates of food volume than visual estimation.
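The evaluation statistics reported above (mean relative error, its 95 % confidence interval, the standard deviation, and the fraction of samples within ±30 % of ground truth) can be computed from paired volume measurements. A minimal sketch, assuming the estimated and seed-displacement volumes are available as arrays; this is illustrative only, not the authors' analysis code:

```python
# Illustrative sketch (not the authors' code): summary statistics for
# image-based volume estimates against ground-truth volumes.
import numpy as np
from scipy import stats

def relative_error_summary(estimated_ml, actual_ml):
    """estimated_ml, actual_ml: paired food volumes in millilitres."""
    est = np.asarray(estimated_ml, dtype=float)
    act = np.asarray(actual_ml, dtype=float)
    rel_err = (est - act) / act * 100.0                 # per-sample relative error (%)
    mean, sd, n = rel_err.mean(), rel_err.std(ddof=1), rel_err.size
    half_width = stats.t.ppf(0.975, df=n - 1) * sd / np.sqrt(n)  # 95% CI half-width
    return {
        "mean_%": mean,
        "sd_%": sd,
        "ci95_%": (mean - half_width, mean + half_width),
        "within_30%": float(np.mean(np.abs(rel_err) <= 30.0)),  # fraction within +/-30%
    }
```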
Northeast Bioengineering Conference | 2012
Yicheng Bai; Chengliu Li; Yaofeng Yue; Wenyan Jia; Jie Li; Zhi-Hong Mao; Mingui Sun
A wearable computer, called eButton, has been developed for the evaluation of human lifestyle. This ARM-based device acquires multimodal data from a camera module, a motion sensor, an orientation sensor, a light sensor and a GPS receiver. Its performance has been tested both in our laboratory and by human subjects in free-living conditions. Our results indicate that eButton can record real-world data reliably, providing a powerful tool for lifestyle evaluation in a broad range of applications.
Journal of Medical Systems | 2015
Zhen Li; Zhiqiang Wei; Yaofeng Yue; Hao Wang; Wenyan Jia; Lora E. Burke; Thomas Baranowski; Mingui Sun
Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based on multi-sensor data is presented. In order to utilize these data efficiently and overcome the big data problem, an offline adaptive Hidden Markov Model (HMM) is proposed. A sensor selection scheme is implemented based on an improved Viterbi algorithm. A new method is proposed that incorporates personal experience into the HMM as a priori information. Experiments are conducted using the eButton, a personal wearable computer consisting of multiple sensors. Our comparative study with the standard HMM and other alternative methods in processing the eButton data has shown that our method is more robust and efficient, providing a useful tool to evaluate human activity and lifestyle.
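For context, the decoding step that any HMM-based activity recognizer relies on is the Viterbi algorithm. The sketch below shows standard Viterbi decoding over discretized sensor observations in log space; it is a baseline illustration only, not the offline adaptive HMM or the sensor selection scheme proposed in the paper:

```python
# Standard Viterbi decoding for an HMM with discrete observations (log domain).
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """
    obs    : length-T sequence of observation indices
    log_pi : (N,)   log initial state probabilities
    log_A  : (N, N) log transition probabilities, log_A[i, j] = log P(j | i)
    log_B  : (N, M) log emission probabilities,  log_B[j, k] = log P(k | j)
    returns: most likely length-T state sequence
    """
    T, N = len(obs), len(log_pi)
    delta = np.empty((T, N))              # best log score ending in each state
    psi = np.zeros((T, N), dtype=int)     # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A       # (previous state, next state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):        # backtrack
        states[t] = psi[t + 1, states[t + 1]]
    return states
```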
Measurement Science and Technology | 2013
Hsin-Chen Chen; Wenyan Jia; Yaofeng Yue; Zhaoxin Li; Yung-Nien Sun; John D. Fernstrom; Mingui Sun
Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained in a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method, even when the 3D geometric surface of the food is not completely represented in the input image.
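As a rough illustration of the shape-model idea (not the authors' implementation), once the plate rim fixes the pixel-to-centimetre scale, a segmented food region can be approximated by a simple primitive such as a cylinder. The sketch below ignores perspective, segmentation and model registration, all of which the paper handles explicitly, and the numbers in the example are made up:

```python
# Toy sketch: volume of a cylinder-shaped food item, with a dining plate of
# known diameter providing the image scale. Perspective effects are ignored.
import math

def cylinder_volume_from_image(plate_diam_px, plate_diam_cm,
                               food_diam_px, food_height_px):
    """All *_px values are lengths measured in the image (use the plate's major axis)."""
    cm_per_px = plate_diam_cm / plate_diam_px        # scale from the reference plate
    radius_cm = 0.5 * food_diam_px * cm_per_px
    height_cm = food_height_px * cm_per_px
    return math.pi * radius_cm ** 2 * height_cm      # volume in cm^3 (= mL)

# Hypothetical example: a 27 cm plate spanning 540 px; food 200 px wide, 60 px tall.
print(round(cylinder_volume_from_image(540, 27.0, 200, 60), 1), "mL")  # ~235.6 mL
```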
International Conference of the IEEE Engineering in Medicine and Biology Society | 2012
Wenyan Jia; Yaofeng Yue; John D. Fernstrom; Zhengnan Zhang; Yongquan Yang; Mingui Sun
A novel method to estimate the 3D location of a circular feature from a 2D image is presented and applied to the problem of objective dietary assessment from images taken by a wearable device. Instead of using a common reference (e.g., a checkerboard card), we use a food container (e.g., a circular plate) as the reference for the volumetric measurement. In this paper, we establish a mathematical model of the system involving a camera and a circular object in 3D space and, based on this model, the food volume is calculated. Our experiments showed that, for 240 pictures of a variety of regular objects and food replicas, the relative error of the image-based volume estimation was less than 10% in 224 pictures.
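A simplified, weak-perspective version of this geometry (not the paper's exact model): a plate of known diameter projects to an ellipse whose major axis fixes the plate's distance from the camera and whose axis ratio fixes its tilt relative to the image plane. The focal length and ellipse measurements below are illustrative assumptions:

```python
# Weak-perspective sketch: recover a circular plate's distance and tilt from
# its image ellipse. Assumes the focal length (in pixels) is known, e.g. from EXIF.
import math

def plate_pose_from_ellipse(major_axis_px, minor_axis_px,
                            plate_diam_mm, focal_length_px):
    """Return (distance_mm, tilt_deg); tilt is measured from the image plane."""
    # Pinhole similar triangles: apparent size ~ focal length * true size / distance.
    distance_mm = focal_length_px * plate_diam_mm / major_axis_px
    # A circle tilted by theta away from the image plane shrinks its minor axis by cos(theta).
    tilt_deg = math.degrees(math.acos(min(1.0, minor_axis_px / major_axis_px)))
    return distance_mm, tilt_deg

# Hypothetical example: 270 mm plate, 1000 px focal length, ellipse 450 x 300 px.
print(plate_pose_from_ellipse(450, 300, 270, 1000))   # ~ (600.0 mm, 48.2 deg)
```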
Northeast Bioengineering Conference | 2010
Yaofeng Yue; Wenyan Jia; John D. Fernstrom; Robert J. Sclabassi; Madelyn H. Fernstrom; Ning Yao; Mingui Sun
Accurate estimation of food volume plays an important role in dietary assessment using digital photographs. In this paper, we present a new approach based on a circular object (e.g., a dining plate or a coin) as a physical reference to determine food portion size. A geometrical model relating the circular object and its image is built. An algorithm to estimate food volume using the geometric model is developed. Experimental results have shown high reliability and accuracy of this approach.
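The forward direction of such a geometric model can be sketched in a few lines: sample the rim of a circle of known radius at a given tilt and distance and project it through a pinhole camera, which shows how the circular reference maps to an image ellipse. The pose parameters here are made up for illustration; the paper solves the corresponding inverse problem:

```python
# Sketch of the circle-to-image relation: project a tilted circle of known
# radius through a pinhole camera and inspect the resulting ellipse extents.
import numpy as np

def project_circle(radius_mm, tilt_deg, distance_mm, focal_px, n=360):
    """Return (n, 2) pixel coordinates of the rim of a circle tilted about the x-axis."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    c, s = np.cos(np.radians(tilt_deg)), np.sin(np.radians(tilt_deg))
    x = radius_mm * np.cos(t)                    # circle in its own plane...
    y = radius_mm * np.sin(t) * c                # ...tilted about the x-axis...
    z = distance_mm + radius_mm * np.sin(t) * s  # ...and placed at depth distance_mm
    return np.stack([focal_px * x / z, focal_px * y / z], axis=1)  # u = f*x/z, v = f*y/z

rim = project_circle(radius_mm=135.0, tilt_deg=50.0, distance_mm=600.0, focal_px=1000.0)
print(np.ptp(rim[:, 0]), np.ptp(rim[:, 1]))      # approximate major/minor extents (px)
```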
Northeast Bioengineering Conference | 2011
Zhengnan Zhang; Yongquan Yang; Yaofeng Yue; John D. Fernstrom; Wenyan Jia; Mingui Sun
In dietary studies, an accurate tool for diet assessment is highly desirable. In this paper, we present a new approach to estimating food volume from a single input image based on virtual reality (VR) technology. A virtual reality model is built for the estimation process and an algorithm is developed to calculate the food volume. Experimental results indicate high accuracy and robustness in food volume estimation.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2012
Yaofeng Yue; Wenyan Jia; Mingui Sun
Food portion size measurement, combined with a database of calories and nutrients, is important in the study of metabolic disorders such as obesity and diabetes. In this work, we present a convenient and accurate approach to calculating food volume by measuring several dimensions in a single 2-D image used as the input. This approach does not require conventional checkerboard-based camera calibration, which is burdensome in practice. The only prior requirements of our approach are: 1) a circular container of known size, such as a plate, a bowl or a cup, is present in the image, and 2) the picture is taken under the reasonable assumption that the camera is held level with respect to its left and right sides and its lens is tilted down towards the food on the dining table. We show that, under these conditions, our approach provides a closed-form solution to camera calibration, allowing convenient measurement of food portion size from digital pictures.
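Under the stated assumptions (no left-right roll, lens tilted down toward the table), image lengths can be converted to physical dimensions with simple first-order formulas once the tilt angle, the distance to the food and the focal length are known, all of which can be recovered from the circular container. The sketch below is a weak-perspective approximation for illustration, not the closed-form solution derived in the paper:

```python
# Weak-perspective sketch: convert pixel lengths into physical lengths given the
# camera's downward tilt (pitch, no roll), the object distance and the focal length.
import math

def lateral_mm(px, distance_mm, focal_px):
    """Horizontal length parallel to the image x-axis (left-right on the table)."""
    return px * distance_mm / focal_px

def depth_mm(px, distance_mm, focal_px, tilt_deg):
    """Horizontal length pointing away from the camera; foreshortened by sin(tilt)."""
    return px * distance_mm / (focal_px * math.sin(math.radians(tilt_deg)))

def height_mm(px, distance_mm, focal_px, tilt_deg):
    """Vertical length (e.g. food height); foreshortened by cos(tilt)."""
    return px * distance_mm / (focal_px * math.cos(math.radians(tilt_deg)))

# Hypothetical example: distance 600 mm, focal length 1000 px, camera tilted down 50 degrees.
print(lateral_mm(100, 600, 1000), depth_mm(100, 600, 1000, 50), height_mm(50, 600, 1000, 50))
```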
Northeast Bioengineering Conference | 2011
Yongquan Yang; Yaofeng Yue; Zhiqiang Wei; Robert J. Sclabassi; Wenyan Jia; Mingui Sun
In order to understand the etiology of obesity related to people's diets, we have developed a method to calculate food volume (portion size) from a single digital image in which a dining plate is used as a reference. To evaluate the error of this method, we examine how the relative error varies under different food imaging scenarios, such as camera rotation angles and distances between the food and the camera. Based on our results, an error reduction scheme is proposed to improve the accuracy of image-based food volume estimation.
Journal of Food Engineering | 2012
Wenyan Jia; Yaofeng Yue; John D. Fernstrom; Ning Yao; Robert J. Sclabassi; Madelyn H. Fernstrom; Mingui Sun