Wenyan Jia
University of Pittsburgh
Publications
Featured research published by Wenyan Jia.
Journal of The American Dietetic Association | 2010
Mingui Sun; John D. Fernstrom; Wenyan Jia; Steven A. Hackworth; Ning Yao; Yuecheng Li; Chengliu Li; Madelyn H. Fernstrom; Robert J. Sclabassi
Dietary reporting by individuals is subject to error (1–3). Therefore, a research program has been initiated to develop a small electronic device to record food intake automatically. This device, which contains a miniature camera, a microphone, and several other sensors, can be worn on a lanyard around the neck. It collects visual data of the scene immediately in front of the participant and stores them on a memory card in the device. The data are transferred regularly to the dietitian's computer for further processing and analysis. The device is designed to operate almost completely passively, so that it hopefully will not intrude on or alter the participant's eating activities. In the future, the device will also support other functions, such as the measurement of physical activity, human behavior, and environmental exposure (e.g., pollutants).
Public Health Nutrition | 2014
Wenyan Jia; Hsin-Chen Chen; Yaofeng Yue; Zhaoxin Li; John D. Fernstrom; Yicheng Bai; Chengliu Li; Mingui Sun
OBJECTIVE Accurate estimation of food portion size is of paramount importance in dietary studies. We have developed a small, chest-worn electronic device called eButton which automatically takes pictures of consumed foods for objective dietary assessment. From the acquired pictures, the food portion size can be calculated semi-automatically with the help of computer software. The aim of the present study is to evaluate the accuracy of the calculated food portion size (volumes) from eButton pictures. DESIGN Participants wore an eButton during their lunch. The volume of food in each eButton picture was calculated using software. For comparison, three raters estimated the food volume by viewing the same picture. The actual volume was determined by physical measurement using seed displacement. SETTING Dining room and offices in a research laboratory. SUBJECTS Seven lab member volunteers. RESULTS Images of 100 food samples (fifty Western and fifty Asian foods) were collected and each food volume was estimated from these images using software. The mean relative error between the estimated volume and the actual volume over all the samples was -2·8 % (95 % CI -6·8 %, 1·2 %) with sd of 20·4 %. For eighty-five samples, the food volumes determined by computer differed by no more than 30 % from the results of actual physical measurements. When the volume estimates by the computer and raters were compared, the computer estimates showed much less bias and variability. CONCLUSIONS From the same eButton pictures, the computer-based method provides more objective and accurate estimates of food volume than the visual estimation method.
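The bias and variability figures above come from per-sample relative errors between estimated and measured volumes. A minimal sketch of that computation (the volumes below are made up for illustration, not the study's data):

```python
import statistics

def relative_errors(estimated, actual):
    """Per-sample relative error in percent: (estimated - actual) / actual * 100."""
    return [(e - a) / a * 100.0 for e, a in zip(estimated, actual)]

# Hypothetical food volumes in mL (illustrative only).
est = [190.0, 310.0, 95.0]
act = [200.0, 300.0, 100.0]

errs = relative_errors(est, act)
mean_err = statistics.mean(errs)   # bias of the estimator
sd_err = statistics.stdev(errs)    # variability across samples
```

The study reports exactly these two summary statistics (a mean relative error near zero with a modest SD indicates low bias and acceptable spread).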
design automation conference | 2014
Mingui Sun; Lora E. Burke; Zhi-Hong Mao; Yiran Chen; Hsin-Chen Chen; Yicheng Bai; Yuecheng Li; Chengliu Li; Wenyan Jia
Recent advances in mobile devices have made profound changes in people's daily lives. In particular, the impact of easy access to information through the smartphone has been tremendous. However, the impact of mobile devices on healthcare has been limited. Diagnosis and treatment of diseases are still initiated by the occurrence of symptoms, and technologies and devices that emphasize disease prevention and early detection outside hospitals are under-developed. Beyond healthcare, mobile devices have not yet been designed to fully benefit people with special needs, such as the elderly and those with certain disabilities, such as blindness. In this paper, an overview of our research on a new wearable computer called eButton is presented. The concepts behind its design and electronic implementation are described. Several applications of the eButton are then discussed, including evaluating diet and physical activity, studying sedentary behavior, assisting blind and visually impaired people, and monitoring older adults suffering from dementia.
ieee conference on electromagnetic field computation | 2010
Yinghong Ma; Zhi-Hong Mao; Wenyan Jia; Chengliu Li; Jiawei Yang; Mingui Sun
Hand tracking is useful in human–computer interfaces. In this work, permanent magnets and contactless magnetic sensors are used to track finger motion. A magnet patch is affixed to each fingernail to mark the location of the fingertip. When the fingers move, the combined magnetic field produced by the fingertip magnets is recorded by a set of magnetic sensors around a wristband. The recorded data are fed to a source-localization algorithm that reconstructs the fingertip locations and estimates the hand posture.
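The source-localization step can be illustrated with the standard point-dipole field model and a brute-force least-squares search over candidate positions. The sensor geometry, magnet moment, and grid below are illustrative assumptions; the paper's actual algorithm is not specified here and is likely more sophisticated:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(r_sensor, r_source, m):
    """Flux density of a point magnetic dipole with moment m (A*m^2)
    at r_source, observed at r_sensor (positions in metres)."""
    r = np.asarray(r_sensor, float) - np.asarray(r_source, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi * d**3) * (3 * np.dot(m, rhat) * rhat - np.asarray(m, float))

def localize(sensors, readings, m, candidates):
    """Return the candidate source position whose predicted fields
    best match the sensor readings in the least-squares sense."""
    def cost(p):
        return sum(np.sum((dipole_field(s, p, m) - b) ** 2)
                   for s, b in zip(sensors, readings))
    return min(candidates, key=cost)

# Four "wristband" sensors and one fingertip magnet (illustrative values).
sensors = [np.array([0.03, 0.0, 0.0]), np.array([0.0, 0.03, 0.0]),
           np.array([-0.03, 0.0, 0.0]), np.array([0.0, -0.03, 0.0])]
moment = np.array([0.0, 0.0, 0.01])
true_pos = np.array([0.01, 0.02, 0.05])
readings = [dipole_field(s, true_pos, moment) for s in sensors]
grid = [np.array([x, y, 0.05])
        for x in np.linspace(-0.02, 0.02, 5)
        for y in np.linspace(-0.02, 0.02, 5)]
best = localize(sensors, readings, moment, grid)
```

With noise-free simulated readings, the grid search recovers the true fingertip position; a practical system would add noise handling and a continuous optimizer.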
northeast bioengineering conference | 2012
Yicheng Bai; Chengliu Li; Yaofeng Yue; Wenyan Jia; Jie Li; Zhi-Hong Mao; Mingui Sun
A wearable computer, called eButton, has been developed for evaluation of the human lifestyle. This ARM-based device acquires multimodal data from a camera module, a motion sensor, an orientation sensor, a light sensor and a GPS receiver. Its performance has been tested both in our laboratory and by human subjects in free-living conditions. Our results indicate that eButton can record real-world data reliably, providing a powerful tool for the evaluation of lifestyle for a broad range of applications.
Journal of Medical Systems | 2015
Zhen Li; Zhiqiang Wei; Yaofeng Yue; Hao Wang; Wenyan Jia; Lora E. Burke; Thomas Baranowski; Mingui Sun
Human activity recognition is important in the study of personal health, wellness and lifestyle. To acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based on multi-sensor data is presented. To use these data efficiently and cope with their large volume, an offline adaptive Hidden Markov Model (HMM) is proposed. A sensor selection scheme is implemented based on an improved Viterbi algorithm, and a new method is proposed that incorporates personal experience into the HMM as a priori information. Experiments were conducted using the eButton, a personal wearable computer containing multiple sensors. Our comparative study with the standard HMM and other alternative methods on the eButton data has shown that our method is more robust and efficient, providing a useful tool to evaluate human activity and lifestyle.
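As a reference point, the standard Viterbi decoding that the paper's improved version builds on can be sketched as follows. The two-state "sedentary vs. active" model and all probabilities are invented for illustration:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for observation indices `obs`,
    given initial probabilities pi, transition matrix A, and emission
    matrix B (rows: states, columns: observation symbols)."""
    n_states = len(pi)
    T = len(obs)
    logp = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = logp[t - 1] + np.log(A[:, j])
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model: state 0 = sedentary, state 1 = active;
# observation 0 = low motion, observation 1 = high motion.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
path = viterbi([0, 0, 1], pi, A, B)
```

The paper's adaptive variant modifies this recursion for sensor selection and incorporates personal priors; the sketch above covers only the textbook baseline.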
Measurement Science and Technology | 2013
Hsin Chen Chen; Wenyan Jia; Yaofeng Yue; Zhaoxin Li; Yung-Nien Sun; John D. Fernstrom; Mingui Sun
Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained in a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity, and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder or a sphere) is selected from a pre-constructed shape model library. The position, orientation, and scale of the selected shape model are determined by registering the projected 3D model to the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even when the 3D geometric surface of the food is not completely represented in the input image.
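The shape-model idea can be illustrated in its simplest form: once the plate fixes the pixel-to-centimetre scale, a cylindrical food's volume follows from two image measurements. This is a deliberately simplified stand-in; the actual method registers a projected 3D model to the segmented contour, which is considerably more involved:

```python
import math

def cylinder_volume_from_image(plate_diameter_cm, plate_diameter_px,
                               food_diameter_px, food_height_px):
    """Use the plate of known size as a scale reference, then apply
    the cylinder shape model V = pi * r^2 * h (volume in cm^3)."""
    cm_per_px = plate_diameter_cm / plate_diameter_px
    r = food_diameter_px * cm_per_px / 2.0
    h = food_height_px * cm_per_px
    return math.pi * r * r * h

# Illustrative numbers: a 27 cm plate spanning 540 px in the image.
v = cylinder_volume_from_image(27.0, 540.0, 200.0, 80.0)
```

This flat-scale shortcut ignores perspective; the paper instead recovers the 3D camera pose from the plate before fitting the shape model.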
international conference of the ieee engineering in medicine and biology society | 2013
Zhen Li; Zhiqiang Wei; Wenyan Jia; Mingui Sun
In order to evaluate people's lifestyles for health maintenance, this paper presents a segmentation method based on multi-sensor data recorded by a wearable computer called eButton. This device can record more than ten hours of multimedia data continuously each day, so automatic processing of the recorded data is a significant challenge. We have developed a two-step summarization method to segment large datasets automatically. In the first step, motion sensor signals are used to obtain candidate boundaries between different daily activities in the data. Then, visual features are extracted from images to determine the final activity boundaries. We found that simple signal measures, such as a standard deviation measure of the gyroscope data in the first step combined with an image HSV histogram feature in the second step, produce satisfactory results in automatic daily life event segmentation, as verified by our experiments.
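The first-step screening on motion signals can be sketched as a sliding-window standard-deviation test on the gyroscope magnitude. The window length and threshold factor below are illustrative assumptions, not the paper's settings:

```python
import statistics

def candidate_boundaries(gyro, win=5, k=2.0):
    """Flag window start index i as a candidate activity boundary when
    the local standard deviation of the gyroscope signal exceeds k times
    the overall standard deviation (i.e., a burst of motion)."""
    overall = statistics.pstdev(gyro)
    bounds = []
    for i in range(len(gyro) - win + 1):
        if statistics.pstdev(gyro[i:i + win]) > k * overall:
            bounds.append(i)
    return bounds

# Synthetic trace: stillness, a short burst of motion, stillness again.
gyro = [0.0] * 20 + [5.0, -5.0, 5.0, -5.0, 5.0] + [0.0] * 20
bounds = candidate_boundaries(gyro)
```

Candidates cluster around the motion burst; in the paper these candidates are then confirmed or rejected using HSV histogram features from the images.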
international conference of the ieee engineering in medicine and biology society | 2012
Wenyan Jia; Yaofeng Yue; John D. Fernstrom; Zhengnan Zhang; Yongquan Yang; Mingui Sun
A novel method to estimate the 3D location of a circular feature from a 2D image is presented and applied to the problem of objective dietary assessment from images taken by a wearable device. Instead of using a common reference (e.g., a checkerboard card), we use a food container (e.g., a circular plate) as the reference for the volumetric measurement. We establish a mathematical model of the system, involving a camera and a circular object in 3D space, and calculate the food volume based on this model. Our experiments showed that, for 240 pictures of a variety of regular objects and food replicas, the relative error of the image-based volume estimate was less than 10% in 224 pictures.
international conference of the ieee engineering in medicine and biology society | 2010
Hong Zhang; Lu Li; Wenyan Jia; John D. Fernstrom; Robert J. Sclabassi; Mingui Sun
A new image-based activity recognition method for a person wearing a video camera below the neck is presented in this paper. The wearable device captures video of the scene in front of the wearer. Although the wearer never appears in the video, his or her physical activity is analyzed and recognized from the scene changes caused by the wearer's motion. Correspondence features are extracted from adjacent frames, and inaccurate matches are removed based on a set of constraints imposed by the camera model. Motion histograms are calculated within each frame, and a new feature, the accumulated motion distribution, is derived from the per-frame motion statistics. A Support Vector Machine (SVM) classifier is trained on this feature and used to classify physical activities in different scenes. Our results show that different types of activities can be recognized in low-resolution, field-acquired, real-world video.
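The per-frame motion statistics can be illustrated with a direction histogram over the displacements of matched features between adjacent frames. This is a simplified stand-in for the paper's features; the bin count and magnitude weighting are assumptions:

```python
import math

def motion_histogram(displacements, n_bins=8):
    """Histogram of motion directions from sparse frame-to-frame
    correspondences: each match contributes its displacement angle,
    weighted by magnitude, and the result is normalized to sum to 1."""
    hist = [0.0] * n_bins
    for dx, dy in displacements:
        mag = math.hypot(dx, dy)
        if mag == 0.0:
            continue  # stationary matches carry no direction
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * n_bins) % n_bins] += mag
    total = sum(hist)
    return [h / total for h in hist] if total else hist

# Mostly rightward motion plus one upward match (illustrative values).
disp = [(1.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
hist = motion_histogram(disp)
```

Accumulating such histograms over a clip yields a fixed-length feature vector suitable for an SVM classifier, in the spirit of the accumulated motion distribution described above.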