Chi-Chung Lo
National Chiao Tung University
Publications
Featured research published by Chi-Chung Lo.
Pervasive and Mobile Computing | 2012
Chi-Chung Lo; Lan-Yin Hsu; Yu-Chee Tseng
A growing number of location-based applications are based on indoor positioning, and much of the research effort in this field has focused on the pattern-matching approach, which compares a pre-trained database (or radio map) with the received signal strength (RSS) of a mobile device. However, such methods are highly sensitive to environmental dynamics, and a number of solutions based on added anchor points have been proposed to overcome this problem. This paper proposes using existing beacons to measure the RSS from other beacons as a reference, which we call inter-beacon measurement, for the calibration of radio maps on the fly. This is feasible because most current beacons (such as Wi-Fi and ZigBee stations) have both transmitting and receiving capabilities, and it relieves the need to deploy additional anchor points to cope with environmental dynamics. Simulation and experimental results are presented to verify our claims.
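As a rough illustration of the idea in this abstract, the sketch below shows one way inter-beacon RSS readings could be used to shift a radio map before pattern matching. The additive per-beacon drift model and all names (calibrate_radio_map, locate) are assumptions made for illustration, not the paper's actual algorithm.

```python
# Minimal sketch (not the authors' code) of on-the-fly radio-map calibration
# using inter-beacon RSS measurements. The simple additive offset model is an
# illustrative assumption.
import numpy as np

def calibrate_radio_map(radio_map, ref_inter_beacon, cur_inter_beacon):
    """radio_map: dict location -> RSS vector (one entry per beacon, in dBm).
    ref_inter_beacon / cur_inter_beacon: RSS each beacon hears from the others,
    averaged per transmitting beacon, at training time vs. now."""
    # Estimate per-beacon drift from how the inter-beacon readings changed.
    drift = cur_inter_beacon - ref_inter_beacon          # shape: (n_beacons,)
    # Shift every stored fingerprint by the estimated drift.
    return {loc: rss + drift for loc, rss in radio_map.items()}

def locate(sample, radio_map):
    """Nearest-neighbor pattern matching of a mobile RSS sample (dBm)."""
    return min(radio_map, key=lambda loc: np.linalg.norm(sample - radio_map[loc]))
```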
Personal, Indoor and Mobile Radio Communications | 2011
Chi-Chung Lo; Chen-Pin Chiu; Yu-Chee Tseng; Sheng-An Chang; Lun-Chia Kuo
Inertial sensors for pedestrian dead-reckoning (PDR) have been attracting considerable attention recently. Since accelerometers are prone to the accumulation of errors, a “Zero Velocity Update” (Z-UPT) technique [1], [2] was proposed as a means to calibrate the velocity of pedestrians. However, these inertial sensors must be mounted on the bottom of the foot, resulting in excessive vibration and errors when measuring speed or orientation. This paper proposes a self-calibrating PDR solution using two inertial sensors in conjunction with a novel concept called “Walking Velocity Update” (W-UPT). One inertial sensor is mounted on the lower leg to identify a point suitable for calibrating the walking velocity of the user (when its pitch value becomes zero), while another sensor is mounted on the upper body to track the velocity and orientation. We have developed a working prototype and tested the proposed system using actual data.
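A minimal sketch of the Walking Velocity Update idea described above: when the pitch of the lower-leg sensor crosses zero, the velocity integrated on the upper-body sensor is re-anchored to a calibrated walking velocity. How that walking velocity is obtained in the paper is not reproduced here; the function below assumes it comes from a separate per-user calibration.

```python
# Illustrative W-UPT step, not the authors' implementation.
def pdr_step(prev_pitch, pitch, velocity, accel_forward, dt, walking_velocity):
    """prev_pitch/pitch: lower-leg pitch (rad) at the previous and current sample.
    accel_forward: forward acceleration from the upper-body sensor (m/s^2).
    walking_velocity: assumed to come from a per-user calibration phase."""
    velocity += accel_forward * dt          # naive integration (accumulates drift)
    if prev_pitch * pitch <= 0.0:           # pitch crossed zero: leg roughly vertical
        velocity = walking_velocity         # W-UPT: calibrate the walking velocity
    return velocity
```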
Wearable and Implantable Body Sensor Networks | 2014
Sz-Pin Huang; Jun-Wei Qiu; Chi-Chung Lo; Yu-Chee Tseng
Indoor positioning has been intensively studied recently due to the exploding demand for indoor mobile applications. While numerous works have employed wireless signals or dead-reckoning techniques, wearable computing poses new opportunities as well as challenges to the localization problem. This research studies the wearable localization problem by proposing a particle filter-based scheme to fuse the inputs from wearable inertial and visual sensors on the human body. Specifically, the filter takes inertial measurements, wireless signals, visual landmarks, and indoor floor plans as inputs for location tracking. The inertial signals imply human body movements, the wireless signals indicate a rough absolute region inside a building, and the visual landmarks provide the relative angles from particular positions to these markers. Furthermore, a head-mounted display provides an intuitive and friendly interface to users. The proposed system has been prototyped and tested on our campus, and the experiments demonstrate an average localization error of about one meter.
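The sketch below shows one plausible structure for the particle-filter fusion described in the abstract: inertial data drive the motion model, while wireless regions, visual landmark angles, and the floor plan weight the particles. The observation interface and the floor_plan.is_walkable helper are assumptions for illustration, not the authors' implementation.

```python
# Illustrative particle-filter update for wearable indoor localization.
import numpy as np

def step(particles, weights, step_len, heading, observations, floor_plan):
    """particles: (N, 2) positions; weights: (N,).
    observations: dict of likelihood functions, e.g. {'wifi': f, 'landmark': g}."""
    # 1. Propagate particles with the PDR motion model plus noise.
    noise = np.random.normal(0.0, 0.1, particles.shape)
    particles = particles + step_len * np.column_stack(
        (np.cos(heading), np.sin(heading))) + noise
    # 2. Suppress particles that walk through walls on the floor plan.
    weights = weights * floor_plan.is_walkable(particles)
    # 3. Re-weight by each available observation likelihood.
    for likelihood in observations.values():
        weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # 4. Resample to avoid weight degeneracy.
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```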
International Computer Symposium | 2010
Chi-Chung Lo; Shih-Chin Lin; Sheng-Po Kuo; Yu-Chee Tseng; Shin-Yun Peng; Shang-Ming Huang; Yu-Neng Hung; Chin-Fu Hung
Location-based services are regarded as a killer application of mobile networks. Among all RF-based localization techniques, the pattern-matching scheme is probably the most widely accepted approach. Key factors in its success are positioning accuracy and the calibration effort required to collect training data. In this paper, we propose a community-based approach to reduce the calibration effort. We show how to recruit volunteers (called co-trainers) to add more training data to our location database, and how to rate the credit level of a co-trainer and the trust level of a piece of training data contributed by a co-trainer. We believe that our framework can greatly reduce the calibration effort of the pattern-matching localization scheme.
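One plausible shape for such a credit/trust rating is sketched below; the actual rating rules in the paper are not reproduced here. In this assumed scheme, a co-trainer's credit rises when their samples agree with the existing radio map, and each contributed sample inherits a trust level from that credit and its own consistency.

```python
# Illustrative credit/trust rating for community-contributed training data.
def rate_contribution(cotrainer_credit, sample_rss, expected_rss, alpha=0.1):
    """Return (updated credit, trust of this sample), both in [0, 1].
    expected_rss: fingerprint already stored for the claimed location (dBm)."""
    # Consistency: how close the contributed RSS is to the current fingerprint.
    error = sum(abs(a - b) for a, b in zip(sample_rss, expected_rss)) / len(sample_rss)
    consistency = max(0.0, 1.0 - error / 20.0)   # 20 dB mean error -> zero consistency
    # Exponentially move the co-trainer's credit toward the consistency score.
    credit = (1 - alpha) * cotrainer_credit + alpha * consistency
    # Trust of this sample combines the contributor's credit and its consistency.
    trust = credit * consistency
    return credit, trust
```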
International Conference on Mobile Systems, Applications, and Services | 2013
Chi-Chung Lo; Sz-Pin Huang; Yi Cheng Ren; Yu-Chee Tseng
Taking a self-portrait on a smartphone is a lot of fun and can be easy when we know how (Fig. 1a). In addition, thanks to the self-timer, which introduces a delay between pressing the shutter release and the shutter's firing, we can take photos of ourselves when no one is on hand to take them (Fig. 1b). However, we rarely get a satisfying snapshot on the first try, because we usually have no idea whether we are in the right position of the camera frame (Figs. 1c, 2a and 2c). Although the front camera can help us check our position in the frame, its snapshots are of much lower quality than those taken by the back camera. In this demo we introduce a self-portrait App, "Yes, right there!". The App prevents faces from being cut out of the camera frame by giving suggestions to users until they are in a suitable position in the frame, as shown in Figs. 1d, 2b, and 2d. The suggestions are voice commands such as "Raise your hand!", "Come closer!", and "Please move to the left". They depend on the face's position in the camera frame [1] and the inertial sensing data on the phone. Once the pose is right, a beep sounds and a good picture is taken.

More concretely, "Yes, right there!" supports two modes: self-portrait mode and self-timer mode. In self-portrait mode, the App initially asks users to take favorite photos as pre-training references. In the meantime, it measures the yaw, pitch, and roll values from the accelerometer, electronic compass, and gyroscope; note that users usually like to take photos at a particular angle. When a user wants to take a self-portrait, she points the lens at herself so that her face appears in the camera frame. The App detects the face's position in the frame and measures the yaw, pitch, and roll values. The face's position and the inertial sensing data are then compared against the pre-training references while the App suggests posture changes through voice commands. If the face moves to the right position and the yaw, pitch, and roll values are suitable, a good picture is taken automatically; otherwise, the App repeats the suggestions. The self-portrait mode can be easily extended to multiple users.

In self-timer mode, the user first specifies an area of the frame where she wants her face to appear. While the App is running, it compares the position of her face against the specified area and suggests location changes through voice commands. As shown in Fig. 2a, the user's face is detected in grid 1 (outside the specified area in the frame); the voice interaction then leads the user ("Please move to the left and then come closer", etc.) until she moves to a suitable position (Fig. 2b). With multiple users, as shown in Fig. 2c, the App detects that a user on the left side is not in the specified area (grid 5) and gives suggestions (e.g., "The user on the right side, please move to the left") until all users appear in the specified area of the frame (Fig. 2d). Fig. 3 shows our system model. We will distribute Android phones to demo visitors, allowing real-time interaction with the user interface on these phones, and show how "Yes, right there!" helps users take self-portraits without assistance from others.
Moreover, users can set the timer to different waiting intervals, and can specify the number and range of the grid cells considered a suitable position. To the best of our knowledge, this is the first work that shows how to help users take self-portraits by detecting the face's position in the camera frame and measuring the inertial sensing data on the phone.
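A minimal sketch of the decision logic described in this demo abstract: compare the detected face position and the phone's yaw/pitch/roll against a pre-trained reference and return a voice suggestion, or signal that the shot can be taken. The function name, coordinate convention, and thresholds are illustrative assumptions, not the App's actual code.

```python
# Illustrative self-portrait guidance check.
def suggest(face_center, ref_center, orientation, ref_orientation,
            pos_tol=0.1, ang_tol=10.0):
    """face_center / ref_center: normalized (x, y) of the face in the camera frame.
    orientation / ref_orientation: (yaw, pitch, roll) of the phone in degrees."""
    dx = face_center[0] - ref_center[0]
    dy = face_center[1] - ref_center[1]
    if abs(dx) > pos_tol:                    # face too far left or right of reference
        return "Please move to the left" if dx > 0 else "Please move to the right"
    if abs(dy) > pos_tol:                    # face too high or low in the frame
        return "Raise your hand!" if dy > 0 else "Lower your hand!"
    if any(abs(a - r) > ang_tol for a, r in zip(orientation, ref_orientation)):
        return "Tilt the phone back toward your favorite angle"
    return "beep"                            # pose matches the reference: shoot
```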
2013 First International Black Sea Conference on Communications and Networking (BlackSeaCom) | 2013
Chi-Chung Lo; Sz-Pin Huang; Yi Ren; Yu-Chee Tseng
Taking a self-portrait on a smart device, such as a smartphone or a digital camera, is a lot of fun and can be easy when we know how. However, we rarely get a satisfying snapshot on the first try. Furthermore, the snapshots are usually stored on the camera, so the user cannot check them immediately from the remote side while standing away from the camera. In this paper, we propose a self-portrait application that enables a smart device to prevent faces from being cut out of the camera frame by giving suggestions to users until they are in a suitable position in the frame. At the same time, we enable the smart device to share the photos automatically with a remote device via machine-to-machine communication, and the remote device can also command the camera to take the photos again. The proposed application has been implemented on an Android smartphone.
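Purely as an illustration of the photo-sharing and remote-retake flow mentioned above, the sketch below uses a plain TCP socket as a stand-in for the machine-to-machine link; the paper's actual M2M stack and message format are not specified here, so every detail is an assumption.

```python
# Illustrative photo sharing with a remote "retake" command over a TCP socket.
import socket

def share_photo_and_wait(photo_bytes, remote_addr):
    """Send the snapshot to the remote device; return True if it asks for a retake.
    remote_addr: (host, port) of the remote viewer (assumed)."""
    with socket.create_connection(remote_addr) as s:
        # Length-prefixed payload so the receiver knows when the photo ends.
        s.sendall(len(photo_bytes).to_bytes(4, "big") + photo_bytes)
        reply = s.recv(16).decode().strip()      # expected: "ok" or "retake"
    return reply == "retake"
```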
Archive | 2012
Chi-Chung Lo; Yu-Chee Tseng; Chung-Wei Lin; Lun-Chia Kuo; Tsung-Ching Lin
Archive | 2010
Chi-Chung Lo; Sheng-Po Kuo; Jui-Hao Chu; Yu-Chee Tseng; Lun-Chia Kuo; Chao-Yu Chen
Service-Oriented Computing and Applications | 2014
Hsin-Hsien Peng; Chi-Chung Lo; Tsung-Ching Lin; Yu-Chee Tseng
IEEE Sensors | 2012
Chi-Chung Lo; Yi-Hsiu Chen; Yu-Chee Tseng; Shang-Ming Huang; Yu-Neng Hung; Chiu-Mei Tseng; Yeh-Chin Ho