Kuan-Wen Chen
National Taiwan University
Publication
Featured research published by Kuan-Wen Chen.
computer vision and pattern recognition | 2008
Kuan-Wen Chen; Chih-Chuan Lai; Yi-Ping Hung; Chu-Song Chen
This paper proposes an adaptive learning method for tracking targets across multiple cameras with disjoint views. Two visual cues are usually employed for tracking targets across cameras: the spatio-temporal cue and the appearance cue. To learn the relationships among cameras, traditional methods use batch-learning procedures or hand-labeled correspondences, which work well only within a short period of time. In this paper, we propose an unsupervised method that learns both spatio-temporal and appearance relationships adaptively and can be applied to long-term monitoring. Our method tracks targets across multiple cameras while accounting for environmental changes, such as sudden lighting changes. We also improve the estimation of spatio-temporal relationships by using prior knowledge of the camera network topology.
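The adaptive spatio-temporal learning described above can be sketched as an online-updated histogram of transit times between a camera pair, with exponential forgetting so the model adapts when the environment changes. This is a minimal illustration, not the paper's exact formulation; all names and parameter values here are assumptions.

```python
import numpy as np

class TransitionModel:
    """Online histogram of transit times between one camera pair.

    Each observed exit->entry interval votes into a histogram, and
    exponential decay gradually forgets stale observations so the
    model can adapt to environment changes. Illustrative sketch only.
    """

    def __init__(self, max_seconds=60, bin_width=1.0, decay=0.99):
        self.bins = np.zeros(int(max_seconds / bin_width))
        self.bin_width = bin_width
        self.decay = decay

    def update(self, transit_time):
        self.bins *= self.decay  # forget old evidence gradually
        idx = int(transit_time / self.bin_width)
        if 0 <= idx < len(self.bins):
            self.bins[idx] += 1.0

    def probability(self, transit_time):
        idx = int(transit_time / self.bin_width)
        if not (0 <= idx < len(self.bins)) or self.bins.sum() == 0:
            return 0.0
        return self.bins[idx] / self.bins.sum()
```

In a full system, one such model per camera-pair link would score how plausible a candidate correspondence is given the elapsed time between observations.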
IEEE Transactions on Multimedia | 2011
Kuan-Wen Chen; Chih-Chuan Lai; Pei-Jyun Lee; Chu-Song Chen; Yi-Ping Hung
To track targets across networked cameras with disjoint views, a major problem is learning the spatio-temporal relationship and the appearance relationship, where the appearance relationship is usually modeled as a brightness transfer function. Traditional methods, which learn these relationships from either hand-labeled correspondences or a batch-learning procedure, are applicable only when the environment remains unchanged. However, in many situations, such as lighting changes, the environment varies significantly, and traditional methods fail. In this paper, we propose an unsupervised method that learns adaptively and can be applied to long-term monitoring. Furthermore, we propose a method that avoids weak links and discovers the true valid links among the entry/exit zones of cameras from the correspondences. Experimental results demonstrate that our method outperforms existing methods in learning both the spatio-temporal and the appearance relationships, and achieves high tracking accuracy in both indoor and outdoor environments.
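A common way to model the brightness transfer function mentioned above is cumulative histogram matching: brightness level i in camera A maps to the level in camera B with the same cumulative frequency. The sketch below illustrates that idea under the assumption of 1-D brightness samples from corresponding objects; it is not claimed to be this paper's exact estimator.

```python
import numpy as np

def brightness_transfer_function(values_a, values_b, levels=256):
    """Estimate a BTF mapping brightness levels in camera A to camera B.

    Uses cumulative histogram matching: level i in A maps to the level
    in B with the same cumulative mass. Inputs are 1-D arrays of pixel
    brightness values (0..levels-1) from corresponding observations.
    """
    hist_a, _ = np.histogram(values_a, bins=levels, range=(0, levels))
    hist_b, _ = np.histogram(values_b, bins=levels, range=(0, levels))
    cdf_a = np.cumsum(hist_a) / max(hist_a.sum(), 1)
    cdf_b = np.cumsum(hist_b) / max(hist_b.sum(), 1)
    # for each level in A, find the B level with matching cumulative mass
    return np.searchsorted(cdf_b, cdf_a).clip(0, levels - 1)
```

Applying the returned lookup table to an object's appearance in camera A makes it directly comparable to its appearance in camera B.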
IEEE Transactions on Multimedia | 2011
Kuan-Wen Chen; Chih-Wei Lin; Tzu-Hsuan Chiu; Mike Y. Chen; Yi-Ping Hung
Large-scale and high-resolution monitoring systems are ideal for many visual surveillance applications. However, existing approaches either have insufficient resolution and low frame rates, or have high complexity and cost. We take inspiration from the human visual system and propose a multi-resolution design, e-Fovea, which provides peripheral vision with a steerable fovea in higher resolution. In this paper, we first present two user studies, with a total of 36 participants, comparing e-Fovea to two existing multi-resolution visual monitoring designs. The user study results show that for visual monitoring tasks, our e-Fovea design with steerable focus is significantly faster than existing approaches and preferred by users. We then present our design and implementation of e-Fovea, which combines multi-resolution camera input with multi-resolution steerable projector output. Finally, we present our deployment of e-Fovea in three installations to demonstrate its feasibility.
international conference on pattern recognition | 2014
Chih-Chuan Lai; Yu-Ting Chen; Kuan-Wen Chen; Shen-Chi Chen; Sheng-Wen Shih; Yi-Ping Hung
In this work, we develop an appearance-based gaze tracking system that allows users to move their heads freely. The main difficulty of appearance-based gaze tracking is that eye appearance is sensitive to head orientation. To overcome this difficulty, we propose a 3-D gaze tracking method that combines head pose tracking with appearance-based gaze estimation. We use a random forest approach to model the neighbor structure of the joint head-pose and eye-appearance space, and to efficiently select neighbors from the collected high-dimensional data set. L1-optimization is then used to find the best regression solution from the selected neighboring samples. Experimental results show that our method provides robust binocular gaze tracking under fewer constraints while maintaining moderate gaze estimation accuracy.
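The neighbor-selection-plus-L1-regression pipeline above can be sketched as follows. The paper selects neighbors with a random forest; plain Euclidean k-NN stands in here, and the small Lasso problem is solved with ISTA. Every name, parameter, and solver choice below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_gaze_regression(query, feats, gazes, k=8, lam=0.01, iters=200):
    """Neighbor-based gaze regression with L1-regularized weights.

    Selects the k nearest samples in the joint head-pose/eye-appearance
    feature space, solves min 0.5*||q - F w||^2 + lam*||w||_1 by ISTA,
    and predicts gaze as the weighted combination of the neighbors'
    gaze labels.
    """
    d = np.linalg.norm(feats - query, axis=1)
    idx = np.argsort(d)[:k]
    F, G = feats[idx].T, gazes[idx]          # F: (dim, k) dictionary
    step = 1.0 / (np.linalg.norm(F, 2) ** 2 + 1e-8)  # 1/Lipschitz const.
    w = np.zeros(k)
    for _ in range(iters):
        grad = F.T @ (F @ w - query)
        w = soft_threshold(w - step * grad, step * lam)
    return w @ G
```

Because gaze labels combine linearly with the same weights as the features, a good feature-space reconstruction of the query yields a correspondingly good gaze prediction.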
international conference on multimedia and expo | 2014
Lyn Chao-ling Chen; Kuan-Wen Chen; Yi-Ping Hung
The purpose of this study is to develop a non-invasive sleep monitoring system that distinguishes sleep disturbances based on multiple sensors. Unlike clinical sleep monitoring, which records biological signals such as EEG, EOG, and EMG, in this study we aim to identify occurrences of events in the sleep environment. A device with an infrared depth sensor, an RGB camera, and a four-microphone array is used to detect three types of events: motion events, lighting events, and sound events. Given streams of depth signals and color images, we build two background models to detect movements and lighting effects, while audio signals are scored simultaneously. Moreover, we classify events with an epoch-based algorithm and provide a graphical sleep diagram for browsing the corresponding video clips. Experimental results under sleep conditions show the efficiency and reliability of our system, which is convenient and cost-effective for use in a home context.
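The background-model-based event detection described above can be sketched with a single running-average background: a per-frame difference that covers most of the image suggests a lighting event, while a localized difference suggests a motion event. The thresholds and class names below are illustrative assumptions, not the paper's values.

```python
import numpy as np

class EventDetector:
    """Running-average background model with simple event classification.

    A frame whose difference from the background covers most pixels is
    labeled a lighting event; a small changed region is labeled motion.
    """

    def __init__(self, shape, alpha=0.05, pix_thresh=25.0,
                 area_motion=0.02, area_light=0.6):
        self.bg = np.zeros(shape, dtype=float)
        self.alpha = alpha              # background adaptation rate
        self.pix_thresh = pix_thresh    # per-pixel change threshold
        self.area_motion = area_motion  # changed-area ratio for motion
        self.area_light = area_light    # changed-area ratio for lighting
        self.initialized = False

    def step(self, frame):
        frame = frame.astype(float)
        if not self.initialized:
            self.bg, self.initialized = frame.copy(), True
            return "none"
        changed = np.abs(frame - self.bg) > self.pix_thresh
        ratio = changed.mean()
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        if ratio > self.area_light:
            return "lighting"
        if ratio > self.area_motion:
            return "motion"
        return "none"
```

A real system would run one such model on the depth stream and one on the color stream, as the abstract describes, and fuse their decisions with the audio scores.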
international conference on pattern recognition | 2010
Kuan-Wen Chen; Yi-Ping Hung
For target tracking across multiple cameras with disjoint views, previous works usually employed multiple cues and focused on learning a better matching model for each cue separately. However, to the best of our knowledge, none of them discussed how to integrate these cues to improve performance. In this paper, we examine the multi-cue integration problem and propose an unsupervised learning method, since a complicated training phase is not always viable. In the experiments, we evaluate several types of score fusion methods and show that our approach learns well and can be applied to large camera networks more easily.
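The score fusion being evaluated above can be illustrated with a weighted-sum combiner over per-cue matching scores. Note the paper learns the integration weights without supervision, which this sketch does not reproduce; the cue names and the fallback to an equal-weight average are assumptions for illustration.

```python
import numpy as np

def fuse_scores(cue_scores, weights=None):
    """Weighted-sum fusion of per-cue matching scores.

    `cue_scores` maps a cue name (e.g. spatio-temporal, appearance) to
    a score in [0, 1] for one candidate correspondence. Without learned
    weights, this degrades to an equal-weight average.
    """
    cues = sorted(cue_scores)
    s = np.array([cue_scores[c] for c in cues])
    if weights is None:
        w = np.full(len(cues), 1.0 / len(cues))
    else:
        w = np.array([weights[c] for c in cues])
        w = w / w.sum()   # normalize so weights form a convex combination
    return float(s @ w)
```

Candidate correspondences across cameras would then be ranked by their fused score rather than by any single cue.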
international conference on computer graphics and interactive techniques | 2017
Jia-Wei Lin; Ping-Hsuan Han; Jiun-Yu Lee; Yang-Sheng Chen; Ting-Wei Chang; Kuan-Wen Chen; Yi-Ping Hung
Recently, virtual reality (VR) has become increasingly popular, providing users with immersive experiences through a head-mounted display (HMD). However, in some applications, users have to interact with physical objects while immersed in VR. With a non-see-through HMD, it is difficult to perceive visual information from the real world: users must recall the spatial layout of their real surroundings and grope around to find physical objects. Even after locating the objects, it remains inconvenient to use them without any visual feedback, which detracts from the immersive experience.
the internet of things | 2014
Kuan-Wen Chen; Hsin-Mu Tsai; Chih-Hung Hsieh; Shou-De Lin; Chieh-Chih Wang; Shao-Wen Yang; Shao-Yi Chien; Chia-Han Lee; Yu-Chi Su; Chun-Ting Chou; Yuh-Jye Lee; Hsing-Kuo Pao; Ruey-Shan Guo; Chung-Jen Chen; Ming-Hsuan Yang; Bing-Yu Chen; Yi-Ping Hung
In this paper, we propose a framework for developing an M2M-based (machine-to-machine) proactive driver assistance system. Unlike traditional approaches, we leverage the benefits of M2M in intelligent transportation systems (ITS): 1) expanded sensor coverage, 2) increased time allowed to react, and 3) mediation of bidding for right of way, to help drivers avoid potential traffic accidents. The system comprises three main parts: 1) driver behavior modeling and prediction, which collects driving data to learn and predict the future behaviors of drivers; 2) M2M-based neighbor map building, which combines sensing, communication, and fusion technologies to build a neighbor map containing the locations of all neighboring vehicles; and 3) design of passive information visualization and a proactive warning mechanism, which investigates how to provide needed information and warning signals to drivers without interfering with their driving.
international conference on multimedia and expo | 2013
Chao-Ching Shih; Shen-Chi Chen; Cheng-Feng Hung; Kuan-Wen Chen; Shih-Yao Lin; Chih-Wei Lin; Yi-Ping Hung
We propose a tampering detection method using two-stage scene matching, suitable for real applications, with high efficiency and a low false alarm rate. In the first stage, we use edge intensity as the main cue to detect camera tampering events. Instead of using all edge points of the image, we sample the most significant edge points to represent the scene. By analyzing edge variation using only these sample points, camera tampering events can be detected at low computational cost. Whenever the first stage detects a tampering event, the second stage is triggered to reduce false alarms. In the second stage, we propose an illumination change detector that checks the consistency of the scene structure using a cell-based matching method. Experimental results demonstrate that our system detects camera tampering precisely and minimizes false alarms even when illumination changes dramatically or large crowds pass through the scene.
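The first-stage edge cue above can be sketched as follows: sample the strongest edge locations from a reference view of the scene, then measure how much of that edge strength survives in each new frame. A score near zero suggests a covered, defocused, or moved camera. Function names and thresholds are illustrative assumptions, and the second-stage cell-based matching is not reproduced here.

```python
import numpy as np

def edge_strength(gray):
    """Gradient magnitude via simple finite differences."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def sample_edge_points(ref_gray, n=100):
    """Pick the n strongest edge locations of the reference scene."""
    mag = edge_strength(ref_gray)
    flat = np.argsort(mag.ravel())[-n:]
    return np.unravel_index(flat, mag.shape), mag.ravel()[flat]

def tampering_score(frame_gray, points, ref_strengths):
    """Fraction of reference edge strength still present at the sampled
    points; values near 0 indicate a likely tampering event."""
    cur = edge_strength(frame_gray)[points]
    return float(cur.sum() / max(ref_strengths.sum(), 1e-8))
```

Because only the sampled points are revisited per frame, the per-frame cost stays low, matching the abstract's emphasis on efficiency.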
acm multimedia | 2010
Kuan-Wen Chen; Chih-Wei Lin; Mike Y. Chen; Yi-Ping Hung
This paper presents e-Fovea, a system that combines multi-resolution camera input with multi-resolution steerable projector output to support large-scale, high-resolution visual monitoring. e-Fovea utilizes a design similar to the human visual system, providing peripheral vision with a steerable fovea in higher resolution. e-Fovea is implemented using a steerable telephoto camera and a wide-angle camera. The telephoto image is displayed using a projector with a steerable mirror and overlaid on the wide-angle image, which is displayed using a second projector. We have deployed e-Fovea in two installations to demonstrate its feasibility. We have also conducted two user studies, with a total of 36 participants, comparing e-Fovea to two existing multi-resolution visual monitoring designs. The user study results show that for visual monitoring tasks, our e-Fovea design with steerable focus is significantly faster than existing approaches and preferred by users.