
Publication


Featured research published by Shen-Chi Chen.


IEEE Transactions on Information Forensics and Security | 2015

Abandoned Object Detection via Temporal Consistency Modeling and Back-Tracing Verification for Visual Surveillance

Kevin Lin; Shen-Chi Chen; Chu-Song Chen; Daw-Tung Lin; Yi-Ping Hung

This paper presents an effective approach for detecting abandoned luggage in surveillance videos. We combine short- and long-term background models to extract foreground objects, where each pixel in an input image is classified into a 2-bit code. Subsequently, we introduce a framework to identify static foreground regions based on the temporal transition of code patterns, and to determine whether the candidate regions contain abandoned objects by analyzing the back-traced trajectories of luggage owners. The experimental results, obtained on video from the 2006 Performance Evaluation of Tracking and Surveillance and the 2007 Advanced Video and Signal-based Surveillance databases, show that the proposed approach is effective for detecting abandoned luggage and outperforms previous methods.
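
As a rough illustration of the short- and long-term background idea, the sketch below pairs two OpenCV MOG2 subtractors with different history lengths and packs their outputs into a 2-bit code per pixel. The subtractor choice, the history lengths, and the file name surveillance.avi are assumptions for exposition, not the authors' implementation.

```python
import cv2
import numpy as np

# Two MOG2 subtractors with different history lengths stand in for the
# paper's short- and long-term background models (an assumption; the
# original models are not specified here).
short_bg = cv2.createBackgroundSubtractorMOG2(history=50, detectShadows=False)
long_bg = cv2.createBackgroundSubtractorMOG2(history=1000, detectShadows=False)

def pixel_codes(frame):
    """Return a 2-bit code per pixel: bit 0 = short-term foreground,
    bit 1 = long-term foreground."""
    fg_short = short_bg.apply(frame) > 0      # boolean masks
    fg_long = long_bg.apply(frame) > 0
    return fg_short.astype(np.uint8) | (fg_long.astype(np.uint8) << 1)

cap = cv2.VideoCapture("surveillance.avi")    # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    codes = pixel_codes(frame)
    # Code 0b10 (absorbed by the fast short-term model, still foreground in
    # the long-term model) marks a candidate static object such as luggage.
    static_candidates = (codes == 0b10)
```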


International Conference on Pattern Recognition | 2014

Appearance-Based Gaze Tracking with Free Head Movement

Chih-Chuan Lai; Yu-Ting Chen; Kuan-Wen Chen; Shen-Chi Chen; Sheng-Wen Shih; Yi-Ping Hung

In this work, we develop an appearance-based gaze tracking system that allows users to move their heads freely. The main difficulty of appearance-based gaze tracking is that the eye appearance is sensitive to head orientation. To overcome this difficulty, we propose a 3-D gaze tracking method that combines head pose tracking and appearance-based gaze estimation. We use a random forest to model the neighbor structure of the joint head pose and eye appearance space and to efficiently select neighbors from the collected high-dimensional data set. L1-optimization is then used to seek the best regression solution from the selected neighboring samples. Experimental results show that the system provides robust binocular gaze tracking under fewer constraints while maintaining moderate gaze estimation accuracy.
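
The neighbor-selection-plus-L1-regression step might look roughly like the sketch below, which substitutes a plain nearest-neighbor search for the paper's random-forest neighbor structure and scikit-learn's Lasso for the L1-optimization. The data shapes, neighbor count, and regularization weight are made up for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import Lasso

# Hypothetical training data: head poses (3-D), eye-appearance vectors,
# and gaze directions (2-D angles). Shapes are illustrative only.
rng = np.random.default_rng(0)
head_poses = rng.normal(size=(500, 3))
eye_feats = rng.normal(size=(500, 64))
gaze = rng.normal(size=(500, 2))

# Joint head-pose / eye-appearance space used for neighbor selection.
joint = np.hstack([head_poses, eye_feats])
nn = NearestNeighbors(n_neighbors=30).fit(joint)

def estimate_gaze(query_pose, query_feat):
    """Select neighbors in the joint space, then fit an L1-regularized
    regressor (a stand-in for the paper's L1-optimization step)."""
    q = np.hstack([query_pose, query_feat])[None, :]
    _, idx = nn.kneighbors(q)
    model = Lasso(alpha=0.01).fit(joint[idx[0]], gaze[idx[0]])
    return model.predict(q)[0]

print(estimate_gaze(rng.normal(size=3), rng.normal(size=64)))
```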


International Conference on Multimedia and Expo | 2012

2D Face Alignment and Pose Estimation Based on 3D Facial Models

Shen-Chi Chen; Chia-Hsiang Wu; Shih-Yao Lin; Yi-Ping Hung

Face alignment and head pose estimation have become a thriving research field with various applications over the past decade. Several approaches operate on 2D texture images, but most of them perform well only under small pose variation. Recently, many approaches have applied depth information to align objects. However, their applications are restricted because depth cameras are more expensive than common cameras and many existing image resources contain no depth information. Therefore, we propose a 3D face alignment algorithm for 2D images based on the Active Shape Model, using Speeded-Up Robust Features (SURF) descriptors as the local texture model. We train a 3D shape model with different view-based local texture models from a 3D database, and then fit a face in a 2D image with these models. We also improve performance with a two-stage search strategy. Furthermore, head pose can be estimated from the alignment result of the proposed 3D model. Finally, we demonstrate several applications enabled by our method.
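
The pose-from-alignment step can be illustrated with a generic perspective-n-point computation: given the 2-D landmarks produced by an alignment method and corresponding 3-D model points, head pose follows from cv2.solvePnP. The model points and the focal-length guess below are placeholders, not the paper's 3-D shape model.

```python
import cv2
import numpy as np

# Illustrative 3-D model points (nose tip, chin, eye corners, mouth corners)
# in an arbitrary model frame; values are placeholders.
model_points = np.array([
    [  0.0,   0.0,   0.0],   # nose tip
    [  0.0, -63.6, -12.5],   # chin
    [-43.3,  32.7, -26.0],   # left eye outer corner
    [ 43.3,  32.7, -26.0],   # right eye outer corner
    [-28.9, -28.9, -24.1],   # left mouth corner
    [ 28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """Estimate head pose from aligned 2-D landmarks via PnP.
    image_points: (6, 2) float array matching model_points above."""
    h, w = frame_size
    focal = w                               # crude focal-length guess
    K = np.array([[focal, 0, w / 2],
                  [0, focal, h / 2],
                  [0,     0,     1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
    return rvec, tvec                       # Rodrigues rotation, translation
```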


ACM Multimedia | 2013

AirTouch panel: a re-anchorable virtual touch panel

Shih-Yao Lin; Chuen-Kai Shie; Shen-Chi Chen; Yi-Ping Hung

To achieve maximum mobility, device-less approaches to home appliance remote control have received increasing attention in recent years. In this paper, we propose a screen-less virtual touch panel, called AirTouch Panel, which can be positioned at any place and with various orientations around the user. The proposed virtual touch panel offers the potential to remotely control home appliances such as televisions and air conditioners. The proposed system allows users to anchor the panel at a comfortable position and pose. If users want to change the panel's position or orientation, they only need to re-anchor it, and the panel is reset. Our main contribution is the design of a re-anchorable virtual panel for digital home remote control. Most importantly, we explore the design of such an imaginary interface through two user studies, in which we analyze task completion time, satisfaction rate, and the number of mis-clicks. We are interested in feasibility issues such as the proper click gesture, panel size, and button size. Moreover, based on the AirTouch Panel, we also developed an intelligent TV to demonstrate its usability for controlling home appliances.
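
A minimal sketch of a re-anchorable panel, under the assumption that the anchoring gesture yields a 3-D origin and two in-plane axes: fingertip positions are projected into panel coordinates, and a click is declared when the fingertip lies within a small distance of the plane. The geometry, thresholds, and panel size are illustrative, not the AirTouch Panel implementation.

```python
import numpy as np

class VirtualPanel:
    """A re-anchorable planar panel defined by an origin and two in-plane
    axes (e.g. taken from the user's anchoring gesture)."""

    def __init__(self, origin, x_axis, y_axis, size=(0.3, 0.2)):
        self.origin = np.asarray(origin, float)
        self.x = np.asarray(x_axis, float) / np.linalg.norm(x_axis)
        self.y = np.asarray(y_axis, float) / np.linalg.norm(y_axis)
        self.normal = np.cross(self.x, self.y)
        self.size = size                     # panel width/height in metres

    def locate(self, fingertip, click_distance=0.03):
        """Map a 3-D fingertip to panel (u, v) coordinates and report
        whether it is close enough to the plane to count as a click."""
        d = np.asarray(fingertip, float) - self.origin
        u, v = float(d @ self.x), float(d @ self.y)
        depth = abs(float(d @ self.normal))
        inside = 0.0 <= u <= self.size[0] and 0.0 <= v <= self.size[1]
        return u, v, inside and depth < click_distance

# Re-anchoring simply constructs a new panel from the latest anchor pose.
panel = VirtualPanel(origin=[0, 0, 1.0], x_axis=[1, 0, 0], y_axis=[0, 1, 0])
print(panel.locate([0.1, 0.05, 1.01]))
```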


International Conference on Multimedia and Expo | 2013

Real-time camera tampering detection using two-stage scene matching

Chao-Ching Shih; Shen-Chi Chen; Cheng-Feng Hung; Kuan-Wen Chen; Shih-Yao Lin; Chih-Wei Lin; Yi-Ping Hung

We propose a tampering detection method using two-stage scene matching for real applications, with high efficiency and a low false alarm rate. In the first stage, we use edge intensity as the main cue to detect camera tampering events. Instead of using all edge points of the image, we sample the most significant edge points to represent the scene. By analyzing the edge variation at only these sample points, camera tampering events can be detected at low computational cost. Whenever the first stage detects a tampering event, the second stage is triggered to reduce false alarms. In the second stage, we propose an illumination change detector that checks the consistency of the scene structure using a cell-based matching method. The experimental results demonstrate that our system detects camera tampering precisely and minimizes false alarms even when the illumination changes dramatically or large crowds pass through the scene.
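
The two stages could be prototyped roughly as below: stage 1 samples the strongest edge points of a reference frame and flags tampering when their edge strength collapses; stage 2 compares grid cells with normalized cross-correlation, which tolerates global brightness shifts. All thresholds, the grid size, and the point count are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def sample_edge_points(reference_gray, n_points=500):
    """Stage-1 setup: keep only the strongest edge points of the scene."""
    gx = cv2.Sobel(reference_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(reference_gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    ys, xs = np.unravel_index(np.argsort(mag, axis=None)[-n_points:], mag.shape)
    return np.stack([ys, xs], axis=1), mag[ys, xs]

def stage1_tampered(frame_gray, points, ref_mag, drop_ratio=0.5):
    """Flag tampering when edge strength at the sampled points collapses."""
    gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1)
    cur = cv2.magnitude(gx, gy)[points[:, 0], points[:, 1]]
    return np.mean(cur) < drop_ratio * np.mean(ref_mag)

def stage2_structure_consistent(frame_gray, reference_gray, grid=(8, 8),
                                min_corr=0.4):
    """Stage 2: cell-based matching to reject pure illumination changes."""
    h, w = reference_gray.shape
    ch, cw = h // grid[0], w // grid[1]
    scores = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            ref = reference_gray[r*ch:(r+1)*ch, c*cw:(c+1)*cw].astype(np.float32)
            cur = frame_gray[r*ch:(r+1)*ch, c*cw:(c+1)*cw].astype(np.float32)
            scores.append(cv2.matchTemplate(cur, ref, cv2.TM_CCOEFF_NORMED)[0, 0])
    # If most cells still match the reference structure, the alarm was
    # likely caused by an illumination change rather than tampering.
    return np.median(scores) > min_corr
```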


International Conference on Pattern Recognition | 2014

Left-Luggage Detection from Finite-State-Machine Analysis in Static-Camera Videos

Kevin Lin; Shen-Chi Chen; Chu-Song Chen; Daw-Tung Lin; Yi-Ping Hung

We present an abandoned object detection system in this paper. A finite-state-machine model is introduced to extract stationary foregrounds in a scene for visual surveillance, where the state value of each pixel is inferred through the cooperation of the short-term and long-term background models constructed in the proposed approach. To identify a left-luggage event, we then verify whether the static foregrounds are abandoned objects by analyzing the owner's moving trajectory, back-tracked to the static foreground locations. Experimental results reveal that the proposed approach tackles the problem well on publicly available datasets.
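
One way to picture the per-pixel state machine is the toy transition function below, driven by the 2-bit codes of the short-/long-term background models. The state set, transition rules, and the static_after threshold are assumptions for exposition, not the paper's exact automaton.

```python
# Illustrative per-pixel finite-state machine.
BACKGROUND, MOVING, CANDIDATE, STATIC = 0, 1, 2, 3

def step_fsm(state, code, age, static_after=150):
    """Advance one pixel's state given its current 2-bit code.
    code: bit 0 = short-term foreground, bit 1 = long-term foreground.
    age counts consecutive frames spent as a static candidate."""
    if code == 0b00:                 # agrees with both backgrounds
        return BACKGROUND, 0
    if code == 0b11:                 # foreground in both models: moving object
        return MOVING, 0
    if code == 0b10:                 # absorbed by the short-term model only:
        age += 1                     # the object has stopped moving
        if age >= static_after:      # stayed long enough to be declared static
            return STATIC, age
        return CANDIDATE, age
    return state, age                # 0b01: ambiguous, keep the current state
```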


International Conference on Image Processing | 2013

Target-driven video summarization in a camera network

Shen-Chi Chen; Kevin Lin; Shih-Yao Lin; Kuan-Wen Chen; Chih-Wei Lin; Chu-Song Chen; Yi-Ping Hung

Nowadays, ever-expanding camera networks make it difficult to find a suspect in lengthy video records. This paper proposes a target-driven video summarization framework that provides a two-step Filtered Summarized Video (FSV) for tracing suspects. Before the target is identified, users can find the target efficiently using the first-step FSV of any camera. The first-step FSV filters on coarse attributes of the target, including time information and the target's category. After the target is identified, the second-step FSV, with additional spatio-temporal and appearance cues, is triggered in the neighboring cameras. To enhance the accuracy of object classification for FSV, we propose a Perspective Dependent Model (PDM) consisting of many grid-based models. The experimental results show that the grid-based model is more robust than general detectors, and the user study demonstrates better performance for finding and tracking targets in a surveillance camera network.
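
A simplified sketch of the two filtering steps, with Detection, similarity(), and all thresholds as hypothetical placeholders: the first step filters by coarse attributes (time window, category), and the second re-filters neighboring cameras with an appearance cue once the target is known.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    frame_idx: int
    category: str        # e.g. "person", "car"
    appearance: tuple    # e.g. a colour histogram or embedding (placeholder)

def first_step_fsv(detections, time_range, category):
    """Step 1: keep frames whose detections match coarse attributes
    (time window and object category) for a single camera."""
    t0, t1 = time_range
    return [d for d in detections
            if t0 <= d.frame_idx <= t1 and d.category == category]

def second_step_fsv(detections, target, neighbor_cameras, similarity,
                    min_similarity=0.7):
    """Step 2: once the target is identified, re-filter neighboring cameras
    with an appearance-similarity cue (similarity() is a placeholder)."""
    return [d for d in detections
            if d.camera_id in neighbor_cameras
            and similarity(d.appearance, target.appearance) >= min_similarity]
```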


ACM Transactions on Multimedia Computing, Communications, and Applications | 2015

Large-Area, Multilayered, and High-Resolution Visual Monitoring Using a Dual-Camera System

Chih-Wei Lin; Kuan-Wen Chen; Shen-Chi Chen; Cheng-Wu Chen; Yi-Ping Hung

Large-area, high-resolution visual monitoring systems are indispensable in surveillance applications. Constructing such systems requires high-quality image capture and display devices. Whereas high-quality displays have developed rapidly, as exemplified by Samsung's announcement of an 85-inch 4K ultrahigh-definition TV at the 2013 Consumer Electronics Show (CES), high-resolution surveillance cameras have progressed slowly and are still not widely used. In this study, we designed an innovative framework that uses a dual-camera system, comprising a wide-angle fixed camera and a high-resolution pan-tilt-zoom (PTZ) camera, to construct a large-area, multilayered, and high-resolution visual monitoring system featuring multiresolution monitoring of moving objects. First, we developed a novel calibration approach to estimate the relationship between the two cameras and to calibrate the PTZ camera. The PTZ camera was calibrated by exploiting the consistency of its pan-tilt angles across zoom factors, which accelerates the calibration process without affecting accuracy; this calibration process has not been reported previously. After calibrating the dual-camera system, we used the PTZ camera to synthesize a large-area, high-resolution background image. When foreground targets were detected in the images captured by the wide-angle camera, the PTZ camera was controlled to continuously track the user-selected target. Lastly, we integrated the preconstructed high-resolution background, the low-resolution foreground captured by the wide-angle camera, and the high-resolution foreground captured by the PTZ camera to generate a large-area, multilayered, and high-resolution view of the scene.
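
The wide-angle-to-PTZ hand-off might be organized as below, assuming the calibration has already been distilled into a lookup table from wide-angle pixels to pan/tilt angles. pan_tilt_lut and send_ptz_command are hypothetical stand-ins, not the paper's calibration procedure or control interface.

```python
import numpy as np

class DualCameraController:
    """Sketch of the wide-angle -> PTZ hand-off.  pan_tilt_lut is assumed to
    be a precomputed (H, W, 2) table mapping each wide-angle pixel to the
    pan/tilt angles (in degrees) that center it in the PTZ view."""

    def __init__(self, pan_tilt_lut):
        self.lut = np.asarray(pan_tilt_lut)

    def track(self, target_bbox, send_ptz_command):
        """Drive the PTZ camera toward the center of a target detected in
        the wide-angle image.  send_ptz_command is a hypothetical callable
        wrapping whatever PTZ control protocol is in use."""
        x, y, w, h = target_bbox
        cx, cy = int(x + w / 2), int(y + h / 2)
        pan, tilt = self.lut[cy, cx]
        send_ptz_command(pan=pan, tilt=tilt)
```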


International Conference on Acoustics, Speech, and Signal Processing | 2015

Location-aware object detection via coherent region grouping

Shen-Chi Chen; Kevin Lin; Chu-Song Chen; Yi-Ping Hung

We present a scene adaptation algorithm for object detection. Our method discovers scene-dependent features that are discriminative for classifying foreground objects into different categories. Unlike previous works that suffer from insufficient training data collected online, our approach, incorporating a similarity grouping procedure, can automatically gather more consistent training examples from a neighboring area. Experimental results show that the proposed method outperforms several related works with higher detection accuracy.
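
The idea of gathering consistent training examples from a neighboring area can be sketched as pooling samples over nearby spatial cells, as below. The grouping here ignores the appearance-similarity part of the procedure described in the abstract, and all names and parameters are illustrative.

```python
import numpy as np

def group_training_samples(samples, cell, radius=1):
    """Pool training examples from a grid cell and its spatial neighbors.
    samples: iterable of (cell_row, cell_col, feature_vector, label).
    Returns feature matrix X and label vector y for training a
    location-specific classifier."""
    r0, c0 = cell
    X, y = [], []
    for r, c, feat, label in samples:
        if abs(r - r0) <= radius and abs(c - c0) <= radius:
            X.append(feat)
            y.append(label)
    return np.asarray(X), np.asarray(y)
```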


Green Computing and Communications | 2014

Object Detection for Neighbor Map Construction in an IoV System

Kuan-Wen Chen; Shen-Chi Chen; Kevin Lin; Ming-Hsuan Yang; Chu-Song Chen; Yi-Ping Hung

Many applications of machine-to-machine (M2M) based intelligent transportation systems rely heavily on accurate estimation of the neighbor map, which gives the locations of all nearby vehicles and pedestrians. Building the neighbor map usually requires integrating multiple sensors, such as GPS, odometers, inertial measurement units (IMUs), laser scanners, cameras, and RGB-D cameras. In this paper, we build an M2M framework to estimate the neighbor map and focus on improving vehicle and pedestrian detection with the most popular sensor, the camera. We propose a novel grid-based object detection approach that handles cameras on both roadside units and vehicles. It adapts to the environment, achieves high accuracy, and can be used to improve the performance of neighbor map estimation.
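
A bare-bones neighbor map could be assembled as below, assuming each camera comes with a camera-to-ground homography: the bottom-center of each detection box is projected to ground-plane coordinates and pooled across cameras. The data layout and the homographies are assumptions, not the paper's M2M pipeline.

```python
import numpy as np

def to_ground_plane(bbox, homography):
    """Project the bottom-center of an image bounding box (x, y, w, h) onto
    ground-plane coordinates using a 3x3 camera-to-ground homography."""
    x, y, w, h = bbox
    foot = np.array([x + w / 2.0, y + h, 1.0])
    g = homography @ foot
    return g[:2] / g[2]

def build_neighbor_map(detections_per_camera, homographies):
    """Fuse per-camera detections into one list of (category, ground
    position) pairs, i.e. a simple neighbor map of nearby road users.
    detections_per_camera: {camera_id: [(category, bbox), ...]}."""
    neighbor_map = []
    for cam_id, detections in detections_per_camera.items():
        H = homographies[cam_id]
        for category, bbox in detections:
            neighbor_map.append((category, to_ground_plane(bbox, H)))
    return neighbor_map
```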

Collaboration


Dive into Shen-Chi Chen's collaborations.

Top Co-Authors

Yi-Ping Hung, National Taiwan University
Shih-Yao Lin, National Taiwan University
Kevin Lin, National Taiwan University
Kuan-Wen Chen, National Taiwan University
Chih-Wei Lin, National Taiwan University
Chuen-Kai Shie, National Taiwan University
C.H. Lin, National Tsing Hua University
C.S. Shih, National Taiwan University
Chia-Hsiang Wu, National Taiwan University