Publication


Featured research published by Chia-Han Lee.


IEEE Journal of Selected Topics in Signal Processing | 2015

On-Line Multi-View Video Summarization for Wireless Video Sensor Network

Shun-Hsing Ou; Chia-Han Lee; V. Srinivasa Somayazulu; Yen-Kuang Chen; Shao-Yi Chien

Battery lifetime is critical for wireless video sensors. To enable battery-powered wireless video sensors, low-power design is required. In this paper, we consider applying multi-view summarization to wireless video sensors to remove redundant content so that compression and transmission power can be reduced. A low-complexity online multi-view video summarization scheme is proposed. Experiments show that the proposed summarization method successfully reduces the video content while keeping important events. A power analysis of the system also shows that a significant amount of energy can be saved.
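To illustrate the general idea (this is a conceptual sketch, not the paper's exact algorithm), the snippet below drops frames that are redundant within a view or across views; the feature representation and the threshold tau are placeholders.

```python
# Conceptual sketch of online multi-view summarization: keep a frame only
# if it is sufficiently different from what was already kept, both within
# its own view and across views. Features and threshold are placeholders.
import numpy as np

def summarize_online(views, tau=0.3):
    """views: iterable of (view_id, feature_vector) in arrival order."""
    kept = []                 # (view_id, feature) pairs selected so far
    last_kept = {}            # most recent kept feature per view
    for view_id, feat in views:
        feat = np.asarray(feat, dtype=float)
        # Intra-view redundancy: skip if close to this view's last kept frame.
        prev = last_kept.get(view_id)
        if prev is not None and np.linalg.norm(feat - prev) < tau:
            continue
        # Inter-view redundancy: skip if recently kept frames cover this content.
        if any(np.linalg.norm(feat - f) < tau for _, f in kept[-20:]):
            continue
        kept.append((view_id, feat))
        last_kept[view_id] = feat
    return kept
```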


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2013

Power Consumption Analysis for Distributed Video Sensors in Machine-to-Machine Networks

Shao-Yi Chien; Teng-Yuan Cheng; Shun-Hsing Ou; Chieh-Chuan Chiu; Chia-Han Lee; V. S. Somayazulu; Yen-Kuang Chen

Among the different sensors used in machine-to-machine (M2M) networks, video sensors can provide the richest information. However, their much higher bandwidth and power consumption limit the feasibility of wide deployment. To integrate visual data acquisition into M2M networks, the design of low-power distributed video sensors is key. This paper presents a power analysis of distributed video sensors. The power profile of a distributed video sensor node is first shown from measurement results, followed by a detailed discussion of video coding engine selection, in which the state-of-the-art H.264/AVC coding engine is compared with a distributed video coding engine. Moreover, the role of video content analysis is addressed. This paper provides a fundamental basis for future distributed video sensor design.


Infrared Physics & Technology | 2001

High performance InAs/GaAs quantum dot infrared photodetectors with AlGaAs current blocking layer

Shiang-Yu Wang; Shir-Kuan Lin; Hsien-Shun Wu; Chia-Han Lee

A low dark current InAs/GaAs quantum dot infrared photodetector (QDIP) is demonstrated. The dark current is reduced by over three orders of magnitude by introducing a thin AlGaAs current blocking layer, which suppresses the dark current much more than the response signal. The responsivity at 0.5 V is 0.08 A/W with a peak at 6.5 μm, and the corresponding detectivity is about 2.5×10^9 cm·Hz^(1/2)/W. This is the highest detectivity reported for a QDIP at 77 K.
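For reference, the detectivity figure can be read against the standard definition of specific detectivity; the formula below is general background, not taken from the paper itself.

```latex
% Standard definition of specific detectivity D* (general background,
% not from the paper): R = responsivity [A/W], A = detector area [cm^2],
% \Delta f = noise bandwidth [Hz], i_n = noise current [A].
D^{*} = \frac{R\sqrt{A\,\Delta f}}{i_{n}}
\qquad \left[\,\mathrm{cm\,Hz^{1/2}\,W^{-1}}\,\right]
```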


International Conference on Image Processing | 2012

Hybrid distributed video coding with frame level coding mode selection

Chieh-Chuan Chiu; Shao-Yi Chien; Chia-Han Lee; V. S. Somayazulu; Yen-Kuang Chen

Distributed video coding (DVC), a new video coding paradigm based on Slepian-Wolf and Wyner-Ziv theories, is a promising solution for implementing low-power and low-cost distributed wireless video sensors since most of the computation load is moved from the encoder to the decoder. In this paper, the hardware architecture design of an efficient distributed video coding system, hybrid DVC with frame-level coding mode selection, is proposed. With the fully block-pipelined architecture, coding mode pre-decision, and specially-designed LDPC code engine, the proposed hardware is an efficient solution for distributed video sensors with high rate-distortion performance.
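A minimal sketch of the frame-level mode decision idea, assuming a simple sum-of-absolute-differences correlation estimate; the paper's actual pre-decision metric and threshold are hardware-oriented and not reproduced here.

```python
# Conceptual sketch of frame-level coding mode selection in a hybrid DVC
# system: frames with high temporal correlation (cheap to predict at the
# decoder) are Wyner-Ziv coded, others are intra coded. The SAD metric
# and threshold theta are illustrative placeholders.
import numpy as np

def select_coding_mode(prev_frame, curr_frame, theta=12.0):
    """Return 'WZ' when decoder-side side information is likely reliable,
    else 'INTRA'."""
    sad = np.mean(np.abs(curr_frame.astype(float) - prev_frame.astype(float)))
    return "WZ" if sad < theta else "INTRA"
```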


Asia and South Pacific Design Automation Conference | 2015

Distributed computing in IoT: System-on-a-chip for smart cameras as an example

Shao-Yi Chien; Wei-Kai Chan; Yu-Hsiang Tseng; Chia-Han Lee; V. Srinivasa Somayazulu; Yen-Kuang Chen

There are four major components in application systems with the internet-of-things (IoT): sensors, communications, computation, and services, where large amounts of data are acquired for ultra-big data analysis to discover the context information and knowledge behind the signals. To support such large-scale data sizes and computation tasks, it is not feasible to employ centralized solutions on cloud servers. Thanks to advances in silicon technology, the cost of computation has become lower, and it is now possible to distribute computation to every node in the IoT. In this paper, we take video sensor networks as an example to show the idea of distributed computing in the IoT. Existing related works are reviewed, and the architecture of a system-on-a-chip solution for distributed smart cameras, built on a coarse-grained reconfigurable image stream processing architecture, is proposed. It can accelerate various computer vision algorithms for distributed smart cameras in the IoT.
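As a software analogy for the coarse-grained reconfigurable idea (the actual design is a hardware architecture), the sketch below rewires a pipeline of coarse image-processing stages per task; the stage implementations and the pipeline API are invented for illustration.

```python
# Illustrative analogy: coarse processing stages are composed into different
# pipelines per vision task, rather than redesigning the datapath each time.
from typing import Callable, List
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

def make_pipeline(stages: List[Stage]) -> Stage:
    def run(frame: np.ndarray) -> np.ndarray:
        for stage in stages:        # each stage stands in for a coarse block
            frame = stage(frame)
        return frame
    return run

# Two "configurations" of the same blocks for different tasks.
blur      = lambda f: (f + np.roll(f, 1, axis=0) + np.roll(f, 1, axis=1)) / 3.0
threshold = lambda f: (f > f.mean()).astype(np.uint8)
edge_task   = make_pipeline([blur, threshold])
motion_task = make_pipeline([threshold])
```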


Visual Communications and Image Processing | 2011

Distributed video coding: A promising solution for distributed wireless video sensors or not?

Chieh-Chuan Chiu; Shao-Yi Chien; Chia-Han Lee; V. Srinivasa Somayazulu; Yen-Kuang Chen

Low-power and low-cost distributed wireless video sensors play important roles in machine-to-machine (M2M) and wireless sensor network applications. Distributed video coding (DVC), an emerging coding technology based on Wyner-Ziv theory, appears to be a possible solution for implementing low-power video sensors, since most of the computational complexity is moved from the encoder to the decoder. In this paper, existing works on DVC are discussed with rate-distortion and power consumption analyses compared against H.264/AVC-based approaches. We show that, since more transmission power is required to compensate for the lower rate-distortion performance, the power consumption of sensor nodes using DVC is only similar to that of nodes using H.264/AVC with zero motion vectors. Therefore, there is still room for improvement to make DVC applicable to distributed wireless video sensors. Based on our analysis results, several possible research directions, such as studies on the tradeoff between hardware cost and system power consumption, are also addressed under a unified DVC framework.
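A back-of-the-envelope model of this trade-off; all numbers below are invented for illustration and are not measurements from the paper.

```python
# Toy node-energy model: total energy = encoding energy + transmission
# energy. A cheaper encoder with worse rate-distortion performance needs
# a higher bitrate at equal quality, so the totals can come out similar.
E_TX_PER_BIT = 300e-9          # J/bit radio energy (assumed)

def node_energy(enc_power_w, enc_time_s, bitrate_bps, duration_s):
    return enc_power_w * enc_time_s + E_TX_PER_BIT * bitrate_bps * duration_s

# DVC: low encoding power, higher bitrate at equal quality (assumed values).
dvc  = node_energy(enc_power_w=0.05, enc_time_s=10, bitrate_bps=800e3, duration_s=10)
# H.264/AVC with zero motion vectors: pricier encoder, lower bitrate.
h264 = node_energy(enc_power_w=0.15, enc_time_s=10, bitrate_bps=450e3, duration_s=10)
print(dvc, h264)   # ~2.9 J vs ~2.85 J: extra TX energy offsets encoder savings
```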


The Internet of Things | 2014

Connected vehicle safety science, system, and framework

Kuan-Wen Chen; Hsin-Mu Tsai; Chih-Hung Hsieh; Shou-De Lin; Chieh-Chih Wang; Shao-Wen Yang; Shao-Yi Chien; Chia-Han Lee; Yu-Chi Su; Chun-Ting Chou; Yuh-Jye Lee; Hsing-Kuo Pao; Ruey-Shan Guo; Chung-Jen Chen; Ming-Hsuan Yang; Bing-Yu Chen; Yi-Ping Hung

In this paper, we propose a framework for developing an M2M-based (machine-to-machine) proactive driver assistance system. Unlike traditional approaches, we exploit the benefits of M2M in intelligent transportation systems (ITS): 1) expansion of sensor coverage, 2) increase of the time allowed to react, and 3) mediation of bidding for right of way, to help drivers avoid potential traffic accidents. To develop such a system, we divide it into three main parts: 1) driver behavior modeling and prediction, which collects driving data to learn and predict the future behaviors of drivers; 2) M2M-based neighbor map building, which combines sensing, communication, and fusion technologies to build a neighbor map, i.e., a map of the locations of all neighboring vehicles; and 3) design of passive information visualization and a proactive warning mechanism, which investigates how to provide drivers with needed information and warning signals without interfering with their driving activities.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Low Complexity On-Line Video Summarization with Gaussian Mixture Model Based Clustering

Shun-Hsing Ou; Chia-Han Lee; V. Srinivasa Somayazulu; Yen-Kuang Chen; Shao-Yi Chien

Video summarization techniques have attracted significant research interest in the past decade due to the rapid progress in video recording, computation, and communication technologies. However, most existing methods analyze video in an off-line manner, which greatly reduces the flexibility of the system. On-line summarization, which processes the video progressively during recording, has therefore been proposed for a wide range of applications. In this paper, an on-line summarization method using a Gaussian mixture model is proposed. As shown in the experiments, the proposed method outperforms other on-line methods in both summarization quality and computational efficiency. It generates summaries with shorter latency and much lower computational resource requirements.
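A minimal batch sketch of the GMM clustering step behind such summarization; the paper's method is on-line and updates the model progressively, whereas this version fits once on hypothetical per-frame features.

```python
# Sketch: cluster per-frame feature vectors with a Gaussian mixture model
# and keep the frame closest to each component mean as a keyframe.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_keyframes(features, n_clusters=5, seed=0):
    X = np.asarray(features, dtype=float)      # one feature vector per frame
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(X)
    keyframes = []
    for mean in gmm.means_:                    # representative frame per cluster
        keyframes.append(int(np.argmin(np.linalg.norm(X - mean, axis=1))))
    return sorted(set(keyframes))
```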


Asia and South Pacific Design Automation Conference | 2012

Power optimization of wireless video sensor nodes in M2M networks

Shao-Yi Chien; Teng-Yuan Cheng; Chieh-Chuan Chiu; Pei-Kuei Tsung; Chia-Han Lee; V. Srinivasa Somayazulu; Yen-Kuang Chen

Low-power wireless video sensor nodes play important roles in machine-to-machine (M2M) network applications. Several design issues for optimizing the power consumption of a video sensor node are addressed in this paper. For video coding engine selection, a comparison between a conventional video coding system and a distributed video coding (DVC) system shows that although the rate-distortion performance of existing DVC codecs still has room for improvement, DVC can provide lower power consumption over a noisy transmission channel. Furthermore, it is demonstrated that a video analysis unit can help filter out video content without events of interest to reduce transmission power. Finally, several future research directions are addressed; in particular, the trade-off between the video analysis unit, the video coding unit, and data transmission should be studied further to design wireless video sensors with optimized power consumption.


International Conference on Multimedia and Expo | 2016

Wearable social camera: Egocentric video summarization for social interaction

Jen-An Yang; Chia-Han Lee; Shao-Wen Yang; V. Srinivasa Somayazulu; Yen-Kuang Chen; Shao-Yi Chien

A wearable social camera is an egocentric camera that summarizes video of the user's social activities. This paper proposes a core technology for the wearable social camera: egocentric video summarization for social interaction. Unlike other work on third-person action/interaction recognition in egocentric videos, which focuses on distinguishing different actions, this work finds the features common to all interactions, called interaction features (IF). The IF of a third person is proposed to be composed of three parts: physical information of the head, body language, and emotional expression. Furthermore, a hidden Markov model (HMM) is employed to model the interaction sequences, and a summarized video is generated with a hidden Markov support vector machine (HM-SVM). Experimental results on a life-log dataset show that the proposed system performs well for summarizing life-log videos.
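As a toy illustration of the HMM stage, the Viterbi decoder below turns per-frame interaction scores into a two-state temporal segmentation; the probabilities and features are illustrative, and the paper additionally uses an HM-SVM on top of this kind of model.

```python
# Toy Viterbi decoding over a two-state chain ("no interaction" vs
# "interaction"), sketching how an HMM converts per-frame feature
# log-likelihoods into a temporal segmentation of the video.
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """log_emit: (T, S) per-frame state log-likelihoods;
    log_trans: (S, S) transition log-probs; log_init: (S,) priors."""
    T, S = log_emit.shape
    dp = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = dp[:, None] + log_trans          # (prev_state, state)
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + log_emit[t]
    path = [int(dp.argmax())]                     # best final state
    for t in range(T - 1, 0, -1):                 # trace back pointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]                             # 0 = no interaction, 1 = interaction
```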

Collaboration


Dive into Chia-Han Lee's collaborations.

Top Co-Authors

Shao-Yi Chien, National Taiwan University
Chieh-Chuan Chiu, National Taiwan University
Shun-Hsing Ou, National Taiwan University
Teng-Yuan Cheng, National Taiwan University
Hsin-Fang Wu, National Taiwan University
Shou-De Lin, National Taiwan University