Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Chuan-Yen Chiang is active.

Publication


Featured research published by Chuan-Yen Chiang.


Sensors | 2012

A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogeneous Dual-Core Embedded System Architecture

Yen-Lin Chen; Hsin-Han Chiang; Chuan-Yen Chiang; Chuan-Ming Liu; Shyan-Ming Yuan; Jenq-Haur Wang

This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.
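The abstract mentions collision warning determination among the developed techniques. A minimal sketch of how such a decision could be structured is given below in Python; the time-to-collision formula and the thresholds are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a collision-warning decision for a detected leading vehicle.
# The distance estimate, closing-speed source, and warning thresholds are assumed
# for illustration; the paper integrates comparable logic into the VIDASS modules.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Time until contact if the closing speed stays constant (inf if opening)."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return distance_m / closing_speed_mps


def warning_level(distance_m: float, closing_speed_mps: float) -> str:
    """Map time-to-collision to a coarse warning level (thresholds are illustrative)."""
    ttc = time_to_collision(distance_m, closing_speed_mps)
    if ttc < 1.5:
        return "brake"   # imminent collision risk
    if ttc < 3.0:
        return "warn"    # alert the driver
    return "none"


if __name__ == "__main__":
    # Leading vehicle 25 m ahead, closing at 10 m/s -> TTC = 2.5 s -> "warn".
    print(warning_level(25.0, 10.0))
```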


Sensors | 2012

An Intelligent Knowledge-Based and Customizable Home Care System Framework with Ubiquitous Patient Monitoring and Alerting Techniques

Yen-Lin Chen; Hsin-Han Chiang; Chao-Wei Yu; Chuan-Yen Chiang; Chuan-Ming Liu; Jenq-Haur Wang

This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system combines an efficient rule-based reasoning model with flexible knowledge rules for efficiently and rapidly determining the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system to meet new patient and caregiver health care monitoring demands by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent home care system can meet the extensibility, customizability, and configurability demands of ubiquitous healthcare systems under the varying rehabilitation and nursing conditions of different patients and caregivers.
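As a rough illustration of the rule-based reasoning model described above, the following Python sketch evaluates a small set of knowledge rules over physiological readings; the vital-sign names, thresholds, and actions are assumed for illustration and are not taken from the paper.

```python
# Minimal sketch of a rule-based reasoning step over physiological readings.
# The rule format, vital-sign names, and thresholds are illustrative assumptions;
# the paper's inference mechanism stores such rules so caregivers can update them.

from typing import Callable, Dict, List, Tuple

# Each rule: (name, condition over the readings, action to recommend).
Rule = Tuple[str, Callable[[Dict[str, float]], bool], str]

RULES: List[Rule] = [
    ("high_heart_rate", lambda r: r.get("heart_rate", 0) > 120, "notify caregiver"),
    ("low_spo2",        lambda r: r.get("spo2", 100) < 90,      "raise alarm"),
    ("fever",           lambda r: r.get("temperature", 36.5) > 38.0, "schedule check"),
]


def infer(readings: Dict[str, float]) -> List[str]:
    """Return the actions whose rule conditions fire on the current readings."""
    return [action for name, cond, action in RULES if cond(readings)]


if __name__ == "__main__":
    print(infer({"heart_rate": 130.0, "spo2": 95.0, "temperature": 37.0}))
    # -> ['notify caregiver']
```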


Sensors | 2011

Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

Yen-Lin Chen; Wen-Yew Liang; Chuan-Yen Chiang; Tung-Ju Hsieh; Da-Cheng Lee; Shyan-Ming Yuan; Yang-Lang Chang

This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions.
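The bright-blob segmentation and connected-component analysis steps described above can be sketched with OpenCV as follows; this sketch substitutes single-level Otsu thresholding for the paper's automatic multilevel histogram thresholding, and the minimum blob area is an assumed value.

```python
# Minimal sketch of bright-blob extraction for a multi-touch frame using OpenCV.
# Single-level Otsu thresholding stands in for the paper's multilevel method,
# and MIN_BLOB_AREA is an illustrative assumption.

import cv2
import numpy as np

MIN_BLOB_AREA = 30  # assumed: ignore tiny noise blobs


def extract_touch_blobs(gray: np.ndarray):
    """Return (cx, cy, area) for each bright blob in a grayscale IR frame."""
    # Global Otsu threshold to separate scattered-IR touch spots from background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Connected-component analysis over the bright pixels.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    blobs = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= MIN_BLOB_AREA:
            cx, cy = centroids[i]
            blobs.append((float(cx), float(cy), int(area)))
    return blobs
```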


Systems, Man and Cybernetics | 2010

Embedded on-road nighttime vehicle detection and tracking system for driver assistance

Yen-Lin Chen; Chuan-Yen Chiang

This study presents an effective method for detecting vehicles in front of the camera-assisted car during nighttime driving and implements it on an embedded system. The proposed method detects vehicles by detecting and locating vehicle headlights and taillights using image segmentation and pattern analysis techniques. First, to effectively extract bright objects of interest, a segmentation process based on automatic multilevel thresholding is applied to the captured road-scene images. The extracted bright objects are then processed to identify and track vehicles by locating and analyzing the spatial and temporal features of vehicle light patterns, and to estimate their distances to the camera-assisted car. Finally, the above vision-based techniques are implemented on a real-time system mounted in the host car. They are integrated on an ARM-Linux embedded platform together with peripheral devices, including image grabbing devices, a voice reporting module, and other in-vehicle control devices, to accomplish an in-vehicle embedded vision-based nighttime driver assistance system.
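One way to group the extracted bright objects into candidate vehicle light pairs, roughly in the spirit of the spatial analysis described above, is sketched below in Python; the alignment and spacing tolerances are illustrative assumptions.

```python
# Minimal sketch of grouping bright blobs into candidate headlight/taillight pairs
# by their spatial layout. The tolerances are illustrative assumptions; the paper
# analyzes both spatial and temporal light-pattern features.

from typing import List, Tuple

Blob = Tuple[float, float, float]  # (cx, cy, area)

MAX_VERTICAL_GAP = 10.0    # assumed: paired lights lie at nearly the same height (px)
MIN_HORIZONTAL_GAP = 30.0  # assumed: plausible pixel distance between two lamps
MAX_HORIZONTAL_GAP = 200.0


def pair_vehicle_lights(blobs: List[Blob]) -> List[Tuple[Blob, Blob]]:
    """Pair blobs that look like the two lamps of one vehicle."""
    pairs = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (x1, y1, _), (x2, y2, _) = blobs[i], blobs[j]
            if abs(y1 - y2) > MAX_VERTICAL_GAP:
                continue
            gap = abs(x1 - x2)
            if MIN_HORIZONTAL_GAP <= gap <= MAX_HORIZONTAL_GAP:
                pairs.append((blobs[i], blobs[j]))
    return pairs
```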


International Conference on Genetic and Evolutionary Computing | 2011

An Efficient Component-Based Framework for Intelligent Home-Care System Design with Video and Physiological Monitoring Machineries

Chuan-Yen Chiang; Yen-Lin Chen; Chao-Wei Yu; Shyan-Ming Yuan; Zeng-Wei Hong

This study proposes a customized and reusable component-based design framework based on the UML modeling process for intelligent home healthcare systems. All the proposed functional components are reusable, replaceable, and extensible, allowing system developers to implement customized home healthcare systems for the different healthcare monitoring demands of patients and caregivers. The prototype intelligent healthcare system built from these components provides the following features: (1) the system can monitor and record videos of the patient's rehabilitation situations and actions with multiple CCD cameras, and the monitoring videos from different times are stored in the archive accordingly; (2) the system can record the patient's physiological data and the corresponding treatment plan, and these records can be stored in an XML archiving database for caregivers' review; (3) when it is time for the patient to take medicine or perform other healing activities listed on the given treatment plan, the system can automatically alert the patient and record the patient's treatment situations; (4) the patient's caregivers and family members can ubiquitously monitor the videos and physiological records of the patient's rehabilitation situations on handheld mobile devices via the internet or wireless communication networks; (5) caregivers and patients can set up an alarm mechanism for the patient's physiological warning states, and once the patient's physiological state suddenly deteriorates, the module immediately alerts the caregivers by sending notification messages to their remote mobile devices or web browsers.
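Feature (2) mentions an XML archiving database for physiological records; a minimal Python sketch of writing such a record is shown below, with element and attribute names assumed for illustration rather than taken from the paper's schema.

```python
# Minimal sketch of archiving one physiological record as XML for caregivers'
# review. The element and attribute names are illustrative assumptions, not the
# schema used by the paper's framework.

import xml.etree.ElementTree as ET
from datetime import datetime


def archive_record(patient_id: str, readings: dict, path: str) -> None:
    """Write one timestamped physiological record to an XML file."""
    record = ET.Element("record", attrib={
        "patient": patient_id,
        "timestamp": datetime.now().isoformat(timespec="seconds"),
    })
    for name, value in readings.items():
        ET.SubElement(record, "measurement", attrib={"name": name}).text = str(value)

    ET.ElementTree(record).write(path, encoding="utf-8", xml_declaration=True)


if __name__ == "__main__":
    archive_record("patient-001", {"heart_rate": 72, "spo2": 98}, "record.xml")
```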


International Symposium on Computer Communication Control and Automation | 2010

Embedded vision-based nighttime driver assistance system

Yen-Lin Chen; Chuan-Yen Chiang

This study presents an effective method for detecting vehicles in front of the camera-assisted car during nighttime driving and implements it on an embedded system. The proposed method detects vehicles by detecting and locating vehicle headlights and taillights using image segmentation and pattern analysis techniques. First, to effectively extract bright objects of interest, a segmentation process based on automatic multilevel thresholding is applied to the captured road-scene images. The extracted bright objects are then processed to identify vehicles by locating and analyzing their light patterns, and to estimate their distances to the camera-assisted car by a rule-based procedure. Finally, the above vision-based techniques are implemented on a real-time system mounted in the host car. They are integrated on an ARM-Linux embedded platform together with peripheral devices, including image grabbing devices, a voice reporting module, and other in-vehicle control devices, to accomplish an in-vehicle embedded vision-based nighttime driver assistance system.
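The rule-based distance estimation could, for instance, use the pinhole-camera relation between the pixel gap of a paired lamp set and a nominal vehicle width; the sketch below illustrates this, with the focal length and vehicle width as assumed values rather than calibration data from the paper.

```python
# Minimal sketch of distance estimation from a detected lamp pair using the
# pinhole-camera relation: distance = focal_length * real_width / pixel_width.
# FOCAL_LENGTH_PX and VEHICLE_WIDTH_M are illustrative assumptions.

FOCAL_LENGTH_PX = 800.0   # assumed camera focal length in pixels
VEHICLE_WIDTH_M = 1.6     # assumed lamp-to-lamp spacing of a typical car


def estimate_distance(pixel_gap: float) -> float:
    """Estimate distance (m) to a vehicle whose lamps are pixel_gap apart."""
    if pixel_gap <= 0:
        raise ValueError("pixel gap must be positive")
    return FOCAL_LENGTH_PX * VEHICLE_WIDTH_M / pixel_gap


if __name__ == "__main__":
    print(f"{estimate_distance(64.0):.1f} m")  # 64 px gap -> 20.0 m
```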


2014 International Conference on Trustworthy Systems and their Applications | 2014

A Video Conferencing System Based on WebRTC for Seniors

Chuan-Yen Chiang; Yen-Lin Chen; Pei-Shiun Tsai; Shyan-Ming Yuan

As technology grows, traditional ways of communicating are insufficient to meet everyone's needs, and video chatting has gradually become more and more popular. Many mature, free video chatting tools are available on the internet. However, learning new things is much more difficult for senior citizens, and none of these tools are designed for them, so they are too hard for the elderly to use, which leaves the elderly with little wish to learn them. Therefore, this study focuses on the needs of older adults and uses HTML5 and WebRTC to propose a video chatting system designed for senior citizens, so that the elderly can video chat without complex operations. In addition, we integrate the system into the television, so the elderly can use the remote controller to watch television and chat with people at the same time. After chatting, the system uploads the chat video to a server, and the elderly can share it with friends. The experimental results show that senior citizens are interested in this system, consider it very useful, feel comfortable using it, and can learn it quickly.
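WebRTC peers still need a signaling channel to exchange session descriptions and ICE candidates before peer-to-peer video flows. The sketch below shows a minimal signaling relay in Python using the third-party websockets package; the single shared room and message handling are illustrative assumptions, not the design used in the paper.

```python
# Minimal sketch of a WebRTC signaling relay: every message from one client
# (a JSON-encoded offer, answer, or ICE candidate) is forwarded to the others.
# The single-room model is an illustrative assumption.

import asyncio
import websockets

PEERS = set()  # currently connected signaling clients


async def relay(ws, path=None):
    """Forward every signaling message to all other connected peers."""
    PEERS.add(ws)
    try:
        async for message in ws:
            for peer in PEERS:
                if peer is not ws:
                    await peer.send(message)
    finally:
        PEERS.discard(ws)


async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```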


International Conference on Consumer Electronics | 2015

Real-time pedestrian detection technique for embedded driver assistance systems

Chuan-Yen Chiang; Yen-Lin Chen; Kun-Cing Ke; Shyan-Ming Yuan

Fast detection of pedestrians moving across the road is a big challenge for in-vehicle embedded systems. Because the shape features of on-road pedestrians are irregular and complex, detection techniques require large computational resources, while in-vehicle embedded systems have only limited computational resources. To resolve this challenge, we propose fast pedestrian detection algorithms based on histograms of oriented gradients (HOG) and support vector machines (SVMs). The proposed techniques are evaluated and implemented on a digital signal processor (DSP) based embedded platform. The experimental results demonstrate that the proposed detection techniques provide high computational efficiency and detection accuracy.
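A minimal illustration of the HOG plus SVM detection step, using OpenCV's bundled pedestrian detector on a desktop, is given below; the paper's own detector and its DSP implementation are not reproduced here.

```python
# Minimal sketch of HOG + linear-SVM pedestrian detection using OpenCV's built-in
# people detector. This only illustrates the detection step on a desktop machine.

import cv2


def detect_pedestrians(image_path: str):
    """Return bounding boxes (x, y, w, h) of detected pedestrians."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread(image_path)
    if frame is None:
        raise FileNotFoundError(image_path)

    # Sliding-window detection over an image pyramid.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    return list(boxes)


if __name__ == "__main__":
    for box in detect_pedestrians("road_scene.jpg"):  # hypothetical input image
        print(box)
```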


Systems, Man and Cybernetics | 2014

Real-time eye detection and event identification for human-computer interactive control for driver assistance

Yen-Lin Chen; Chao-Wei Yu; Chuan-Yen Chiang; Chin-Hsuan Liu; Wei-Chen Sun; Hsin-Han Chiang; Tsu-Tian Lee

Eye movements can provide important information for human-computer interactive applications. Due to the progress of computer technology, the detection accuracy and speed of pattern recognition have improved. Additionally, with research advances in embedded systems, digital camera applications such as internet cameras, smart phones, smart TVs, and smart cars are now widely used. Therefore, we propose a set of real-time human-eye detection and tracking techniques for human-computer interaction applications. This technique can obtain eye movements, which can be adopted as interactive control commands in driver assistance systems. The proposed system is implemented on an OMAP4430 for embedded applications, and experimental results show that the proposed architecture is capable of effective, real-time eye position detection and event identification for human-computer interactive applications in driver assistance systems.
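As a rough stand-in for the eye detection step, the following Python sketch uses OpenCV's Haar cascades to locate a face and then the eyes within it; the paper's real-time embedded pipeline and event identification logic are not reproduced.

```python
# Minimal sketch of eye detection with OpenCV Haar cascades: find a face first,
# then search for eyes inside the face region. The cascades shipped with OpenCV
# are used here as a stand-in for the paper's detector.

import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
EYE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")


def detect_eyes(gray):
    """Return eye boxes (x, y, w, h) in full-image coordinates."""
    eyes_found = []
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (fx, fy, fw, fh) in faces:
        roi = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in EYE_CASCADE.detectMultiScale(roi):
            eyes_found.append((fx + ex, fy + ey, ew, eh))
    return eyes_found


if __name__ == "__main__":
    frame = cv2.imread("driver.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    if frame is not None:
        print(detect_eyes(frame))
```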


International Conference on Genetic and Evolutionary Computing | 2014

Vehicle Driving Video Sharing and Search Framework Based on GPS Data

Chuan-Yen Chiang; Shyan-Ming Yuan; Shian-Bo Yang; Guo-Heng Luo; Yen-Lin Chen

Driving disputes are a critical problem for drivers when a car accident happens. For many years, drivers have installed car video recorders in their cars to record their driving footage. If a car accident happens, the driver can provide the recorded video as evidence that they did not drive dangerously, and thereby protect themselves. However, in some situations a driver who was not involved in an accident still needs such a video record, because the footage can help identify the offender in a hit-and-run accident. With the growth of social networks, many people post requests for driving videos recorded at specific times and locations, and such messages, together with social networks and driving videos, can help many people resolve hit-and-run accidents. The goal of this paper is to develop a framework that provides a platform for users to upload their driving videos and allows other users to search for videos by a specific time, date, and location in the framework's database. The framework also provides a mobile application that lets users record their driving videos and uploads them to the framework's database automatically.
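A search over uploaded videos by time, date, and location could be sketched as below in Python; the record structure, search radius, and time window are illustrative assumptions, not the paper's database design.

```python
# Minimal sketch of searching archived driving videos by time and GPS location.
# VideoRecord, the radius, and the time window are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt
from typing import List


@dataclass
class VideoRecord:
    video_id: str
    lat: float
    lon: float
    recorded_at: datetime


def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two GPS points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


def search(records: List[VideoRecord], lat: float, lon: float,
           when: datetime, radius_km: float = 0.5,
           window: timedelta = timedelta(minutes=30)) -> List[VideoRecord]:
    """Return videos recorded near (lat, lon) within +/- window of the given time."""
    return [r for r in records
            if haversine_km(r.lat, r.lon, lat, lon) <= radius_km
            and abs(r.recorded_at - when) <= window]
```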

Collaboration


Dive into Chuan-Yen Chiang's collaboration.

Top Co-Authors

Yen-Lin Chen, National Taipei University of Technology
Shyan-Ming Yuan, National Chiao Tung University
Chao-Wei Yu, National Taipei University of Technology
Wen-Yew Liang, National Taipei University of Technology
Yang-Lang Chang, National Taipei University of Technology
Da-Cheng Lee, National Taipei University of Technology
Hsin-Han Chiang, Fu Jen Catholic University
Tung-Ju Hsieh, National Taipei University of Technology
Chuan-Ming Liu, National Taipei University of Technology
Jenq-Haur Wang, National Taipei University of Technology