Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Che-Tsung Lin is active.

Publication


Featured research published by Che-Tsung Lin.


International Symposium on Circuits and Systems | 2012

Self-learning-based rain streak removal for image/video

Li-Wei Kang; Chia-Wen Lin; Che-Tsung Lin; Yu-Cheng Lin

Rain removal from an image/video is a challenging problem that has recently been investigated extensively. In our previous work, we proposed the first single-image rain streak removal framework, formulating the task as an image decomposition problem based on morphological component analysis (MCA) solved by dictionary learning and sparse coding. However, that dictionary learning process was not fully automatic: the two dictionaries used for rain removal were selected heuristically or by human intervention. In this paper, we extend our previous work to an automatic self-learning rain streak removal framework for a single image, in which the two dictionaries are learned automatically without additional information or assumptions. We then extend the single-image method to video-based rain removal in a static scene by exploiting the temporal information of successive frames and reusing the dictionaries learned from earlier frames while maintaining the temporal consistency of the video. As a result, the rain component can be removed from the image/video while preserving most original details. Experimental results demonstrate the efficacy of the proposed algorithm.
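The framework's first stage separates the image into low-frequency and high-frequency parts, with dictionary learning and sparse coding then applied only to the high-frequency part where rain streaks reside. A minimal numpy sketch of that split (the box filter and the function name are illustrative stand-ins for the smoothing filter used in the paper):

```python
import numpy as np

def split_frequency(img, k=5):
    """Split an image into low-frequency (LF) and high-frequency (HF)
    parts with a simple box blur; the MCA framework then performs
    dictionary learning and sparse coding on the HF part only.
    k is the (odd) box-filter size, an illustrative choice."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    lf = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            lf += padded[dy:dy + h, dx:dx + w]
    lf /= k * k
    hf = img - lf          # rain streaks live mostly in the HF part
    return lf, hf

img = np.arange(36, dtype=float).reshape(6, 6)
lf, hf = split_frequency(img)
assert np.allclose(lf + hf, img)   # the decomposition is exact
```

Because the split is additive, a de-rained result is recovered by adding the low-frequency part back to the rain-free portion of the high-frequency component.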


Conference on Industrial Electronics and Applications | 2013

A vision-based obstacle detection system for parking assistance

Yu-Chen Lin; Che-Tsung Lin; Wei-Cheng Liu; Long-Tai Chen

This paper proposes a monocular vision-based obstacle detection algorithm for parking assistance applications of advanced safety vehicles using a rear camera. To efficiently detect various moving and stationary obstacles behind the vehicle, corner features of rear obstacles are first extracted with the Features from Accelerated Segment Test (FAST) corner detection method. The inverse perspective mapping (IPM) image is then used to determine whether each detected feature belongs to an obstacle candidate or to the ground. Based on these results, segmentation and identification strategies are proposed to determine the degree of collision risk and filter out non-hazardous candidates. Finally, the correct obstacle regions in the IPM-transformed image can be easily and quickly extracted. The system provides a vision-based alert to the driver, helping to avoid collisions with obstacles behind the host vehicle. Extensive experiments show that the rear obstacle detection system efficiently extracts obstacle regions in typical urban situations. The proposed algorithm achieves a high detection rate with low computational cost and has been implemented on an ADI BF561 600 MHz dual-core DSP.
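The FAST segment test the detector relies on can be sketched in a few lines. This is a hedged, unoptimized illustration (the threshold, arc length, and function name are illustrative; the real detector adds a high-speed rejection test and non-maximal suppression):

```python
import numpy as np

# The 16 radius-3 Bresenham-circle offsets used by the FAST detector
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=9):
    """Segment test: (y, x) is a corner if at least n contiguous circle
    pixels are all brighter than img[y, x] + t or all darker than
    img[y, x] - t."""
    c = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    for side in (ring > c + t, ring < c - t):
        s = np.concatenate([side, side])  # duplicate to handle wrap-around
        run = best = 0
        for v in s:
            run = run + 1 if v else 0
            best = max(best, min(run, 16))
        if best >= n:
            return True
    return False

img = np.zeros((20, 20), dtype=np.uint8)
img[10:, 10:] = 255                      # a bright square with corner at (10, 10)
assert is_fast_corner(img, 10, 10)       # the square's corner fires
assert not is_fast_corner(img, 5, 5)     # a flat region does not
```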


Journal of Radioanalytical and Nuclear Chemistry | 1997

Determination of trace amounts of rare earth and representative elements in waste water samples by chemical neutron activation analysis

S. J. Yeh; Che-Tsung Lin; C. S. Tsai; Hung-Wei Yang; Chiung‐Huei Ke

An attempt was made to establish a reliable method using chemical neutron activation analysis for the surveillance of pollutants in waste waters released by plants manufacturing various kinds of products. Since the preconcentration process played an important role in the entire analysis, special precaution was taken to confirm that the recovery efficiencies for pollutant ions were satisfactory during preconcentration. It was also re-examined that the Langmuir adsorption isotherm was well obeyed by all ions under investigation. In recent years, significant amounts of rare earth compounds and other raw materials containing representative elements have been imported and consumed to meet the demand arising from rapid progress in new manufacturing technology. Samples were collected from ten different production lines, and potential pollutants were determined using the Tsing Hua Open-pool Reactor. It is noteworthy that the specimens obtained by this preconcentration process would also be usable for Inductively Coupled Plasma Mass Spectrometry analysis for supplementary and/or comparison purposes.
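For reference, the Langmuir adsorption isotherm the authors re-verify relates the adsorbed amount $q_e$ to the equilibrium concentration $C_e$; the symbols below are the standard ones ($q_{\max}$ the monolayer capacity, $K$ the equilibrium constant), not necessarily the paper's notation:

```latex
q_e = \frac{q_{\max} K C_e}{1 + K C_e}
\qquad\Longleftrightarrow\qquad
\frac{C_e}{q_e} = \frac{1}{q_{\max} K} + \frac{C_e}{q_{\max}}
```

The linearized form on the right is what is typically fitted to check that adsorption during preconcentration follows the isotherm.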


Vehicular Technology Conference | 2015

Evaluation, Design and Application of Object Tracking Technologies for Vehicular Technology Applications

Che-Tsung Lin; Long-Tai Chen; Yuan-Fang Wang

Advances in video technology have enabled its wide adoption in the auto industry. Today, many vehicles are equipped with backup, front-looking, and side-looking cameras that allow the driver to easily monitor the traffic around the vehicle for enhanced safety. This paper reports our research on evaluating many existing object tracking techniques and proposes a new tracker design and its application to 3D environmental mapping in vehicular technology applications. The contribution of our research is four-fold: (1) we evaluate a large collection of state-of-the-art trackers using multiple criteria relevant to vehicular technology applications; (2) we show how to derive useful evaluation metrics from public-domain, real-world driving videos that do not come with ground-truth information on pixel tracking; (3) we propose a new tracker geared specifically toward vehicular technology applications and show that it achieves tracking accuracy that outperforms SIFT and is on par with the state-of-the-art optical-flow tracking algorithm, which has the best accuracy in our evaluation, while being 600 times more efficient than optical flow and 7 times more efficient than SIFT; and (4) we validate the new tracker on a 3D environmental map building application and show that it obtains results comparable to SIFT with a significant saving in runtime.


Vehicular Technology Conference | 2013

Front Vehicle Blind Spot Translucentization Based on Augmented Reality

Che-Tsung Lin; Yu-Chen Lin; Long-Tai Chen; Yuan-Fang Wang

This paper proposes a new vehicle blind-spot elimination system that utilizes on-board videos captured by other vehicles and by the host vehicle. This information is exchanged via WAVE/DSRC devices to achieve collaborative safety. The preceding vehicle that fully or partially blocks the host vehicle's field of view can be translucentized in the video captured by the host vehicle, so the host driver can visually check the driving environment in front of the preceding vehicle.
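Once the transmitted front-camera view is registered to the host image, the translucentization itself reduces to alpha-blending it into the occluded region. A minimal sketch under that assumption (the function name, mask, and alpha value are illustrative; the registration step is omitted):

```python
import numpy as np

def translucentize(host_view, front_view, mask, alpha=0.5):
    """Alpha-blend the preceding vehicle's transmitted view into the
    region of the host image it occludes (mask == True).  alpha is the
    opacity kept for the blocking vehicle, an illustrative value."""
    out = host_view.astype(float).copy()
    out[mask] = alpha * host_view[mask] + (1 - alpha) * front_view[mask]
    return out

host = np.full((4, 4), 100.0)
front = np.full((4, 4), 200.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                  # region blocked by the front vehicle
blended = translucentize(host, front, mask)
assert blended[0, 0] == 100.0          # unmasked pixels untouched
assert blended[1, 1] == 150.0          # 0.5 * 100 + 0.5 * 200
```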


Archive | 2018

AugGAN: Cross Domain Adaptation with GAN-Based Data Augmentation

Sheng-Wei Huang; Che-Tsung Lin; Shu-Ping Chen; Yen-Yi Wu; Po-Hao Hsu; Shang-Hong Lai

Deep learning based image-to-image translation methods aim at learning the joint distribution of two domains and finding transformations between them. Although recent GAN (Generative Adversarial Network) based methods have shown compelling results, they are prone to fail at preserving image objects and maintaining translation consistency, which reduces their practicality on tasks such as generating large-scale training data for different domains. To address this problem, we propose a structure-aware image-to-image translation network, which is composed of encoders, generators, discriminators, and parsing nets for the two domains in a unified framework. The proposed network generates more visually plausible images than competing methods on different image-translation tasks. In addition, we quantitatively evaluate the different methods by training Faster-RCNN and YOLO with datasets generated from the image-translation results and demonstrate significant improvements in detection accuracy with the proposed image-object preserving network.


Vehicular Technology Conference | 2016

Robust and Efficient Tracking with Large Lens Distortion for Vehicular Technology Applications

Che-Tsung Lin; Long-Tai Chen; Pai-Wei Cheng; Yuan-Fang Wang

Advances in video technology have enabled its wide adoption in the auto industry. Today, many vehicles are equipped with backup, front-looking, and side-looking cameras that allow the driver to easily monitor traffic around the vehicle for enhanced safety. One difficulty with performing automated image analysis on a vehicle's onboard video is the significant lens distortion these sensors use to cover a large field of view around the vehicle. This paper reports our research on a tracking scheme that improves the accuracy and density of object tracking in the presence of large lens distortion. The contribution of our research is four-fold: (1) we evaluated a large collection of state-of-the-art trackers to understand their deficiencies when applied to videos with large lens distortion; (2) we showed how to derive useful evaluation metrics from public-domain, real-world driving videos that do not come with ground-truth information on pixel tracking; (3) we identified many enhancement techniques that can potentially improve the poor performance of current trackers on such videos; and (4) we performed a systematic study to validate the efficacy of these enhancement techniques and proposed a new tracker design that achieves substantial improvement over the state-of-the-art in both accuracy and density, based on a rigorous precision-vs-recall analysis.


Asia Pacific Signal and Information Processing Association Annual Summit and Conference | 2016

Video stabilization with distortion correction for wide-angle lens dashcam

Hsin-Yuan Huang; Shuo-Han Yeh; Che-Tsung Lin; Shang-Hong Lai

Video stabilization is of fundamental importance for providing high-quality video from a non-stationary camera. However, conventional video stabilization methods do not consider lens distortion, because inter-frame translation is mostly estimated under the assumption of a pure geometric transform. This paper proposes a video stabilization algorithm that models the image distortion with a division model and estimates the vertical image translation, so as to better stabilize consecutive images captured by a wide-angle dashcam mounted on a vehicle. In addition, to quantitatively assess the stabilization results, several synthetic vibration sequences with associated ground truth (GT) are generated. Using these sequences, we demonstrate that our algorithm outperforms other methods because the distortion introduced by the wide-angle lens is also modeled. Moreover, the proposed algorithm achieves nearly equivalent stabilization results from slight to heavy degrees of vibration. The proposed datasets and the experimental results for both synthetic and real driving data are available on the web.
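The single-parameter division model used here to describe wide-angle distortion maps a distorted point back toward its undistorted position by rescaling its offset from the distortion center. A hedged numpy sketch (the function name and the sample λ are illustrative):

```python
import numpy as np

def undistort_division(pts, center, lam):
    """Single-parameter division model:
    p_u = c + (p_d - c) / (1 + lam * r_d**2),  r_d = |p_d - c|.
    lam < 0 corrects the barrel distortion typical of wide-angle
    dashcam lenses; lam = 0 is the identity."""
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)

pts = np.array([[3.0, 4.0]])
c = np.array([0.0, 0.0])
assert np.allclose(undistort_division(pts, c, 0.0), pts)   # identity at lam=0
out = undistort_division(pts, c, -0.01)                    # r_d^2 = 25
assert np.allclose(out, pts / 0.75)                        # pushed outward
```

Stabilization then estimates the inter-frame translation on the undistorted coordinates rather than on the raw pixels.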


International Conference on Intelligent Transportation Systems | 2014

Intelligent Surveillance System with See-Through Technology

Yu-Chen Lin; Che-Tsung Lin; Cheng-Chuan Chang; Long-Tai Chen

This paper proposes a new intelligent surveillance system for parking lot management in underground environments. Feature-based image stitching and image blending techniques fuse the video streams captured by multiple cameras into a single output in which blind areas can be seen through. The system allows anyone to immediately understand what is going on in the whole monitored area at a glance, without prior geometric knowledge of the place, because the blind areas are rendered translucent. The image stitching process consists of four main parts: (1) feature correspondence detection and fundamental matrix estimation; (2) outlier filtering based on epipolar geometry and RANSAC; (3) computing the homography matrices between each pair of images; and (4) projectively warping the images with their corresponding homography matrices and fusing the non-overlapping parts of the warped images. Finally, translucent blending is applied to eliminate the blind areas inside the stitched image, with the cameras behind each occluding pillar providing the pixels for translucentization. We implemented a preliminary system with six surveillance cameras in an underground parking lot, and experiments on real-world video sequences verify the proposed design.
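Step (3) of the pipeline, estimating the homography between an image pair, is classically done with the Direct Linear Transform; within RANSAC (step 2) the same estimator runs on candidate inlier sets. A self-contained sketch (the function name is illustrative, and a production version would normalize the points first):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H mapping
    src (Nx2) to dst (Nx2), N >= 4, as the null vector of the stacked
    constraint matrix (smallest right singular vector)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix scale (and sign)

# Recover a known homography from exact correspondences
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1.0, 0.2, 5.0], [0.1, 1.1, -3.0], [0.001, 0.002, 1.0]])
p = np.hstack([src, np.ones((4, 1))]) @ H_true.T
dst = p[:, :2] / p[:, 2:3]
H = estimate_homography(src, dst)
assert np.allclose(H, H_true, atol=1e-6)
```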


International Conference on Control and Automation | 2014

In-image rain wiper elimination for vision-based Advanced Driver Assistance Systems

Che-Tsung Lin; Yu-Chen Lin; Long-Tai Chen; Yuan-Fang Wang

This paper proposes a new algorithm to eliminate wiper interference in a vehicle's onboard video to improve the detection rate of vision-based Advanced Driver Assistance Systems (ADAS), such as Forward Collision Warning (FCW) systems. On rainy days, the windshield wipers periodically and partially block the appearance of the road obstacles that these early-warning systems are meant to detect, and hence impair their performance. The wiper pixels are in-painted by (1) localizing the pixels belonging to the wipers and (2) replacing them with non-wiper pixels extracted from an adjacent video frame without such blockage. Finally, since it is impossible to capture the same scene with and without wiper interference, a quantitative analysis framework that blends wipers into wiper-free images is also proposed to assess the detection rate on data with wipers, without wipers, and with wipers eliminated by our algorithm. The experimental results show that the segmented and in-painted wiper pixels blend in unobtrusively with the surrounding pixels and that the FCW detection rate is indeed improved.
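Step (2) of the in-painting, replacing wiper pixels with co-located pixels from an unblocked adjacent frame, is a masked copy at its simplest. A toy sketch under the assumption that the frames are already aligned (the wiper-localization step that produces the mask is omitted):

```python
import numpy as np

def remove_wiper(cur, prev, wiper_mask):
    """Replace pixels occluded by the wiper (mask == True) with the
    co-located pixels of an adjacent frame without the blockage."""
    return np.where(wiper_mask, prev, cur)

cur = np.array([[1, 2], [3, 4]])       # current frame, wiper over two pixels
prev = np.array([[9, 9], [9, 9]])      # adjacent frame with no blockage
mask = np.array([[True, False], [False, True]])
out = remove_wiper(cur, prev, mask)
assert out.tolist() == [[9, 2], [3, 9]]
```

In practice the adjacent frame would first be motion-compensated toward the current one before the masked copy.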

Collaboration


Dive into Che-Tsung Lin's collaboration.

Top Co-Authors

Long-Tai Chen (Industrial Technology Research Institute)
Yuan-Fang Wang (University of California)
Shang-Hong Lai (National Tsing Hua University)
Chia-Wen Lin (National Tsing Hua University)
Li-Wei Kang (National Yunlin University of Science and Technology)
Wei-Cheng Liu (Industrial Technology Research Institute)
C. S. Tsai (National Tsing Hua University)
Chi-Wei Lin (Industrial Technology Research Institute)