
Publication


Featured research published by Li-Wei Kang.


International Conference on Acoustics, Speech, and Signal Processing | 2009

Distributed compressive video sensing

Li-Wei Kang; Chun-Shien Lu

Low-complexity video encoding has been applicable to several emerging applications. Recently, distributed video coding (DVC) has been proposed to reduce encoding complexity to the order of that for still-image encoding. In addition, compressive sensing (CS) makes it possible to directly capture compressed image data efficiently. In this paper, by integrating the respective characteristics of DVC and CS, a distributed compressive video sensing (DCVS) framework is proposed to simultaneously capture and compress video data, where almost all computational burdens can be shifted to the decoder, resulting in a very low-complexity encoder. At the decoder, compressed video can be efficiently reconstructed using a modified GPSR (gradient projection for sparse reconstruction) algorithm. With the assistance of the proposed initialization and stopping criteria for GPSR, derived from statistical dependencies among successive video frames, our modified GPSR algorithm terminates faster and reconstructs better video quality. Simulations demonstrate that our DCVS method outperforms three known CS reconstruction algorithms.
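The decoder-side recovery step can be sketched as follows. This is a minimal proximal-gradient (ISTA) solver for the same l1-regularized least-squares objective that GPSR targets, not the authors' GPSR implementation; the `x0` argument merely mirrors their idea of warm-starting from a neighboring frame, and all sizes in the demo are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, tau=0.1, step=None, iters=200, x0=None):
    """Solve min_x 0.5*||y - Ax||^2 + tau*||x||_1 by proximal gradient descent.
    x0 allows a warm start (e.g. from a reconstructed neighboring frame)."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                 # gradient of the quadratic term
        x = soft_threshold(x - step * grad, step * tau)
    return x

# toy demo: recover a sparse vector from random CS measurements
rng = np.random.default_rng(0)
n, m, k = 100, 50, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(A, y, tau=0.01, iters=500)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```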


IEEE Transactions on Image Processing | 2012

Automatic Single-Image-Based Rain Streaks Removal via Image Decomposition

Li-Wei Kang; Chia-Wen Lin; Yu-Hsiang Fu

Rain removal from video is a challenging problem and has recently been investigated extensively. Nevertheless, the problem of rain removal from a single image has rarely been studied in the literature; with no temporal information among successive images to exploit, the problem is even more challenging. In this paper, we propose a single-image rain removal framework that formulates rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into low-frequency and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a “rain component” and a “nonrain component” by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
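The first decomposition step can be sketched as below: a naive bilateral filter yields the low-frequency part, and the high-frequency residual is where rain streaks concentrate. This is an unoptimized sketch with assumed parameter values; the dictionary-learning stage that further splits the HF part is not shown.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Naive O(N * window) bilateral filter for a 2-D grayscale image in [0, 1]."""
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='reflect')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight: penalize intensity differences to preserve edges
            rng_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            weight = spatial * rng_w
            out[i, j] = (weight * patch).sum() / weight.sum()
    return out

def split_frequency(img):
    """Decompose img into a low-frequency (bilateral-filtered) part and a
    high-frequency residual, where streak-like details live."""
    lf = bilateral_filter(img)
    return lf, img - lf
```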


Visual Communications and Image Processing | 2010

Dynamic measurement rate allocation for distributed compressive video sensing

Hung-Wei Chen; Li-Wei Kang; Chun-Shien Lu

We address the important issue of fully low-cost, low-complexity video encoding for use in resource-limited sensors/devices. Conventional distributed video coding (DVC) does not actually meet this requirement because the acquisition of video sequences still relies on a high-cost mechanism (sampling + compression). Recently, we proposed a distributed compressive video sensing (DCVS) framework to directly capture compressed video data, called measurements, while exploiting correlations among successive frames for video reconstruction at the decoder. The core idea is to integrate the respective characteristics of DVC and compressive sensing (CS) to achieve a CS-based, single-pixel-camera-compatible video encoder. At the DCVS decoder, video reconstruction can be formulated as a convex unconstrained optimization problem, solved via the sparse coefficients with respect to some basis functions. Nevertheless, the issue of measurement rate allocation has not yet been considered in the literature. In fact, different measurement rates should be adaptively assigned to different local regions, according to the sparsity of each region, to improve reconstruction quality. This paper investigates dynamic measurement rate allocation in block-based DCVS, which adaptively adjusts measurement rates by estimating the sparsity of each block via feedback information. Simulation results indicate the effectiveness of our scheme. It is worth noting that our goal is to develop a novel, fully low-complexity video compression paradigm via the emerging compressive sensing and sparse representation technologies, and to provide an alternative scheme for environments where raw video data is not available, rather than to compete in compression performance against current compression standards (e.g., H.264/AVC) or DVC schemes, which require raw data for encoding.
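The allocation idea can be sketched as follows. Here mean-removed block energy serves as a crude stand-in for the decoder-side sparsity estimate fed back in the paper; block size, the 25% overall rate, and the proportional rule are illustrative assumptions.

```python
import numpy as np

def allocate_rates(frame, block=8, total_meas=None):
    """Split a frame into blocks and assign each block a measurement count
    proportional to its estimated 'non-sparsity' (mean-removed energy here,
    standing in for a sparsity estimate fed back from the decoder)."""
    h, w = frame.shape
    # reshape into a (rows, cols, block, block) grid of non-overlapping blocks
    blocks = frame.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    # energy after removing each block's mean (DC): rough texture/sparsity proxy
    energy = ((blocks - blocks.mean(axis=(2, 3), keepdims=True)) ** 2).sum(axis=(2, 3))
    if total_meas is None:
        total_meas = (h * w) // 4            # overall 25% measurement rate
    weights = energy / energy.sum()
    # every block gets at least one measurement; busy blocks get many more
    return np.maximum(1, np.round(weights * total_meas)).astype(int)
```

Flat blocks thus receive only a token measurement budget, while textured (less sparse) blocks absorb most of the total rate.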


IEEE Transactions on Multimedia | 2011

Feature-Based Sparse Representation for Image Similarity Assessment

Li-Wei Kang; Chao-Yung Hsu; Hung-Wei Chen; Chun-Shien Lu; Chih-Yang Lin; Soo-Chang Pei

Assessment of image similarity is fundamentally important to numerous multimedia applications. The goal of similarity assessment is to automatically assess the similarities among images in a perceptually consistent manner. In this paper, we interpret the image similarity assessment problem as an information fidelity problem. More specifically, we propose a feature-based approach to quantify the information that is present in a reference image and how much of this information can be extracted from a test image to assess the similarity between the two images. Here, we extract the feature points and their descriptors from an image, followed by learning the dictionary/basis for the descriptors in order to interpret the information present in this image. Then, we formulate the problem of image similarity assessment in terms of sparse representation. To evaluate the applicability of the proposed feature-based sparse representation for image similarity assessment (FSRISA) technique, we apply FSRISA to three popular applications, namely, image copy detection, retrieval, and recognition, by properly formulating them as sparse representation problems. Promising results have been obtained through simulations conducted on several public datasets, including the Stirmark benchmark, Corel-1000, COIL-20, COIL-100, and Caltech-101 datasets.
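The sparse-representation formulation can be sketched as below: test descriptors are sparse-coded over a dictionary built from the reference image, and low reconstruction residual indicates shared structure. This uses a basic orthogonal matching pursuit and a simple residual-based score as stand-ins; the paper's actual descriptors, dictionary learning, and scoring are not reproduced.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: sparse-code y over dictionary D
    (columns assumed unit-norm), selecting at most k atoms."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef              # orthogonalize against picks
    return idx, residual

def similarity(D_ref, test_desc, k=3):
    """1 minus the mean relative residual of test descriptors (columns) coded
    over the reference dictionary: higher means more shared structure."""
    errs = []
    for y in test_desc.T:
        _, r = omp(D_ref, y, k)
        errs.append(np.linalg.norm(r) / (np.linalg.norm(y) + 1e-12))
    return 1.0 - float(np.mean(errs))
```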


IEEE Transactions on Multimedia | 2014

Self-Learning Based Image Decomposition With Applications to Single Image Denoising

De-An Huang; Li-Wei Kang; Yu-Chiang Frank Wang; Chia-Wen Lin

Decomposition of an image into multiple semantic components has been an active research topic for various image processing applications such as image denoising, enhancement, and inpainting. In this paper, we present a novel self-learning based image decomposition framework. Building on the recent success of sparse representation, the proposed framework first learns an over-complete dictionary from the high-spatial-frequency parts of the input image for reconstruction purposes. We then perform unsupervised clustering on the observed dictionary atoms (and their corresponding reconstructed image versions) via affinity propagation, which allows us to identify image-dependent components with similar context information. When applying the proposed method to image denoising, we are able to automatically determine the undesirable patterns (e.g., rain streaks or Gaussian noise) among the derived image components directly from the input image, so that the task of single-image denoising can be addressed. Unlike prior image processing work based on sparse representation, our method does not need to collect training image data in advance, nor do we assume image priors such as the relationship between input and output image dictionaries. We conduct experiments on two denoising problems: single-image denoising with Gaussian noise and rain removal. Our empirical results confirm the effectiveness and robustness of our approach, which is shown to outperform state-of-the-art image denoising algorithms.
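The atom-clustering step can be sketched as below. Plain k-means with farthest-point seeding stands in for the affinity propagation the paper uses (affinity propagation needs no preset cluster count but is considerably longer to implement); the two-cluster setup is purely illustrative.

```python
import numpy as np

def cluster_atoms(D, k=2, iters=20):
    """Group dictionary atoms (columns of D) into context-similar components.
    K-means with deterministic farthest-point seeding, as a lightweight
    stand-in for affinity propagation."""
    X = D.T                                   # one row per atom
    centers = [X[0]]
    for _ in range(k - 1):                    # greedy farthest-point seeding
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d2))])
    centers = np.stack(centers)
    for _ in range(iters):                    # standard Lloyd iterations
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```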


Picture Coding Symposium | 2010

Dictionary learning-based distributed compressive video sensing

Hung-Wei Chen; Li-Wei Kang; Chun-Shien Lu

We address the important issue of fully low-cost, low-complexity video compression for use in extremely resource-limited sensors/devices. Conventional motion estimation-based video compression and distributed video coding (DVC) techniques all rely on a high-cost mechanism: sensing/sampling and compression are performed disjointly, resulting in unnecessary consumption of resources, since most acquired raw video data are discarded in the (possibly) complex compression stage. In this paper, we propose a dictionary learning-based distributed compressive video sensing (DCVS) framework to “directly” acquire compressed video data. Embedded in the compressive sensing (CS)-based single-pixel camera architecture, DCVS can compressively sense each video frame in a distributed manner. At the DCVS decoder, video reconstruction can be formulated as an l1-minimization problem, solved via the sparse coefficients with respect to some basis functions. We investigate adaptive dictionary/basis learning for each frame based on training samples extracted from previously reconstructed neighboring frames, and argue that a much better basis can be obtained to represent the frame, compared with fixed-basis representations and recent popular “CS-based DVC” approaches that do not rely on dictionary learning.
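The frame-adaptive basis idea can be sketched as follows: patches from a previously reconstructed neighboring frame train a basis for the current frame. PCA via SVD is used here as a simple stand-in for the paper's dictionary learning, and the patch size and overlap are assumptions.

```python
import numpy as np

def patch_dictionary(prev_frame, patch=8, n_atoms=32):
    """Learn a frame-adaptive basis from overlapping patches of a previously
    reconstructed neighboring frame. PCA (via SVD) stands in for dictionary
    learning; columns of the result are orthonormal atoms."""
    h, w = prev_frame.shape
    rows = []
    step = patch // 2                          # 50% patch overlap
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            p = prev_frame[i:i + patch, j:j + patch].ravel()
            rows.append(p - p.mean())          # remove DC per patch
    X = np.stack(rows)                         # (num_patches, patch * patch)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_atoms].T                      # (patch * patch, n_atoms)
```

Sparse coefficients of the current frame's blocks with respect to these atoms can then be recovered by any l1-minimization solver at the decoder.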


Optics Express | 2013

Haze effect removal from image via haze density estimation in optical model

Chia-Hung Yeh; Li-Wei Kang; Ming-Sui Lee; Cheng-Yang Lin

Images and videos captured by optical devices are usually degraded by turbid media such as haze, smoke, fog, rain, and snow. Haze is the most common problem in outdoor scenes because of atmospheric conditions. This paper proposes a novel single-image dehazing framework to remove haze artifacts from images, in which we propose two novel image priors: the pixel-based dark channel prior and the pixel-based bright channel prior. Based on the two priors and the haze optical model, we estimate the atmospheric light via haze density analysis. We can then estimate the transmission map, followed by refining it with a bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity compared with the state-of-the-art approach based on the patch-based dark channel prior.
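The optical-model pipeline can be sketched as below with a per-pixel dark channel (min over color channels at each pixel, with no patch window). The atmospheric-light heuristic, the constants `omega` and `t_min`, and the omission of the bilateral refinement step are all simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np

def dehaze(img, omega=0.95, t_min=0.1):
    """Single-image dehazing sketch on an RGB image in [0, 1], inverting the
    optical model I = J*t + A*(1 - t)."""
    dark = img.min(axis=2)                      # pixel-based dark channel
    # atmospheric light A: mean color over the haziest (brightest-dark-channel) pixels
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark.ravel())[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # transmission estimate from the dark channel of the normalized image
    t = 1.0 - omega * (img / A).min(axis=2)
    t = np.clip(t, t_min, 1.0)                  # avoid division blow-up
    J = (img - A) / t[..., None] + A            # invert the optical model
    return np.clip(J, 0.0, 1.0)
```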


Information Sciences | 2014

Real-time background modeling based on a multi-level texture description

Chia-Hung Yeh; Chih-Yang Lin; Kahlil Muchtar; Li-Wei Kang

Background construction is the basis of object detection and tracking in machine vision systems. Traditional background modeling methods often require complicated computations and are sensitive to illumination changes. This paper proposes a novel block-based background modeling method based on a hierarchical coarse-to-fine texture description, which fully utilizes the texture characteristics of each incoming frame. The proposed method is efficient and can resist both illumination changes and shadow disturbance. The experimental results show that this method is suitable for real-world scenes and real-time applications.
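Why texture helps can be sketched as below: a per-block gradient-based descriptor is unchanged by a global brightness shift, so comparing blocks by texture rather than raw intensity resists illumination changes. The descriptor and threshold here are simple illustrative stand-ins, not the paper's multi-level description.

```python
import numpy as np

def block_texture(frame, block=8):
    """Coarse texture descriptor per block: mean absolute vertical/horizontal
    gradient (a simple stand-in for a multi-level texture description)."""
    gy = np.abs(np.diff(frame, axis=0))
    gx = np.abs(np.diff(frame, axis=1))
    h, w = frame.shape
    desc = np.zeros((h // block, w // block, 2))
    for bi in range(h // block):
        for bj in range(w // block):
            sy, sx = bi * block, bj * block
            desc[bi, bj, 0] = gy[sy:sy + block - 1, sx:sx + block].mean()
            desc[bi, bj, 1] = gx[sy:sy + block, sx:sx + block - 1].mean()
    return desc

def foreground_blocks(frame, bg_desc, thresh=0.05):
    """Flag blocks whose texture deviates from the background model; gradients
    (unlike raw intensity) are invariant to a global brightness offset."""
    return np.linalg.norm(block_texture(frame) - bg_desc, axis=2) > thresh
```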


IEEE Transactions on Circuits and Systems for Video Technology | 2013

Scene-Based Movie Summarization Via Role-Community Networks

Chia-Ming Tsai; Li-Wei Kang; Chia-Wen Lin; Weisi Lin

Video summarization techniques aim at condensing a full-length video into a significantly shortened version that still preserves the major semantic content of the original video. Movie summarization, a special class of video summarization, is particularly challenging, since the large variety of movie scenarios and film styles complicates the problem. In this paper, we propose a two-stage scene-based movie summarization method based on mining the relationships between role-communities, since the role-communities in earlier scenes are usually used to develop the role relationships in later scenes. In the analysis stage, we construct a social network to characterize the interactions between role-communities. The social power of each role-community is evaluated by the community's centrality value, and the role-communities are clustered into relevant groups based on these centrality values. In the summarization stage, a set of feasible summary combinations of scenes is identified, and an information-rich summary is selected from these candidates based on social power preservation. Our evaluation results show that in most test cases the proposed method achieves better subjective performance than attention-based and role-based summarization methods in terms of semantic content preservation.
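The social-power computation can be sketched as follows, using weighted degree centrality on a scene co-occurrence matrix. This is an illustrative stand-in; the paper's specific centrality measure and clustering are not reproduced, and the example matrix is made up.

```python
import numpy as np

def social_power(cooccur):
    """Normalized weighted degree centrality of each role-community from a
    symmetric scene co-occurrence matrix: communities that interact with more
    others, more often, get higher social power."""
    A = np.asarray(cooccur, dtype=float)
    np.fill_diagonal(A, 0.0)        # ignore self-co-occurrence
    deg = A.sum(axis=1)             # total interaction weight per community
    return deg / deg.sum()          # normalize to a distribution
```

A summary can then favor scene combinations whose included communities preserve most of this social-power mass.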


International Conference on Image Processing | 2009

Compressive sensing-based image hashing

Li-Wei Kang; Chun-Shien Lu; Chao-Yung Hsu

In this paper, a new image hashing scheme satisfying both robustness and security is proposed. We exploit the dimensionality reduction inherent in compressive sensing/sampling (CS) for image hash design. The gained benefits include (1) the hash size can be kept small and (2) the CS-based hash is computationally secure. We study the use of visual information fidelity (VIF) for hash comparison under Stirmark attacks. We further derive the relationships between the hash of an image and its MSE distortion and its visual quality as measured by VIF, respectively. Hence, based on hash comparisons, both the distortion and the visual quality of a query image can be approximately estimated without accessing its original version. We also derive the minimum distortion required to manipulate an image into being unauthentic, as a measure of the security of our scheme.
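The core construction can be sketched as below: the hash is a key-dependent random projection (the CS measurement step), so it stays small and cannot be reproduced without the secret key. Hash length, normalization, and the distance metric are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def cs_hash(img, hash_len=64, key=1234):
    """Image hash as a random projection: the projection matrix is generated
    from a secret key, which provides the computational security; the small
    measurement count keeps the hash compact."""
    rng = np.random.default_rng(key)
    x = img.ravel().astype(float)
    x = (x - x.mean()) / (x.std() + 1e-12)      # normalize for robustness
    Phi = rng.standard_normal((hash_len, x.size)) / np.sqrt(hash_len)
    return Phi @ x

def hash_distance(h1, h2):
    """Normalized Euclidean distance between two hashes."""
    return float(np.linalg.norm(h1 - h2) / np.sqrt(h1.size))
```

Because random projections approximately preserve distances, the hash distance between a query and its original tracks the query's distortion, which is the basis for the distortion/quality estimation described above.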

Collaboration


Dive into Li-Wei Kang's collaborations.

Top Co-Authors

Chia-Hung Yeh, National Sun Yat-sen University
Chia-Wen Lin, National Tsing Hua University
Chao-Yung Hsu, National Taiwan University
Kahlil Muchtar, National Sun Yat-sen University
Chih-Chung Hsu, National Tsing Hua University