Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chao-Yung Hsu is active.

Publication


Featured research published by Chao-Yung Hsu.


IEEE Transactions on Image Processing | 2012

Image Feature Extraction in Encrypted Domain With Privacy-Preserving SIFT

Chao-Yung Hsu; Chun-Shien Lu; Soo-Chang Pei

Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant and capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. As all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through the security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext-only attack and known-plaintext attack. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications.
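The additively homomorphic operations this line of work builds on can be illustrated with a toy Paillier-style scheme. The parameters below are tiny and insecure, chosen purely for illustration; this is a sketch of the underlying primitive, not the paper's implementation:

```python
import math
import random

# Toy Paillier cryptosystem with tiny, insecure parameters, purely to
# illustrate the additive homomorphism that PPSIFT relies on.
p, q = 293, 433                 # a real system uses ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)
# mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Multiplying ciphertexts adds plaintexts; exponentiating by a known
# constant scales the plaintext. Linear steps of SIFT (e.g. the
# difference-of-Gaussians filter, a weighted sum of pixels) reduce to
# exactly these two operations.
a, b = 1234, 5678
assert decrypt((encrypt(a) * encrypt(b)) % n2) == (a + b) % n
assert decrypt(pow(encrypt(a), 3, n2)) == (3 * a) % n
```

Non-linear steps, notably the comparisons inside keypoint detection, are the hard part that the paper addresses separately.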


IEEE Transactions on Multimedia | 2011

Feature-Based Sparse Representation for Image Similarity Assessment

Li-Wei Kang; Chao-Yung Hsu; Hung-Wei Chen; Chun-Shien Lu; Chih-Yang Lin; Soo-Chang Pei

Assessment of image similarity is fundamentally important to numerous multimedia applications. The goal of similarity assessment is to automatically assess the similarities among images in a perceptually consistent manner. In this paper, we interpret the image similarity assessment problem as an information fidelity problem. More specifically, we propose a feature-based approach to quantify the information that is present in a reference image and how much of this information can be extracted from a test image to assess the similarity between the two images. Here, we extract the feature points and their descriptors from an image, followed by learning the dictionary/basis for the descriptors in order to interpret the information present in this image. Then, we formulate the problem of the image similarity assessment in terms of sparse representation. To evaluate the applicability of the proposed feature-based sparse representation for image similarity assessment (FSRISA) technique, we apply FSRISA to three popular applications, namely, image copy detection, retrieval, and recognition by properly formulating them to sparse representation problems. Promising results have been obtained through simulations conducted on several public datasets, including the Stirmark benchmark, Corel-1000, COIL-20, COIL-100, and Caltech-101 datasets.
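The sparse-coding step can be sketched with a minimal orthogonal matching pursuit solver. The abstract does not prescribe a particular solver, so OMP here is a stand-in, and the dictionary and signal are synthetic:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y as a sparse
    combination of at most k columns (atoms) of dictionary D."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients on the chosen atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, float(np.linalg.norm(residual))

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms (dictionary)
y = 2.0 * D[:, 10] - 1.5 * D[:, 99]     # a genuinely 2-sparse signal
x, err = omp(D, y, k=2)
# A small reconstruction error is what FSRISA-style assessment reads
# as "the test image carries the reference image's information".
assert err < 1e-6
```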


ACM Multimedia | 2009

Secure and robust SIFT

Chao-Yung Hsu; Chun-Shien Lu; Soo-Chang Pei

Scale-invariant feature transform (SIFT) is a powerful tool extensively used in the pattern recognition and computer vision communities. However, the security of SIFT is relatively unexplored in the literature. This paper investigates a potential weakness of SIFT: its features can be deleted or destroyed while the image retains acceptable visual quality. We then propose an improved scheme that enhances the security of SIFT by introducing a key-based transform process applied to images. Experimental results demonstrate the effectiveness of our methods.


International Conference on Image Processing | 2009

Compressive sensing-based image hashing

Li-Wei Kang; Chun-Shien Lu; Chao-Yung Hsu

In this paper, a new image hashing scheme satisfying robustness and security is proposed. We exploit the dimensionality-reduction property inherent in compressive sensing/sampling (CS) for image hash design. The gained benefits include (1) the hash size can be kept small and (2) the CS-based hash is computationally secure. We study the use of visual information fidelity (VIF) for hash comparison under Stirmark attacks. We further derive the relationships between an image's hash and both its MSE distortion and its visual quality as measured by VIF. Hence, based on hash comparisons, both the distortion and the visual quality of a query image can be approximately estimated without accessing its original version. We also derive the minimum distortion required to manipulate an image into being unauthentic, as a measure of the security of our scheme.
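The dimensionality-reduction idea can be sketched as a keyed random projection followed by sign quantization. This is an illustrative stand-in for the paper's CS sampling matrix, not its actual construction:

```python
import random

def cs_hash(pixels, m=64, key=7):
    """CS-style hash sketch: project the flattened image onto m
    key-generated random directions and keep only the sign bits.
    The keyed Gaussian rows stand in for the secret sampling matrix."""
    rng = random.Random(key)
    bits = []
    for _ in range(m):
        phi = [rng.gauss(0, 1) for _ in pixels]   # one measurement row
        proj = sum(p * v for p, v in zip(pixels, phi))
        bits.append(1 if proj >= 0 else 0)
    return bits

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

img = [((i * 37) % 255) - 127 for i in range(256)]   # stand-in "image"
noisy = [p + 5 for p in img]                          # mild distortion
r2 = random.Random(99)
unrelated = [r2.uniform(-127, 127) for _ in range(256)]
# Robustness: a mildly distorted copy stays close in Hamming distance,
# while an unrelated image lands near half the bits away.
assert hamming(cs_hash(img), cs_hash(noisy)) < hamming(cs_hash(img), cs_hash(unrelated))
```

Keeping the projection matrix secret (keyed) is what makes such hashes hard to forge without the key.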


Proceedings of SPIE | 2011

Homomorphic encryption-based secure SIFT for privacy-preserving feature extraction

Chao-Yung Hsu; Chun-Shien Lu; Soo-Chang Pei

Privacy has received much attention but is still largely ignored in the multimedia community. In a cloud computing scenario where the server is resource-abundant and capable of finishing the designated tasks, it is envisioned that secure media retrieval and search with privacy preservation will be treated seriously. In view of the fact that scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to address the problem of secure SIFT feature extraction and representation in the encrypted domain. Since all the operations in SIFT must be moved to the encrypted domain, we propose a homomorphic encryption-based secure SIFT method for privacy-preserving feature extraction and representation based on the Paillier cryptosystem. In particular, homomorphic comparison is a must for SIFT feature detection but remains a challenging issue for homomorphic encryption methods. To overcome this problem, we investigate a quantization-like secure comparison strategy. Experimental results demonstrate that the proposed homomorphic encryption-based SIFT performs comparably to the original SIFT on image benchmarks, while additionally preserving privacy. We believe that this work is an important step toward privacy-preserving multimedia retrieval in environments where privacy is a major concern.


International Conference on Multimedia and Expo | 2010

Secure SIFT-based sparse representation for image copy detection and recognition

Li-Wei Kang; Chao-Yung Hsu; Hung-Wei Chen; Chun-Shien Lu

In this paper, we formulate the problems of image copy detection and image recognition in terms of sparse representation. To achieve robustness, security, and efficient storage of image features, we propose to extract compact local feature descriptors by constructing a basis for the SIFT-based feature vectors extracted from the secure SIFT domain of an image. Image copy detection can be efficiently accomplished based on the sparse representations and reconstruction errors of the features extracted from an image possibly manipulated by signal processing or geometric attacks. For image recognition, we show that the features of a query image can be represented as sparse linear combinations of the features extracted from the training images belonging to the same cluster. Hence, image recognition can also be cast as a sparse representation problem. We then formulate our sparse representation problem as an l1-minimization problem. Promising results for image copy detection and recognition have been verified through simulations conducted on several content-preserving attacks defined in the Stirmark benchmark and on the Caltech-101 dataset, respectively.
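The minimum-reconstruction-error decision rule can be sketched as follows. For brevity this sketch solves per-class least squares rather than the paper's l1-minimization; the classification rule (assign the query to the class with the smallest residual) is the same idea, and all data here are synthetic:

```python
import numpy as np

# Toy sparse-representation-style recognition: a query is assigned to
# the class whose training features reconstruct it with the smallest
# residual. Least squares stands in for the paper's l1 solver.
rng = np.random.default_rng(1)

def make_class(center, n=8, dim=16):
    # n noisy training feature vectors clustered around one center.
    return center[:, None] + 0.05 * rng.standard_normal((dim, n))

centers = [rng.standard_normal(16) for _ in range(3)]
train = [make_class(c) for c in centers]   # one feature matrix per class

def classify(query):
    residuals = []
    for A in train:
        coef, *_ = np.linalg.lstsq(A, query, rcond=None)
        residuals.append(np.linalg.norm(query - A @ coef))
    return int(np.argmin(residuals))       # smallest reconstruction error wins

query = centers[2] + 0.05 * rng.standard_normal(16)   # noisy sample of class 2
assert classify(query) == 2
```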


International Conference on Multimedia and Expo | 2008

Power-scalable multi-layer halftone video display for electronic paper

Chao-Yung Hsu; Chun-Shien Lu; Soo-Chang Pei

Video halftoning is a key technology for the new display device, electronic paper (e-paper). One challenging issue is how to save the limited power of a mobile e-paper device when a halftone video is displayed at various frame rates. In this paper, we propose a power-scalable multi-layer halftone video display scheme composed of layer coding, non-uniform sampling, and flicker rate reduction. Our method not only saves power more efficiently than state-of-the-art video halftoning technology but also keeps the quality of the halftone video nearly unchanged when power saving is additionally considered. Experimental results demonstrate the effectiveness of the proposed method.
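The kind of binary frames involved can be illustrated with classic ordered (Bayer) dithering, together with a count of pixel flips between frames, which mimics the flicker/power cost the paper targets. The layered, power-scalable machinery itself is not reproduced here:

```python
# Ordered (Bayer) dithering: a classic halftoning method, used here only
# to show the binary frames e-paper displays work with.
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def halftone(gray):
    """Map an 8-bit grayscale frame to a binary (0/1) halftone frame."""
    return [[1 if gray[y][x] > (BAYER4[y % 4][x % 4] + 0.5) * 16 else 0
             for x in range(len(gray[0]))] for y in range(len(gray))]

def changed_pixels(prev, curr):
    """E-paper pays power only for pixels that flip between frames, so
    counting flips mimics the flicker/power cost the paper reduces."""
    return sum(p != c for rp, rc in zip(prev, curr) for p, c in zip(rp, rc))

frame1 = [[(x * 16) % 256 for x in range(8)] for _ in range(8)]
frame2 = [row[:] for row in frame1]
frame2[0][0] = 255                       # one pixel changes between frames
h1, h2 = halftone(frame1), halftone(frame2)
assert changed_pixels(h1, h2) <= 1       # only the changed pixel may flip
```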


International Conference on Image Processing | 2008

Compression of halftone video for electronic paper

Chao-Yung Hsu; Chun-Shien Lu; Soo-Chang Pei

Video halftoning is a key technology for the innovative display known as electronic paper (e-paper). Since e-paper is power-limited, halftone video compression becomes an emerging issue but is still relatively unexplored. In this paper, this issue is addressed and a novel halftone video compression scheme is proposed. Our scheme is composed of three components: block decomposition, block-based halftone quantization, and source coding. We evaluate the proposed method by comparing lossless halftone video compression against the well-known JBIG2 standard. In addition, we demonstrate the rate-distortion performance of the proposed lossy halftone video compression method.
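Why structured binary halftone data compresses well can be illustrated with a minimal run-length coder. The paper's actual pipeline (block decomposition, halftone quantization, source coding) is more elaborate; this is only the underlying intuition:

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as (value, run-length) pairs,
    a minimal stand-in for the source-coding stage: long constant runs
    in halftone frames collapse to a few pairs."""
    runs, prev, count = [], bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = [0] * 20 + [1] * 12 + [0] * 32          # a structured halftone row
runs = rle_encode(row)
assert rle_decode(runs) == row                 # lossless round trip
assert len(runs) == 3                          # 64 bits collapse to 3 runs
```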


International Conference on Multimedia and Expo | 2013

Cross-camera vehicle tracking via affine invariant object matching for video forensics applications

Chao-Yung Hsu; Li-Wei Kang; Hong-Yuan Mark Liao

The recent deployment of very large-scale camera networks, consisting of fixed/moving surveillance cameras and vehicle video recorders, has opened a new direction in the object tracking problem. The major goal is to detect and track each vehicle within a large area, which can be applied to video forensics; for example, a suspected vehicle can be automatically identified to mine digital criminal evidence from a large amount of video data. In this paper, we propose an efficient cross-camera vehicle tracking technique via affine-invariant object matching. More specifically, we formulate the problem as invariant image feature matching among different camera viewpoints. To achieve vehicle matching, we first extract invariant image features based on ASIFT (affine and scale-invariant feature transform) for each detected vehicle in a camera network. Then, to improve the accuracy of ASIFT feature matching between images from different viewpoints, we propose to efficiently match feature points based on an observed spatially invariant property of ASIFT, as well as the min-hash technique. As a result, cross-camera vehicle tracking can be achieved efficiently and accurately. Experimental results demonstrate the efficacy of the proposed algorithm and its feasibility for video forensics applications.
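The min-hash idea used for fast feature matching can be sketched on sets of quantized feature IDs. The affine hash family and the feature sets below are illustrative assumptions, not the paper's exact construction:

```python
import random

def minhash_signature(feature_ids, num_hashes=64, seed=0):
    """Min-hash signature of a set of quantized feature IDs. The chance
    that two signatures agree at a position approximates the sets'
    Jaccard similarity, which makes cross-view matching cheap.
    A simple keyed affine hash family is used here for illustration."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min((x * 2654435761 + s) % (1 << 32) for x in feature_ids)
            for s in salts]

def similarity(sig_a, sig_b):
    # Fraction of signature positions that agree.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Two viewpoints of the same vehicle share most quantized features;
# an unrelated vehicle shares none (synthetic IDs for illustration).
view_a = set(range(0, 100))
view_b = set(range(10, 110))
other = set(range(500, 600))
sa, sb, so = (minhash_signature(v) for v in (view_a, view_b, other))
assert similarity(sa, sb) > similarity(sa, so)
```

Comparing short fixed-length signatures instead of full feature sets is what keeps matching fast across many cameras.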


International Conference on Internet Multimedia Computing and Service | 2013

Identification and tracking of players in sport videos

Chun-Wei Lu; Chih-Yang Lin; Chao-Yung Hsu; Ming-Fang Weng; Li-Wei Kang; Hong-Yuan Mark Liao

In this paper, we propose a novel framework to automatically perform player tracking and identification for sport videos filmed by a single pan-tilt-zoom camera from the court view. The proposed scheme consists of three parts: the first detects players with a deformable part model; the second recognizes jersey numbers via gradient differences and optical character recognition; and the third applies particle filters to track players. Experimental results demonstrate the efficacy of the proposed algorithm and its feasibility for sports video analysis.
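The particle-filter tracking stage can be sketched in one dimension, with the detector and jersey-number OCR stages abstracted into a noisy position measurement:

```python
import math
import random

# Minimal 1-D particle filter, illustrating only the tracking stage;
# the motion model and noise levels below are illustrative assumptions.
random.seed(42)
N = 500
true_pos = 0.0
particles = [random.uniform(-5, 5) for _ in range(N)]

for _ in range(30):
    true_pos += 1.0                                  # player moves right
    measurement = true_pos + random.gauss(0, 0.5)    # noisy detection
    # Predict: propagate each particle with the motion model plus noise.
    particles = [p + 1.0 + random.gauss(0, 0.3) for p in particles]
    # Weight: Gaussian likelihood of the measurement under each particle.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * 0.5 ** 2))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N
assert abs(estimate - true_pos) < 1.0    # the filter locked onto the target
```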

Collaboration


Dive into Chao-Yung Hsu's collaborations.

Top Co-Authors
Soo-Chang Pei

National Taiwan University

Li-Wei Kang

National Yunlin University of Science and Technology

Chia-Hung Yeh

National Sun Yat-sen University

Chia-Mu Yu

National Taiwan University

Chia-Tsung Lin

National Yunlin University of Science and Technology
