
Publication


Featured research published by Shiyu Chang.


Computer Vision and Pattern Recognition | 2013

Learning Locally-Adaptive Decision Functions for Person Verification

Zhen Li; Shiyu Chang; Feng Liang; Thomas S. Huang; Liangliang Cao; John R. Smith

This paper considers the person verification problem in modern surveillance and video retrieval systems: given a pair of face or human body images, decide whether they depict the same person, even if that person has not been seen before. Traditional methods usually learn a distance (or similarity) measure between images (e.g., via metric learning algorithms) and make decisions based on a fixed threshold. We show that this is insufficient and sub-optimal for the verification problem. This paper proposes to learn a decision function for verification that can be viewed as a joint model of a distance metric and a locally adaptive thresholding rule. We further formulate the inference of our decision function as a second-order large-margin regularization problem and provide an efficient algorithm in its dual form. We evaluate our algorithm on both human body verification and face verification problems. Our method outperforms not only classical metric learning algorithms, including LMNN and ITML, but also the state of the art in the computer vision community.
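
The decision function described above is second-order in the image pair. Below is a minimal sketch, in plain NumPy, of what such a joint metric-plus-adaptive-threshold score could look like; the parameter names (A, B, c, b) and the random values are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

def ladf_score(x, z, A, B, c, b):
    """Second-order decision score for an image pair (x, z): a quadratic
    form acts as a learned metric plus a locally adaptive threshold, and
    the pair is declared a match when the score is positive. Parameter
    names are illustrative, not the paper's notation."""
    return x @ A @ x + z @ A @ z + 2.0 * (x @ B @ z) + c @ (x + z) + b

# Toy usage with random parameters; in the paper the parameters are learned
# by solving a second-order large-margin problem in its dual form.
d = 8
rng = np.random.default_rng(0)
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))
c, b = rng.standard_normal(d), 0.1
x, z = rng.standard_normal(d), rng.standard_normal(d)
print("same person" if ladf_score(x, z, A, B, c, b) > 0 else "different person")
```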


Knowledge Discovery and Data Mining | 2015

Heterogeneous Network Embedding via Deep Architectures

Shiyu Chang; Wei Han; Jiliang Tang; Guo-Jun Qi; Charu C. Aggarwal; Thomas S. Huang

Data embedding is used in many machine learning applications to create low-dimensional feature representations that preserve the structure of data points in their original space. In this paper, we examine the scenario of a heterogeneous network with nodes and content of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. Creating a multidimensional embedding of such data opens the door to a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited effort has been made on embedding a network of scalable, dynamic and heterogeneous data. In such cases, both the content and the linkage structure provide important cues for creating a unified feature representation of the underlying network. In this paper, we design a deep embedding algorithm for networked data. A highly nonlinear multi-layered embedding function is used to capture the complex interactions between the heterogeneous data in a network. Our goal is to create a multi-resolution deep embedding function that reflects both the local and global network structures and makes the resulting embedding useful for a variety of data mining tasks. In particular, we demonstrate that the rich content and linkage information in a heterogeneous network can be captured by such an approach, so that similarities among cross-modal data can be measured directly in a common embedding space. Once this goal has been achieved, a wide variety of data mining problems can be solved by applying off-the-shelf algorithms designed for handling vector representations. Our experiments on real-world network datasets show the effectiveness and scalability of the proposed algorithm compared to state-of-the-art embedding methods.
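
As a rough illustration of mapping heterogeneous node types into one embedding space, the sketch below pushes two modalities through separate nonlinear multi-layer maps into a shared space and scores linked pairs with a toy contrastive-style objective. The architecture, dimensions, and loss are assumptions, not the paper's actual model.

```python
import numpy as np

def mlp(x, weights):
    """Simple multi-layer nonlinear map (tanh MLP), applied row-wise."""
    h = x
    for W in weights:
        h = np.tanh(h @ W)
    return h

def link_loss(emb_a, emb_b, linked, margin=1.0):
    """Toy pairwise objective: linked cross-modal nodes should be close in
    the shared space, unlinked ones pushed apart by a margin."""
    d2 = np.sum((emb_a - emb_b) ** 2, axis=1)
    return np.mean(np.where(linked, d2, np.maximum(0.0, margin - d2)))

rng = np.random.default_rng(0)
# Two node types with different raw feature dimensions (e.g., image vs. text).
img_feats, txt_feats = rng.standard_normal((5, 64)), rng.standard_normal((5, 32))
W_img = [rng.standard_normal((64, 32)) * 0.1, rng.standard_normal((32, 16)) * 0.1]
W_txt = [rng.standard_normal((32, 32)) * 0.1, rng.standard_normal((32, 16)) * 0.1]
linked = np.array([True, True, False, False, True])
print(link_loss(mlp(img_feats, W_img), mlp(txt_feats, W_txt), linked))
```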


Web Search and Data Mining | 2015

Negative Link Prediction in Social Media

Jiliang Tang; Shiyu Chang; Charu C. Aggarwal; Huan Liu

Signed network analysis has attracted increasing attention in recent years, in part because research suggests that negative links add value to the analytical process. A major impediment to their effective use is that most social media sites do not enable users to specify them explicitly. In other words, a gap exists between the importance of negative links and their availability in real data sets. It is therefore natural to explore whether negative links can be predicted automatically from commonly available social network data. In this paper, we investigate the novel problem of negative link prediction with only positive links and content-centric interactions in social media. We make a number of important observations about negative links and propose a principled framework, NeLP, which exploits positive links and content-centric interactions to predict negative links. Our experimental results on real-world social networks demonstrate that the proposed NeLP framework can accurately predict negative links using only positive links and content-centric interactions. Our detailed experiments also illustrate the relative importance of various factors to the effectiveness of the proposed framework.
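
The sketch below is a deliberately simplified stand-in for the kind of signal exploited here: pair features built from the positive-link structure (common friends) and a content-centric interaction count, scored by a logistic function. The specific features and the fixed weights are assumptions for illustration; in the actual framework these signals are derived and learned from data.

```python
import numpy as np

def pair_features(u, v, pos_adj, hostile_counts):
    """Illustrative pair features: shared positive neighbors plus a count
    of content-centric hostile interactions between u and v."""
    common_friends = np.sum(pos_adj[u] * pos_adj[v])
    return np.array([common_friends, hostile_counts[u, v], 1.0])

def predict_negative(u, v, pos_adj, hostile_counts, w):
    """Probability that the (u, v) link is negative under a toy logistic model."""
    score = pair_features(u, v, pos_adj, hostile_counts) @ w
    return 1.0 / (1.0 + np.exp(-score))

# Toy network: 4 users with a positive adjacency matrix and hostile-interaction counts.
pos_adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0]])
hostile_counts = np.zeros((4, 4))
hostile_counts[0, 3] = 5
w = np.array([-1.0, 0.8, -0.5])  # fixed here for illustration; learned in practice
print(predict_negative(0, 3, pos_adj, hostile_counts, w))
```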


Computer Vision and Pattern Recognition | 2011

Towards cross-category knowledge propagation for learning visual concepts

Guo-Jun Qi; Charu C. Aggarwal; Yong Rui; Qi Tian; Shiyu Chang; Thomas S. Huang

In recent years, knowledge transfer algorithms have become one of the most active research areas in learning visual concepts. Most existing learning algorithms focus on a knowledge transfer process that is specific to a given category. However, such a process may not be very effective when a particular target category has very few samples. In such cases, it is interesting to examine whether cross-category knowledge can be used to improve the learning process by exploring the knowledge in correlated categories. This task can be quite challenging due to variations in semantic similarities and differences between categories, which could either help or hinder the cross-category learning process. To address this challenge, we develop a cross-category label propagation algorithm, which directly propagates inter-category knowledge at the instance level between the source and the target categories. Furthermore, this algorithm can automatically detect conditions under which the transfer process can be detrimental to the learning process, which provides a way to know when the transfer of cross-category knowledge is both useful and desirable. We present experimental results on real image and video data sets to demonstrate the effectiveness of our approach.
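
As context for the propagation step, here is a generic graph label propagation sketch: labels from source-category instances spread over a similarity graph to unlabeled target-category instances, with known labels clamped each iteration. This is textbook label propagation over assumed toy data, not the paper's cross-category formulation or its detrimental-transfer detection.

```python
import numpy as np

def propagate_labels(W, y_init, labeled_mask, alpha=0.8, iters=50):
    """Generic label propagation: unlabeled nodes absorb their neighbors'
    scores over a row-normalized similarity graph; labeled nodes stay fixed."""
    D_inv = 1.0 / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    S = D_inv * W                                  # row-normalized graph
    y = y_init.astype(float).copy()
    for _ in range(iters):
        y = alpha * (S @ y) + (1 - alpha) * y_init
        y[labeled_mask] = y_init[labeled_mask]     # clamp known labels
    return y

# Toy example: 3 labeled source-category instances and 2 unlabeled target instances.
W = np.array([[0, 1, 1, .6, 0], [1, 0, 1, .5, 0], [1, 1, 0, 0, .1],
              [.6, .5, 0, 0, 1], [0, 0, .1, 1, 0]], dtype=float)
y_init = np.array([1, 1, -1, 0, 0], dtype=float)
labeled = np.array([True, True, True, False, False])
print(propagate_labels(W, y_init, labeled))
```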


Computer Vision and Pattern Recognition | 2015

Self-tuned deep super resolution

Zhangyang Wang; Yingzhen Yang; Zhaowen Wang; Shiyu Chang; Wei Han; Jianchao Yang; Thomas S. Huang

Deep learning has been successfully applied to image super resolution (SR). In this paper, we propose a deep joint super resolution (DJSR) model to exploit both external and self similarities for SR. A Stacked Denoising Convolutional Auto Encoder (SDCAE) is first pre-trained on external examples with proper data augmentation. It is then fine-tuned with multi-scale self examples from each input, where the reliability of the self examples is explicitly taken into account. We also enhance model performance by sub-model training and selection. The DJSR model is extensively evaluated and compared with state-of-the-art methods, and shows noticeable performance improvements both quantitatively and perceptually on a wide range of images.
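
To make the "multi-scale self examples" idea concrete, the sketch below builds LR/HR training pairs from a single input by repeated box-average downscaling. The downscaling operator, the number of scales, and the omission of reliability weighting are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np

def downscale(img, factor=2):
    """Box-average downscaling (a stand-in for blur plus subsampling)."""
    h, w = img.shape[0] // factor * factor, img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def self_example_pairs(img, n_scales=3, factor=2):
    """Multi-scale self examples from one input: each scale pairs a
    further-downscaled image (LR) with the scale above it (HR), which can
    then be used to fine-tune an externally pre-trained model."""
    pairs, hr = [], img
    for _ in range(n_scales):
        lr = downscale(hr, factor)
        pairs.append((lr, hr))
        hr = lr
    return pairs

toy = np.random.default_rng(0).random((64, 64))
for lr, hr in self_example_pairs(toy):
    print("LR", lr.shape, "-> HR", hr.shape)
```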


IEEE Transactions on Image Processing | 2015

Learning Super-Resolution Jointly From External and Internal Examples

Zhangyang Wang; Yingzhen Yang; Zhaowen Wang; Shiyu Chang; Jianchao Yang; Thomas S. Huang

Single image super-resolution (SR) aims to estimate a high-resolution (HR) image from a low-resolution (LR) input. Image priors are commonly learned to regularize the otherwise seriously ill-posed SR problem, using either external LR-HR pairs or internal similar patterns. We propose joint SR to adaptively combine the advantages of both external and internal SR methods. We define two loss functions, using sparse coding for external examples and epitomic matching for internal examples, together with a corresponding adaptive weight that automatically balances their contributions according to their reconstruction errors. Extensive SR results demonstrate the effectiveness of the proposed method over existing state-of-the-art methods, which is also verified by our subjective evaluation study.
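
The adaptive weighting between the external and internal estimates can be pictured as below: each branch's SR estimate is weighted inversely to its reconstruction error via a softmax-style rule. The exact weighting function in the paper may differ; this is a hedged sketch of the balancing idea only.

```python
import numpy as np

def adaptive_weight(err_external, err_internal, sharpness=1.0):
    """Weight each branch inversely to its reconstruction error, so the
    branch that currently explains the patch better dominates."""
    scores = np.exp(-sharpness * np.array([err_external, err_internal]))
    return scores / scores.sum()          # (w_external, w_internal), sums to 1

def fuse(sr_external, sr_internal, err_external, err_internal):
    """Blend the two SR estimates of a patch with the adaptive weights."""
    w_ext, w_int = adaptive_weight(err_external, err_internal)
    return w_ext * sr_external + w_int * sr_internal

# Toy patches: here the internal (epitomic-matching) estimate has lower error.
rng = np.random.default_rng(0)
patch_ext, patch_int = rng.random((5, 5)), rng.random((5, 5))
print(fuse(patch_ext, patch_int, err_external=0.4, err_internal=0.1))
```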


Computer Vision and Pattern Recognition | 2016

Studying Very Low Resolution Recognition Using Deep Networks

Zhangyang Wang; Shiyu Chang; Yingzhen Yang; Ding Liu; Thomas S. Huang

Visual recognition research often assumes a sufficient resolution of the region of interest (ROI). This assumption is usually violated in practice, inspiring us to explore the Very Low Resolution Recognition (VLRR) problem. Typically, the ROI in a VLRR problem can be smaller than 16 × 16 pixels and is challenging to recognize even for human experts. We attempt to solve the VLRR problem using deep learning methods. Taking advantage of techniques primarily from super resolution, domain adaptation and robust regression, we formulate a dedicated deep learning method and demonstrate how these techniques are incorporated step by step. Any extra complexity, when introduced, is fully justified by both analysis and simulation results. The resulting Robust Partially Coupled Networks achieve feature enhancement and recognition simultaneously, allowing for both the flexibility to combat the LR-HR domain mismatch and robustness to outliers. Finally, the effectiveness of the proposed models is evaluated on three different VLRR tasks, including face identification, digit recognition and font recognition, on all of which they achieve very impressive performance.
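
The "partially coupled" idea can be sketched as two branches that keep a private first layer per domain (very-low-resolution vs. high-resolution inputs) but share their upper layers. The layer sizes, activations, and random weights below are assumptions for illustration; the paper's architecture, robust loss, and training scheme are not reproduced.

```python
import numpy as np

def branch(x, W_private, W_shared):
    """Partially coupled forward pass: a domain-specific first layer
    followed by layers shared between the LR and HR domains."""
    h = np.maximum(0.0, x @ W_private)       # private (domain-specific) layer
    for W in W_shared:
        h = np.maximum(0.0, h @ W)           # shared (coupled) layers
    return h

rng = np.random.default_rng(0)
W_lr = rng.standard_normal((16 * 16, 128)) * 0.05   # 16x16 very-low-res input
W_hr = rng.standard_normal((64 * 64, 128)) * 0.05   # corresponding HR input
W_shared = [rng.standard_normal((128, 64)) * 0.05,
            rng.standard_normal((64, 10)) * 0.05]   # shared layers ending in 10 classes
lr_img, hr_img = rng.random(16 * 16), rng.random(64 * 64)
print(branch(lr_img, W_lr, W_shared).shape, branch(hr_img, W_hr, W_shared).shape)
```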


IEEE Transactions on Image Processing | 2014

Blind Image Deblurring Using Spectral Properties of Convolution Operators

Guangcan Liu; Shiyu Chang; Yi Ma

Blind deconvolution aims to recover a sharp version of a given blurry image or signal when the blur kernel is unknown. Because this problem is ill-conditioned in nature, effective criteria pertaining to both the sharp image and the blur kernel are required to constrain the space of candidate solutions. While the problem has been studied extensively, it is still unclear how to regularize the blur kernel in an elegant, effective fashion. In this paper, we show that the blurry image itself actually encodes rich information about the blur kernel, and that such information can be recovered by exploiting a well-known phenomenon: sharp images are often high-pass, whereas blurry images are usually low-pass. More precisely, we show that the blur kernel can be retrieved by analyzing and comparing how the spectrum of an image, viewed as a convolution operator, changes before and after blurring. Subsequently, we establish a convex kernel regularizer that depends only on the given blurry image. Interestingly, the minimizer of this regularizer is guaranteed to give a good estimate of the desired blur kernel if the original image is sharp enough. By combining this powerful regularizer with prevalent non-blind deconvolution techniques, we show how the deblurring results can be significantly improved, through simulations on synthetic images and experiments on realistic images.
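
The phenomenon the regularizer builds on, that blurring suppresses high-frequency spectral energy, can be checked directly with an FFT, as in the sketch below. The energy-ratio statistic and the box blur are illustrative assumptions and are not the paper's convex kernel regularizer.

```python
import numpy as np

def spectral_energy_ratio(img, cutoff=0.25):
    """Fraction of spectral energy above a normalized frequency cutoff;
    sharp images keep more high-frequency energy than blurred ones."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    high = np.sqrt(xx ** 2 + yy ** 2) > cutoff
    energy = np.abs(F) ** 2
    return energy[high].sum() / energy.sum()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
# Blur via circular convolution in the Fourier domain.
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, (64, 64))))
print(spectral_energy_ratio(sharp), spectral_energy_ratio(blurred))
```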


International Conference on Data Mining | 2014

Factorized Similarity Learning in Networks

Shiyu Chang; Guo-Jun Qi; Charu C. Aggarwal; Jiayu Zhou; Meng Wang; Thomas S. Huang

The problem of similarity learning is relevant to many data mining applications, such as recommender systems, classification, and retrieval. This problem is particularly challenging in the context of networks, which contain different aspects such as topological structure, content, and user supervision. These different aspects need to be combined effectively in order to create a holistic similarity function. In particular, while most similarity learning methods in networks, such as SimRank, utilize the topological structure, user supervision and content are rarely considered. In this paper, Factorized Similarity Learning (FSL) is proposed to integrate the link, node content, and user supervision into a unified framework. The similarity is learned via matrix factorization, and the final similarities are approximated by the span of low-rank matrices. The proposed framework is further extended to a noise-tolerant version by adopting a hinge loss instead. To facilitate efficient computation on large-scale data, a parallel extension is developed. Experiments are conducted on the DBLP and CoRA datasets. The results show that FSL is robust, efficient, and outperforms the state of the art.
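
One way to picture the factorized formulation is a single low-rank factor whose outer product must simultaneously agree with link structure, content similarity, and the partially observed supervision, as in the toy objective below. The symmetric factorization S ≈ U Uᵀ, the squared losses, and the weights are assumptions, not FSL's exact objective.

```python
import numpy as np

def fsl_style_loss(U, S_link, S_content, S_sup, mask_sup, lams=(1.0, 1.0, 1.0)):
    """Toy factorized-similarity objective: the low-rank similarity U @ U.T
    is fitted jointly to link structure, content similarity, and the
    supervised pairs that are actually observed (mask_sup)."""
    S = U @ U.T
    l_link = np.mean((S - S_link) ** 2)
    l_content = np.mean((S - S_content) ** 2)
    l_sup = np.mean(((S - S_sup) * mask_sup) ** 2)
    return lams[0] * l_link + lams[1] * l_content + lams[2] * l_sup

rng = np.random.default_rng(0)
n, r = 6, 2
U = rng.standard_normal((n, r)) * 0.1                 # low-rank similarity factor
S_link = (rng.random((n, n)) > 0.6).astype(float)     # toy adjacency
S_content = rng.random((n, n))                        # toy content similarity
S_sup, mask_sup = np.eye(n), np.eye(n)                # only a few supervised pairs
print(fsl_style_loss(U, S_link, S_content, S_sup, mask_sup))
```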


International Conference on Data Mining | 2013

Multimedia LEGO: Learning Structured Model by Probabilistic Logic Ontology Tree

Shiyu Chang; Guo-Jun Qi; Jinhui Tang; Qi Tian; Yong Rui; Thomas S. Huang

Recent advances in multimedia research have generated a large collection of concept models, e.g., LSCOM and MediaMill 101, which have become accessible to other researchers. While most current research effort still focuses on building new concepts from scratch, little effort has been made to construct new concepts on top of the existing models already in the warehouse. To address this issue, we develop a new framework in this paper, termed LEGO, to seamlessly integrate both new target training examples and existing primitive concept models. LEGO treats the primitive concept models as LEGO bricks from which a potentially unlimited vocabulary of new concepts can be constructed. Specifically, LEGO first formulates logic operations as the connectors that combine existing concept models hierarchically in probabilistic logic ontology trees. LEGO then simultaneously incorporates new target training information to efficiently disambiguate the underlying logic tree and correct error propagation. We present extensive experimental results on a large vehicle-domain data set from ImageNet, and demonstrate significantly superior performance over existing state-of-the-art approaches that build new concept models from scratch.
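
The "logic connectors" can be pictured as probabilistic AND/OR operations applied to primitive concept scores, as in the toy tree below. The concept names, the independence assumption behind the product rules, and the fixed scores are illustrative only; the actual framework additionally learns to disambiguate the tree from target training data.

```python
import numpy as np

def p_and(ps):
    """Probabilistic AND of (assumed independent) concept scores."""
    return float(np.prod(np.asarray(ps)))

def p_or(ps):
    """Probabilistic (noisy) OR of (assumed independent) concept scores."""
    return float(1.0 - np.prod(1.0 - np.asarray(ps)))

# Tiny hypothetical logic ontology tree built from primitive concept scores:
# "emergency vehicle" = (ambulance OR fire_truck) AND on_road.
primitive_scores = {"ambulance": 0.7, "fire_truck": 0.2, "on_road": 0.9}
emergency_vehicle = p_and([
    p_or([primitive_scores["ambulance"], primitive_scores["fire_truck"]]),
    primitive_scores["on_road"],
])
print(round(emergency_vehicle, 3))
```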

Collaboration


Dive into Shiyu Chang's collaborations.

Top Co-Authors

Guo-Jun Qi (University of Central Florida)
Jiayu Zhou (Michigan State University)
Charu C. Aggarwal (Massachusetts Institute of Technology)
Xianming Liu (Harbin Institute of Technology)