Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Cewu Lu is active.

Publication


Featured research published by Cewu Lu.


International Conference on Computer Vision (ICCV) | 2013

Abnormal Event Detection at 150 FPS in MATLAB

Cewu Lu; Jianping Shi; Jiaya Jia

Speedy abnormal event detection meets the growing demand to process an enormous number of surveillance videos. Based on the inherent redundancy of video structures, we propose an efficient sparse combination learning framework. It achieves decent performance in the detection phase without compromising result quality. The short running time is guaranteed because the new method effectively turns the original complicated problem into one in which only a few inexpensive small-scale least-squares optimization steps are involved. Our method reaches high detection rates on benchmark datasets at a speed of 140-150 frames per second on average when running on an ordinary desktop PC in MATLAB.
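
To make the mechanism concrete, here is a minimal sketch (in Python with NumPy, not the paper's MATLAB code) of the detection step described above: a test feature is reconstructed with each learned combination via a small least-squares solve, and the event is flagged abnormal only if no combination fits it well. The function names, the squared-error criterion, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def is_abnormal(x, combinations, threshold):
    # Try to reconstruct the test feature x with each learned small basis
    # ("combination"). If even the best basis leaves a large residual,
    # the event fits no normal pattern and is flagged abnormal.
    best_err = np.inf
    for S in combinations:               # S: (d, k) basis with small k
        beta, *_ = np.linalg.lstsq(S, x, rcond=None)  # tiny least squares
        best_err = min(best_err, float(np.sum((x - S @ beta) ** 2)))
    return best_err > threshold

rng = np.random.default_rng(0)
combos = [rng.standard_normal((64, 3)) for _ in range(2)]
x = combos[0] @ np.array([1.0, -0.5, 0.2])     # lies in the first basis
print(is_abnormal(x, combos, threshold=1e-6))  # False: reconstructed well
```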


European Conference on Computer Vision (ECCV) | 2016

Visual Relationship Detection with Language Priors

Cewu Lu; Ranjay Krishna; Michael S. Bernstein; Li Fei-Fei

Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large, and it is difficult to obtain sufficient training examples for all of them. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to fine-tune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content-based image retrieval.
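
A hedged sketch of how such a combination might look: a visual detection score for a (subject, predicate, object) triple is modulated by a language prior computed from word-embedding similarity to triples seen in training. The specific prior and all names here are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def relationship_score(visual_score, triple, embed, train_triples):
    # Modulate a visual detection score for (subject, predicate, object)
    # by a toy language prior: similarity of the triple's concatenated
    # word embeddings to triples seen in training, so a rare triple like
    # "man riding horse" borrows evidence from "man riding bicycle".
    vec = lambda t: np.concatenate([embed[w] for w in t])
    prior = max(cosine(vec(triple), vec(t)) for t in train_triples)
    return visual_score * max(prior, 0.0)

rng = np.random.default_rng(1)
embed = {w: rng.standard_normal(50)
         for w in ["man", "riding", "pushing", "bicycle", "horse"]}
seen = [("man", "riding", "bicycle")]
print(relationship_score(0.8, ("man", "riding", "horse"), embed, seen))
```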


Computer Vision and Pattern Recognition (CVPR) | 2014

Learning Important Spatial Pooling Regions for Scene Classification

Di Lin; Cewu Lu; Renjie Liao; Jiaya Jia

We address the false-response influence problem encountered when learning and applying discriminative parts to construct the mid-level representation in scene classification. It is often caused by the complexity of latent image structure when convolving part filters with input images. This problem leaves the mid-level representation, even after pooling, insufficiently distinctive to classify input data into the correct categories. Our solution is to learn important spatial pooling regions along with their appearance. The experiments show that this new framework suppresses false responses and produces improved results on several datasets, including MIT-Indoor, 15-Scene, and UIUC 8-Sport. When combined with global image features, our method achieves state-of-the-art performance on these datasets.
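
The pooling idea can be illustrated with a small sketch: part-filter response maps are max-pooled inside learned spatial regions (given here as boolean masks) rather than a fixed grid. Shapes and names are assumptions; the paper learns the regions jointly with part appearance, which is omitted here.

```python
import numpy as np

def pooled_representation(response_maps, region_masks):
    # response_maps: (P, H, W) part-filter responses;
    # region_masks:  (R, H, W) boolean learned pooling regions.
    # Max-pool each response map inside each region instead of a fixed
    # spatial grid, yielding a (P * R)-dim mid-level representation.
    feats = [[resp[mask].max() if mask.any() else 0.0
              for mask in region_masks]
             for resp in response_maps]
    return np.asarray(feats).ravel()

rng = np.random.default_rng(2)
resp = rng.random((4, 8, 8))               # 4 part filters on an 8x8 grid
masks = np.zeros((2, 8, 8), dtype=bool)
masks[0, :4], masks[1, 4:] = True, True    # top / bottom halves as regions
print(pooled_representation(resp, masks).shape)   # (8,)
```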


Computer Vision and Pattern Recognition (CVPR) | 2013

Online Robust Dictionary Learning

Cewu Lu; Jianping Shi; Jiaya Jia

Online dictionary learning is particularly useful for processing large-scale and dynamic data in computer vision. It, however, faces the major difficulty of incorporating robust functions, rather than the squared data-fitting term, to handle outliers in training data. In this paper, we propose a new online framework enabling the use of an ℓ1 sparse data-fitting term in robust dictionary learning, notably enhancing the usability and practicality of this important technique. Extensive experiments validate our new framework.
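
As a rough illustration of the robust data-fitting idea, the sketch below performs one online update in which the residual is reweighted by 1/|r| (iteratively reweighted least squares), approximating the gradient of an ℓ1 fitting term so outlier entries pull on the dictionary far less. This is a generic IRLS sketch under stated assumptions, not the paper's solver; the ridge-plus-shrinkage coding step is also an assumption.

```python
import numpy as np

def online_robust_step(D, x, lam=0.1, lr=0.05, eps=1e-6):
    # One online update for a single incoming sample x. The data-fitting
    # residual is reweighted by 1/|r| (IRLS), which approximates the
    # gradient of an L1 fitting term, so outlier entries barely move D.
    a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    a = np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)   # cheap sparse code
    r = x - D @ a                        # per-entry residual
    w = 1.0 / (np.abs(r) + eps)          # IRLS weights for the L1 term
    D = D + lr * (w * r)[:, None] @ a[None, :]       # weighted grad step
    D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)  # keep atoms bounded
    return D, a

rng = np.random.default_rng(9)
D = rng.standard_normal((20, 5))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 0]
x[3] += 5.0                              # one grossly corrupted entry
D, a = online_robust_step(D, x)
print(a.round(2))
```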


Computer Vision and Pattern Recognition (CVPR) | 2015

Deep LAC: Deep localization, alignment and classification for fine-grained recognition

Di Lin; Xiaoyong Shen; Cewu Lu; Jiaya Jia

We propose a fine-grained recognition system that incorporates part localization, alignment, and classification in one deep neural network. This is a nontrivial process, as the input to the classification module should be functions that enable back-propagation in constructing the solver. Our major contribution is to propose a valve linkage function (VLF) for back-propagation chaining and form our deep localization, alignment and classification (LAC) system. The VLF can adaptively compromise the errors of classification and alignment when training the LAC model. It in turn helps update localization. The performance on fine-grained object data bears out the effectiveness of our LAC system.
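
One way to see why a smooth linkage matters: if the classifier consumed a hard crop of the localized part, no gradient would reach the localizer. The toy sketch below uses a sigmoid "soft crop" so the crop is differentiable in the localization outputs; the actual valve linkage function is different, and everything here (names, the window form) is an illustrative assumption.

```python
import numpy as np

def crop_soft(image, cx, cy, size, sharp=5.0):
    # A sigmoid window centered at (cx, cy) weights the image instead of
    # hard-indexing it, so a classifier's gradients can flow back to the
    # localization outputs (cx, cy, size) through this smooth function.
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    win_x = 1.0 / (1.0 + np.exp(-sharp * (size / 2 - np.abs(xs - cx))))
    win_y = 1.0 / (1.0 + np.exp(-sharp * (size / 2 - np.abs(ys - cy))))
    return image * win_x * win_y

img = np.random.default_rng(3).random((32, 32))
part = crop_soft(img, cx=16.0, cy=10.0, size=8)
print(part.shape)                          # weighted view of the part
```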


Computer Vision and Pattern Recognition (CVPR) | 2014

Range-Sample Depth Feature for Action Recognition

Cewu Lu; Jiaya Jia; Chi-Keung Tang

We propose a binary range-sample feature for depth data. It is based on τ tests and achieves reasonable invariance to changes in scale, viewpoint, and background. It is also robust to occlusion and data corruption. The descriptor runs at high speed thanks to its binary nature. Working with standard learning algorithms, it achieves state-of-the-art results on benchmark datasets in our experiments, along with impressively short running times.
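
A minimal sketch of the descriptor's flavor: each bit comes from a pairwise depth comparison (a τ test), and samples outside a plausible depth range are ignored, which is where the robustness to background and corruption comes from. The sampling scheme, the range handling, and all names are assumptions for illustration.

```python
import numpy as np

def range_sample_feature(patch, pairs, lo, hi, tau=0.0):
    # Each bit is a pairwise depth comparison (a "tau test"); samples
    # outside the plausible depth range [lo, hi] (background, corrupted
    # pixels) are ignored, which is the source of the robustness.
    bits = []
    for (y1, x1), (y2, x2) in pairs:
        d1, d2 = patch[y1, x1], patch[y2, x2]
        if lo <= d1 <= hi and lo <= d2 <= hi:
            bits.append(int(d1 - d2 > tau))
        else:
            bits.append(0)               # out-of-range sample: skip test
    return np.packbits(bits)             # compact binary descriptor

rng = np.random.default_rng(4)
patch = rng.uniform(0.5, 3.0, (16, 16))  # depth values in meters
pairs = [tuple(map(tuple, rng.integers(0, 16, (2, 2)))) for _ in range(32)]
print(range_sample_feature(patch, pairs, lo=0.8, hi=2.5))
```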


International Conference on Computational Photography (ICCP) | 2012

Contrast preserving decolorization

Cewu Lu; Li Xu; Jiaya Jia

Decolorization, the process of transforming a color image into a grayscale one, is a basic tool in digital printing, stylized black-and-white photography, and many single-channel image processing applications. In this paper, we propose an optimization approach aiming at maximally preserving the original color contrast. Our main contribution is to relax the strict order constraint for color mapping based on the human visual system, which enables the use of a bimodal distribution to constrain spatial pixel differences and allows automatic selection of suitable grayscales to preserve the original contrast. Both quantitative and qualitative evaluations bear out the effectiveness of the proposed method.
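
A toy version of the contrast-preserving search, under the assumption that the grayscale is a weighted sum of R, G, and B with weights chosen on a coarse grid to best preserve pairwise color contrast. The paper's bimodal energy is richer; this sketch only conveys the objective's spirit, and the correlation score is an assumption.

```python
import numpy as np
from itertools import product

def decolorize_weights(img, n_pairs=2000):
    # Search a coarse grid of nonnegative weights (wr + wg + wb = 1) for
    # the mapping g = wr*R + wg*G + wb*B whose pixel differences best
    # track the original color contrast on sampled pixel pairs.
    R, G, B = (img[..., c].ravel() for c in range(3))
    rng = np.random.default_rng(0)
    i, j = rng.integers(0, R.size, (2, n_pairs))
    target = np.sqrt((R[i]-R[j])**2 + (G[i]-G[j])**2 + (B[i]-B[j])**2)
    best, best_score = None, -np.inf
    grid = np.arange(0.0, 1.01, 0.1)
    for wr, wg in product(grid, grid):
        wb = 1.0 - wr - wg
        if wb < -1e-9:
            continue                     # weights must stay on the simplex
        g = wr * R + wg * G + wb * B
        score = np.corrcoef(np.abs(g[i] - g[j]), target)[0, 1]
        if score > best_score:
            best, best_score = (wr, wg, wb), score
    return best

img = np.random.default_rng(5).random((32, 32, 3))
print(decolorize_weights(img))
```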


Non-Photorealistic Animation and Rendering (NPAR) | 2012

Combining sketch and tone for pencil drawing production

Cewu Lu; Li Xu; Jiaya Jia

We propose a new system to produce pencil drawings from natural images. The results contain various natural strokes and patterns and are structurally representative. This is accomplished by combining tone and stroke structures in a novel way, as the two complement each other in generating visually constrained results. Prior knowledge of pencil drawing is also incorporated, making the two basic components robust against noise, strong texture, and significant illumination variation. By conveying edge, shadow, and shading information, our pencil drawing system reproduces a style artists use to understand visual data and depict them. Meanwhile, the results contain rich and well-ordered lines that vividly express the original scene.
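
A hedged toy combination of the two components the abstract names: a stroke layer derived from image gradients and a tone layer that imprints a pencil texture scaled by local brightness. The real system renders directional strokes and fits a parametric tone model; this multiplicative combination is a simplified assumption.

```python
import numpy as np

def pencil_sketch(gray, texture, stroke_gain=1.0):
    # Stroke layer: dark lines where image gradients are strong.
    gy, gx = np.gradient(gray)
    stroke = np.exp(-stroke_gain * np.hypot(gx, gy))
    # Tone layer: darker pixels imprint the pencil texture more densely.
    tone = texture ** (1.0 - gray)
    return np.clip(stroke * tone, 0.0, 1.0)

rng = np.random.default_rng(6)
gray = rng.random((64, 64))                 # stand-in for a natural image
texture = 0.7 + 0.3 * rng.random((64, 64))  # light pencil-grain texture
print(pencil_sketch(gray, texture).shape)
```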


International Conference on Computer Graphics and Interactive Techniques | 2016

A scalable active framework for region annotation in 3D shape collections

Li Yi; Vladimir G. Kim; Duygu Ceylan; I-Chao Shen; Mengyan Yan; Hao Su; Cewu Lu; Qixing Huang; Alla Sheffer; Leonidas J. Guibas

Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label, our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize the set of models to annotate and the set of models to verify based on the predicted impact of these actions on human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both the accuracy and the efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks, demonstrating it to be significantly more efficient at using human input than previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.
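
The annotate-propagate-verify cycle can be sketched as a loop. In the sketch below the CRF propagation is stood in for by a nearest-labeled-neighbor lookup in a similarity matrix, and human time is charged for both annotation and verification, echoing the unified utility function. All names, callbacks, and the selection heuristic are illustrative assumptions.

```python
import numpy as np

def active_annotation_loop(sims, budget, annotate, verify):
    # Skeleton of the annotate/propagate/verify cycle. `sims` is an
    # (n, n) shape-to-shape similarity matrix; `annotate(i)` returns a
    # human label and `verify(i, label)` returns True if a human accepts
    # a propagated label. Both human actions consume the time budget.
    n = sims.shape[0]
    labels, confident = {}, set()
    while len(confident) < n and budget > 0:
        pool = [i for i in range(n) if i not in confident]
        # Annotate the shape least similar to anything verified so far.
        pick = min(pool, key=lambda i: max((sims[i, j] for j in confident),
                                           default=0.0))
        labels[pick] = annotate(pick)
        confident.add(pick)
        budget -= 1
        for i in pool:                       # propagate + verify the rest
            if i in confident:
                continue
            nearest = max(confident, key=lambda j: sims[i, j])
            labels[i] = labels[nearest]      # stand-in for CRF propagation
            if budget > 0 and verify(i, labels[i]):
                confident.add(i)
                budget -= 1
    return labels

rng = np.random.default_rng(7)
S = rng.random((6, 6))
S = (S + S.T) / 2                            # symmetric toy similarities
print(active_annotation_loop(S, budget=4,
                             annotate=lambda i: f"part{i % 2}",
                             verify=lambda i, lab: True))
```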


International Journal of Computer Vision | 2014

Contrast Preserving Decolorization with Perception-Based Quality Metrics

Cewu Lu; Li Xu; Jiaya Jia

Converting color images into grayscale ones suffers from information loss. At the same time, it is a fundamental tool indispensable for single-channel image processing, digital printing, and monotone e-ink displays. In this paper, we propose an optimization framework aiming at maximally preserving color contrast. Our main contribution is threefold. First, we employ a bimodal objective function to alleviate the restrictive order constraint for color mapping. Second, we develop an efficient solver that allows automatic selection of suitable grayscales based on global contrast constraints. Third, we advocate a perception-based metric to measure contrast loss, as well as content preservation, in the produced grayscale images. It is among the first attempts in this field to quantitatively evaluate decolorization results.
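
A sketch in the spirit of the advocated metric: among sampled pixel pairs that are visibly contrasted in the original color image, measure the fraction that remain distinguishable in the grayscale output. The threshold value, the pair sampling, and the normalization here are assumptions, not the paper's exact definition.

```python
import numpy as np

def contrast_preserving_ratio(color, gray, tau=0.04, n_pairs=20000):
    # Among sampled pixel pairs that are visibly contrasted in the color
    # image (normalized RGB distance above tau), return the fraction that
    # remain distinguishable in the grayscale output.
    rng = np.random.default_rng(0)
    flat_c = color.reshape(-1, 3)
    flat_g = gray.ravel()
    i, j = rng.integers(0, flat_g.size, (2, n_pairs))
    dc = np.linalg.norm(flat_c[i] - flat_c[j], axis=1) / np.sqrt(3.0)
    visible = dc > tau
    kept = np.abs(flat_g[i] - flat_g[j]) > tau
    return float(kept[visible].mean()) if visible.any() else 1.0

img = np.random.default_rng(8).random((32, 32, 3))
naive = img.mean(axis=2)                   # channel average as a baseline
print(round(contrast_preserving_ratio(img, naive), 3))
```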

Collaboration


Dive into Cewu Lu's collaborations.

Top Co-Authors

Jiaya Jia, The Chinese University of Hong Kong
Chi-Keung Tang, Hong Kong University of Science and Technology
Di Lin, The Chinese University of Hong Kong
Yongyi Lu, Hong Kong University of Science and Technology
Hao-Shu Fang, Shanghai Jiao Tong University
Yonglu Li, Shanghai Jiao Tong University
Jianping Shi, The Chinese University of Hong Kong
Li Xu, The Chinese University of Hong Kong
Yao Xiao, Hong Kong University of Science and Technology