Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zefeng Ni is active.

Publication


Featured research published by Zefeng Ni.


International Conference on Image Processing | 2010

Distributed particle filter tracking with online multiple instance learning in a camera sensor network

Zefeng Ni; Santhoshkumar Sunderrajan; Amir M. Rahimi; B. S. Manjunath

This paper proposes a distributed algorithm for object tracking in a camera sensor network. At each camera node, an efficient online multiple instance learning algorithm is used to model the object's appearance. This is integrated with a particle filter for tracking on the camera's image plane. To improve tracking accuracy, each camera node shares its particle states with the others and fuses the multi-camera information locally. In particular, particle weights are updated according to the fused information, and the appearance model is then updated with the re-weighted particles. The effectiveness of the proposed algorithm is demonstrated on human tracking in challenging environments.
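The multi-camera fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names (`fuse_particle_weights`, `resample`) and the assumption that peer cameras share per-particle likelihoods aligned to a common particle set are hypothetical simplifications.

```python
import numpy as np

def fuse_particle_weights(local_weights, peer_likelihoods):
    """Re-weight a camera node's particles using likelihood vectors
    shared by peer cameras (assumed aligned to the same particle set)."""
    fused = np.asarray(local_weights, dtype=float)
    for lik in peer_likelihoods:
        fused = fused * np.asarray(lik, dtype=float)
    total = fused.sum()
    if total == 0:  # degenerate case: fall back to local weights
        fused = np.asarray(local_weights, dtype=float)
        total = fused.sum()
    return fused / total

def resample(particles, weights, rng):
    """Systematic resampling: draw particles in proportion to fused weights."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx]

rng = np.random.default_rng(0)
particles = rng.normal(size=(100, 2))           # toy 2-D image-plane states
local_w = np.ones(100) / 100
peer = [np.exp(-np.sum(particles**2, axis=1))]  # toy peer likelihood
w = fuse_particle_weights(local_w, peer)
particles = resample(particles, w, rng)
```

In a real deployment, each camera would first project its particles into a common coordinate frame before exchanging likelihoods.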


International Conference on Image Processing | 2009

Improving the quality of depth image based rendering for 3D Video systems

Zefeng Ni; Dong Tian; Sitaram Bhagavathy; Joan Llach; B. S. Manjunath

In 3D Video (3DV) applications, a reduced number of views plus depth maps are transmitted or stored. When there is a need to render virtual views in between the actual views, the technique of depth image based rendering (DIBR) can be used to generate the intermediate views. To address the problem of noisy depth information in 3DV systems, we propose novel methods that can be easily incorporated into DIBR to improve synthesized image quality. These include: (1) a heuristic scheme with adaptive splatting that blends multiple warped reference pixels based on their depth, warped pixel positions and camera parameters; (2) an approximation of the first scheme with up-sampling for fast processing; (3) boundary only splatting; and (4) view weighting based on hole distribution. Experimental results show that the proposed methods can improve synthesis quality significantly.
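The depth-based blending in scheme (1) and the hole-based view weighting in scheme (4) might be sketched roughly as follows. This is an illustrative simplification, not the paper's method; the Gaussian depth weighting with parameter `sigma` and the function names are assumptions.

```python
import numpy as np

def blend_warped_pixels(colors, depths, sigma=0.05):
    """Blend candidate pixels warped to the same target position from
    multiple reference views. Closer pixels (smaller depth) get larger
    weights: a soft z-buffer that is more tolerant of noisy depth maps."""
    colors = np.asarray(colors, float)
    depths = np.asarray(depths, float)
    w = np.exp(-(depths - depths.min()) / sigma)  # assumed soft weighting
    w /= w.sum()
    return (w[:, None] * colors).sum(axis=0)

def view_weight_from_holes(hole_mask):
    """Weight a reference view by one minus its hole fraction, so views
    with fewer disocclusion holes contribute more to the final blend."""
    return 1.0 - float(np.mean(hole_mask))
```

A hard z-buffer would instead keep only the nearest candidate; the soft weighting above degrades more gracefully when depth values are noisy.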


International Conference on Pattern Recognition | 2010

Particle Filter Tracking with Online Multiple Instance Learning

Zefeng Ni; Santhoshkumar Sunderrajan; Amir M. Rahimi; B. S. Manjunath

This paper addresses the problem of object tracking by learning a discriminative classifier to separate the object from its background. The online-learned classifier is used to adaptively model the object's appearance and its background. To cope with the erroneous training examples typically generated during tracking, an online multiple instance learning (MIL) algorithm is used that tolerates false positive examples. In addition, a particle filter is applied to make the best use of the learned classifier and to generate a more representative set of training examples for online MIL. The effectiveness of the proposed algorithm is demonstrated on human tracking in challenging environments.
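The bag-level tolerance to mislabeled patches comes from the Noisy-OR bag model at the heart of MIL, which can be illustrated with a minimal online learner. This sketch uses a plain linear classifier with gradient updates rather than the boosting-based MIL trackers typically used in this line of work; the class name `OnlineMIL` and its learning rate are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineMIL:
    """Minimal online multiple-instance learner: a linear classifier
    trained on labeled *bags* of feature vectors. A bag is positive if
    at least one instance is (Noisy-OR), which tolerates imprecisely
    cropped training patches generated during tracking."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def bag_prob(self, X):
        p = sigmoid(X @ self.w)
        return 1.0 - np.prod(1.0 - p)  # Noisy-OR over the bag's instances

    def update(self, X, y):
        """One gradient ascent step on the bag log-likelihood (y in {0,1})."""
        p = sigmoid(X @ self.w)
        P = 1.0 - np.prod(1.0 - p)
        if 0 < P < 1:
            # d logL / dp_i via the chain rule through the Noisy-OR
            grad_p = (y / P - (1 - y) / (1 - P)) * np.prod(1.0 - p) / (1.0 - p)
            self.w += self.lr * X.T @ (grad_p * p * (1 - p))
```

In a tracker, the positive bag would hold patches sampled near the current estimate and the negative bags patches sampled farther away.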


International Conference on Distributed Smart Cameras | 2008

VISNET: A distributed vision testbed

M. Quinn; R. Mudumbai; Thomas Kuo; Zefeng Ni; C. De Leo; B. S. Manjunath

We introduce UCSB's visual sensor network (VISNET) and discuss current research being conducted with the system. VISNET is a ten-node experimental camera network at UCSB used for various vision-related research. The mission of VISNET is to provide an easy-to-use multi-node camera network to the vision research community at UCSB. This paper briefly discusses design and setup considerations before discussing current research, which includes operation visualization, camera network calibration, tracked object modeling, and multiple-object/multiple-camera tracking.


IEEE Transactions on Multimedia | 2013

Graph-Based Topic-Focused Retrieval in Distributed Camera Network

Jiejun Xu; Vignesh Jagadeesh; Zefeng Ni; Santhoshkumar Sunderrajan; B. S. Manjunath

Wide-area wireless camera networks are being increasingly deployed in many urban scenarios. The large amount of data generated by these cameras poses significant information processing challenges. In this work, we focus on the representation, search, and retrieval of moving objects in the scene, with an emphasis on local camera node video analysis. We develop a graph model that captures the relationships among objects without the need to identify global trajectories. Specifically, two types of edges are defined in the graph: object edges linking the same object across the whole network and context edges linking different objects within a spatial-temporal proximity. We propose a manifold ranking method with a greedy diversification step to order the relevant items based on similarity as well as diversity within the database. Detailed experimental results using video data from a 10-camera network covering bike paths are presented.
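The ranking-with-diversity idea can be sketched as below. The manifold ranking follows the standard closed form over a symmetrically normalized affinity graph, while the diversification step here is an MMR-style greedy stand-in, not the paper's exact procedure; `alpha`, `lam`, and the function names are assumptions.

```python
import numpy as np

def manifold_rank(W, query_idx, alpha=0.9):
    """Closed-form manifold ranking: propagate a query indicator over
    the graph with symmetrically normalized affinity matrix W."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))          # D^{-1/2} W D^{-1/2}
    y = np.zeros(len(W))
    y[query_idx] = 1.0
    return np.linalg.solve(np.eye(len(W)) - alpha * S, y)

def diversify(scores, sim, k, lam=0.5):
    """Greedy diversification (MMR-style stand-in): trade off relevance
    against similarity to items that were already selected."""
    chosen = []
    candidates = set(range(len(scores)))
    while candidates and len(chosen) < k:
        def mmr(i):
            penalty = max(sim[i][j] for j in chosen) if chosen else 0.0
            return lam * scores[i] - (1 - lam) * penalty
        best = max(candidates, key=mmr)
        chosen.append(best)
        candidates.remove(best)
    return chosen
```

With `lam` close to 1 the list is ordered purely by relevance; lowering it spreads the results across different objects in the database.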


Computer Vision and Pattern Recognition | 2010

Design and implementation of a wide area, large-scale camera network

Thomas Kuo; Zefeng Ni; Carter De Leo; B. S. Manjunath

We describe a wide area camera network in a campus setting, the SCALLOPSNet (Scalable Large Optical Sensor Network). It covers an expansive area with about 100 stationary cameras; the area can be divided into three distinct regions: inside a building, along urban paths, and in a remote natural reserve. Some of these regions lack connections for power and communications and therefore necessitate wireless, battery-powered camera nodes. In our exploration of available solutions, we found existing smart cameras to be insufficient for this task, and instead designed our own battery-powered camera nodes that communicate using 802.11b. The camera network uses the Internet Protocol on either wired or wireless links to communicate with our central cluster, which runs cluster and cloud computing infrastructure. Frameworks like Apache Hadoop are well suited for large distributed and parallel tasks such as many computer vision algorithms. We discuss the design and implementation details of this network, together with the challenges faced in deploying such a large-scale network on a research campus. We plan to make the datasets available to researchers in the computer vision community in the near future.


ACM Transactions on Sensor Networks | 2014

Calibrating a wide-area camera network with non-overlapping views using mobile devices

Thomas Kuo; Zefeng Ni; Santhoshkumar Sunderrajan; B. S. Manjunath

In a wide-area camera network, cameras are often placed such that their views do not overlap. Collaborative tasks such as tracking and activity analysis still require discovering the network topology, including the extrinsic calibration of the cameras. This work addresses the problem of calibrating a fixed camera in a wide-area camera network in a global coordinate system so that the results can be shared across calibrations. We achieve this by using commonly available mobile devices such as smartphones. At least one mobile device takes images that overlap with a fixed camera's view and records the GPS position and 3D orientation of the device when an image is captured. These sensor measurements (including the image, GPS position, and device orientation) are fused in order to calibrate the fixed camera. This article derives a novel maximum likelihood estimation formulation for finding the most probable location and orientation of a fixed camera. This formulation is solved in a distributed manner using a consensus algorithm. We evaluate the efficacy of the proposed methodology with several simulated and real-world datasets.
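The distributed solution can be illustrated with a basic average-consensus iteration, in which each node repeatedly moves its local estimate toward its neighbors' estimates. This is a generic consensus sketch, not the paper's maximum likelihood formulation; the step size `eps` and the synchronous update are assumptions.

```python
import numpy as np

def consensus_step(estimates, adjacency, eps=0.2):
    """One synchronous average-consensus iteration: each node nudges its
    local estimate toward the estimates of its network neighbors.
    Stability requires eps * max_degree < 1."""
    estimates = np.asarray(estimates, float)
    updates = estimates.copy()
    for i, row in enumerate(adjacency):
        for j, connected in enumerate(row):
            if connected:
                updates[i] += eps * (estimates[j] - estimates[i])
    return updates

def run_consensus(estimates, adjacency, iters=200):
    """Iterate until the nodes (approximately) agree on the network average."""
    for _ in range(iters):
        estimates = consensus_step(estimates, adjacency)
    return estimates
```

In the calibration setting, the quantities being averaged would be sufficient statistics of each camera's local likelihood rather than raw scalars.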


Computer Vision and Pattern Recognition | 2012

Object browsing and searching in a camera network using graph models

Zefeng Ni; Jiejun Xu; B. S. Manjunath

This paper proposes a novel system to help human image analysts effectively browse and search for objects in a camera network. In contrast to the existing approaches that focus on finding global trajectories across cameras, the proposed approach directly models the relationship among raw camera observations. A graph model is proposed to represent detected/tracked objects, their appearance and spatial-temporal relationships. In order to minimize communication requirements, we assume that raw video is processed at camera nodes independently to compute object identities and trajectories at video rate. However, this would result in unreliable object locations and/or trajectories. The proposed graph structure captures the uncertainty in these camera observations by effectively modeling their global relationships, and enables a human analyst to query, browse and search the data collected from the camera network. A novel graph ranking framework is proposed for the search and retrieval task, and the absorbing random walk algorithm is adapted to retrieve a representative and diverse set of video frames from the cameras in response to a user query. Preliminary results on a wide-area camera network are presented.
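An absorbing random walk for diverse ranking (in the spirit of the GRASSHOPPER algorithm) can be sketched as follows. Turning already-ranked items into absorbing states suppresses the scores of similar neighbors, so later picks cover different parts of the graph. The teleport factor `lam`, the uniform prior, and the uniform starting distribution are illustrative assumptions rather than the paper's exact adaptation.

```python
import numpy as np

def absorbing_walk_rank(P, k, lam=0.9, prior=None):
    """Rank k items from a row-stochastic transition matrix P. The top
    item comes from the walk's stationary distribution; each ranked item
    then becomes absorbing, and remaining items are scored by their
    expected number of visits before absorption."""
    n = len(P)
    prior = np.full(n, 1.0 / n) if prior is None else prior
    M = lam * P + (1 - lam) * prior          # teleporting random walk
    vals, vecs = np.linalg.eig(M.T)          # stationary dist. = left eigenvector
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    pi /= pi.sum()
    ranked = [int(np.argmax(pi))]
    while len(ranked) < k:
        rest = [i for i in range(n) if i not in ranked]
        Q = M[np.ix_(rest, rest)]            # transitions among non-absorbed items
        N = np.linalg.inv(np.eye(len(rest)) - Q)   # fundamental matrix
        visits = N.sum(axis=0) / n           # expected visits from a uniform start
        ranked.append(rest[int(np.argmax(visits))])
    return ranked
```

In the browsing system, `P` would be derived from the graph's object and context edges, so the returned frames are both relevant and non-redundant.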


Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis | 2010

Spatial-temporal understanding of urban scenes through large camera network

Jiejun Xu; Zefeng Ni; Carter De Leo; Thomas Kuo; B. S. Manjunath

Outdoor surveillance cameras have become prevalent as part of the urban infrastructure and provide a good data source for studying urban dynamics. In this work, we present a spatial-temporal analysis of eight weeks of video data collected from the large outdoor camera network on the UCSB campus, which consists of 27 cameras. We first apply a simple vision algorithm to extract crowdedness information from the scene. We then explore the relationship between the traffic patterns observed by the cameras and activities in the nearby area using additional knowledge such as the campus class schedule. Finally, we investigate the potential of discovering aggregated human movement patterns by assuming a simple probabilistic model. Experiments show promising results with the proposed method.
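A minimal stand-in for the crowdedness extraction and its temporal aggregation might look like the following; the background-subtraction threshold and the per-hour binning are assumptions, not the paper's algorithm.

```python
import numpy as np

def crowdedness(frame, background, thresh=25):
    """Simple crowdedness measure: fraction of pixels whose absolute
    difference from a static background model exceeds a threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return float(np.mean(diff > thresh))

def hourly_profile(scores, frames_per_hour):
    """Aggregate per-frame crowdedness scores into per-hour averages,
    dropping any trailing frames that do not fill a whole hour."""
    scores = np.asarray(scores, float)
    n = len(scores) // frames_per_hour * frames_per_hour
    return scores[:n].reshape(-1, frames_per_hour).mean(axis=1)
```

The resulting hourly profiles could then be correlated with side information such as a class schedule.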


Archive | 2009

View synthesis with heuristic view merging

Zefeng Ni; Dong Tian; Sitaram Bhagavathy; Joan Llach

Collaboration


Dive into Zefeng Ni's collaborations.

Top Co-Authors

Thomas Kuo, University of California
Jiejun Xu, University of California
Amir M. Rahimi, University of California
Carter De Leo, University of California