Publication


Featured research published by Junbin Liu.


International Conference on Embedded Networked Sensor Systems | 2012

Efficient background subtraction for real-time tracking in embedded camera networks

Yiran Shen; Wen Hu; Junbin Liu; Mingrui Yang; Bo Wei; Chun Tung Chou

Background subtraction is often the first step of many computer vision applications. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. This makes many traditional background subtraction algorithms unsuitable for embedded platforms because they use complex statistical models to handle subtle illumination changes; these models make them accurate, but their computational requirements are often too high for embedded platforms. In this paper, we propose a new background subtraction method which is both accurate and computationally efficient. The key idea is to use compressive sensing to reduce the dimensionality of the data while retaining most of the information. Using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of traditional background subtraction methods. Moreover, a real implementation on an embedded camera platform shows that our proposed method is at least 5 times faster, and consumes significantly less energy and memory, than the conventional approaches. Finally, we demonstrate the feasibility of the proposed method through the implementation and evaluation of an end-to-end real-time embedded camera network target tracking application.
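
At its core, the method replaces per-pixel comparisons with comparisons between low-dimensional random projections of image blocks. The sketch below illustrates that idea in a minimal form, assuming a static per-block background frame, Gaussian random projections, and a hypothetical distance threshold; these names and values are illustrative and do not come from the paper itself.

```python
import numpy as np

def make_projection(block_size=16, num_measurements=32, seed=0):
    """Random projection matrix mapping a flattened block (block_size^2 pixels)
    to a much smaller measurement vector (the compressive sensing step)."""
    rng = np.random.default_rng(seed)
    d = block_size * block_size
    return rng.standard_normal((num_measurements, d)) / np.sqrt(num_measurements)

def compress_blocks(frame, phi, block_size=16):
    """Split a grayscale frame into non-overlapping blocks and project each one."""
    h, w = frame.shape
    feats = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            block = frame[y:y + block_size, x:x + block_size].astype(np.float64).ravel()
            feats.append(phi @ block)
    return np.array(feats)  # one compressed feature vector per block

def foreground_blocks(frame, background, phi, threshold=50.0, block_size=16):
    """Flag blocks whose compressed features differ strongly from the background's."""
    f = compress_blocks(frame, phi, block_size)
    b = compress_blocks(background, phi, block_size)
    return np.linalg.norm(f - b, axis=1) > threshold  # True = candidate foreground block
```

Because each block is reduced from block_size² pixels to a few dozen measurements before any comparison takes place, the per-frame arithmetic shrinks roughly in proportion to the compression ratio, which is what makes this style of approach attractive on resource-constrained hardware.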


IEEE Transactions on Image Processing | 2014

Optimal Camera Planning Under Versatile User Constraints in Multi-Camera Image Processing Systems

Junbin Liu; Sridha Sridharan; Clinton Fookes; Tim Wark

The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical framework for the problem together with a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that our approach offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Lastly, we show the versatility of our approach in a number of specific scenarios.
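
The "trans-dimensional" part means the annealer is free to add or remove cameras while it perturbs their poses, so the number of cameras is optimised jointly with their placement. The skeleton below is a generic illustration of that search loop; the objective function, proposal moves, move probabilities, and cooling schedule are placeholders rather than the paper's actual choices.

```python
import math
import random

def trans_dimensional_annealing(objective, init_config, propose_move,
                                propose_birth, propose_death,
                                t0=1.0, cooling=0.995, steps=20000):
    """Generic trans-dimensional simulated annealing sketch.

    `objective(config)` scores a camera configuration (higher is better);
    the three proposal functions perturb a camera, add one, or remove one,
    so the dimensionality of the solution can change during the search."""
    config, score, temp = init_config, objective(init_config), t0
    best, best_score = config, score
    for _ in range(steps):
        kind = random.random()
        if kind < 0.6 or len(config) <= 1:
            candidate = propose_move(config)    # perturb an existing camera
        elif kind < 0.8:
            candidate = propose_birth(config)   # add a camera (dimension +1)
        else:
            candidate = propose_death(config)   # remove a camera (dimension -1)
        cand_score = objective(candidate)
        # Accept improvements always; accept worse solutions with a probability
        # that shrinks as the temperature decreases.
        if cand_score >= score or random.random() < math.exp((cand_score - score) / temp):
            config, score = candidate, cand_score
            if score > best_score:
                best, best_score = config, score
        temp *= cooling
    return best, best_score
```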


IEEE Transactions on Mobile Computing | 2016

Real-Time and Robust Compressive Background Subtraction for Embedded Camera Networks

Yiran Shen; Wen Hu; Mingrui Yang; Junbin Liu; Bo Wei; Simon Lucey; Chun Tung Chou

Real-time target tracking is an important service provided by embedded camera networks. The first step in target tracking is to extract the moving targets from the video frames, which can be realised by background subtraction. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. This makes many traditional background subtraction algorithms unsuitable for embedded platforms because they use complex statistical models to handle subtle illumination changes; these models make them accurate, but their computational requirements are often too high for embedded platforms. In this paper, we propose a new background subtraction method which is both accurate and computationally efficient. We propose a baseline version which uses luminance only and then extend it to use colour information. The key idea is to use random projection matrices to reduce the dimensionality of the data while retaining most of the information. Using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of traditional background subtraction methods. Moreover, to show that the computational efficiency of our method is not platform specific, we implement it on various platforms. These implementations show that our proposed method is consistently better: it is up to six times faster and consumes significantly fewer resources than the conventional approaches. Finally, we demonstrate the feasibility of the proposed method through the implementation and evaluation of an end-to-end real-time embedded camera network target tracking application.


ACM Computing Surveys | 2016

Recent Advances in Camera Planning for Large Area Surveillance: A Comprehensive Review

Junbin Liu; Sridha Sridharan; Clinton Fookes

With recent advances in consumer electronics and the increasingly urgent need for public security, camera networks have evolved from their early role of providing simple and static monitoring to complex systems capable of obtaining extensive video information for intelligent processing, such as target localization, identification, and tracking. In all cases, it is of vital importance that the optimal camera configuration (i.e., optimal location, orientation, etc.) is determined before cameras are deployed, as a suboptimal placement will adversely affect intelligent video surveillance and video analytic algorithms. The optimal configuration may also provide substantial savings in the total number of cameras required to achieve the same level of utility. In this article, we examine most, if not all, of the recent approaches (post 2000) addressing camera placement in a structured manner. We believe this work can serve as a first point of entry for readers wishing to start research in this area, and for engineers who need to design a camera system in practice. To this end, we provide a complete study of relevant formulation strategies and brief introductions to the optimization techniques most commonly used by researchers in this field. We hope this work will spark new ideas in the field.


Information Processing in Sensor Networks | 2012

Efficient background subtraction for tracking in embedded camera networks

Yiran Shen; Wen Hu; Mingrui Yang; Junbin Liu; Chun Tung Chou

Background subtraction is often the first step in many computer vision applications such as object localisation and tracking. It aims to segment out the moving parts of a scene that represent the objects of interest. In the field of computer vision, researchers have dedicated much effort to improving the robustness and accuracy of such segmentations, but most of their methods are computationally intensive, making them nonviable options for our targeted embedded camera platform, whose energy and processing power are significantly more constrained. To address this problem while maintaining an acceptable level of performance, we introduce Compressive Sensing (CS) into the widely used Mixture of Gaussians to create a new background subtraction method. The results show that our method not only decreases the computation significantly (by a factor of 7 in a DSP setting) but also remains comparably accurate.
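
Concretely, the idea is to keep a standard online Mixture-of-Gaussians update but to run it on compressed block features rather than on raw pixels, so far fewer model components need to be maintained and updated. The class below is a loose, illustrative sketch of such an update for a single block; the learning rate, matching threshold, and foreground rule are hypothetical values, not those of the paper.

```python
import numpy as np

class CompressedMoG:
    """Online Mixture-of-Gaussians background model over a compressed feature
    vector (one instance per image block), loosely following the CS + MoG idea."""

    def __init__(self, dim, k=3, alpha=0.01, match_sigma=2.5):
        self.means = np.zeros((k, dim))
        self.vars = np.full(k, 1e2)
        self.weights = np.full(k, 1.0 / k)
        self.alpha, self.match_sigma = alpha, match_sigma

    def update(self, feature):
        """Return True if this block looks like foreground for the current frame."""
        dists = np.linalg.norm(self.means - feature, axis=1)
        matches = dists < self.match_sigma * np.sqrt(self.vars)
        if not matches.any():
            # No component explains the feature: replace the weakest one.
            j = int(np.argmin(self.weights))
            self.means[j], self.vars[j], self.weights[j] = feature, 1e2, self.alpha
            foreground = True
        else:
            j = int(np.argmin(np.where(matches, dists, np.inf)))
            self.means[j] += self.alpha * (feature - self.means[j])
            self.vars[j] += self.alpha * (dists[j] ** 2 - self.vars[j])
            # Foreground if the matched component carries little weight.
            foreground = self.weights[j] < 0.2
        self.weights = (1 - self.alpha) * self.weights
        self.weights[j] += self.alpha
        self.weights /= self.weights.sum()
        return foreground
```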


European Conference on Computer Vision | 2012

On the statistical determination of optimal camera configurations in large scale surveillance networks

Junbin Liu; Clinton Fookes; Tim Wark; Sridha Sridharan

The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we introduce a statistical formulation of the optimal selection of camera configurations and propose a Trans-Dimensional Simulated Annealing (TDSA) algorithm to solve the problem effectively. We compare our approach with a state-of-the-art method based on Binary Integer Programming (BIP) and show that our approach offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP.


Computer Vision and Image Understanding | 2012

Self-calibration of wireless cameras with restricted degrees of freedom

Junbin Liu; Tim Wark; Ruan Lakemond; Sridha Sridharan

This paper presents an approach for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images passes through the camera's optical center and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. Previous methods for auto-calibration of cameras based on pure rotations fail in these two degenerate cases. In addition, our approach includes a modified RANdom SAmple Consensus (RANSAC) algorithm, as well as improved integration of the radial distortion coefficient in the computation of inter-image homographies. We show that these modifications increase the overall efficiency, reliability and accuracy of the homography computation and calibration procedure on both synthetic and real image sequences.
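
For intuition, in the idealised pure-pan case with square pixels, zero skew, the principal point at the image centre, and no lens distortion, the inter-image homography takes the closed form H ~ K R_y(theta) K^{-1}, from which the focal length can be read off directly. The sketch below estimates that homography with OpenCV's RANSAC and applies the closed form; it is a simplified illustration of the setting, not the paper's full method (which also handles the radial distortion coefficient and the degenerate-rotation analysis).

```python
import cv2
import numpy as np

def pan_self_calibrate(img1, img2):
    """Estimate the inter-image homography of a purely panning camera with RANSAC,
    then recover focal length and pan angle from the closed form
    H ~ K * R_y(theta) * K^{-1} (square pixels, principal point at the image
    centre, no lens distortion -- a simplified version of the full model)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Shift coordinates so the assumed principal point (image centre) is the origin.
    h, w = img1.shape[:2]
    c = np.array([w / 2.0, h / 2.0], dtype=np.float32)
    H, inliers = cv2.findHomography(pts1 - c, pts2 - c, cv2.RANSAC, 3.0)

    H = H / H[1, 1]                            # for a pure pan, the (1,1) entry is 1 up to scale
    f = float(np.sqrt(-H[0, 2] / H[2, 0]))     # H[0,2] = f*sin(theta), H[2,0] = -sin(theta)/f
    theta = float(np.arctan2(H[0, 2] / f, H[0, 0]))
    return f, np.degrees(theta), int(inliers.sum())
```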


Information Processing in Sensor Networks | 2012

Poster abstract: Efficient background subtraction for tracking in embedded camera networks

Yiran Shen; Wen Hu; Mingrui Yang; Junbin Liu; Chun Tung Chou

Background subtraction is often the first step in many computer vision applications such as object localisation and tracking. It aims to segment out the moving parts of a scene that represent the objects of interest. In the field of computer vision, researchers have dedicated much effort to improving the robustness and accuracy of such segmentations, but most of their methods are computationally intensive, making them nonviable options for our targeted embedded camera platform, whose energy and processing power are significantly more constrained. To address this problem while maintaining an acceptable level of performance, we introduce Compressive Sensing (CS) into the widely used Mixture of Gaussians to create a new background subtraction method. The results show that our method not only decreases the computation significantly (by a factor of 7 in a DSP setting) but also remains comparably accurate.


Information Processing in Sensor Networks | 2010

Towards a framework for a versatile wireless multimedia sensor network platform

Damien O'Rourke; Junbin Liu; Tim Wark; Wen Hu; Darren Moore; Leslie Overs; Raja Jurdak

We describe our current work towards a framework that establishes a hierarchy of devices (sensors and actuators) within a wireless multimedia node and uses frequent sampling of cheaper devices to trigger the activation of more energy-hungry devices. Within this framework, we consider the suitability of servos for Wireless Multimedia Sensor Networks (WMSNs) by examining their functional characteristics and energy consumption [2].
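
The essence of the framework is a tiered duty cycle: a cheap sensor is polled frequently, and the energy-hungry camera (or servo) is powered up only when the cheap reading crosses a trigger threshold. The sketch below expresses that loop with hypothetical device interfaces and threshold values; it is not the platform's actual API.

```python
import time

class TieredSensingNode:
    """Illustrative tiered sampling loop: a cheap, low-power sensor is polled
    frequently, and the energy-hungry camera is only activated when the cheap
    reading crosses a trigger threshold (hypothetical interfaces)."""

    def __init__(self, cheap_sensor, camera, threshold, poll_period_s=0.5):
        self.cheap_sensor = cheap_sensor      # e.g. a PIR or acoustic sensor
        self.camera = camera                  # energy-hungry imaging device
        self.threshold = threshold
        self.poll_period_s = poll_period_s

    def run_once(self):
        reading = self.cheap_sensor.read()    # cheap, frequent measurement
        if reading > self.threshold:
            self.camera.power_on()            # wake the expensive device only on demand
            frame = self.camera.capture()
            self.camera.power_off()
            return frame
        return None

    def run(self):
        while True:
            self.run_once()
            time.sleep(self.poll_period_s)    # stand-in for a low-power sleep state
```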


Faculty of Built Environment and Engineering | 2008

Design and evaluation of an image analysis platform for low-power, low-bandwidth camera networks

Peter Corke; Junbin Liu; Darren Moore; Tim Wark

Collaboration


Dive into Junbin Liu's collaborations.

Top Co-Authors

Tim Wark
Commonwealth Scientific and Industrial Research Organisation

Sridha Sridharan
Queensland University of Technology

Clinton Fookes
Queensland University of Technology

Wen Hu
University of New South Wales

Chun Tung Chou
University of New South Wales

Damien O'Rourke
Commonwealth Scientific and Industrial Research Organisation

Mingrui Yang
Commonwealth Scientific and Industrial Research Organisation

Yiran Shen
Harbin Engineering University

Darren Moore
Commonwealth Scientific and Industrial Research Organisation

Juan Chen
Harbin Institute of Technology