Publication


Featured research published by Parthipan Siva.


British Machine Vision Conference | 2012

Transfer Learning by Ranking for Weakly Supervised Object Annotation.

Zhiyuan Shi; Parthipan Siva; Tao Xiang

Most existing approaches to training object detectors rely on fully supervised learning, which requires the tedious manual annotation of object locations in a training set. Recently there has been increasing interest in developing weakly supervised approaches to detector training, where object locations are not manually annotated but automatically determined from binary (weak) labels indicating whether a training image contains the object. This is a challenging problem because each image can contain many candidate object locations which only partially overlap the object of interest. Existing approaches focus on how to best utilise the binary labels for object location annotation. In this paper we propose to solve the problem from a very different perspective by casting it as a transfer learning problem. Specifically, we formulate a novel transfer learning method based on learning to rank, which effectively transfers a model for automatic annotation of object locations from an auxiliary dataset to a target dataset with completely unrelated object categories. We show that our approach outperforms existing state-of-the-art weakly supervised approaches to annotating objects in the challenging VOC dataset.
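
To make the ranking formulation concrete, the sketch below trains a linear RankSVM on pairwise feature differences and uses it to score candidate windows in a target image. It is an illustration only, using synthetic placeholder features rather than the paper's detector features or datasets.

```python
# A minimal learning-to-rank sketch, not the authors' method: a linear
# RankSVM trained on pairwise feature differences from a hypothetical
# auxiliary set, then used to score candidate windows in a target image.
# All features and data below are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC

def fit_ranker(better, worse):
    # RankSVM via the pairwise transform: learn w with w.x_better > w.x_worse.
    X = np.vstack([better - worse, worse - better])
    y = np.hstack([np.ones(len(better)), -np.ones(len(worse))])
    return LinearSVC(C=1.0, fit_intercept=False).fit(X, y).coef_.ravel()

rng = np.random.default_rng(0)
better = rng.normal(1.0, 1.0, size=(200, 64))  # windows with good overlap (auxiliary set)
worse = rng.normal(0.0, 1.0, size=(200, 64))   # windows with poor overlap
w = fit_ranker(better, worse)

# Target-domain image: rank its candidate windows and keep the top one
# as the automatically annotated object location.
candidates = rng.normal(0.5, 1.0, size=(50, 64))
best_window = int(np.argmax(candidates @ w))
```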


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2015

Hyperspectral Image Classification With Limited Labeled Training Samples Using Enhanced Ensemble Learning and Conditional Random Fields

Fan Li; Linlin Xu; Parthipan Siva; Alexander Wong; David A. Clausi

Classification of hyperspectral imagery using few labeled samples is a challenging problem given the high dimensionality of hyperspectral imagery. Classifiers trained on limited samples with abundant spectral bands tend to overfit, leading to weak generalization capability. To address this problem, we have developed an enhanced ensemble method called multiclass boosted rotation forest (MBRF), which combines the rotation forest algorithm with a multiclass AdaBoost algorithm. The benefit of this combination can be explained by bias-variance analysis, especially in situations with inadequate training samples and high dimensionality. Furthermore, MBRF innately produces posterior probabilities inherited from AdaBoost, which serve as the unary potentials of a conditional random field (CRF) model to incorporate spatial context information. Experimental results show that the classification accuracy of MBRF, as well as its integration with the CRF, consistently outperforms the other referenced state-of-the-art classification methods when limited labeled samples are available for training.
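
As a rough illustration of the boosting-plus-rotation idea, the sketch below uses scikit-learn in place of the paper's MBRF: a random band subset is decorrelated with PCA (the "rotation"), a boosted ensemble is trained on the rotated features, and its class posteriors are turned into CRF unary potentials. The hyperspectral data here are synthetic placeholders.

```python
# A rough sketch only; scikit-learn stands in for the paper's multiclass
# boosted rotation forest, and the data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 100))     # 300 labeled pixels, 100 spectral bands
y = rng.integers(0, 4, size=300)    # 4 hypothetical land-cover classes

subset = rng.choice(100, size=30, replace=False)             # random band subset
rotated = PCA(n_components=20).fit_transform(X[:, subset])   # "rotation" step

clf = AdaBoostClassifier(n_estimators=50).fit(rotated, y)    # boosted ensemble
posteriors = clf.predict_proba(rotated)
unary = -np.log(posteriors + 1e-9)  # unary potentials for a CRF over the image
```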


International Conference on Image Processing | 2014

Efficient Bayesian inference using fully connected conditional random fields with stochastic cliques

Mohammad Javad Shafiee; Alexander Wong; Parthipan Siva; Paul W. Fieguth

Conditional random fields (CRFs) are one of the most powerful frameworks in image modeling. However, practical CRFs typically have edges only between nearby nodes; using more interactions and expressive relations among nodes makes these methods impractical for large-scale applications due to the high computational complexity. Recent work has shown that fully connected CRFs can be made tractable by defining specific potential functions. In this paper, we present a novel framework that tackles the computational complexity of a fully connected graph without requiring specific potential functions. Instead, inspired by random graph theory and sampling methods, we propose a new clique structure called stochastic cliques. The stochastically fully connected CRF (SFCRF) is a marriage between random graphs and random fields, benefiting from the advantages of fully connected graphs while maintaining computational tractability. The effectiveness of the SFCRF was examined by binary labeling of highly noisy images. The results show that the proposed framework outperforms an adjacency CRF and a CRF with a large neighborhood size.
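
The sketch below illustrates the stochastic-clique idea on a toy binary labeling problem; it is not the published inference scheme. Pairwise connections are sampled once from the fully connected graph, and iterated conditional modes (ICM) runs on the resulting sparse graph.

```python
# A toy sketch of the stochastic-clique idea, not the published inference:
# pairwise connections between all node pairs are sampled once at random,
# and simple ICM runs on the resulting sparse graph for binary labeling.
import numpy as np

rng = np.random.default_rng(2)
n = 200                                   # flattened pixels (toy size)
obs = (rng.random(n) < 0.5).astype(int)   # noisy binary observations
labels = obs.copy()

# Stochastically keep a small fraction of the fully connected pairwise cliques.
i, j = np.triu_indices(n, k=1)
keep = rng.random(i.size) < 0.02
neighbors = [[] for _ in range(n)]
for a, b in zip(i[keep], j[keep]):
    neighbors[a].append(b)
    neighbors[b].append(a)

unary_w, pair_w = 1.0, 0.5
for _ in range(5):                        # ICM sweeps on the sampled graph
    for s in range(n):
        costs = [unary_w * (lab != obs[s]) +
                 pair_w * sum(lab != labels[t] for t in neighbors[s])
                 for lab in (0, 1)]
        labels[s] = int(np.argmin(costs))
```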


IEEE Access | 2016

StochasticNet: Forming Deep Neural Networks via Stochastic Connectivity

Mohammad Javad Shafiee; Parthipan Siva; Alexander Wong

Deep neural networks are a branch of machine learning that has seen a meteoric rise in popularity due to their powerful ability to represent and model high-level abstractions in highly complex data. One area of deep neural networks that is ripe for exploration is neural connectivity formation. A pivotal study on the brain tissue of rats found that synaptic formation for specific functional connectivity in neocortical neural microcircuits can be surprisingly well modeled and predicted as a random formation. Motivated by this intriguing finding, we introduce the concept of StochasticNet, where deep neural networks are formed via stochastic connectivity between neurons. As a result, any type of deep neural network can be formed as a StochasticNet by allowing the neuron connectivity to be stochastic. Stochastic synaptic formation in a deep neural network architecture can allow for efficient utilization of neurons for performing specific tasks. To evaluate the feasibility of such an architecture, we train StochasticNets using four different image datasets (CIFAR-10, MNIST, SVHN, and STL-10). Experimental results show that a StochasticNet using less than half the number of neural connections of a conventional deep neural network achieves comparable accuracy and reduces overfitting on the CIFAR-10, MNIST, and SVHN datasets. Interestingly, a StochasticNet with less than half the number of neural connections achieved higher accuracy (a relative improvement in test error rate of ~6% compared to a ConvNet) on the STL-10 dataset than a conventional deep neural network. Finally, StochasticNets have faster operational speeds while achieving better or similar accuracy.
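
The core connectivity idea can be illustrated with a toy masked layer: a fixed random binary mask removes a fraction of the connections when the layer is formed, unlike dropout, which resamples connections every forward pass. This is a minimal numpy sketch, not the authors' implementation.

```python
# A toy numpy sketch of stochastic connectivity, not the authors' code:
# a dense layer whose connections are removed once, at formation time,
# by a fixed random binary mask.
import numpy as np

rng = np.random.default_rng(3)

class StochasticDense:
    def __init__(self, n_in, n_out, keep_prob=0.5):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.mask = rng.random((n_in, n_out)) < keep_prob   # fixed once, never resampled
        self.b = np.zeros(n_out)

    def forward(self, x):
        # Only the surviving connections contribute; ReLU activation.
        return np.maximum(0.0, x @ (self.W * self.mask) + self.b)

layer = StochasticDense(128, 64, keep_prob=0.5)  # roughly half the connections
h = layer.forward(rng.normal(size=(8, 128)))     # batch of 8 hypothetical inputs
```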


Canadian Conference on Computer and Robot Vision | 2014

Grid Seams: A Fast Superpixel Algorithm for Real-Time Applications

Parthipan Siva; Alexander Wong

Superpixels are a compact and simple representation of images that has been used for many computer vision applications such as object localization, segmentation, and depth estimation. While useful as compact representations of images, the time complexity of superpixel algorithms has prevented their use in real-time applications like video processing. Fast superpixel algorithms have been proposed recently, but they lack regular structure or the accuracy required for representing image structure. We present Grid Seams, a novel seam carving approach to superpixel generation that preserves image structure information while enforcing a global spatial constraint in the form of a grid structure cost. Using a standard dataset, we show that our approach is faster than existing approaches and can achieve accuracies close to state-of-the-art superpixel generation algorithms.
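
The sketch below shows the flavor of a grid-constrained seam: a single vertical seam found by dynamic programming over an energy that combines image gradients with a penalty for straying from a target grid column. It is a simplified illustration, not the Grid Seams implementation.

```python
# A simplified sketch of a grid-constrained vertical seam, not the Grid
# Seams code: dynamic programming over edge energy plus a grid penalty.
import numpy as np

def grid_seam(gray, target_col, grid_weight=0.5):
    h, w = gray.shape
    edge = np.abs(np.gradient(gray, axis=1))                 # simple edge energy
    energy = edge + grid_weight * np.abs(np.arange(w) - target_col)
    cost = energy.copy()
    for r in range(1, h):                                    # forward DP pass
        left = np.roll(cost[r - 1], 1);  left[0] = np.inf
        right = np.roll(cost[r - 1], -1); right[-1] = np.inf
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = [int(np.argmin(cost[-1]))]                        # backtrack bottom-up
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(lo + int(np.argmin(cost[r, lo:min(c + 2, w)])))
    return seam[::-1]                                        # one column per row

seam = grid_seam(np.random.default_rng(4).random((60, 80)), target_col=40)
```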


Computer Vision and Pattern Recognition | 2016

Embedded Motion Detection via Neural Response Mixture Background Modeling

Mohammad Javad Shafiee; Parthipan Siva; Paul W. Fieguth; Alexander Wong

Recent studies have shown that deep neural networks (DNNs) can outperform state-of-the-art algorithms for a multitude of computer vision tasks. However, leveraging DNNs for near real-time performance on embedded systems has so far been all but impossible without specialized processors or GPUs. In this paper, we present a new motion detection algorithm that leverages the power of DNNs while maintaining the low computational complexity needed for near real-time embedded performance without specialized hardware. The proposed Neural Response Mixture (NeRM) model leverages rich deep features extracted from the neural responses of an efficient, stochastically-formed deep neural network (StochasticNet) to construct Gaussian mixture models for detecting motion in a scene. NeRM was implemented on an embedded Axis surveillance camera, and results demonstrate that the proposed approach achieves strong motion detection accuracy while operating at near real-time performance.
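
The sketch below captures the pipeline at a toy scale, with a fixed random projection standing in for the StochasticNet responses: a Gaussian mixture is fit on background frames, and low-likelihood responses in a new frame are flagged as motion. It is an illustration under these stand-ins, not the paper's implementation.

```python
# A loose sketch under stand-in assumptions, not the NeRM implementation.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
proj = rng.normal(size=(3, 8))                 # stand-in for deep neural responses

def responses(frame):                          # frame: H x W x 3 in [0, 1]
    return frame.reshape(-1, 3) @ proj         # 8-d response per pixel

bg_frames = [rng.random((40, 40, 3)) * 0.2 for _ in range(10)]   # background only
gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(np.vstack([responses(f) for f in bg_frames]))

new_frame = rng.random((40, 40, 3))            # hypothetical current frame
loglik = gmm.score_samples(responses(new_frame)).reshape(40, 40)
motion_mask = loglik < np.percentile(loglik, 10)   # unlikely under background model
```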


Canadian Conference on Computer and Robot Vision | 2007

Automated Detection of Mitosis in Embryonic Tissues

Parthipan Siva; G.W. Brodland; David A. Clausi

Characterization of mitosis is important for understanding the mechanisms of development in early-stage embryos. In studies of cancer, another situation in which mitosis is of interest, the tissue is stained with contrast agents before mitosis characterization, an intervention that could lead to atypical development in live embryos. A new image processing algorithm that does not rely on contrast agents was developed to detect mitosis in embryonic tissue. Unlike previous approaches that use still images, the algorithm presented here uses temporal information from time-lapse images to track the deformation of the embryonic tissue and then uses changes in intensity at tracked regions to identify the locations of mitosis. On a one-hundred-minute image sequence consisting of twenty images, the algorithm successfully detected eighty-one of the ninety-five mitoses. The performance of the algorithm, calculated using the geometric mean measure, is 82%. Since no other method for counting mitoses in live tissues is known, comparisons with the present results could not be made.
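
The sketch below illustrates the track-then-difference idea on two grayscale frames using OpenCV dense optical flow; the paper's deformation tracker and detection rules are not reproduced, and the threshold is arbitrary.

```python
# An illustrative sketch only: dense optical flow compensates tissue motion
# between two 8-bit grayscale time-lapse frames, and large residual
# intensity changes are kept as candidate mitosis locations.
import numpy as np
import cv2

def mitosis_candidates(prev_gray, curr_gray, thresh=40):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Warp the current frame back onto the previous frame's coordinates.
    aligned_curr = cv2.remap(curr_gray, map_x, map_y, cv2.INTER_LINEAR)
    residual = cv2.absdiff(aligned_curr, prev_gray)
    return residual > thresh           # boolean mask of candidate regions
```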


International Conference on Image Processing | 2015

PIRM: Fast background subtraction under sudden, local illumination changes via probabilistic illumination range modelling

Parthipan Siva; Mohammad Javad Shafiee; Francis Li; Alexander Wong

We present an illumination-compensation method that enables fast and reliable background subtraction under sudden, local illumination changes in wide-area surveillance videos. We use Probabilistic Illumination Range Modeling (PIRM) to model the conditional probability distribution of current-frame intensity given background intensity. With this model, we can identify a continuous range of current-frame intensities that map to the same background intensity, and appropriately scale all pixels in the current frame within that range to enable illumination-compensated background subtraction. Experimental results using a standard academic dataset as well as very challenging industry videos show that PIRM achieves improvements in compensating for sudden, local illumination changes.
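
As a coarse illustration of illumination-range compensation (not the paper's probabilistic model), the sketch below builds a joint histogram of current and background intensities, maps each current-frame intensity to its most likely background value, and thresholds the residual. The frames and threshold are synthetic placeholders.

```python
# A coarse sketch in the spirit of illumination-range compensation; the
# published probabilistic model is not reproduced.
import numpy as np

def compensated_subtraction(current, background, bins=64, thresh=25):
    hist, cur_edges, bg_edges = np.histogram2d(
        current.ravel(), background.ravel(),
        bins=bins, range=[[0, 255], [0, 255]])
    bg_centers = 0.5 * (bg_edges[:-1] + bg_edges[1:])
    lut = bg_centers[np.argmax(hist, axis=1)]        # current bin -> likely background
    idx = np.clip((current / 256.0 * bins).astype(int), 0, bins - 1)
    return np.abs(lut[idx] - background) > thresh    # foreground mask

rng = np.random.default_rng(6)
background = rng.integers(0, 200, size=(120, 160)).astype(float)
current = np.clip(background * 1.3 + 10, 0, 255)     # sudden local brightening
mask = compensated_subtraction(current, background)
```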


Scientific Reports | 2016

Fluorescence microscopy image noise reduction using a stochastically-connected random field model.

Shahid A. Haider; Andrew Cameron; Parthipan Siva; Dorothy Lui; Mohammad Javad Shafiee; Ameneh Boroomand; N. Haider; Alexander Wong

Fluorescence microscopy is an essential part of a biologist's toolkit, allowing the assaying of many parameters such as the subcellular localization of proteins, changes in cytoskeletal dynamics, protein-protein interactions, and the concentration of specific cellular ions. A fundamental challenge in using fluorescence microscopy is the presence of noise. This study introduces a novel approach to reducing noise in fluorescence microscopy images. The noise reduction problem is posed as a maximum a posteriori estimation problem and solved using a novel random field model called the stochastically-connected random field (SRF), which combines random graph and field theory. Experimental results using synthetic and real fluorescence microscopy data show the proposed approach achieving strong noise reduction performance compared to several other noise reduction algorithms, as measured by quantitative metrics. The proposed SRF approach achieved strong signal-to-noise ratio on the synthetic results, high signal-to-noise and contrast-to-noise ratios on the real fluorescence microscopy data, and maintained cell structure and subtle details while reducing background and intra-cellular noise.
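
The sketch below conveys the MAP formulation on a synthetic image: a quadratic data term plus a smoothness prior over randomly sampled nearby pixel pairs, minimized by gradient descent. It is a toy stand-in, not the published SRF solver, and all parameters are arbitrary.

```python
# A toy MAP denoising sketch in the spirit of a stochastically-connected
# random field; the published solver is not reproduced.
import numpy as np

rng = np.random.default_rng(7)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
h, w = noisy.shape

# Stochastically sample pixel pairs within a small spatial radius.
m = 4000
yy = rng.integers(0, h, m); xx = rng.integers(0, w, m)
y2 = np.clip(yy + rng.integers(-3, 4, m), 0, h - 1)
x2 = np.clip(xx + rng.integers(-3, 4, m), 0, w - 1)
i, j = yy * w + xx, y2 * w + x2

lam, step = 0.5, 0.1
y_obs = noisy.ravel()
x = y_obs.copy()
for _ in range(200):
    grad = x - y_obs                  # gradient of the data (likelihood) term
    diff = x[i] - x[j]                # smoothness over the sampled connections
    np.add.at(grad, i, lam * diff)
    np.add.at(grad, j, -lam * diff)
    x -= step * grad
denoised = x.reshape(h, w)
```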


Signal Processing Systems | 2018

Real-Time Embedded Motion Detection via Neural Response Mixture Modeling

Mohammad Javad Shafiee; Parthipan Siva; Paul W. Fieguth; Alexander Wong

Deep neural networks (DNNs) have shown significant promise in different fields, including computer vision. Although previous research has demonstrated the abilities of DNNs, utilizing these networks for real-time applications on embedded systems is not possible without specialized hardware such as GPUs. In this paper, we propose a new approach to real-time motion detection in videos that leverages the power of DNNs while maintaining the low computational complexity needed for real-time performance on existing embedded platforms without specialized hardware. The rich deep features extracted from the neural responses of an efficient, stochastically-formed deep neural network (StochasticNet) are utilized to construct Gaussian mixture models (GMMs) to detect motion in a scene. The proposed Neural Response Mixture (NeRM) model was embedded on an Axis surveillance camera, and results demonstrate that the proposed NeRM approach improves GMM performance (i.e., with fewer false detections and less noise) in modeling the foreground and background compared to other state-of-the-art approaches for motion detection on embedded systems, while operating at real-time performance.

Collaboration


Dive into Parthipan Siva's collaborations.

Top Co-Authors

Fan Li (University of Waterloo)
