Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Parvez Ahammad is active.

Publication


Featured research published by Parvez Ahammad.


International Conference on Distributed Smart Cameras | 2008

CITRIC: A low-bandwidth wireless camera network platform

Phoebus Chen; Parvez Ahammad; Colby Boyer; Shih-I Huang; Leon Lin; Edgar J. Lobaton; Marci Meingast; Songhwai Oh; Simon Wang; Posu Yan; Allen Y. Yang; Chuohao Yeo; Lung-Chung Chang; J. D. Tygar; Shankar Sastry

In this paper, we propose and demonstrate a novel wireless camera network system, called CITRIC. The core component of this system is a new hardware platform that integrates a camera, a frequency-scalable (up to 624 MHz) CPU, 16 MB FLASH, and 64 MB RAM onto a single device. The device then connects with a standard sensor network mote to form a camera mote. The design enables in-network processing of images to reduce communication requirements, which have traditionally been high in existing camera networks with centralized processing. We also propose a back-end client/server architecture to provide a user interface to the system and support further centralized processing for higher-level applications. Our camera mote enables a wider variety of distributed pattern recognition applications than traditional platforms because it provides more computing power and tighter integration of physical components while still consuming relatively little power. Furthermore, the mote easily integrates with existing low-bandwidth sensor networks because it can communicate over the IEEE 802.15.4 protocol with other sensor network platforms. We demonstrate our system on three applications: image compression, target tracking, and camera localization.


International Conference on Image Processing | 2008

Rate-efficient visual correspondences using random projections

Chuohao Yeo; Parvez Ahammad; Kannan Ramchandran

We consider the problem of establishing visual correspondences in a distributed and rate-efficient fashion by broadcasting compact descriptors. Establishing visual correspondences is a critical task before other vision tasks can be performed in a wireless camera network. We propose the use of coarsely quantized random projections of descriptors to build binary hashes, and use the Hamming distance between binary hashes as the matching criterion. In this work, we derive the analytic relationship of Hamming distance between the binary hashes to Euclidean distance between the original descriptors. We present experimental verification of our result, and show that for the task of finding visual correspondences, sending binary hashes is more rate-efficient than prior approaches.
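The binarized random projection idea can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the descriptor dimension, number of hash bits, Gaussian projections, and thresholding at zero are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_hash(descriptor, projections):
    # Keep only the sign of each random projection: one bit per projection.
    return (projections @ descriptor > 0).astype(np.uint8)

d, m = 128, 256                        # descriptor dimension, hash bits
P = rng.standard_normal((m, d))        # projection matrix shared by all cameras

x = rng.standard_normal(d)             # a feature descriptor
y = x + 0.1 * rng.standard_normal(d)   # a noisy corresponding descriptor
z = rng.standard_normal(d)             # an unrelated descriptor

hx, hy, hz = (binary_hash(v, P) for v in (x, y, z))

# The Hamming distance between hashes grows with the angle (and hence, for
# normalized descriptors, the Euclidean distance) between the originals,
# so matching can operate on the compact hashes alone.
print("corresponding pair:", int(np.sum(hx != hy)))
print("unrelated pair:   ", int(np.sum(hx != hz)))
```

Matching by Hamming distance on such hashes requires broadcasting only m bits per descriptor, which is the rate-efficiency argument of the paper.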


Neuroinformatics | 2011

Automated reconstruction of neuronal morphology based on local geometrical and global structural models.

Ting Zhao; Jun Xie; Fernando Amat; Nathan G. Clack; Parvez Ahammad; Hanchuan Peng; Fuhui Long; Eugene W. Myers

Digital reconstruction of neurons from microscope images is an important and challenging problem in neuroscience. In this paper, we propose a model-based method to tackle this problem. We first formulate a model structure, then develop an algorithm for computing it by carefully taking into account morphological characteristics of neurons, as well as the image properties under typical imaging protocols. The method has been tested on the data sets used in the DIADEM competition and produced promising results for four out of the five data sets.


IEEE Transactions on Circuits and Systems for Video Technology | 2008

High-Speed Action Recognition and Localization in Compressed Domain Videos

Chuohao Yeo; Parvez Ahammad; Kannan Ramchandran; Shankar Sastry

We present a compressed domain scheme that is able to recognize and localize actions at high speeds. The recognition problem is posed as performing an action video query on a test video sequence. Our method is based on computing motion similarity using compressed domain features which can be extracted with low complexity. We introduce a novel motion correlation measure that takes into account differences in motion directions and magnitudes. Our method is appearance-invariant, requires no prior segmentation, alignment or stabilization, and is able to localize actions in both space and time. We evaluated our method on a benchmark action video database consisting of six actions performed by 25 people under three different scenarios. Our proposed method achieved a classification accuracy of 90%, comparing favorably with existing methods in action classification accuracy, and is able to localize a template video of 80 x 64 pixels with 23 frames in a test video of 368 x 184 pixels with 835 frames in just 11 s, easily outperforming other methods in localization speed. We also perform a systematic investigation of the effects of various encoding options on our proposed approach. In particular, we present results on the compression-classification tradeoff, which would provide valuable insight into jointly designing a system that performs video encoding at the camera front-end and action classification at the processing back-end.
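The motion-similarity computation can be illustrated with a toy measure over motion-vector fields that penalizes both direction and magnitude mismatches; the paper's exact correlation measure differs, so treat this purely as a sketch.

```python
import numpy as np

def motion_similarity(mv_a, mv_b, eps=1e-9):
    # mv_a, mv_b: (H, W, 2) arrays of per-block motion vectors, e.g. parsed
    # from the compressed bitstream without fully decoding the video.
    dot = np.sum(mv_a * mv_b, axis=-1)
    na = np.linalg.norm(mv_a, axis=-1)
    nb = np.linalg.norm(mv_b, axis=-1)
    direction = dot / (na * nb + eps)                  # cosine of the angle
    magnitude = np.minimum(na, nb) / (np.maximum(na, nb) + eps)
    return float(np.mean(direction * magnitude))

field = np.ones((4, 4, 2))                   # uniform diagonal motion
print(motion_similarity(field, field))       # near 1: identical motion
print(motion_similarity(field, -field))      # negative: opposite motion
print(motion_similarity(field, 2 * field))   # penalized magnitude mismatch
```

Because the features come from the compressed domain, such a measure can be evaluated densely over a test sequence at a small fraction of the cost of pixel-domain analysis.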


eLife | 2015

Dynamical feature extraction at the sensory periphery guides chemotaxis.

Aljoscha Schulze; Alex Gomez-Marin; Vani G. Rajendran; Gus K Lott; Marco Musy; Parvez Ahammad; Ajinkya Deogade; James Sharpe; Julia Riedl; David Jarriault; Eric T. Trautman; Christopher Werner; Madhusudhan Venkadesan; Shaul Druckmann; Vivek Jayaraman; Matthieu Louis

Behavioral strategies employed for chemotaxis have been described across phyla, but the sensorimotor basis of this phenomenon has seldom been studied in naturalistic contexts. Here, we examine how signals experienced during free olfactory behaviors are processed by first-order olfactory sensory neurons (OSNs) of the Drosophila larva. We find that OSNs can act as differentiators that transiently normalize stimulus intensity—a property potentially derived from a combination of integral feedback and feed-forward regulation of olfactory transduction. In olfactory virtual reality experiments, we report that high activity levels of the OSN suppress turning, whereas low activity levels facilitate turning. Using a generalized linear model, we explain how peripheral encoding of olfactory stimuli modulates the probability of switching from a run to a turn. Our work clarifies the link between computations carried out at the sensory periphery and action selection underlying navigation in odor gradients. DOI: http://dx.doi.org/10.7554/eLife.06694.001
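The run-to-turn model is a generalized linear model; a minimal Bernoulli-GLM sketch with a logistic link is shown below. The weights, bias, and three-sample activity window are hypothetical, not fitted values from the study.

```python
import numpy as np

def turn_probability(osn_activity, weights, bias):
    # Bernoulli GLM with a logistic link: a linear drive over the recent
    # OSN activity history is squashed into a run-to-turn probability.
    drive = np.dot(weights, osn_activity) + bias
    return 1.0 / (1.0 + np.exp(-drive))

# Hypothetical filter over the last three activity samples; negative
# weights encode the reported effect that high OSN activity suppresses
# turning while low activity facilitates it.
w = np.array([-1.5, -0.8, -0.3])
b = 0.5

high = np.array([2.0, 1.8, 1.5])   # strong, sustained OSN response
low = np.array([0.1, 0.1, 0.0])    # weak response

print(turn_probability(high, w, b))   # low probability of switching to a turn
print(turn_probability(low, w, b))    # higher probability
```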


ACM Transactions on Sensor Networks | 2013

A low-bandwidth camera sensor platform with applications in smart camera networks

Phoebus Chen; Kirak Hong; Nikhil Naikal; Shankar Sastry; J. Doug Tygar; Posu Yan; Allen Y. Yang; Lung-Chung Chang; Leon Lin; Simon Wang; Edgar J. Lobaton; Songhwai Oh; Parvez Ahammad

Smart camera networks have recently emerged as a new class of sensor network infrastructure that is capable of supporting high-power in-network signal processing and enabling a wide range of applications. In this article, we provide an exposition of our efforts to build a low-bandwidth wireless camera network platform, called CITRIC, and its applications in smart camera networks. The platform integrates a camera, a microphone, a frequency-scalable (up to 624 MHz) CPU, 16 MB FLASH, and 64 MB RAM onto a single device. The device then connects with a standard sensor network mote to form a wireless camera mote. With reasonably low power consumption and extensive algorithmic libraries running on an operating system that is easy to program, CITRIC is well suited for research and applications in distributed image and video processing. Its in-network image processing capabilities also reduce communication requirements, which have been high in other existing camera networks with centralized processing. Furthermore, the mote easily integrates with other low-bandwidth sensor networks via the IEEE 802.15.4 protocol. To demonstrate the utility of CITRIC, we present several representative applications, with concrete research results in two areas: distributed coverage hole identification and distributed object recognition.


Multimedia Signal Processing | 2006

Compressed Domain Real-time Action Recognition

Chuohao Yeo; Parvez Ahammad; Shankar Sastry

We present a compressed domain scheme that is able to recognize and localize actions in real time. The recognition problem is posed as performing a video query on a test video sequence. Our method is based on computing motion similarity using compressed domain features which can be extracted with low complexity. We introduce a novel motion correlation measure that takes into account differences in motion magnitudes. Our method is appearance invariant, requires no prior segmentation, alignment or stabilization, and is able to localize actions in both space and time. We evaluated our method on a large action video database consisting of six actions performed by 25 people under three different scenarios. Our classification results compare favorably with existing methods at only a fraction of their computational cost.


International Journal of Computer Vision | 2011

Coding of Image Feature Descriptors for Distributed Rate-efficient Visual Correspondences

Chuohao Yeo; Parvez Ahammad; Kannan Ramchandran

Establishing visual correspondences is a critical step in many computer vision tasks involving multiple views of a scene. In a dynamic environment and when cameras are mobile, visual correspondences need to be updated on a recurring basis. At the same time, the use of wireless links between camera motes imposes tight rate constraints. This combination of issues motivates us to consider the problem of establishing visual correspondences in a distributed fashion between cameras operating under rate constraints. We propose a solution based on constructing distance-preserving hashes using binarized random projections. By exploiting the fact that descriptors of regions in correspondence are highly correlated, we propose a novel use of distributed source coding via linear codes on the binary hashes to more efficiently exchange feature descriptors for establishing correspondences across multiple camera views. A systematic approach is used to evaluate rate versus visual-correspondence retrieval performance; under a stringent matching criterion, our proposed methods demonstrate superior performance to a baseline scheme employing transform coding of descriptors.
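The distributed source coding step can be illustrated with a toy: instead of sending its full binary hash, one camera transmits only a syndrome, and the receiver decodes using its own correlated hash as side information. The Hamming(15, 11) parity-check matrix and brute-force decoder below are illustrative stand-ins, not the paper's actual code construction.

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the Hamming(15, 11) code: column j is the binary
# representation of j + 1, so all columns are distinct and nonzero.
H = np.array([[(j >> i) & 1 for j in range(1, 16)] for i in range(4)])

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=15)   # binary hash at camera A
y = x.copy()
y[6] ^= 1                         # correlated hash at camera B: one bit off

s = H @ x % 2                     # camera A transmits only these 4 bits

def decode(y, s, H, max_flips=1):
    # Find the fewest bit flips of the side information y that land in the
    # coset indexed by the received syndrome s.
    for w in range(max_flips + 1):
        for idx in combinations(range(len(y)), w):
            cand = y.copy()
            cand[list(idx)] ^= 1
            if np.array_equal(H @ cand % 2, s):
                return cand
    return None

print(np.array_equal(decode(y, s, H), x))   # A's hash recovered from 4 bits
```

Because corresponding hashes differ in only a few bits, the syndrome (4 bits here instead of 15) suffices, which is the rate saving that distributed source coding buys over sending hashes outright.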


International Conference on Computer Communications and Networks | 2008

Multi-Modal Target Tracking Using Heterogeneous Sensor Networks

Manish Kushwaha; Isaac Amundson; Péter Völgyesi; Parvez Ahammad; Gyula Simon; Xenofon D. Koutsoukos; Ákos Lédeczi; Shankar Sastry

The paper describes a target tracking system running on a heterogeneous sensor network (HSN) and presents results gathered from a realistic deployment. The system fuses audio direction of arrival data from mote class devices and object detection measurements from embedded PCs equipped with cameras. The acoustic sensor nodes perform beamforming and measure the energy as a function of the angle. The camera nodes detect moving objects and estimate their angle. The sensor detections are sent to a centralized sensor fusion node via a combination of two wireless networks. The novelty of our system is the unique combination of target tracking methods customized for the application at hand and their implementation on an actual HSN platform.


Research in Computational Molecular Biology | 2007

Comparative analysis of spatial patterns of gene expression in Drosophila melanogaster imaginal discs

Cyrus L. Harmon; Parvez Ahammad; Ann S. Hammonds; Richard Weiszmann; Susan E. Celniker; Shankar Sastry; Gerald M. Rubin

Determining the precise spatial extent of expression of genes across different tissues, along with knowledge of the biochemical function of the genes, is critical for understanding the roles of various genes in the development of metazoan organisms. To address this problem, we have developed high-throughput methods for generating images of gene expression in Drosophila melanogaster imaginal discs and for the automated analysis of these images. Our method automatically learns tissue shapes from a small number of manually segmented training examples and automatically aligns, extracts and scores new images, which are analyzed to generate gene expression maps for each gene. We have developed a reverse lookup procedure that enables us to identify genes that have spatial expression patterns most similar to a given gene of interest. Our methods enable us to cluster both the genes and the pixels of the maps, thereby identifying sets of genes that have similar patterns, and regions of the tissues of interest that have similar gene expression profiles across a large number of genes.
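The reverse-lookup procedure can be sketched as a nearest-neighbor search over per-pixel expression maps. Cosine similarity and the toy gene names and data below are assumptions for illustration, not the paper's actual metric or dataset.

```python
import numpy as np

def reverse_lookup(query_map, gene_maps, top_k=3):
    # Rank genes by how similar their expression maps are to the query map,
    # treating each aligned map as a flat vector of per-pixel scores.
    q = query_map.ravel()
    scores = {}
    for gene, m in gene_maps.items():
        v = m.ravel()
        scores[gene] = np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

rng = np.random.default_rng(1)
base = rng.random((8, 8))                        # query expression map
maps = {
    "geneA": base + 0.05 * rng.random((8, 8)),   # near-duplicate pattern
    "geneB": rng.random((8, 8)),                 # unrelated patterns
    "geneC": rng.random((8, 8)),
}
print(reverse_lookup(base, maps, top_k=1))
```

Since the maps are pre-aligned to a common tissue template, the same flattened vectors also feed directly into the gene- and pixel-clustering step the abstract describes.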

Collaboration


Dive into Parvez Ahammad's collaborations.

Top Co-Authors

Shankar Sastry (University of California)
Chuohao Yeo (University of California)
Allen Y. Yang (University of California)
Ann S. Hammonds (Lawrence Berkeley National Laboratory)
Gerald M. Rubin (Howard Hughes Medical Institute)
Phoebus Chen (University of California)
Posu Yan (University of California)