Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Raja Bala is active.

Publication


Featured research published by Raja Bala.


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Adaptive Sparse Representations for Video Anomaly Detection

Xuan Mo; Vishal Monga; Raja Bala; Zhigang Fan

Video anomaly detection can be used in the transportation domain to identify unusual patterns such as traffic violations, accidents, unsafe driver behavior, street crime, and other suspicious activities. A common class of approaches relies on object tracking and trajectory analysis. Very recently, sparse reconstruction techniques have been employed in video anomaly detection. The fundamental underlying assumption of these methods is that any new feature representation of a normal/anomalous event can be approximately modeled as a (sparse) linear combination of prelabeled feature representations (of previously observed events) in a training dictionary. Sparsity can be a powerful prior on model coefficients, but challenges remain in the detection of anomalies involving multiple objects and in the ability of the linear sparsity model to effectively allow for class separation. The proposed research addresses both these issues. First, we develop a new joint sparsity model for anomaly detection that enables the detection of joint anomalies involving multiple objects. This extension is highly nontrivial since it leads to a new simultaneous sparsity problem that we solve using a greedy pursuit technique. Second, we introduce nonlinearity into, that is, kernelize, the linear sparsity model to enable superior class separability and hence anomaly detection. We test extensively on several real-world video datasets involving both single- and multiple-object anomalies. Results show marked improvements in the detection of anomalies in both supervised and unsupervised scenarios when using the proposed sparsity models.
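To make the sparse-reconstruction idea concrete, here is a minimal sketch (not the paper's joint or kernelized model): a test feature is reconstructed as a sparse combination of dictionary atoms built from prelabeled training events, and a large reconstruction residual signals an anomaly. The dictionary contents, feature dimensionality, sparsity level, and threshold below are illustrative assumptions.

```python
# Minimal sketch of reconstruction-based anomaly scoring with a sparse prior.
# Not the paper's joint/kernelized model; dictionary, sparsity level and
# threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Dictionary of prelabeled "normal event" feature vectors, one atom per column.
d, n_atoms = 64, 200                      # feature dimension, number of training events
D = rng.normal(size=(d, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

def anomaly_score(y, dictionary, n_nonzero=5):
    """Sparse reconstruction residual of feature y w.r.t. the training dictionary."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, y)                # solves y ~= D @ coef with a sparsity constraint
    residual = y - dictionary @ omp.coef_
    return np.linalg.norm(residual) / np.linalg.norm(y)

# A "normal" feature lies near the span of a few atoms; an "anomalous" one does not.
normal = D[:, :3] @ np.array([0.6, 0.3, 0.1])
anomalous = rng.normal(size=d)

threshold = 0.5                           # assumed decision threshold
for name, y in [("normal", normal), ("anomalous", anomalous)]:
    s = anomaly_score(y, D)
    print(f"{name}: score={s:.2f}, anomaly={s > threshold}")
```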


Color Imaging Conference | 2003

Color-to-grayscale conversion to maintain discriminability

Raja Bala; Karen M. Braun

Monochrome devices that receive color imagery must perform a conversion from color to grayscale. The most common approach is to calculate the luminance signal from the three color signals. The problem with this approach is that the distinction between two colors of similar luminance (but different hue) is lost. This can be a significant problem when rendering colors within graphical objects such as pie charts and bar charts, which are often chosen for maximum discriminability. This paper proposes a method of converting color business graphics to grayscale in a manner that preserves discriminability. Colors are first sorted according to their original lightness values. They are then spaced equally in gray, or spaced according to their 3-D color difference from colors adjacent to them along the lightness dimension. This is most useful when maximum differentiability is desired in images containing a small number of colors, such as pie charts and bar graphs. Subjective experiments indicate that the proposed algorithms outperform standard color-to-grayscale conversions.
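A simplified sketch of the second variant (spacing grays according to the 3-D color differences between lightness-adjacent colors) might look as follows; the CIE76 delta-E formula and the output gray range are assumptions, not the paper's exact formulation.

```python
# Sketch of discriminability-preserving color-to-gray mapping for a small palette.
# Colors are given in CIELAB; grays are spaced by cumulative color difference
# between lightness-adjacent colors (CIE76 Delta-E is an assumed simplification).
import numpy as np

def palette_to_gray(lab_colors, gray_min=30.0, gray_max=95.0):
    lab = np.asarray(lab_colors, dtype=float)
    order = np.argsort(lab[:, 0])                 # sort by original lightness L*
    sorted_lab = lab[order]

    # Cumulative 3-D color difference along the lightness-sorted sequence.
    deltas = np.linalg.norm(np.diff(sorted_lab, axis=0), axis=1)
    cumulative = np.concatenate([[0.0], np.cumsum(deltas)])
    if cumulative[-1] > 0:
        cumulative /= cumulative[-1]              # normalize to [0, 1]

    grays_sorted = gray_min + cumulative * (gray_max - gray_min)
    grays = np.empty(len(lab))
    grays[order] = grays_sorted                   # undo the sort
    return grays                                  # one gray lightness per input color

# Example: pie-chart colors of similar luminance but different hue.
palette = [(55, 60, 40), (56, -50, 45), (54, 10, -60), (80, 0, 5)]
print(palette_to_gray(palette))
```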


Computer Vision and Pattern Recognition | 2014

Estimating Gaze Direction of Vehicle Drivers Using a Smartphone Camera

Meng-Che Chuang; Raja Bala; Edgar A. Bernal; Peter Paul; Aaron Michael Burry

Many automated driver monitoring technologies have been proposed to enhance vehicle and road safety. Most existing solutions involve the use of specialized embedded hardware, primarily in high-end automobiles. This paper explores driver assistance methods that can be implemented on mobile devices such as a consumer smartphone, thus offering a level of safety enhancement that is more widely accessible. Specifically, the paper focuses on estimating driver gaze direction as an indicator of driver attention. Input video frames from a smartphone camera facing the driver are first processed to estimate a coarse head pose direction. Next, the locations and scales of face parts, namely the mouth, eyes, and nose, define a feature descriptor that is supplied to an SVM gaze classifier, which outputs one of eight common driver gaze directions. A key novel aspect is an in-situ approach for gathering training data that improves generalization performance across drivers, vehicles, smartphones, and capture geometry. Experimental results show that high accuracy in gaze direction estimation is achieved for four scenarios with different drivers, vehicles, smartphones, and camera locations.
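The classification stage could be sketched as below: a feature vector built from face-part locations and scales (normalized by the face bounding box) feeds an SVM that outputs one of eight gaze zones. Random placeholder features stand in for real face-part detections, and the feature layout, zone names, and kernel choice are assumptions, not the paper's configuration.

```python
# Sketch of the gaze classification stage: face-part geometry -> SVM -> gaze zone.
# Random placeholder features stand in for real face-part detections; the feature
# layout, zone names, and kernel choice are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC

GAZE_ZONES = ["road", "left mirror", "right mirror", "rear mirror",
              "instrument panel", "center console", "lap", "passenger"]

def part_descriptor(parts, face_box):
    """Concatenate (x, y, scale) of eyes, nose, mouth, normalized by the face box."""
    fx, fy, fw, fh = face_box
    feats = []
    for name in ("left_eye", "right_eye", "nose", "mouth"):
        x, y, s = parts[name]
        feats.extend([(x - fx) / fw, (y - fy) / fh, s / fw])
    return np.array(feats)

# Placeholder training set: 200 random 12-D descriptors with random zone labels.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 12))
y_train = rng.integers(0, len(GAZE_ZONES), size=200)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

# Predict a gaze zone for one (made-up) frame's face-part detections.
frame_parts = {"left_eye": (120, 90, 22), "right_eye": (170, 92, 22),
               "nose": (145, 120, 30), "mouth": (145, 150, 40)}
x = part_descriptor(frame_parts, face_box=(90, 60, 120, 140))
print("predicted zone:", GAZE_ZONES[int(clf.predict(x[None, :])[0])])
```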


IEEE Transactions on Image Processing | 2005

Two-dimensional transforms for device color correction and calibration

Raja Bala; Gaurav Sharma; Vishal Monga; J.-P. Van de Capelle

Color device calibration is traditionally performed using one-dimensional (1-D) per-channel tone-response corrections (TRCs). While 1-D TRCs are attractive in view of their low implementation complexity and efficient real-time processing of color images, their use severely restricts the degree of control that can be exercised along various device axes. A typical example is that per-separation (or per-channel) TRCs in a printer can be used either to ensure gray balance along the C=M=Y axis or to provide a linear response in delta-E units along each of the individual (C, M, and Y) axes, but not both. This paper proposes a novel two-dimensional color correction architecture that enables much greater control over the device color gamut with a modest increase in implementation cost. Results show significant improvement in calibration accuracy and stability when compared to traditional 1-D calibration. Superior cost-quality tradeoffs (over 1-D methods) are also achieved for emulation of one color device on another.
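The difference between 1-D and 2-D corrections can be illustrated as table lookups; the node counts, the (C, M) channel pairing, and the correction surfaces below are toy assumptions, not the paper's architecture.

```python
# Sketch contrasting a 1-D per-channel TRC with a 2-D correction LUT.
# Node counts, the (C, M) channel pairing, and the correction values are
# illustrative assumptions, not the paper's architecture.
import numpy as np

# 1-D TRC: each channel is corrected independently via a 1-D lookup table.
nodes = np.linspace(0.0, 1.0, 17)                  # 17-node TRC
trc_c = nodes ** 0.9                               # example per-channel curve

def apply_trc(channel, trc):
    return np.interp(channel, nodes, trc)          # 1-D interpolation

# 2-D correction: the output for one channel depends on a pair of inputs,
# which allows e.g. gray balance and per-axis linearization simultaneously.
grid = np.linspace(0.0, 1.0, 9)
C, M = np.meshgrid(grid, grid, indexing="ij")
lut_c = C ** 0.9 + 0.05 * C * (M - C)              # toy 2-D correction surface

def apply_2d_lut(c, m, lut):
    """Bilinear interpolation into a 2-D LUT defined on a uniform grid."""
    n = lut.shape[0] - 1
    ci, mi = np.clip(c, 0, 1) * n, np.clip(m, 0, 1) * n
    i0, j0 = np.floor(ci).astype(int), np.floor(mi).astype(int)
    i1, j1 = np.minimum(i0 + 1, n), np.minimum(j0 + 1, n)
    fi, fj = ci - i0, mi - j0
    return ((1 - fi) * (1 - fj) * lut[i0, j0] + fi * (1 - fj) * lut[i1, j0]
            + (1 - fi) * fj * lut[i0, j1] + fi * fj * lut[i1, j1])

c_in, m_in = 0.4, 0.4
print("1-D TRC:", apply_trc(c_in, trc_c))
print("2-D LUT:", apply_2d_lut(c_in, m_in, lut_c))
```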


Journal of Electronic Imaging | 2013

Computer vision in roadway transportation systems: a survey

Robert P. Loce; Edgar A. Bernal; Wencheng Wu; Raja Bala

There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.


International Conference on Image Processing | 2002

Detection and segmentation of sweeps in color graphics images

Salil Prabhakar; Hui Cheng; Raja Bala; John C. Handley; Ying-wei Lin

Business graphics are an important class of digital imagery. Such images are computer-generated, and comprise synthetic elements such as solid fills, line art, and color sweeps. Often these images are first printed and then scanned for further electronic reuse. The printing and scanning process destroys the synthetic structure of a graphics image, and furthermore introduces distortions due to halftoning and other forms of printer and scanner noise. Subsequent reproductions usually amplify these distortions thus resulting in rapid degradation of image quality. It would thus be desirable to detect and reconstruct the original synthetic structure from the scanned image. This paper presents an effort in this direction, namely a method to detect color sweeps in scanned images. Once detected, the synthetic signature of the sweep is derived, namely its starting and ending color. This information can be used to optimize subsequent image processing operations such as rendering to an output device, or image compression. This work represents a novel application of known image processing techniques to extract semantic information from graphics images.
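One plausible way to derive the "synthetic signature" of a detected sweep region is sketched below: project the region's pixel colors onto their principal axis in RGB space and take the extremes as the starting and ending colors. This is an assumed simplification for illustration, not the detection and segmentation algorithm of the paper.

```python
# Sketch: derive a linear sweep's "synthetic signature" (start and end colors)
# from noisy pixels of a detected sweep region, by projecting the RGB values
# onto their principal axis. An assumed simplification, not the paper's method.
import numpy as np

def sweep_signature(pixels_rgb):
    """pixels_rgb: (N, 3) array of RGB values from a candidate sweep region."""
    X = np.asarray(pixels_rgb, dtype=float)
    mean = X.mean(axis=0)
    # Principal color axis via SVD of the mean-centered samples.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    axis = vt[0]
    t = (X - mean) @ axis                      # 1-D coordinate along the sweep
    # Robust extremes (percentiles) to resist halftone/scanner noise.
    t_lo, t_hi = np.percentile(t, [1, 99])
    start = np.clip(mean + t_lo * axis, 0, 255)
    end = np.clip(mean + t_hi * axis, 0, 255)
    return start, end

# Example: a noisy scan of a sweep from dark blue to light orange.
rng = np.random.default_rng(2)
alpha = rng.uniform(size=(5000, 1))
clean = (1 - alpha) * np.array([20, 30, 120]) + alpha * np.array([250, 180, 60])
scanned = clean + rng.normal(scale=8.0, size=clean.shape)
print(sweep_signature(scanned))
```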


Proceedings of SPIE | 2012

Image simulation for automatic license plate recognition

Raja Bala; Yonghui Zhao; Aaron Michael Burry; Vladimir Kozitsky; Claude S. Fillion; Craig Saunders; Jose A. Rodriguez-Serrano

Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
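The distortion-modeling step might be sketched as below: a clean synthetic plate rendering is perspective-warped, blurred, and corrupted with sensor noise. The stand-in plate image and the specific parameter values are illustrative assumptions, not the distortion estimates measured from real plates in the paper.

```python
# Sketch of the distortion-modeling step: take a clean synthetic plate rendering
# and apply capture-like degradations (perspective, blur, noise). The parameter
# values are illustrative, not the estimates measured from real plates.
import cv2
import numpy as np

def degrade_plate(plate, tilt_px=12, blur_sigma=1.5, noise_sigma=6.0, seed=0):
    h, w = plate.shape[:2]
    rng = np.random.default_rng(seed)

    # Mild perspective distortion: shift the top corners inward.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[tilt_px, 3], [w - tilt_px, 0], [w, h], [0, h - 2]])
    M = cv2.getPerspectiveTransform(src, dst)
    out = cv2.warpPerspective(plate, M, (w, h), borderValue=128)

    # Optical blur followed by additive sensor noise.
    out = cv2.GaussianBlur(out, (0, 0), blur_sigma)
    noise = rng.normal(scale=noise_sigma, size=out.shape)
    return np.clip(out.astype(float) + noise, 0, 255).astype(np.uint8)

# Stand-in "synthetic plate": white background with dark character-like bars.
plate = np.full((60, 220), 255, np.uint8)
for x in range(20, 200, 30):
    cv2.rectangle(plate, (x, 15), (x + 16, 45), 0, thickness=-1)

degraded = degrade_plate(plate)
print(degraded.shape, degraded.dtype)
```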


IEEE Signal Processing Magazine | 2005

System optimization in digital color imaging

Raja Bala; Gaurav Sharma

The work presents an overview of a color imaging system and its elements, and then highlights techniques in the literature that attempt to account for system interactions for improved quality or performance. It then presents, in greater detail, two specific examples of approaches that take into account interactions between elements that are normally treated independently. Finally, concluding remarks are given.


Computer Vision and Pattern Recognition | 2013

Mobile Video Capture of Multi-page Documents

Jayant Kumar; Raja Bala; Hengzhou Ding; Phillip J. Emmett

This paper presents a mobile application for capturing images of printed multi-page documents with a smartphone camera. With today's available document capture applications, the user has to carefully capture individual photographs of each page and assemble them into a document, leading to a cumbersome and time-consuming user experience. We propose a novel approach of using video to capture multi-page documents. Our algorithm automatically selects the best still images corresponding to individual pages of the document from the video. The technique combines video motion analysis, inertial sensor signals, and an image quality (IQ) prediction technique to select the best page images from the video. For the latter, we extend a previous no-reference IQ prediction algorithm to suit the needs of our video application. The algorithm has been implemented on an iPhone 4S. Individual pages are successfully extracted for a wide variety of multi-page documents. OCR analysis shows that the quality of document images produced by our app is comparable to that of standard still captures. At the same time, user studies confirm that in the majority of trials, video capture provides an experience that is faster and more convenient than multiple still captures.
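A simplified stand-in for the frame-selection logic is sketched below: frames are considered only when inter-frame motion is low, and the sharpest is kept using a cheap no-reference quality proxy (variance of the Laplacian). The real application also uses inertial sensor signals and a learned IQ predictor, so both the scoring function and the thresholds here are assumptions.

```python
# Simplified sketch of selecting the best page image from a capture video:
# keep frames with low inter-frame motion and pick the sharpest one.
# The paper combines motion analysis, inertial sensors, and a learned
# no-reference IQ predictor; variance-of-Laplacian is an assumed stand-in.
import cv2
import numpy as np

def best_page_frame(frames, motion_thresh=4.0):
    """frames: iterable of BGR images from one page's segment of the video."""
    best, best_score, prev_gray = None, -1.0, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            motion = cv2.absdiff(gray, prev_gray).mean()   # crude motion estimate
            if motion > motion_thresh:                     # skip frames captured during motion
                prev_gray = gray
                continue
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # no-reference IQ proxy
        if sharpness > best_score:
            best, best_score = frame, sharpness
        prev_gray = gray
    return best, best_score

# Example with synthetic frames (random noise stands in for real video).
rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, (240, 320, 3), dtype=np.uint8) for _ in range(5)]
frame, score = best_page_frame(frames)
print("selected frame sharpness:", round(score, 1))
```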


International Conference on Computer Vision | 2012

Data-driven vehicle identification by image matching

Jose A. Rodriguez-Serrano; Harsimrat Sandhawalia; Raja Bala; Florent Perronnin; Craig Saunders

Vehicle identification from images has been predominantly addressed through automatic license plate recognition (ALPR) techniques, which detect and recognize the characters in the plate region of the image. We move away from traditional ALPR techniques and advocate a data-driven approach to vehicle identification. Here, given a plate image region, the idea is to search for a near-duplicate image in an annotated database; if found, the identity of the near-duplicate is transferred to the input region. Although this approach could be perceived as impractical, we demonstrate that it is feasible with state-of-the-art image representations, and that it presents some advantages in terms of speed and time-to-deploy. To overcome the issue of identifying previously unseen identities, we propose an image simulation approach in which photo-realistic images of license plates are generated for desired plate numbers. We demonstrate that there is no perceivable performance difference between using synthetic and real plates. We also improve matching accuracy using similarity learning, in the spirit of domain adaptation.
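The identification-by-matching idea can be sketched as nearest-neighbor search over plate-region descriptors, as below. Tiny downsampled-pixel descriptors and toy labels stand in for the state-of-the-art image representations and learned similarity used in the paper; the similarity threshold is likewise an assumption.

```python
# Sketch of identification by near-duplicate matching: embed the query plate
# region, find its nearest neighbor in an annotated database, and transfer the
# label. Downsampled-pixel descriptors stand in for the state-of-the-art
# representations and learned similarity used in the paper.
import cv2
import numpy as np

def descriptor(plate_img, size=(32, 16)):
    """L2-normalized vector of downsampled grayscale pixels (illustrative only)."""
    small = cv2.resize(plate_img, size, interpolation=cv2.INTER_AREA)
    v = small.astype(np.float32).ravel()
    v -= v.mean()
    return v / (np.linalg.norm(v) + 1e-8)

def identify(query_img, db_descriptors, db_labels, min_similarity=0.8):
    q = descriptor(query_img)
    sims = db_descriptors @ q                 # cosine similarity (rows are unit-norm)
    best = int(np.argmax(sims))
    if sims[best] < min_similarity:
        return None, sims[best]               # no near-duplicate: unseen identity
    return db_labels[best], sims[best]        # transfer the neighbor's identity

# Toy annotated database of plate-region crops (random stand-ins, made-up labels).
rng = np.random.default_rng(4)
db_imgs = [rng.integers(0, 256, (40, 120), dtype=np.uint8) for _ in range(3)]
db_labels = ["ABC1234", "XYZ9876", "JKL5555"]
db_descriptors = np.stack([descriptor(im) for im in db_imgs])

# Query: a slightly noisier capture of the second plate.
query = np.clip(db_imgs[1].astype(int) + rng.integers(-10, 10, (40, 120)),
                0, 255).astype(np.uint8)
print(identify(query, db_descriptors, db_labels))
```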

Collaboration


Dive into Raja Bala's collaborations.
