
Publications


Featured research published by Rajkumar Janakiraman.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

Continuous Verification Using Multimodal Biometrics

Terence Sim; Sheng Zhang; Rajkumar Janakiraman; Sandeep S. Kumar

Conventional verification systems, such as those controlling access to a secure room, do not usually require the user to reauthenticate himself for continued access to the protected resource. This may not be sufficient for high-security environments in which the protected resource needs to be continuously monitored for unauthorized use. In such cases, continuous verification is needed. In this paper, we present the theory, architecture, implementation, and performance of a multimodal biometrics verification system that continuously verifies the presence of a logged-in user. Two modalities are currently used - face and fingerprint - but our theory can be readily extended to include more modalities. We show that continuous verification imposes additional requirements on multimodal fusion when compared to conventional verification systems. We also argue that the usual performance metrics of false accept and false reject rates are insufficient yardsticks for continuous verification and propose new metrics against which we benchmark our system.
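One requirement the abstract hints at is that old verification evidence must lose weight over time. A minimal sketch of that time-decay fusion idea follows; the half-life value, the max-fusion rule, and all function names here are illustrative assumptions, not the paper's actual Bayesian formulation.

```python
def decayed_score(score, age, half_life=10.0):
    """Decay a past verification score toward zero as it ages.
    `half_life` (seconds) is an illustrative parameter, not from the paper."""
    return score * 0.5 ** (age / half_life)

def fuse(face, fingerprint, age_face, age_fp):
    """Combine the decayed evidence from both modalities.
    Taking the max lets one fresh, strong modality keep the session alive."""
    return max(decayed_score(face, age_face),
               decayed_score(fingerprint, age_fp))

fresh = fuse(0.9, 0.8, 0.0, 0.0)     # both observations just made
stale = fuse(0.9, 0.8, 30.0, 30.0)   # both observations 30 s old
assert fresh > stale                 # confidence must shrink over time
```

The max rule is only one possible fusion choice; a weighted sum or a likelihood-ratio combination would fit the same decay scaffolding.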


ACM Multimedia | 2005

Towards context-aware face recognition

Marc Davis; Michael Smith; John F. Canny; Nathan Good; Simon P. King; Rajkumar Janakiraman

In this paper, we focus on the use of context-aware, collaborative filtering, machine-learning techniques that leverage automatically sensed and inferred contextual metadata together with computer vision analysis of image content to make accurate predictions about the human subjects depicted in cameraphone photos. We apply Sparse-Factor Analysis (SFA) to both the contextual metadata gathered in the MMM2 system and the results of PCA (Principal Components Analysis) of the photo content to achieve a 60% face recognition accuracy of people depicted in our cameraphone photos, which is 40% better than media analysis alone. In short, we use context-aware media analysis to solve the face recognition problem for cameraphone photos.
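The paper's Sparse-Factor Analysis model is not reproduced here; as a rough illustration of the general idea of fusing PCA-reduced image content with sensed context features, assuming toy data and a simple nearest-mean classifier in place of SFA:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_project(X, k):
    """Project rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy data: 6 "photos" x 20 pixel features, plus 3 context features
pixels = rng.normal(size=(6, 20))
context = rng.normal(size=(6, 3))
labels = np.array([0, 0, 0, 1, 1, 1])

# Fuse appearance (PCA of pixels) with sensed context by concatenation
features = np.hstack([pca_project(pixels, 2), context])

def nearest_mean(feats, labels, query):
    """Assign the query to the class whose mean feature vector is closest."""
    means = {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}
    return min(means, key=lambda c: np.linalg.norm(query - means[c]))
```

Concatenation is the simplest fusion strategy; SFA instead models shared latent factors across the two feature sets, which is what lets context correct for poor image evidence.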


Computer Vision and Pattern Recognition | 2007

Are Digraphs Good for Free-Text Keystroke Dynamics?

Terence Sim; Rajkumar Janakiraman

Research in keystroke dynamics has largely focused on the typing patterns found in fixed text (e.g. userid and passwords). In this regard, digraphs and trigraphs have proven to be discriminative features. However, there is increasing interest in free-text keystroke dynamics, in which the user to be authenticated is free to type whatever he/she wants, rather than a pre-determined text. The natural question that arises is whether digraphs and trigraphs are just as discriminative for free text as they are for fixed text. We attempt to answer this question in this paper. We show that digraphs and trigraphs, if computed without regard to what word was typed, are no longer discriminative. Instead, word-specific digraphs/trigraphs are required. We also show that the typing dynamics for some words depend on whether they are part of a larger word. Our study is the first to investigate these issues, and we hope our work will help guide researchers looking for good features for free-text keystroke dynamics.
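A minimal sketch of how digraph latencies are computed from keystroke timestamps follows; the data format and function name are assumptions for illustration. Keying the table by word as well as by character pair would give the word-specific digraphs the paper argues are needed for free text.

```python
from collections import defaultdict

def digraph_latencies(keystrokes):
    """keystrokes: list of (char, press_time_ms) in typing order.
    Returns the mean latency for each adjacent character pair (digraph)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (a, t1), (b, t2) in zip(keystrokes, keystrokes[1:]):
        sums[a + b] += t2 - t1
        counts[a + b] += 1
    return {d: sums[d] / counts[d] for d in sums}

# "the" typed twice with slightly different rhythm
sample = [('t', 0), ('h', 90), ('e', 200),
          ('t', 1000), ('h', 1110), ('e', 1200)]
```

Note that this word-agnostic version also produces a spurious cross-word "et" digraph spanning the gap between the two typings, which is exactly the kind of pooling the paper finds non-discriminative.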


Electronic Imaging | 2006

Using Context and Similarity for Face and Location Identification

Marc Davis; Michael Smith; Fred Stentiford; Adetokunbo Bamidele; John F. Canny; Nathan Good; Simon P. King; Rajkumar Janakiraman

This paper describes a new approach to the automatic detection of human faces and places depicted in photographs taken on cameraphones. Cameraphones offer a unique opportunity to pursue new approaches to media analysis and management: namely to combine the analysis of automatically gathered contextual metadata with media content analysis to fundamentally improve image content recognition and retrieval. Current approaches to content-based image analysis are not sufficient to enable retrieval of cameraphone photos by high-level semantic concepts, such as who is in the photo or what the photo is actually depicting. In this paper, new methods for determining image similarity are combined with analysis of automatically acquired contextual metadata to substantially improve the performance of face and place recognition algorithms. For faces, we apply Sparse-Factor Analysis (SFA) to both the automatically captured contextual metadata and the results of PCA (Principal Components Analysis) of the photo content to achieve a 60% face recognition accuracy of people depicted in our database of photos, which is 40% better than media analysis alone. For location, grouping visually similar photos using a model of Cognitive Visual Attention (CVA) in conjunction with contextual metadata analysis yields a significant improvement over color histogram and CVA methods alone. We achieve an improvement in location retrieval precision from 30% precision for color histogram and CVA image analysis, to 55% precision using contextual metadata alone, to 67% precision achieved by combining contextual metadata with CVA image analysis. The combination of context and content analysis produces results that can indicate the faces and places depicted in cameraphone photos significantly better than image analysis or context analysis alone. We believe these results indicate the possibilities of a new context-aware paradigm for image analysis.


International Conference on Biometrics | 2007

Keystroke dynamics in a general setting

Rajkumar Janakiraman; Terence Sim

It is well known that Keystroke Dynamics can be used as a biometric to authenticate users, but most work to date uses fixed strings, such as a userid or password. In this paper, we study the feasibility of using Keystroke Dynamics as a biometric in a more general setting, where users go about their normal daily activities of emailing, web surfing, and so on. We design two classifiers that are appropriate for one-time and continuous authentication, respectively. We also propose a new Goodness Measure to compute the quality of a word used for Keystroke Dynamics. From our experiments we find that, surprisingly, non-English words are better suited for identification than English words.


Annual Computer Security Applications Conference | 2005

Using continuous biometric verification to protect interactive login sessions

Sandeep S. Kumar; Terence Sim; Rajkumar Janakiraman; Sheng Zhang

In this paper we describe the theory, architecture, implementation, and performance of a multimodal passive biometric verification system that continually verifies the presence/participation of a logged-in user. We assume that the user logged in using strong authentication prior to the start of the continuous verification process. While the implementation described in the paper combines digital camera-based face verification with a mouse-based fingerprint reader, the architecture is generic enough to accommodate additional biometric devices with different accuracy in distinguishing a given user from an imposter. The main thrust of our work is to build a multimodal biometric feedback mechanism into the operating system so that verification failure can automatically lock up the computer within some estimate of the time it takes to subvert the computer. This must be done with low false positives in order to realize a usable system. We show through experimental results that combining multiple suitably chosen modalities in our theoretical framework can effectively do that with currently available off-the-shelf components.


Workshop on Applications of Computer Vision | 2005

Using Continuous Face Verification to Improve Desktop Security

Rajkumar Janakiraman; Sandeep S. Kumar; Sheng Zhang; Terence Sim

In this paper we describe the architecture, implementation, and performance of a face verification system that continually verifies the presence of a logged-in user at a computer console. It maintains a sliding window of about ten seconds of verification data points and uses them as input to a Bayesian framework to compute a probability that the logged-in user is still present at the console. If the probability falls below a threshold, the system can delay or freeze operating system processes belonging to the logged-in user. This helps prevent misuse of computer resources when an unauthorized user maliciously takes the place of an authorized user. Processes may be unconditionally frozen (they never return from a system call) or delayed (it takes longer to complete a system call), or appropriate action may be taken for certain classes of system calls, such as those that are considered security critical. We believe that the integrated system presented here is the first of its kind. Furthermore, we believe that the analysis of the tradeoffs between verification accuracy, processor overhead, and system security that we present in this paper has not been done elsewhere.
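The sliding-window-plus-threshold mechanism described above can be sketched as follows; the window length, threshold, class name, and the simple averaging rule are illustrative stand-ins for the paper's Bayesian computation, not its actual model.

```python
from collections import deque

WINDOW = 10.0      # seconds of evidence to keep (illustrative)
THRESHOLD = 0.5    # freeze processes below this presence probability

class PresenceMonitor:
    """Sliding window of face-verification scores; a toy stand-in for the
    paper's Bayesian framework, which is not reproduced here."""
    def __init__(self):
        self.window = deque()          # (timestamp, score in [0, 1])

    def observe(self, t, score):
        self.window.append((t, score))
        while self.window and t - self.window[0][0] > WINDOW:
            self.window.popleft()      # drop evidence older than WINDOW

    def presence_probability(self):
        if not self.window:
            return 0.0                 # no recent evidence: assume absent
        return sum(s for _, s in self.window) / len(self.window)

    def should_freeze(self):
        return self.presence_probability() < THRESHOLD
```

The key design point survives the simplification: stale evidence ages out of the window, so a user who walks away is locked out even if their last verification score was high.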


Workshop on Applications of Computer Vision | 2007

VIM: Vision for Interactive Music

Terence Sim; Dennis Zhaowen Ng; Rajkumar Janakiraman

Traditionally, people were either producers of entertainment media or consumers of it. Today's digital entertainment, however, provides for a new dimension: interactivity. Instead of passive enjoyment, consumers can now control some elements of the media that were previously solely determined by the producer. This interactivity appears to enhance enjoyment. In this paper, we present a vision-based, interactive music playback system which allows anyone, even untrained musicians, to conduct music. The goal is to allow the user to dynamically influence how music is played back, much like a real conductor would. The tempo and volume of the music playback are controlled by the user's movements. In addition, our system projects colorful patterns that respond to the user, making the interaction truly multimedia.
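One hypothetical mapping from sensed movement to playback parameters, in the spirit of the abstract; the input normalization, BPM range, and function name are assumptions, not taken from the paper.

```python
def playback_params(motion_speed, motion_height):
    """Map hand-motion speed to tempo and hand height to volume.
    Both inputs are assumed normalized to [0, 1] by the vision front end."""
    tempo_bpm = 60 + motion_speed * 120          # faster conducting: 60-180 BPM
    volume = max(0.0, min(1.0, motion_height))   # higher hand: louder playback
    return tempo_bpm, volume
```

A real system would smooth these values over several frames so that tracking jitter does not produce audible tempo wobble.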


Lecture Notes in Computer Science | 2006

Continuous verification using multimodal biometrics

Sheng Zhang; Rajkumar Janakiraman; Terence Sim; Sandeep S. Kumar

Collaboration


Dive into Rajkumar Janakiraman's collaborations.

Top Co-Authors

Terence Sim (National University of Singapore)
Sheng Zhang (National University of Singapore)
John F. Canny (University of California)
Marc Davis (University of California)
Nathan Good (University of California)
Dennis Zhaowen Ng (National University of Singapore)
Fred Stentiford (University College London)