Dive into the research topics where Madirakshi Das is active.

Publications


Featured research published by Madirakshi Das.


Systems, Man and Cybernetics | 2003

Automatic face-based image grouping for albuming

Madirakshi Das; Alexander C. Loui

In this paper, a system for the automatic albuming of consumer photographs is described that uses face-based information extracted from images. The target image sets for this work are snapshots from family photo collections. The aim is to automatically provide the user with the option of selecting image groups based on the people present in them. The core components of age/gender classification and clustering based on facial similarity for this domain are discussed. Age and gender classification is based on individual facial feature measurements, which are combined into a single classifier using the AdaBoost algorithm. Face-based information is used both in locating suitable images and in the layout/design phases of the albuming process; it is combined with earlier work on event segmentation and layout design to provide a more effective system. Performance is tested on two family photo databases covering a 5-year time span.
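The AdaBoost combination step can be sketched in a few lines of boosting over decision stumps; the feature names, thresholds, and toy labels below are hypothetical illustrations, not the facial measurements used in the paper.

```python
import math

def stump_predict(x, feat, thresh, sign):
    # Weak classifier: +1/-1 depending on which side of the threshold the feature falls.
    return sign if x[feat] > thresh else -sign

def train_adaboost(samples, labels, rounds=10):
    """samples: list of feature dicts; labels: +1/-1 (e.g. adult vs. child)."""
    n = len(samples)
    w = [1.0 / n] * n                       # start with uniform sample weights
    ensemble = []
    feats = samples[0].keys()
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for feat in feats:
            for thresh in sorted({s[feat] for s in samples}):
                for sign in (1, -1):
                    err = sum(wi for wi, s, y in zip(w, samples, labels)
                              if stump_predict(s, feat, thresh, sign) != y)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # weight of this weak classifier
        ensemble.append((alpha, feat, thresh, sign))
        # Re-weight samples so that misclassified ones get more attention.
        w = [wi * math.exp(-alpha * y * stump_predict(s, feat, thresh, sign))
             for wi, s, y in zip(w, samples, labels)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    # Final classifier: sign of the weighted vote of all stumps.
    score = sum(a * stump_predict(x, f, t, s) for a, f, t, s in ensemble)
    return 1 if score >= 0 else -1
```

The single combined classifier is the weighted vote, which is what lets many weak per-feature measurements act as one age or gender decision.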


International Conference on Multimedia and Expo | 2000

Videoabstract: a hybrid approach to generate semantically meaningful video summaries

Candemir Toklu; Shih-Ping Liou; Madirakshi Das

Video summarization is a key component in providing Internet users a way to quickly browse a video clip at different levels of detail, without the need to view the entire clip. We present a hybrid approach to video summary generation that automatically processes the video, creating a multimedia video summary, while providing easy-to-use interfaces for the verification, correction, and augmentation of the automatically generated story segments and extracted multimedia content. Algorithms are developed to solve the sub-problems of story segmentation, story boundary refinement, and video summary generation. The use of automatic processing in conjunction with input from the user allows a user to produce meaningful video summaries efficiently.
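Story segmentation typically builds on low-level shot-boundary detection; a minimal color-histogram version (a generic standard technique, not necessarily the paper's exact algorithm) can be sketched as:

```python
def color_histogram(frame, bins=8):
    """frame: iterable of (r, g, b) pixels with values in 0..255."""
    hist = [0] * (bins * 3)
    for r, g, b in frame:
        hist[r * bins // 256] += 1            # red channel bins
        hist[bins + g * bins // 256] += 1     # green channel bins
        hist[2 * bins + b * bins // 256] += 1  # blue channel bins
    total = len(frame)
    return [h / total for h in hist]

def shot_boundaries(frames, threshold=0.5):
    """Flag frame indices where the L1 histogram distance to the
    previous frame exceeds the threshold (an abrupt cut)."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(frames))
            if sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i])) > threshold]
```

In a hybrid system such as the one described, boundaries found this way would be candidates that the user interface lets a person verify or correct.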


International Conference on Multimedia and Expo | 2009

Event classification in personal image collections

Madirakshi Das; Alexander C. Loui

In this paper, we investigate event classification that is specifically developed for use in consumer family photo collections. This domain is very different from news video collections that have been the focus of research in the area of scene content classification. We determine a set of broad event classes that are relevant to personal collections. We investigate the use of a variety of high-level visual and temporal features, and determine a set of features that show good correlation with the event class. We propose a Bayesian belief network for event classification that computes the a posteriori probability of the event class given the input features. The Bayes net is trained on a large set of manually annotated consumer collections. We obtain a classification accuracy of over 70% in this challenging domain.
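The posterior computation can be illustrated with a naive-Bayes simplification (the paper uses a full belief network that can model correlated features); the event classes and probability tables below are made up for illustration.

```python
def classify_event(features, priors, likelihoods):
    """Posterior P(class | features) via Bayes' rule, assuming the
    features are conditionally independent given the event class."""
    scores = {}
    for cls, prior in priors.items():
        p = prior
        for feat, value in features.items():
            p *= likelihoods[cls][feat][value]  # P(feature = value | class)
        scores[cls] = p
    z = sum(scores.values())                    # normalise over all classes
    return {cls: p / z for cls, p in scores.items()}
```

The trained Bayes net plays the role of the `priors` and `likelihoods` tables here, learned from the manually annotated collections.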


Conference on Image and Video Retrieval | 2008

Event-based location matching for consumer image collections

Madirakshi Das; Jens C. Farmer; Andrew C. Gallagher; Alexander C. Loui

The explosion in the use of digital visual media has created many challenges for the efficient search and retrieval of relevant content from large consumer photo collections. This paper proposes a novel approach to reliably retrieving photos taken at a particular location, which can also be used to narrow the search space when combined with other search dimensions such as date, event, and the people present in images. By using a novel clustering algorithm with intelligent filtering steps, consumer images with cluttered backgrounds and common objects can be matched effectively. A major contribution of the paper is a set of constraints that produce reliable, high-precision matching between two images. The other important contribution is the combination of this scene-matching technique with automatically computed temporal event clustering to provide a solution for the location clustering of events in consumer image collections. We have developed a software application to evaluate the performance of this method using actual consumer images. Experimental results show that this approach produces scene matches with high precision that can be used to aid the tagging of events with location.
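Two widely used constraints of the kind the paper advocates, a Lowe-style ratio test and a mutual nearest-neighbour check, can be sketched over raw descriptor vectors; the descriptors and threshold below are toy values, not the paper's actual feature pipeline.

```python
def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reliable_matches(desc_a, desc_b, ratio=0.8):
    """Keep only correspondences that pass both filtering constraints."""
    def best_two(d, pool):
        # Indices of the nearest and second-nearest descriptors in pool.
        ds = sorted(range(len(pool)), key=lambda j: l2(d, pool[j]))
        return ds[0], (ds[1] if len(ds) > 1 else None)

    matches = []
    for i, d in enumerate(desc_a):
        j, j2 = best_two(d, desc_b)
        # Ratio test: the best match must be clearly better than the runner-up.
        if j2 is not None and l2(d, desc_b[j]) >= ratio * l2(d, desc_b[j2]):
            continue
        # Mutual check: i must also be the nearest neighbour of j.
        back, _ = best_two(desc_b[j], desc_a)
        if back == i:
            matches.append((i, j))
    return matches
```

Filtering of this kind is what keeps precision high when consumer photos contain cluttered backgrounds and repeated common objects.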


International Conference on Multimedia and Expo | 2007

User-Assisted People Search in Consumer Image Collections

Andrew C. Gallagher; Madirakshi Das; Alexander C. Loui

In this paper, we investigate the process of searching for images of specified people in the consumer family photo domain. This domain is very different from the controlled environment of secure-access applications, which has been studied extensively and for which face recognition packages are commercially available. Instead of the typical frontal mug shot, consumer photos are more likely to show people with unconstrained pose and illumination. This domain is also unique in that there are a large number of instances of a limited number of unique individuals. We develop and test facial recognition that is specifically targeted to this domain, using facial features derived from active shape modeling of faces, followed by a combination of the features using AdaBoost. We also provide a workflow that is suitable for lay users and that rewards user input with improved performance. Test results show good performance on a challenging data set of consumer images.


International Journal of Multimedia Information Retrieval | 2012

An efficient framework for location-based scene matching in image databases

Xu Chen; Madirakshi Das; Alexander C. Loui

SIFT-based methods have been widely used for scene matching of photos taken at particular locations or places of interest. These methods are typically very time consuming due to the large number and high dimensionality of the features used, making them infeasible for use in consumer image collections containing a large number of images, where computational power is limited and a fast response is desired. Considerable computational savings can be realized if images containing signature elements of particular locations can be automatically identified from the large number of images and only these representative images used for scene matching. We propose an efficient framework incorporating a set of discriminative image features that effectively enables us to select representative images for fast location-based scene matching. These image features are used for classifying images into good or bad candidates for scene matching, using different classification approaches. Furthermore, the image features created from our framework can facilitate the process of using sub-images for location-based scene matching with SIFT features. The experimental results demonstrate the effectiveness of our approach compared with the traditional SIFT-, PCA-SIFT-, and SURF-based approaches, reducing the computational time by an order of magnitude.
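The idea of gating expensive SIFT matching behind cheap discriminative features can be sketched as a simple pre-filter; `edge_density` and `sky_fraction` are hypothetical stand-ins for the features the framework actually selects, and the thresholds are illustrative.

```python
def is_representative(features, min_edges=0.15, max_sky=0.6):
    """Cheap gate deciding whether an image is a good candidate for
    scene matching (e.g. enough structure, not mostly sky)."""
    return (features["edge_density"] >= min_edges
            and features["sky_fraction"] <= max_sky)

def select_for_matching(collection):
    """collection: list of (image_name, feature dict) pairs.
    Only the returned names would be passed to the SIFT matcher."""
    return [name for name, feats in collection if is_representative(feats)]
```

Because the per-image gate is constant time while descriptor matching is not, discarding poor candidates up front is where the order-of-magnitude saving comes from.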


ACM Multimedia | 2011

Dynamic media show drivable by semantics

Vivek K. Singh; Jiebo Luo; Dhiraj Joshi; Madirakshi Das; Phoury Lei; Peter O. Stubler

We demonstrate a system to generate dynamic media shows that are significantly richer than static slide shows, which are currently the most popular form of photo playback. The goal is to enable media-reliving experiences that are aesthetically pleasing, interactive, and semantically drivable as they center on people, locations, time, and events discovered in a media collection. Dynamic shows allow for better sharing of one's media collections in diverse social networks because people have different time availabilities and perspectives, and hence may want to interact with, customize, and reroute the media flow per their individual needs.


International Conference on Multimedia and Expo | 2010

A novel framework for fast scene matching in consumer image collections

Xu Chen; Madirakshi Das; Alexander C. Loui

The widespread utilization of digital visual media has motivated many research efforts towards efficient search and retrieval from large photo collections. Traditionally, SIFT feature-based methods have been widely used for matching photos taken at particular locations or places of interest. These methods are very time-consuming due to the complexity of the features and the large number of images typically contained in the image database being searched. In this paper, we propose a fast approach to matching images captured at particular locations or places of interest by selecting representative images from an image collection that have the best chance of being successfully matched by using SIFT, and relying on only these representative images for efficient scene matching. We present a unified framework incorporating a set of discriminative features that can effectively select the images containing signature elements of particular locations from a large number of images. The proposed approach produces an order of magnitude improvement in computational time for matching similar scenes in an image collection using SIFT features. The experimental results demonstrate the efficiency of our approach compared to the traditional SIFT, PCA-SIFT, and SURF-based approaches.


Proceedings of the 1st ACM International Workshop on Connected Multimedia | 2010

Collaborative content synchronization through an event-based framework

Madirakshi Das; Alexander C. Loui; Suprakash Datta

Web-based user-driven multimedia applications such as Facebook, Flickr, YouTube, and MySpace have gained enormous popularity in recent years and have enabled the sharing of billions of multimedia objects. Multimedia files, especially images and videos, have a natural chronological ordering based on capture date and time. However, in most cases capture information is no longer available once images have been uploaded, emailed, or edited. In this paper, we focus on the problem of adding groups of images with missing temporal information (but ordered temporally) into a primary, organized image collection. We formulate the problem of adding images to an existing collection as a discrete optimization problem in which the objective function incorporates intuitive notions of temporal ordering and similarity with the existing images. We use the event and sub-event structure of the collection to identify potential matches. Specifically, we maximize the sum of similarity scores of the added images while maintaining their temporal order in an event-based framework. We also propose a greedy algorithm for adding images with the objective of minimizing the temporal spread of the images in the merged collection. We evaluate our algorithms using consumer image collections.
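The order-preserving insertion can be phrased as a small dynamic program: assign each added image to a gap in the existing timeline so that gap indices never decrease, maximising total similarity. This is an illustrative simplification of the paper's event-based formulation, with `sim` standing in for its similarity scores.

```python
def best_insertion(sim):
    """sim[i][g]: similarity of added image i to gap g in the existing
    collection.  Choose one gap per image, non-decreasing in g (so the
    added images keep their temporal order), maximising total similarity.
    Returns (best total, gap assignment per image)."""
    n, m = len(sim), len(sim[0])
    NEG = float("-inf")
    # dp[i][g]: best total for images 0..i with image i placed in gap g
    dp = [[NEG] * m for _ in range(n)]
    back = [[0] * m for _ in range(n)]
    dp[0] = sim[0][:]
    for i in range(1, n):
        for g in range(m):
            # Image i-1 may sit in any gap up to and including g.
            prev = max(range(g + 1), key=lambda h: dp[i - 1][h])
            dp[i][g] = dp[i - 1][prev] + sim[i][g]
            back[i][g] = prev
    g = max(range(m), key=lambda h: dp[n - 1][h])
    total, assign = dp[n - 1][g], [g]
    for i in range(n - 1, 0, -1):      # walk the backpointers to recover gaps
        g = back[i][g]
        assign.append(g)
    return total, assign[::-1]
```

The greedy variant mentioned in the abstract would instead scan the gaps once, trading optimality of the objective for speed.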


Archive | 2003

Method for generating customized photo album pages and prints based on people and gender profiles

Madirakshi Das; Alexander C. Loui

Collaboration


Dive into Madirakshi Das's collaboration.

Top Co-Authors

Jiebo Luo

University of Rochester
