Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Eakta Jain is active.

Publication


Featured research published by Eakta Jain.


ACM Symposium on Applied Perception | 2012

Inferring artistic intention in comic art through viewer gaze

Eakta Jain; Yaser Sheikh; Jessica K. Hodgins

Comics are a compelling, though complex, visual storytelling medium. Researchers are interested in the process of comic art creation so that they can automatically tell new stories and, for example, summarize videos or catalog large collections of photographs. A primary organizing principle used by artists to lay out the components of comic art (panels, word bubbles, objects inside each panel) is to lead the viewer's attention along a deliberate visual route that reveals the narrative. If artists are successful in leading viewer attention, then their intended visual route should be accessible through recorded viewer attention, i.e., eyetracking data. In this paper, we conduct an experiment to verify whether artists are successful in their goal of leading viewer gaze. We eyetrack viewers on images taken from comic books, as well as photographs taken by experts, amateur photographers, and a robot. Our data analyses show that there is increased consistency in viewer gaze for comic pictures versus photographs taken by a robot and by amateur photographers, thus confirming that comic artists do indeed direct the flow of viewer attention.
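
The consistency measure itself is not spelled out in this summary. A minimal sketch of one plausible score, assuming a hypothetical array holding one fixation per viewer (an illustration, not the paper's analysis): the mean pairwise distance between viewers' fixations, where lower values mean the artist funneled everyone's gaze to the same place.

```python
import numpy as np

def gaze_consistency(fixations):
    """Mean pairwise distance between viewers' fixations.

    Hypothetical stand-in for the paper's consistency analysis.
    fixations: (n_viewers, 2) array, one (x, y) fixation per viewer
    for the same image and time window. Lower output = more consistent.
    """
    n = len(fixations)
    dists = [np.linalg.norm(fixations[i] - fixations[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))
```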


ACM Transactions on Graphics | 2015

Gaze-Driven Video Re-Editing

Eakta Jain; Yaser Sheikh; Ariel Shamir; Jessica K. Hodgins

Given the current profusion of devices for viewing media, video content created at one aspect ratio is often viewed on displays with different aspect ratios. Many previous solutions address this problem by retargeting or resizing the video, but a more general solution would re-edit the video for the new display. Our method employs the three primary editing operations: pan, cut, and zoom. We let viewers implicitly reveal what is important in a video by tracking their gaze as they watch the video. We present an algorithm that optimizes the path of a cropping window based on the collected eyetracking data, finds places to cut, and computes the size of the cropping window. We present results on a variety of video clips, including close-up and distant shots, and stationary and moving cameras. We conduct two experiments to evaluate our results. First, we eyetrack viewers on the result videos generated by our algorithm, and second, we perform a subjective assessment of viewer preference. These experiments show that viewer gaze patterns are similar on our result videos and on the original video clips, and that viewers prefer our results to an optimized crop-and-warp algorithm.
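
The paper's optimization over pan, cut, and zoom is only summarized above. A minimal sketch of the central step, assuming per-frame gaze centroids are available and using a simple moving average in place of the paper's optimizer (the function name and parameters here are hypothetical):

```python
import numpy as np

def crop_path(gaze_xy, frame_w, frame_h, target_ar, smooth=15):
    """Gaze-driven cropping-window path (hedged sketch).

    gaze_xy: (n_frames, 2) mean gaze position per frame.
    target_ar: width/height ratio of the output display.
    Returns per-frame (x0, y0, w, h) crop rectangles.
    """
    h = frame_h                       # use the full frame height
    w = min(frame_w, h * target_ar)   # width set by the target aspect ratio
    # Smooth the gaze track with a moving average to suppress jitter.
    kernel = np.ones(smooth) / smooth
    cx = np.convolve(gaze_xy[:, 0], kernel, mode="same")
    cy = np.convolve(gaze_xy[:, 1], kernel, mode="same")
    # Clamp so the window stays inside the frame.
    x0 = np.clip(cx - w / 2, 0, frame_w - w)
    y0 = np.clip(cy - h / 2, 0, frame_h - h)
    return np.stack([x0, y0, np.full_like(x0, w), np.full_like(y0, h)], axis=1)
```

The actual algorithm also decides where to cut and how large the window should be; this sketch fixes the window size and only follows the smoothed gaze centroid.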


ACM Transactions on Graphics | 2012

Three-dimensional proxies for hand-drawn characters

Eakta Jain; Yaser Sheikh; Moshe Mahler; Jessica K. Hodgins

Drawing shapes by hand and manipulating computer-generated objects are the two dominant forms of animation. Though each medium has its own advantages, the techniques developed for one medium are not easily leveraged in the other because hand animation is two-dimensional, and inferring the third dimension is mathematically ambiguous. A second challenge is that the character is a consistent three-dimensional (3D) object in computer animation, while hand animators introduce geometric inconsistencies in the two-dimensional (2D) shapes to better convey a character's emotional state and personality. In this work, we identify 3D proxies to connect hand-drawn animation and 3D computer animation. We present an integrated approach to generate three levels of 3D proxies: single points, polygonal shapes, and a full joint hierarchy. We demonstrate how this approach enables one medium to take advantage of techniques developed for the other; for example, 3D physical simulation is used to create clothes for a hand-animated character, and a traditionally trained animator is able to influence the performance of a 3D character while drawing with paper and pencil.


IEEE Computer Graphics and Applications | 2016

Predicting Moves-on-Stills for Comic Art Using Viewer Gaze Data

Eakta Jain; Yaser Sheikh; Jessica K. Hodgins

Comic art consists of a sequence of panels of different shapes and sizes that visually communicate the narrative to the reader. The move-on-stills technique allows such still images to be retargeted for digital displays via camera moves. Today, moves-on-stills can be created by software applications given user-provided parameters for each desired camera move. The proposed algorithm uses viewer gaze as input to computationally predict camera move parameters. The authors demonstrate their algorithm on various comic book panels and evaluate its performance by comparing their results with a professional DVD.
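
As a hedged illustration of gaze-derived camera-move parameters (the authors' estimation procedure is more involved than this), a linear pan could be anchored at the centroids of early and late fixations; the function and data layout below are assumptions:

```python
import numpy as np

def camera_move_from_gaze(gaze, t, window=0.5):
    """Hypothetical sketch: derive pan endpoints from gaze timing.

    gaze: (n, 2) gaze points on the panel; t: (n,) timestamps in seconds.
    Returns (start_center, end_center) for a linear camera pan, taken
    as the centroids of the early and late fixations respectively.
    """
    early = gaze[t <= t.min() + window]
    late = gaze[t >= t.max() - window]
    return early.mean(axis=0), late.mean(axis=0)
```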


Symposium on Computer Animation | 2010

Augmenting hand animation with three-dimensional secondary motion

Eakta Jain; Yaser Sheikh; Moshe Mahler; Jessica K. Hodgins

Secondary motion, or the motion of objects in response to that of the primary character, is widely used to amplify the audience's response to the character's motion and to provide a connection to the environment. These three-dimensional (3D) effects are largely passive and tend to be time-consuming to animate by hand, yet most are very effectively simulated in current animation software. In this paper, we present a technique for augmenting hand-drawn animation of human characters with 3D physical effects to create secondary motion. In particular, we create animations in which hand-drawn characters interact with cloth and clothing, dynamically simulated balls and particles, and a simple fluid simulation. The driving points or volumes for the secondary motion are tracked in two dimensions, reconstructed into three dimensions, and used to drive and collide with the simulated objects. Our technique employs user interaction that can be reasonably integrated into the traditional animation pipeline of drawing, cleanup, inbetweening, and coloring.
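
The tracking and reconstruction pipeline is only summarized above, but the flavor of secondary motion itself is easy to sketch: a damped spring mass trailing a driving point lags and overshoots the way cloth or hair would. The driver trajectory and spring constants below are hypothetical:

```python
import numpy as np

def simulate_secondary(driver, k=40.0, c=4.0, m=1.0, dt=1/24):
    """Toy secondary motion: a damped spring mass trailing a driver.

    driver: (n_frames, 3) positions of a reconstructed driving point
    (e.g. a tracked wrist). Returns the simulated follower positions,
    integrated with semi-implicit Euler at animation frame rate.
    """
    x = driver[0].copy()
    v = np.zeros(3)
    out = []
    for p in driver:
        a = (k * (p - x) - c * v) / m   # spring toward driver + damping
        v += a * dt
        x += v * dt
        out.append(x.copy())
    return np.array(out)
```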


ACM Transactions on Applied Perception | 2016

Is the Motion of a Child Perceivably Different from the Motion of an Adult?

Eakta Jain; Lisa Anthony; Amanda Castonguay; Isabella Cuba; Alex Shaw; Julia Woodward

Artists and animators have observed that children's movements are quite different from those of adults performing the same action. Previous computer graphics research on human motion has primarily focused on adult motion. There are open questions as to how different child motion actually is, and whether the differences will actually impact animation and interaction. We report the first explicit study of the perception of child motion (ages 5 to 9 years old), compared to analogous adult motion. We used markerless motion capture to collect an exploratory corpus of child and adult motion, and conducted a perceptual study with point-light displays to discover whether naive viewers could identify a motion as belonging to a child or an adult. We find that people are generally successful at this task. This work has implications for creating more engaging and realistic avatars for games, online social media, and animated videos and movies.


IEEE International Conference on Multimedia and Expo | 2016

A preliminary benchmark of four saliency algorithms on comic art

Khimya Khetarpal; Eakta Jain

Predicting the salient regions of a comic book panel has the potential to drive a variety of applications, such as segmentation, cropping, and effects such as moves-on-stills. Computational saliency algorithms have been widely tested on a variety of natural images and extensively benchmarked. We report the performance of four saliency algorithms on a set of comic panels taken from public-domain legacy comics. We find that a data-driven method performs best on two metrics, Normalized Scanpath Saliency and Area Under the Curve. We discuss possible reasons for this finding based on an exploratory analysis of the similarity between the comic images in our dataset and the images used in the dataset of the data-driven method.
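
Of the two metrics named, Normalized Scanpath Saliency has a compact standard definition: the mean of the z-scored saliency map sampled at human fixation locations. A minimal version (array shapes assumed):

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency (NSS).

    saliency_map: 2D array of predicted saliency.
    fixations: iterable of (row, col) pixel coordinates of human
    fixations. NSS z-scores the map, then averages it at fixations;
    higher values mean fixations landed on predicted-salient pixels.
    """
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[r, c] for r, c in fixations]))
```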


ACM Symposium on Applied Perception | 2016

Decoupling light reflex from pupillary dilation to measure emotional arousal in videos

Pallavi Raiturkar; Andrea Kleinsmith; Andreas Keil; Arunava Banerjee; Eakta Jain

Predicting the exciting portions of a video is a widely relevant problem because of applications such as video summarization, searching for similar videos, and recommending videos to users. Researchers have proposed the use of physiological indices such as pupillary dilation as a measure of emotional arousal. The key problem with using the pupil to measure emotional arousal is accounting for pupillary response to brightness changes. We propose a linear model of pupillary light reflex to predict the pupil diameter of a viewer based only on incident light intensity. The residual between the measured pupillary diameter and the model prediction is attributed to the emotional arousal corresponding to that scene. We evaluate the effectiveness of this method of factoring out pupillary light reflex for the particular application of video summarization. The residual is converted into an exciting-ness score for each frame of a video. We show results on a variety of videos, and compare against ground truth as reported by three independent coders.
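
A minimal sketch of the decoupling idea, assuming the light-reflex model is a simple linear fit of pupil diameter against incident intensity (the paper's model may be richer, e.g. fitted per viewer); the residual is then read as the arousal signal:

```python
import numpy as np

def arousal_residual(intensity, pupil):
    """Factor out a linear pupillary light reflex (hedged sketch).

    intensity: (n,) incident light intensity per frame (e.g. mean
    frame luminance); pupil: (n,) measured pupil diameter per frame.
    Fits pupil ~ a + b * intensity by least squares and returns the
    residual, interpreted as the per-frame emotional-arousal score.
    """
    A = np.stack([np.ones_like(intensity), intensity], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, pupil, rcond=None)
    return pupil - A @ coeffs
```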


ACM Transactions on Multimedia Computing, Communications, and Applications | 2017

Creating Segments and Effects on Comics by Clustering Gaze Data

Ishwarya Thirunarayanan; Khimya Khetarpal; Sanjeev J. Koppal; Olivier Le Meur; John M. Shea; Eakta Jain

Traditional comics are increasingly being augmented with digital effects, such as recoloring, stereoscopy, and animation. An open question in this endeavor is identifying where in a comic panel the effects should be placed. We propose a fast, semi-automatic technique to identify effects-worthy segments in a comic panel by utilizing gaze locations as a proxy for the importance of a region. We take advantage of the fact that comic artists influence viewer gaze towards narratively important regions. By capturing gaze locations from multiple viewers, we can identify important regions and direct a computer vision segmentation algorithm to extract these segments. The challenge is that these gaze data are noisy and difficult to process. Our key contribution is to leverage a theoretical breakthrough in the computer networks community towards robust and meaningful clustering of gaze locations into semantic regions, without needing the user to specify the number of clusters. We present a method based on the concept of relative eigen quality that takes a scanned comic image and a set of gaze points and produces an image segmentation. We demonstrate a variety of effects such as defocus, recoloring, stereoscopy, and animations. We also investigate the use of artificially generated gaze locations from saliency models in place of actual gaze locations.
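
The relative-eigen-quality clustering is not reproduced here. As a stand-in that shares the key property of not fixing the number of clusters in advance, mean shift (scikit-learn) groups gaze points into candidate regions; the paper's actual method differs:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_gaze(gaze_xy):
    """Cluster gaze points without a preset cluster count.

    Stand-in for the paper's relative-eigen-quality method: mean shift
    also discovers the number of clusters from the data, which is the
    property needed here. gaze_xy: (n, 2) gaze points on the panel.
    Returns an integer cluster label per gaze point.
    """
    bw = estimate_bandwidth(gaze_xy, quantile=0.2)
    return MeanShift(bandwidth=bw).fit_predict(gaze_xy)
```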


Visual Information Communication and Interaction | 2013

ERELT: a faster alternative to the list-based interfaces for tree exploration and searching in mobile devices

Abhishek P. Chhetri; Kang Zhang; Eakta Jain

This paper presents ERELT (Enhanced Radial Edgeless Tree), a tree visualization approach for modern mobile devices. ERELT is designed to offer a clear visualization of any tree structure with intuitive interaction. We are interested in both the observation and navigation of such structures. Such visualization can assist users in interacting with a hierarchical structure such as a media collection or file system. ERELT displays a subset of the tree at a time; the size of the displayed subset depends on the maximum number of tree elements that can fit on the screen while maintaining clarity. Users can quickly navigate to the hidden parts of the tree through touch-based gestures. We conducted a user study to evaluate this visualization for a music collection. Test results show that this approach reduces the time and effort of navigating tree structures for exploration and search tasks.
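
The layout algorithm is not specified beyond the description above. A plausible sketch of the "subset that fits on screen" step, assuming a simple node budget (the function and data structure below are hypothetical):

```python
from collections import deque

def visible_subtree(focus, children, max_nodes=30):
    """Pick the nodes to draw around a focus node (hedged sketch).

    children: dict mapping each node to a list of its child nodes.
    Expands breadth-first from the focus until the screen budget
    (max_nodes) is exhausted; everything else stays hidden and is
    reached by navigation gestures.
    """
    shown = []
    queue = deque([focus])
    while queue and len(shown) < max_nodes:
        node = queue.popleft()
        shown.append(node)
        queue.extend(children.get(node, []))
    return shown
```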

Collaboration


Dive into Eakta Jain's collaborations.

Top Co-Authors

Yaser Sheikh (Carnegie Mellon University)
Alex Shaw (University of Florida)
Amanda Castonguay (University of Southern Maine)