Publication


Featured research published by Zoya Bylinskii.


Vision Research | 2015

Intrinsic and extrinsic effects on image memorability

Zoya Bylinskii; Phillip Isola; Constance Bainbridge; Antonio Torralba; Aude Oliva

Previous studies have identified that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten. Here we investigate the interplay between intrinsic and extrinsic factors that affect image memorability. First, we find that intrinsic differences in memorability exist at a finer-grained scale than previously documented. Second, we test two extrinsic factors: image context and observer behavior. Building on prior findings that images that are distinct with respect to their context are better remembered, we propose an information-theoretic model of image distinctiveness. Our model can automatically predict how changes in context change the memorability of natural images. In addition to context, we study a second extrinsic factor: where an observer looks while memorizing an image. It turns out that eye movements provide additional information that can predict whether or not an image will be remembered, on a trial-by-trial basis. Together, by considering both intrinsic and extrinsic effects on memorability, we arrive at a more complete and fine-grained model of image memorability than previously available.
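As a rough illustration of the information-theoretic view of distinctiveness described above, the sketch below scores each image by the negative log-likelihood of its feature vector under a density fitted to the other images in its context. The feature representation, the kernel density estimator, and the bandwidth are placeholder assumptions, not the paper's exact model.

```python
# Hedged sketch: distinctiveness as negative log-likelihood of an image's
# features under a density estimated from the other images in its context.
# The feature vectors and the KDE bandwidth are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity

def distinctiveness_scores(features, bandwidth=1.0):
    """features: (n_images, d) array of image descriptors for one context."""
    scores = []
    for i in range(len(features)):
        context = np.delete(features, i, axis=0)         # all other images
        kde = KernelDensity(bandwidth=bandwidth).fit(context)
        log_p = kde.score_samples(features[i:i + 1])[0]  # log-likelihood of image i
        scores.append(-log_p)                            # rarer => more distinct
    return np.array(scores)

# Example: 100 images described by 64-d feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))
print(distinctiveness_scores(feats)[:5])
```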


European Conference on Computer Vision | 2016

Where Should Saliency Models Look Next?

Zoya Bylinskii; Adria Recasens; Ali Borji; Aude Oliva; Antonio Torralba

Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a fine-grained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.
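One way the "remaining errors" of a model can be operationalized is as the ground-truth fixation mass a model fails to predict inside the annotated high-density regions. The sketch below assumes boolean region masks and maps normalized to sum to one; it is an illustrative operationalization, not the paper's exact error measure.

```python
# Hedged sketch: quantifying a saliency model's misses inside annotated
# high-density fixation regions. Both maps are normalized to sum to 1 and
# region_masks is assumed to map region names to boolean arrays.
import numpy as np

def region_underprediction(fixation_map, saliency_map, region_masks):
    fix = fixation_map / fixation_map.sum()
    sal = saliency_map / saliency_map.sum()
    errors = {}
    for name, mask in region_masks.items():
        missed = fix[mask].sum() - sal[mask].sum()   # > 0 means under-predicted
        errors[name] = max(float(missed), 0.0)
    return errors
```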


Vision Research | 2015

Towards the quantitative evaluation of visual attention models

Zoya Bylinskii; E.M. DeGennaro; R. Rajalingham; H. Ruda; J. Zhang; John K. Tsotsos

Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.


User Interface Software and Technology | 2017

Learning Visual Importance for Graphic Designs and Data Visualizations

Zoya Bylinskii; Nam Wook Kim; Peter O'Donovan; Sami Alsheikh; Spandan Madan; Hanspeter Pfister; Bryan C. Russell; Aaron Hertzmann

Knowing where people look and click on visual designs can provide clues about how the designs are perceived, and where the most important or relevant content lies. The most important content of a visual design can be used for effective summarization or to facilitate retrieval from a database. We present automated models that predict the relative importance of different elements in data visualizations and graphic designs. Our models are neural networks trained on human clicks and importance annotations on hundreds of designs. We collected a new dataset of crowdsourced importance, and analyzed the predictions of our models with respect to ground truth importance and human eye movements. We demonstrate how such predictions of importance can be used for automatic design retargeting and thumbnailing. User studies with hundreds of MTurk participants validate that, with limited post-processing, our importance-driven applications are on par with, or outperform, current state-of-the-art methods, including natural image saliency. We also provide a demonstration of how our importance predictions can be built into interactive design tools to offer immediate feedback during the design process.
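A minimal stand-in for the kind of model described above is a fully-convolutional encoder-decoder that maps a design image to a per-pixel importance map. The architecture, the sigmoid output, and the pixelwise BCE loss below are simplifying assumptions for illustration, not the published network.

```python
# Hedged sketch: a minimal encoder-decoder predicting a per-pixel importance
# map for a design image, trained against annotated importance maps.
import torch
import torch.nn as nn

class ImportanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))  # (N, 1, H, W) in [0, 1]

model = ImportanceNet()
criterion = nn.BCELoss()                 # pixelwise loss vs. annotated importance
images = torch.rand(4, 3, 256, 256)      # batch of design images
targets = torch.rand(4, 1, 256, 256)     # ground-truth importance maps in [0, 1]
loss = criterion(model(images), targets)
loss.backward()
```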


Workshop on Eye Tracking and Visualization | 2015

Eye Fixation Metrics for Large Scale Evaluation and Comparison of Information Visualizations

Zoya Bylinskii; Michelle A. Borkin; Nam Wook Kim; Hanspeter Pfister; Aude Oliva

An observer’s eye movements are often informative about how the observer interacts with and processes a visual stimulus. Here, we are specifically interested in what eye movements reveal about how the content of information visualizations is processed. Conversely, by pooling over many observers’ worth of eye movements, what can we learn about the general effectiveness of different visualizations and the underlying design principles employed? The contribution of this manuscript is to consider these questions at a large data scale, with thousands of eye fixations on hundreds of diverse information visualizations. We survey existing methods and metrics for collective eye movement analysis, and consider what each can tell us about the overall effectiveness of different information visualizations and designs at this large data scale.
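Pooling fixations across observers is typically done by accumulating fixation points into a count map and smoothing with a Gaussian. The blur width below (meant to stand in for roughly one degree of visual angle in pixels) is an illustrative assumption.

```python
# Hedged sketch: pooling many observers' fixations on one visualization into
# a single smoothed heatmap. The Gaussian sigma is an illustrative assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fixations, height, width, sigma=30):
    """fixations: iterable of (x, y) pixel coordinates pooled over observers."""
    counts = np.zeros((height, width))
    for x, y in fixations:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            counts[int(y), int(x)] += 1
    heatmap = gaussian_filter(counts, sigma=sigma)
    return heatmap / heatmap.max() if heatmap.max() > 0 else heatmap
```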


Human Factors in Computing Systems | 2015

A Crowdsourced Alternative to Eye-tracking for Visualization Understanding

Nam Wook Kim; Zoya Bylinskii; Michelle A. Borkin; Aude Oliva; Krzysztof Z. Gajos; Hanspeter Pfister

In this study we investigate the utility of using mouse clicks as an alternative for eye fixations in the context of understanding data visualizations. We developed a crowdsourced study online in which participants were presented with a series of images containing graphs and diagrams and asked to describe them. Each image was blurred so that the participant needed to click to reveal bubbles - small, circular areas of the image at normal resolution. This is similar to having a confined area of focus like the human eye fovea. We compared the bubble click data with the fixation data from a complementary eye-tracking experiment by calculating the similarity between the resulting heatmaps. A high similarity score suggests that our methodology may be a viable crowdsourced alternative to eye-tracking experiments, especially when little to no eye-tracking data is available. This methodology can also be used to complement eye-tracking studies with an additional behavioral measurement, since it is specifically designed to measure which information people consciously choose to examine for understanding visualizations.
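The heatmap comparison mentioned above can be as simple as a Pearson correlation between the two maps once they are on the same grid. The exact similarity score used in the study is not spelled out here, so the sketch below should be read as one common choice rather than the paper's method.

```python
# Hedged sketch: comparing a bubble-click heatmap with an eye-fixation
# heatmap via Pearson correlation. Assumes both maps share the same shape.
import numpy as np

def heatmap_similarity(map_a, map_b):
    """Pearson correlation between two heatmaps of identical shape."""
    a = (map_a - map_a.mean()) / (map_a.std() + 1e-8)
    b = (map_b - map_b.mean()) / (map_b.std() + 1e-8)
    return float((a * b).mean())
```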


ACM Transactions on Computer-Human Interaction | 2017

BubbleView: An Interface for Crowdsourcing Image Importance Maps and Tracking Visual Attention

Nam Wook Kim; Zoya Bylinskii; Michelle A. Borkin; Krzysztof Z. Gajos; Aude Oliva; Hanspeter Pfister

In this article, we present BubbleView, an alternative methodology for eye tracking using discrete mouse clicks to measure which information people consciously choose to examine. BubbleView is a mouse-contingent, moving-window interface in which participants are presented with a series of blurred images and click to reveal “bubbles” -- small, circular areas of the image at original resolution, similar to having a confined area of focus like the eye fovea. Across 10 experiments with 28 different parameter combinations, we evaluated BubbleView on a variety of image types: information visualizations, natural images, static webpages, and graphic designs, and compared the clicks to eye fixations collected with eye-trackers in controlled lab settings. We found that BubbleView clicks can both (i) successfully approximate eye fixations on different images, and (ii) be used to rank image and design elements by importance. BubbleView is designed to collect clicks on static images, and works best for defined tasks such as describing the content of an information visualization or measuring image importance. BubbleView data is cleaner and more consistent than related methodologies that use continuous mouse movements. Our analyses validate the use of mouse-contingent, moving-window methodologies as approximating eye fixations for different image and task types.
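Ranking image and design elements by importance, as described above, can be approximated by aggregating a click-based heatmap inside each element's bounding box. The bounding-box input format and the mean-value scoring below are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch: ranking design elements by importance from a click heatmap.
# Elements are assumed to come with (x, y, w, h) bounding boxes; scoring by
# the mean heatmap value inside each box is an illustrative choice.
import numpy as np

def rank_elements(click_map, elements):
    """elements: dict mapping element name -> (x, y, w, h) bounding box."""
    scores = {}
    for name, (x, y, w, h) in elements.items():
        scores[name] = float(click_map[y:y + h, x:x + w].mean())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

heat = np.random.rand(300, 400)
print(rank_elements(heat, {"title": (10, 10, 200, 40), "legend": (250, 200, 100, 80)}))
```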


Journal of Vision | 2015

Quantifying Context Effects on Image Memorability

Zoya Bylinskii; Phillip Isola; Antonio Torralba; Aude Oliva

Why do some images stick in our minds while others fade away? Recent work suggests that this is partially due to intrinsic differences in image content (Isola 2011, Bainbridge 2013, Borkin 2013). However, the context in which an image appears can also affect its memorability. Previous studies have found that images that are distinct interfere less with other images in memory and are thus better remembered (Standing 1973, Hunt 2006, Konkle 2010). However, these effects have not previously been rigorously quantified on large-scale sets of complex, natural stimuli. Our contribution is to quantify image distinctiveness and predict memory performance using information-theoretic measures on a large collection of scene images. We measured the memory performance of both online (Amazon Mechanical Turk) and in-lab participants on an image recognition game (using the protocol of Isola 2011). We systematically varied the image context for over 1,754 images (from 21 indoor and outdoor scene categories), by either presenting images together with other images from the same scene category or with images from different scene categories. We use state-of-the-art computer vision features to quantify the distinctiveness of images relative to other images in the same experimental context and to correlate image distinctiveness with memorability. We show that by changing an image's context, we can change its distinctiveness and predict effects on memorability. Images that are distinct with respect to one context may no longer be distinct with respect to another. We find that images that are not clear exemplars of their image category experience the largest drop in memory performance when combined with images of other categories. Moreover, image contexts that are more diverse lead to better memory performance overall. Our quantitative approach can be used for informing applications on how to make visual material more memorable. Meeting abstract presented at VSS 2015.
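In repeat-detection memory games of this kind, per-image memory performance is commonly summarized as a hit rate on repeated presentations, optionally alongside a false-alarm rate on first presentations. The response-log format in the sketch below is an assumption for illustration.

```python
# Hedged sketch: per-image memory scores from a repeat-detection game,
# summarized as (hit rate, false-alarm rate). The (image_id, is_repeat,
# pressed) log format is an illustrative assumption.
from collections import defaultdict

def memorability_scores(responses):
    """responses: iterable of (image_id, is_repeat, pressed) tuples."""
    hits = defaultdict(lambda: [0, 0])   # image_id -> [hits, repeat presentations]
    fas = defaultdict(lambda: [0, 0])    # image_id -> [false alarms, first views]
    for image_id, is_repeat, pressed in responses:
        if is_repeat:
            hits[image_id][0] += int(pressed)
            hits[image_id][1] += 1
        else:
            fas[image_id][0] += int(pressed)
            fas[image_id][1] += 1
    return {img: (h / n, fas[img][0] / max(fas[img][1], 1))
            for img, (h, n) in hits.items() if n > 0}
```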


IEEE Transactions on Visualization and Computer Graphics | 2013

What Makes a Visualization Memorable?

Michelle A. Borkin; Azalea A. Vo; Zoya Bylinskii; Phillip Isola; Shashank Sunkavalli; Aude Oliva; Hanspeter Pfister


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

What do different evaluation metrics tell us about saliency models?

Zoya Bylinskii; Tilke Judd; Aude Oliva; Antonio Torralba
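No abstract is listed for this entry. Two metrics widely used in this line of saliency evaluation are Normalized Scanpath Saliency (NSS) and Pearson's correlation coefficient (CC); the sketch below shows their standard definitions only to illustrate the kind of metrics being compared, and is not taken from the paper itself.

```python
# Hedged sketch of two common saliency evaluation metrics: NSS (mean of the
# z-scored saliency map at fixated pixels) and CC (Pearson correlation
# between a saliency map and a fixation density map).
import numpy as np

def nss(saliency_map, fixation_points):
    """fixation_points: boolean array marking fixated pixels."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return float(z[fixation_points].mean())

def cc(saliency_map, fixation_density):
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    f = (fixation_density - fixation_density.mean()) / (fixation_density.std() + 1e-8)
    return float((s * f).mean())
```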

Collaboration


Dive into Zoya Bylinskii's collaboration.

Top Co-Authors

Aude Oliva
Massachusetts Institute of Technology

Antonio Torralba
Massachusetts Institute of Technology

Adria Recasens
Massachusetts Institute of Technology

Ali Borji
University of Central Florida

Phillip Isola
Massachusetts Institute of Technology