Publication


Featured research published by Tina Walber.


Human Factors in Computing Systems | 2014

Smart photo selection: interpret gaze as personal interest

Tina Walber; Ansgar Scherp; Steffen Staab

Manually selecting subsets of photos from large collections in order to present them to friends or colleagues, or to print them as photo books, can be a tedious task. Today, fully automatic approaches are at hand for supporting users. They make use of pixel information extracted from the images, analyze contextual information such as capture time and focal aperture, or use both to determine a proper subset of photos. However, these approaches miss the most important factor in the photo selection process: the user. The goal of our approach is to consider individual interests. By recording and analyzing gaze information from users viewing photo collections, we obtain information on the users' interests and use this information in the creation of personal photo selections. In a controlled experiment with 33 participants, we show that the selections can be significantly improved over a baseline approach by up to 22% when taking individual viewing behavior into account. We also obtained significantly better results for photos taken at an event the participants were involved in, compared with photos from another event.
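The core idea of the paper, ranking photos by individual viewing behavior, can be sketched as follows. This is a minimal illustration with a hypothetical data model (`interest_score`, `select_photos`, and the fixation tuples are assumptions for this sketch), not the authors' implementation:

```python
# Sketch: rank photos by a simple gaze-interest score.
# Hypothetical data model; the paper's actual selection approach
# is more elaborate.

def interest_score(fixations):
    """Sum the fixation durations (ms) a user spent on one photo."""
    return sum(duration for _, _, duration in fixations)

def select_photos(gaze_by_photo, k):
    """Return the k photo ids with the highest gaze-interest score."""
    ranked = sorted(gaze_by_photo,
                    key=lambda pid: interest_score(gaze_by_photo[pid]),
                    reverse=True)
    return ranked[:k]

# Example: fixations are (x, y, duration_ms) tuples per photo.
gaze = {
    "beach.jpg":  [(120, 80, 400), (300, 200, 900)],
    "office.jpg": [(50, 50, 150)],
    "party.jpg":  [(10, 10, 300), (200, 100, 500), (220, 110, 700)],
}
print(select_photos(gaze, 2))  # → ['party.jpg', 'beach.jpg']
```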


Conference on Multimedia Modeling | 2013

Can You See It? Two Novel Eye-Tracking-Based Measures for Assigning Tags to Image Regions

Tina Walber; Ansgar Scherp; Steffen Staab

Eye tracking information can be used to assign given tags to image regions in order to describe the depicted scene in more detail. We introduce and compare two novel eye-tracking-based measures for conducting such assignments: the segmentation measure uses automatically computed image segments and selects the one segment the user fixates on for the longest time. The heat map measure is based on traditional gaze heat maps and sums up the users' fixation durations per pixel. Both measures are applied to gaze data obtained for a set of social media images with manually labeled objects as ground truth. We have determined a maximum average precision of 65% at which the segmentation measure points to the correct region in the image. The best coverage of the segments is obtained for the segmentation measure with an F-measure of 35%. Overall, both newly introduced gaze-based measures deliver better results than baseline measures that select a segment based on the golden ratio of photography or the center position in the image. The eye-tracking-based segmentation measure significantly outperforms the baselines for precision and F-measure.
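The two measures described in the abstract can be sketched roughly as below. All names and the fixation/segment representation are assumptions for illustration; the authors' actual implementation is not published here:

```python
# Sketch of the two gaze-based measures (simplified; hypothetical
# data model, not the authors' code).

from collections import defaultdict

def segmentation_measure(fixations, segment_of):
    """Pick the image segment fixated on for the longest total time.
    `segment_of(x, y)` maps a pixel to a segment id."""
    totals = defaultdict(int)
    for x, y, duration in fixations:
        totals[segment_of(x, y)] += duration
    return max(totals, key=totals.get)

def heat_map_measure(fixations, width, height):
    """Sum fixation durations per pixel into a gaze heat map."""
    heat = [[0] * width for _ in range(height)]
    for x, y, duration in fixations:
        heat[y][x] += duration
    return heat

# Example: a 4x4 image split into left (segment 0) / right (segment 1).
seg = lambda x, y: 0 if x < 2 else 1
fix = [(0, 1, 200), (3, 2, 500), (3, 3, 400)]
print(segmentation_measure(fix, seg))      # → 1 (right half wins)
print(heat_map_measure(fix, 4, 4)[2][3])   # → 500
```

In practice the heat map would be smoothed (e.g. with a Gaussian kernel around each fixation) before selecting a region; the sketch keeps only the per-pixel summation the abstract mentions.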


Multimedia Tools and Applications | 2014

Benefiting from users' gaze: selection of image regions from eye tracking information for provided tags

Tina Walber; Ansgar Scherp; Steffen Staab

Providing image annotations is a tedious task. This becomes even more cumbersome when objects shall be annotated in the images. Such region-based annotations can be used in various ways, such as similarity search, or as a training set in automatic object detection. We investigate the principal idea of finding objects in images by looking at gaze paths from users viewing images with an interest in a specific object. We have analyzed 799 gaze paths from 30 subjects viewing image-tag pairs with the task of deciding whether a tag could be found in the image or not. We have compared 13 different fixation measures for analyzing the gaze paths. The best-performing fixation measure is able to correctly assign a tag to a region for 63% of the image-tag pairs and significantly outperforms three baselines. We look into details of the image region characteristics, such as position and size, for incorrect and correct assignments. The influence of aggregating multiple gaze paths from several subjects on the precision of identifying the correct regions is also investigated. In addition, we look into the possibilities of discriminating different regions in the same image. Here, we are able to correctly identify two regions in the same image from different primings with an accuracy of 38%.


Conference on Multimedia Modeling | 2014

Exploitation of Gaze Data for Photo Region Labeling in an Immersive Environment

Tina Walber; Ansgar Scherp; Steffen Staab

Metadata describing the content of photos are of high importance for applications like image search or as part of training sets for object detection algorithms. In this work, we apply tags to image regions for a more detailed description of the photo semantics. This region labeling is performed without additional effort from the user, just by analyzing eye tracking data recorded while users are playing a gaze-controlled game. In the game EyeGrab, users classify and rate photos falling down the screen. The photos are classified according to a given category under time pressure. The game has been evaluated in a study with 54 subjects. The results show that it is possible to assign the given categories to image regions with a precision of up to 61%. This shows that region labeling in an immersive environment like EyeGrab performs almost as well as in a previous, much more controlled classification experiment.


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

Enhanced representation of web pages for usability analysis with eye tracking

Raphael Menges; Hanadi Tamimi; Chandan Kumar; Tina Walber; Christoph Schaefer; Steffen Staab

Eye tracking as a tool to quantify user attention plays a major role in research and application design. For Web page usability, it has become a prominent measure to assess which sections of a Web page are read, glanced at, or skipped. Such assessments primarily depend on the mapping of gaze data to a Web page representation. However, current representation methods, a virtual screenshot of the Web page or a video recording of the complete interaction session, suffer from either accuracy or scalability issues. We present a method that identifies fixed elements on Web pages and combines user viewport screenshots in relation to fixed elements for an enhanced representation of the page. We conducted an experiment with 10 participants, and the results indicate that analysis with our method is more efficient than a video recording, which is an essential criterion for large-scale Web studies.
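The central difficulty the abstract addresses, mapping viewport gaze data onto a stitched page representation in the presence of fixed elements, can be sketched as a coordinate transform. This is a hypothetical simplification (the function name, the rectangle model of fixed elements, and vertical-only scrolling are assumptions), not the paper's method:

```python
# Sketch: map viewport gaze coordinates onto a stitched page
# representation, treating fixed elements (e.g. a sticky header)
# separately from scrolling content. Hypothetical model.

def to_page_coords(gaze_x, gaze_y, scroll_y, fixed_regions):
    """Return (x, y) in page space. Gaze on a fixed element keeps
    its viewport position; gaze on content is shifted by scroll."""
    for left, top, right, bottom in fixed_regions:
        if left <= gaze_x <= right and top <= gaze_y <= bottom:
            return gaze_x, gaze_y  # fixed element: viewport == page
    return gaze_x, gaze_y + scroll_y  # scrolling content

# Example: a fixed header occupying the top 60 px of the viewport.
header = [(0, 0, 1280, 60)]
print(to_page_coords(640, 30, 900, header))   # → (640, 30)
print(to_page_coords(640, 400, 900, header))  # → (640, 1300)
```

Without the fixed-element distinction, gaze on a sticky header would be smeared across the whole page height as the user scrolls, which is the accuracy problem a plain virtual screenshot suffers from.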


Intelligent User Interfaces | 2014

Tagging-by-search: automatic image region labeling using gaze information obtained from image search

Tina Walber; Chantal Neuhaus; Ansgar Scherp

Labeled image regions provide very valuable information that can be used in different settings, such as image search. The manual creation of region labels is a tedious task. Fully automatic approaches lack a sufficient understanding of the image content due to the huge variety of depicted objects. Our approach benefits from the expected spread of eye tracking hardware and uses gaze information obtained from users performing image search tasks to automatically label image regions. This allows us to exploit human capabilities in the visual perception of image content while users perform daily routine tasks. In an experiment with 23 participants, we show that it is possible to assign search terms to photo regions by means of gaze analysis with an average precision of 0.56 and an average F-measure of 0.38 over 361 photos. The participants performed different search tasks while their gaze was recorded. The results of the experiment show that the gaze-based approach performs significantly better than a baseline approach based on saliency maps.


Conference on Multimedia Modeling | 2012

Identifying objects in images from analyzing the users' gaze movements for provided tags

Tina Walber; Ansgar Scherp; Steffen Staab


ACM Multimedia | 2013

Creation of individual photo selections: read preferences from the users' eyes

Tina Walber; Chantal Neuhaus; Steffen Staab; Ansgar Scherp; Ramesh Jain


EuroHCIR | 2012

EyeGrab: A Gaze-based Game with a Purpose to Enrich Image Context Information

Tina Walber; Chantal Neuhaus; Ansgar Scherp


Archive | 2011

Towards Improving the Understanding of Image Semantics by Gaze-based Tag-to-Region Assignments

Tina Walber; Ansgar Scherp; Steffen Staab

Collaboration


Dive into Tina Walber's collaborations.

Top Co-Authors

Steffen Staab (University of Koblenz and Landau)
Chantal Neuhaus (University of Koblenz and Landau)
Chandan Kumar (University of Koblenz and Landau)
Hanadi Tamimi (University of Koblenz and Landau)
Raphael Menges (University of Koblenz and Landau)
Ramesh Jain (University of California)