Andreas Dengel
Kaiserslautern University of Technology
Publications
Featured research published by Andreas Dengel.
Human Factors in Computing Systems | 2008
Georg Buscher; Andreas Dengel; Ludger van Elst
Reading detection is an important step in the process of automatic relevance feedback generation based on eye movements for information retrieval tasks. We describe a reading detection algorithm and present a preliminary study to find expressive eye movement measures.
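The abstract does not spell out the detection algorithm itself; as a rough illustration of the general idea only, the sketch below flags a fixation sequence as reading when it contains a sustained run of short rightward saccades along one text line. The `Fixation` structure and all thresholds are assumptions for illustration, not the authors' parameters.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float       # horizontal gaze position in pixels
    y: float       # vertical gaze position in pixels
    duration: int  # fixation duration in milliseconds

def looks_like_reading(fixations, min_run=4,
                       max_forward=150, max_line_drift=30):
    """Heuristic reading detector: reading tends to produce runs of
    short rightward saccades along roughly the same text line."""
    run = 0
    for prev, cur in zip(fixations, fixations[1:]):
        dx, dy = cur.x - prev.x, abs(cur.y - prev.y)
        if 0 < dx <= max_forward and dy <= max_line_drift:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 0  # regression, line break, or scanning jump
    return False

# Example: a plausible left-to-right scan of one text line
line = [Fixation(100 + 80 * i, 300, 220) for i in range(6)]
print(looks_like_reading(line))  # True
```

In practice such a detector would run over a sliding window of the live eye tracking stream and hand detected reading segments to the relevance feedback component.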
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2008
Georg Buscher; Andreas Dengel; Ludger van Elst
We examine the effect of incorporating gaze-based attention feedback from the user on personalizing the search process. Using eye tracking data, we keep track of which document parts the user has read in some way. We use this information on the subdocument level as implicit feedback for query expansion and reranking. We evaluated three different variants incorporating gaze data on the subdocument level and compared them against a baseline based on context on the document level. Our results show that considering reading behavior as feedback yields substantial improvements in search result accuracy of ca. 32% in the general case. However, the extent of the improvement varies depending on the internal structure of the viewed documents and the type of the current information need.
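The paper's exact feedback formulas are not reproduced above, so the following sketch only illustrates the general mechanism under assumed helper names: terms from passages the gaze data marks as read are added to the query, and candidate documents are reranked by overlap with the expanded term set.

```python
from collections import Counter

def expand_query(query_terms, read_passages, top_k=5):
    """Add the most frequent terms from gazed-at (read) passages
    to the original query -- a simple form of implicit feedback."""
    counts = Counter(
        term.lower()
        for passage in read_passages
        for term in passage.split()
        if term.lower() not in query_terms
    )
    expansion = [t for t, _ in counts.most_common(top_k)]
    return list(query_terms) + expansion

def rerank(documents, expanded_query):
    """Rerank candidate documents by term overlap with the expanded query."""
    q = set(expanded_query)
    def score(doc):
        return len(q & {w.lower() for w in doc.split()})
    return sorted(documents, key=score, reverse=True)

read = ["eye tracking provides implicit relevance feedback",
        "gaze data reveals which passage was actually read"]
query = expand_query(["relevance", "feedback"], read)
print(rerank(["a survey of eye tracking and gaze data",
              "unrelated document about cooking"], query))
```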
Augmented Human International Conference | 2014
Shoya Ishimaru; Kai Kunze; Koichi Kise; Jens Weppner; Andreas Dengel; Paul Lukowicz; Andreas Bulling
We demonstrate how information about eye blink frequency and head motion patterns derived from Google Glass sensors can be used to distinguish different types of high-level activities. While it is well known that eye blink frequency is correlated with user activity, our aim is to show that (1) eye blink frequency data from an unobtrusive, commercial platform that is not a dedicated eye tracker is good enough to be useful and (2) adding head motion pattern information significantly improves the recognition rates. The method is evaluated on a data set from an experiment containing five activity classes (reading, talking, watching TV, mathematical problem solving, and sawing) performed by eight participants, showing 67% recognition accuracy for eye blinking alone and 82% when extended with head motion patterns.
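The classification pipeline is only summarized above; the sketch below shows one plausible shape of it, computing blink rate plus head motion statistics per time window and training an off-the-shelf classifier on them. The features, toy data, and choice of random forest are assumptions, not the paper's setup (NumPy and scikit-learn are assumed to be installed).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(blink_times, gyro, window_s=60.0):
    """Per-window features: blink rate plus mean and variance of
    head angular velocity magnitude (e.g. from a head-worn gyroscope)."""
    blink_rate = len(blink_times) / window_s          # blinks per second
    magnitude = np.linalg.norm(gyro, axis=1)          # |angular velocity|
    return [blink_rate, magnitude.mean(), magnitude.var()]

# Toy training data for two invented activity classes
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(20):   # "reading": few blinks, still head
    X.append(window_features(rng.uniform(0, 60, 5), rng.normal(0, 0.1, (600, 3))))
    y.append("reading")
for _ in range(20):   # "talking": more blinks, more head motion
    X.append(window_features(rng.uniform(0, 60, 20), rng.normal(0, 0.8, (600, 3))))
    y.append("talking")

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
test = window_features(rng.uniform(0, 60, 18), rng.normal(0, 0.7, (600, 3)))
print(clf.predict([test])[0])
```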
Archive | 2004
Simone Marinai; Andreas Dengel
Implications of the technical demands made within digital libraries (DLs) for document image analysis systems are discussed. The state of the art is summarized, including a digest of themes that emerged during the recent International Workshop on Document Image Analysis for Libraries. We attempt to specify, in considerable detail, the essential features of document analysis systems that can assist in: (a) the creation of DLs; (b) automatic indexing and retrieval of doc-images within DLs; (c) the presentation of doc-images to DL users; (d) navigation within and among doc-images in DLs; and (e) effective use of personal and
Lecture Notes in Computer Science | 2008
Andreas Dengel; Karsten Berns; Thomas M. Breuel; Frank Bomarius; Thomas Roth-Berghofer
The research in this thesis aims to enable robots to imitate humans. Learning by imitation is a fundamental part of human behaviour, since it allows humans to acquire motor skills simply by demonstration; seen from a robotic viewpoint, you can easily “program” your fellow humans by showing them what to do. Would it not be great if the same mechanism could be used to program robots?

A robot is programmed by specifying the torque of its motors. The torque can be regarded as the force or strength that is the result of muscles contracting or relaxing. Typical approaches to determining motor torques that will lead to a desired behaviour include setting them manually, i.e. on a trial-and-error basis, or specifying them by mathematical equations. Neither of these is intuitive to most humans, so most robot behaviours are programmed by engineers. However, if an engineer were to design a preprogrammed housekeeping robot, it would be very hard to program all the possible behaviours the robot could be expected to perform, even in such a limited domain. It is much more cost-efficient to make the robot learn what to do. This would allow the robot to adapt to its human owner, and not the other way around. Since humans easily learn new behaviours by imitating others, it would be ideal if humans could use the same technique to transfer motor knowledge to robots. I believe research in this area could be of great help in bridging the human-robot interaction gap that currently exists, so that we could have truly intelligent robots that assist people in everyday life.

To understand imitation learning, knowledge of psychology and neuroscience is required. The research in this thesis has taken an interdisciplinary approach, studying the desired mechanism on both a behavioural and a neuroscientific level. I have focused on imitation in a musical setting. The system can both see and hear, and seeks to imitate the perceived behaviour. The application has been to create an intelligent virtual drummer that imitates both the physical playing style (i.e. the movement of the arms) and the musical playing style (i.e. the groove) of the teacher. The virtual drummer will then both look and sound like a human drummer.

The research in this thesis presents a multi-modal architecture for imitation of human movements. I have been working on simulated robots due to limits of time and money; however, the principles of my research have been developed in a platform-independent way, so they should be applicable to real robots as well.
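The thesis is summarized above only in narrative form; purely as an illustration of the torque point (not of the thesis architecture), the sketch below turns a demonstrated joint-angle trajectory into motor torques with a simple PD tracking controller, so that the "program" is the recorded demonstration rather than hand-written equations. The single-joint model and gains are assumptions.

```python
import numpy as np

def torques_from_demo(demo_angles, dt=0.01, kp=40.0, kd=2.0, inertia=0.05):
    """Follow a demonstrated joint trajectory with a PD controller:
    the torque at each step pulls the simulated joint toward the
    recorded angle instead of being programmed by hand."""
    angle, velocity = demo_angles[0], 0.0
    torques, followed = [], []
    for target in demo_angles:
        tau = kp * (target - angle) - kd * velocity   # PD control law
        acceleration = tau / inertia                  # single rigid joint
        velocity += acceleration * dt                 # explicit Euler step
        angle += velocity * dt
        torques.append(tau)
        followed.append(angle)
    return np.array(torques), np.array(followed)

# Demonstration: a smooth reach from 0 to 1 rad, e.g. one drum stroke
t = np.linspace(0.0, 1.0, 100)
demo = (1 - np.cos(np.pi * t)) / 2
tau, replay = torques_from_demo(demo)
print(f"max torque {tau.max():.2f} N·m, final error {abs(replay[-1] - demo[-1]):.3f} rad")
```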
Human Factors in Computing Systems | 2010
Ralf Biedert; Georg Buscher; Sven Schwarz; Jörn Hees; Andreas Dengel
We discuss the idea of text that responds to reading and argue that the combination of eye tracking, text, and real-time interaction offers various possibilities to enhance the reading experience. We present a number of prototypes and applications that use the user's gaze to assist with comprehension difficulties, and we show their benefit in a preliminary evaluation.
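The prototypes themselves are not described in detail above; one simple way to make text respond to reading is to watch for long accumulated dwell or repeated regressions on a text region and trigger assistance for it. The event format, thresholds, and trigger rule below are illustrative assumptions, not the authors' design.

```python
def assistance_triggers(gaze_events, dwell_ms=1500, max_regressions=2):
    """Flag text regions where the reader seems to struggle:
    long accumulated dwell time or repeated re-reading (regressions)."""
    dwell, regressions, last_region = {}, {}, None
    for region_id, duration in gaze_events:          # (region id, fixation ms)
        dwell[region_id] = dwell.get(region_id, 0) + duration
        if last_region is not None and region_id != last_region:
            # returning to an already-visited region counts as a regression
            if dwell[region_id] > duration:
                regressions[region_id] = regressions.get(region_id, 0) + 1
        last_region = region_id
    return [r for r in dwell
            if dwell[r] >= dwell_ms or regressions.get(r, 0) >= max_regressions]

# Simulated gaze: the reader keeps coming back to sentence 2
events = [(1, 200), (2, 300), (3, 250), (2, 400), (3, 200), (2, 500), (2, 700)]
print(assistance_triggers(events))   # [2] -> e.g. show a definition for sentence 2
```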
International Conference on Document Analysis and Recognition | 2001
Thomas Kieninger; Andreas Dengel
This paper summarizes the core idea of the T-Recs table recognition system, an integrated system covering block segmentation, table location, and a model-free structural analysis of tables. T-Recs works on the output of commercial OCR systems that provide the word bounding box geometry together with the text itself (e.g. Xerox ScanWorX). While T-Recs performs well on a number of document categories, business letters remained a challenging domain because the T-Recs location heuristics are misled by their headers or footers, resulting in low recognition precision. Business letters such as invoices are a very interesting domain for industrial applications due to the large number of documents to be analyzed and the importance of the data carried within their tables. Hence, we developed a more restrictive approach, which is implemented in the T-Recs++ prototype. This paper describes the ideas behind T-Recs++ table location and also proposes a quality evaluation measure that reflects the bottom-up strategy of both T-Recs and T-Recs++. Finally, some results comparing both systems on a collection of business letters are given.
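T-Recs' actual block segmentation and location heuristics are only summarized above; the sketch below merely illustrates the bottom-up flavour of such a system by clustering OCR word boxes into blocks whenever a word horizontally overlaps a word on a neighbouring text line. The box format and line-gap threshold are assumptions.

```python
def horizontally_overlap(a, b):
    """True if two word boxes (x0, y0, x1, y1) share horizontal extent."""
    return a[0] < b[2] and b[0] < a[2]

def group_into_blocks(words, line_gap=25):
    """Bottom-up grouping in the spirit of T-Recs: a word joins a block
    if it horizontally overlaps a word on a neighbouring text line."""
    blocks = []                                              # each block: list of boxes
    for box in sorted(words, key=lambda w: (w[1], w[0])):    # top to bottom
        target = None
        for block in blocks:
            if any(0 < box[1] - other[3] <= line_gap and horizontally_overlap(box, other)
                   for other in block):
                target = block
                break
        if target is None:
            blocks.append([box])
        else:
            target.append(box)
    return blocks

# Two columns of a toy table, word boxes given as (x0, y0, x1, y1)
words = [(10, 0, 60, 20), (200, 0, 260, 20),
         (12, 30, 55, 50), (205, 30, 250, 50)]
print(len(group_into_blocks(words)))   # -> 2 column-like blocks
```

A table locator in this spirit would then look for groups of such blocks arranged side by side as columns.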
Informatik Spektrum | 2010
Ralf Biedert; Georg Buscher; Andreas Dengel
Introduction: A rapid development of eye tracking technology has been observed in recent years. Today's eye trackers can determine the current focus point of the eye precisely while being relatively unobtrusive in their application. A variety of research and commercial groups have been working on this technology, and there is growing interest in such devices on the market. Eye tracking has great potential, and it can be assumed that it will advance further and might become a widespread technology used at a large number of personal or office computer workplaces. Approaches using simple webcams for eye tracking already exist, for example webcams integrated into laptop computers by default. Thus, they allow for new kinds of applications using eye gaze data.

However, not only is eye tracking technology advancing rapidly to an easily usable state. During the past 100 years, researchers have also gathered a considerable amount of knowledge about eye movements, why and how they occur, and what they might mean. So, today, we have the technology and knowledge for tracking and analyzing eye movements, making an excellent starting point for sophisticated interactive gaze-based applications.

Naive approaches in which gaze data is directly employed for interacting with the system, e.g., pressing buttons on the screen with the "blink of an eye", generally have serious problems. Because the eyes are organs used for perceiving the world and not for manipulating it, it is hard and against human nature to control eye movements deliberately. A highly promising approach, however, is simply to observe the eye movements of the user during his or her daily work in front of the computer, to infer user intentions based on eye movement behavior, and to provide assistance where helpful. Gaze can be seen as a proxy for the user's attention, and eye movements are known to be tightly coupled with cognitive processes in the brain, so that a great deal about those processes can be observed by eye tracking. For example, by interpreting eye movements, the reading behavior of the user can be detected, which most likely entails cognitive processes of understanding with regard to the currently read text.

In this paper we focus particularly on reading behavior, since reading is probably the most common activity of knowledge workers sitting in front of a computer screen. We present an algorithm for online reading detection based on eye tracking data and introduce an application for assisted and augmented reading called the eyeBook. The idea behind the eyeBook is to create an interactive and entertaining reading experience. The system observes which text parts are currently being read by the user on the screen and generates appropriate effects such as playing sounds, presenting
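How the eyeBook couples detected reading to effects is only described in prose above; the minimal sketch below shows the general pattern under assumed data structures: once a reading detector reports a passage as read, a registered effect is fired exactly once.

```python
class EyeBookDemo:
    """Minimal event handler in the spirit of the eyeBook idea: trigger an
    effect the first time a passage is detected as having been read."""

    def __init__(self, effects):
        self.effects = effects           # passage id -> effect description
        self.triggered = set()

    def on_passage_read(self, passage_id):
        """Called by a reading detector once a passage has been read."""
        if passage_id in self.effects and passage_id not in self.triggered:
            self.triggered.add(passage_id)
            print(f"[effect] {self.effects[passage_id]}")

book = EyeBookDemo({
    "ch1-p3": "play distant thunder",
    "ch2-p1": "show a map of the island",
})
for passage in ["ch1-p1", "ch1-p3", "ch1-p3", "ch2-p1"]:   # simulated detector output
    book.on_passage_read(passage)
```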
International Journal of Pattern Recognition and Artificial Intelligence | 1988
Andreas Dengel; Gerhard Barth
The realization of the paper-free office seems to be more difficult than expected. Therefore, good paper-computer interfaces are necessary to transform paper documents into an electronic form that allows the use of a filing and retrieval system. An electronic document page is an optically scanned and digitized representation of a printed page. Document analysis is the problem of interpreting and labeling the constituents of the document. Although there are very reliable optical character recognition (OCR) methods, the process can be very inefficient. To prune the search space and become more efficient, search-supporting methods have to be developed. This article proposes an approach to identify the layout of a document page by dividing it recursively into nested rectangular areas. The procedure is used as a basis for a document layout model, which is able to control an automatic interpretation mechanism for deriving a high-level representation of the contents of a document. We have implemented our method in Common Lisp on a Symbolics 3640 workstation and have run it on a large population of office documents. The results obtained have been very encouraging and have convincingly confirmed the soundness of our approach.
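The recursive division into nested rectangular areas resembles what is now commonly known as X-Y cut segmentation; the sketch below shows that general idea on a small binary page image, splitting at the widest empty horizontal or vertical gap and recursing into the resulting regions. The NumPy representation and the minimum gap width are assumptions, not the paper's original procedure.

```python
import numpy as np

def widest_gap(profile, min_gap=3):
    """Return (start, end) of the widest run of empty rows/columns, or None."""
    best, start = None, None
    for i, value in enumerate(list(profile) + [1]):   # sentinel ends a final run
        if value == 0 and start is None:
            start = i
        elif value != 0 and start is not None:
            if i - start >= min_gap and (best is None or i - start > best[1] - best[0]):
                best = (start, i)
            start = None
    return best

def xy_cut(page, top=0, left=0):
    """Recursively split a binary page image (1 = ink) into rectangles,
    trying a horizontal cut, then a vertical cut, at the widest white gap."""
    rows, cols = page.sum(axis=1), page.sum(axis=0)
    for axis, profile in ((0, rows), (1, cols)):
        gap = widest_gap(profile)
        if gap:
            a, b = gap
            first = page[:a] if axis == 0 else page[:, :a]
            second = page[b:] if axis == 0 else page[:, b:]
            off = (top + b, left) if axis == 0 else (top, left + b)
            return xy_cut(first, top, left) + xy_cut(second, *off)
    return [(top, left, top + page.shape[0], left + page.shape[1])]

page = np.zeros((20, 20), dtype=int)
page[1:8, 1:19] = 1       # a "title" block
page[12:19, 1:9] = 1      # two "column" blocks below it
page[12:19, 12:19] = 1
print(xy_cut(page))       # three nested rectangles
```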
Human Factors in Computing Systems | 2010
Georg Buscher; Ralf Biedert; Daniel Heinesch; Andreas Dengel
We report on an exploratory study analyzing preferred reading regions on a monitor using eye tracking. We show that users have individually preferred reading regions, varying in location on the screen and in size. Furthermore, we explore how scrolling interactions and mouse movements are correlated with position and size of the individually preferred reading regions.
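The study's analysis is not detailed above; as a small illustration of what an "individually preferred reading region" could mean operationally, the sketch below estimates a per-user region as the box spanned by the central mass of that user's reading fixations. The trimming percentile is an arbitrary assumption.

```python
import numpy as np

def preferred_reading_region(fixations, trim=10):
    """Estimate a user's preferred reading region as the box spanned by
    the central mass of reading fixations (extreme percentiles trimmed)."""
    xy = np.asarray(fixations, dtype=float)          # rows of (x, y) in pixels
    lo = np.percentile(xy, trim, axis=0)
    hi = np.percentile(xy, 100 - trim, axis=0)
    return (lo[0], lo[1], hi[0], hi[1])              # (x_min, y_min, x_max, y_max)

# Simulated fixations clustered in the upper-middle part of a 1920x1080 screen
rng = np.random.default_rng(1)
fix = np.column_stack([rng.normal(960, 150, 500), rng.normal(350, 80, 500)])
x0, y0, x1, y1 = preferred_reading_region(fix)
print(f"region ≈ ({x0:.0f}, {y0:.0f}) to ({x1:.0f}, {y1:.0f})")
```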