Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adrian Schwaninger is active.

Publication


Featured research published by Adrian Schwaninger.


Cognition | 2008

An own-race advantage for components as well as configurations in face recognition

William G. Hayward; Gillian Rhodes; Adrian Schwaninger

The own-race advantage in face recognition has been hypothesized as being due to a superiority in the processing of configural information for own-race faces. Here we examined the contributions of both configural and component processing to the own-race advantage. We recruited 48 Caucasian participants in Australia and 48 Chinese participants in Hong Kong, and had them study Caucasian and Chinese faces. After study, they were shown old faces (along with distractors) that were either blurred (isolating configural processing), in which high spatial frequencies were removed from the intact faces, or scrambled (isolating component processing), in which the locations of all face components were rearranged. Participants performed better on the memory test for own-race faces in both the blurred (configural) and scrambled (component) conditions, showing an own-race advantage for both configural and component processing. These results suggest that the own-race advantage in face recognition is due to a general facilitation in different forms of face processing.


BMCV '02 Proceedings of the Second International Workshop on Biologically Motivated Computer Vision | 2002

Role of Featural and Configural Information in Familiar and Unfamiliar Face Recognition

Adrian Schwaninger; Janek S. Lobmaier; Stephan M. Collishaw

Using psychophysics we investigated to what extent human face recognition relies on local information in parts (featural information) and on their spatial relations (configural information). This is particularly relevant for biologically motivated computer vision since recent approaches have started considering such featural information. In Experiment 1 we showed that previously learnt faces could be recognized by human subjects when they were scrambled into constituent parts. This result clearly indicates a role of featural information. Then we determined the blur level that made the scrambled part versions impossible to recognize. This blur level was applied to whole faces in order to create configural versions that by definition do not contain featural information. We showed that configural versions of previously learnt faces could be recognized reliably. In Experiment 2 we replicated these results for familiar face recognition. Both Experiments provide evidence in favor of the view that recognition of familiar and unfamiliar faces relies on featural and configural information. Furthermore, the balance between the two does not differ for familiar and unfamiliar faces. We propose an integrative model of familiar and unfamiliar face recognition and discuss implications for biologically motivated computer vision algorithms for face recognition.


Human-Computer Interaction with Mobile Devices and Services | 2005

Towards improving trust in context-aware systems by displaying system confidence

Stavros Antifakos; Nicky Kern; Bernt Schiele; Adrian Schwaninger

For automatic or context-aware systems, a major issue is user trust, which is to a large extent determined by system reliability. For systems based on sensor input, which is inherently uncertain or even incomplete, there is little hope that they will ever be perfectly reliable. In this paper we test the hypothesis that explicitly displaying the current confidence of the system increases the usability of such systems. For the example of a context-aware mobile phone, the experiments show that displaying confidence information increases the user's trust in the system.


Progress in Brain Research | 2006

Processing of identity and emotion in faces: a psychophysical, physiological and computational perspective

Adrian Schwaninger; Christian Wallraven; Douglas W. Cunningham; Sarah D. Chiller-Glaus

A deeper understanding of how the brain processes visual information can be obtained by comparing results from complementary fields such as psychophysics, physiology, and computer science. In this chapter, empirical findings are reviewed with regard to the proposed mechanisms and representations for processing identity and emotion in faces. Results from psychophysics clearly show that faces are processed by analyzing component information (eyes, nose, mouth, etc.) and their spatial relationship (configural information). Results from neuroscience indicate separate neural systems for recognition of identity and facial expression. Computer science offers a deeper understanding of the required algorithms and representations, and provides computational modeling of psychological and physiological accounts. An interdisciplinary approach taking these different perspectives into account provides a promising basis for better understanding and modeling of how the human brain processes visual information for recognition of identity and emotion in faces.


International Carnahan Conference on Security Technology | 2004

Measuring visual abilities and visual knowledge of aviation security screeners

Adrian Schwaninger; Diana Hardmeier; Franziska Hofer

A central aspect of airport security is reliable detection of forbidden objects in passenger bags using X-ray screening equipment. Human recognition involves visual processing of the X-ray image and matching items with object representations stored in visual memory. Thus, without knowing which objects are forbidden and what they look like, prohibited items are difficult to recognize (aspect of visual knowledge). In order to measure whether a screener has acquired the necessary visual knowledge, we have applied the prohibited items test (PIT). This test contains different forbidden items according to international prohibited items lists. The items are placed in X-ray images of passenger bags so that the object shapes can be seen relatively well. Since all images can be inspected for 10 seconds, failing to recognize a threat item can be mainly attributed to a lack of visual knowledge. The object recognition test (ORT) is more related to visual processing and encoding. Three image-based factors can be distinguished that challenge different visual processing abilities. First, depending on the rotation within a bag, an object can be more or less difficult to recognize (effect of viewpoint). Second, prohibited items can be more or less superimposed by other objects, which can impair detection performance (effect of superposition). Third, the number and type of other objects in a bag can challenge visual search and processing capacity (effect of bag complexity). The ORT has been developed to measure how well screeners can cope with these image-based factors. This test contains only guns and knives, placed into bags in different views with different superposition and complexity levels. Detection performance is determined by the ability of a screener to detect threat items despite rotation, superposition and bag complexity. 
Since the shapes of guns and knives are usually known well even by novices, the aspect of visual threat object knowledge is of minor importance in this test.


WIT Transactions on the Built Environment | 2005

Using threat image projection data for assessing individual screener performance

Franziska Hofer; Adrian Schwaninger

Threat image projection (TIP) is a technology of current X-ray machines that allows exposing screeners to artificial but realistic X-ray images during the routine baggage X-ray screening operation. If a screener does not detect a TIP image within a specified amount of time, a feedback message appears indicating that a projected image was missed. Feedback messages are also shown when a TIP image is detected, or in the case of a non-TIP alarm, i.e. when the screener indicated that there was a threat but in fact no TIP image was shown. TIP data is an interesting source for quality control, risk analysis and assessment of individual screener performance. In two studies we examined the conditions for using TIP data for the latter purpose. First, our results strongly suggest using aggregated data in order to have a large enough sample as the basis for statistical analysis. Second, an appropriate TIP library containing a large number of threat items representative of the prohibited items to be detected is recommended. Furthermore, consideration should be given to image-based factors such as general threat item difficulty, viewpoint difficulty, superposition and bag complexity. Different methods to cope with these issues are discussed in order to achieve reliable, valid and standardized measurements of individual screener performance using TIP.
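The abstract gives no code; as a rough sketch of the aggregation idea it argues for (all names and the minimum-sample threshold here are hypothetical, not from the paper), per-screener TIP hit rates might be computed like this:

```python
from collections import defaultdict

def tip_hit_rates(events, min_events=50):
    """Aggregate TIP exposures into a hit rate per screener.

    events: iterable of (screener_id, detected) pairs, one per projected
    TIP image. Screeners with fewer than min_events exposures are dropped,
    reflecting the point that small samples are statistically unreliable.
    """
    counts = defaultdict(lambda: [0, 0])  # screener_id -> [hits, exposures]
    for screener_id, detected in events:
        counts[screener_id][1] += 1
        if detected:
            counts[screener_id][0] += 1
    return {sid: hits / total
            for sid, (hits, total) in counts.items()
            if total >= min_events}
```

A fuller treatment would also weight each TIP image by the image-based difficulty factors the paper lists (viewpoint, superposition, bag complexity) rather than treating all exposures as equal.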


IEEE Aerospace and Electronic Systems Magazine | 2005

Aviation Security Screeners Visual Abilities & Visual Knowledge Measurement

Adrian Schwaninger; Diana Hardmeier; Franziska Hofer

A central aspect of airport security is reliable detection of forbidden objects in passengers' bags using X-ray screening equipment. Human recognition involves visual processing of the X-ray image and matching items with object representations stored in visual memory. Thus, without knowing which objects are forbidden and what they look like, prohibited items are difficult to recognize (aspect of visual knowledge). In order to measure whether a screener has acquired the necessary visual knowledge, we have applied the prohibited items test (PIT). This test contains different forbidden items according to international prohibited items lists. The items are placed in X-ray images of passenger bags so that the object shapes can be seen relatively well. Since all images can be inspected for 10 seconds, failing to recognize a threat item can be mainly attributed to a lack of visual knowledge. The object recognition test (ORT) is more related to visual processing and encoding. Three image-based factors can be distinguished that challenge different visual processing abilities. First, depending on the rotation within a bag, an object can be more or less difficult to recognize (effect of viewpoint). Second, prohibited items can be more or less superimposed by other objects, which can impair detection performance (effect of superposition). Third, the number and type of other objects in a bag can challenge visual search and processing capacity (effect of bag complexity). The ORT has been developed to measure how well screeners cope with these image-based factors. This test contains only guns and knives, placed into bags in different views with different superposition and complexity levels. Detection performance is determined by the ability of a screener to detect threat items despite rotation, superposition and bag complexity. 
Since the shapes of guns and knives are usually well-known even by novices, the aspect of visual threat object knowledge is of minor importance in this test. A total of 134 aviation security screeners and 134 novices participated in this study. Detection performance was measured using A’. The three image-based factors of the ORT were validated. The effect of view, superposition, and bag complexity were highly significant. The validity of the PIT was examined by comparing the two participant groups. Large differences were found in detection performance between screeners and novices for the PIT. This result is consistent with the assumption that the PIT measures aspects related to visual knowledge. Although screeners were also better than novices in the ORT, the relative difference was much smaller. This result is consistent with the assumption that the ORT measures image-based factors that are related to visual processing abilities; whereas the PIT is more related to visual knowledge. For both tests, large inter-individual differences were found. Reliability was high for both participant groups and tests, indicating that they can be used for measuring performance on an individual basis. The application of the ORT and PIT for screener certification and competency assessment are discussed.
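The abstract reports detection performance as A', a non-parametric sensitivity measure from signal detection theory. One standard formulation (often attributed to Pollack and Norman) combines hit rate and false alarm rate as sketched below; the function name is illustrative, not from the paper:

```python
def a_prime(hit_rate, fa_rate):
    """Non-parametric sensitivity estimate A'.

    Returns 0.5 at chance (hit rate == false alarm rate) and approaches
    1.0 for perfect discrimination.
    """
    if hit_rate == fa_rate:
        return 0.5
    if hit_rate > fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) / (
            4 * hit_rate * (1 - fa_rate))
    # Below-chance performance: mirror the formula.
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) / (
        4 * fa_rate * (1 - hit_rate))
```

Unlike d', A' requires no assumption of normally distributed signal and noise, which makes it convenient for screener data where false alarm rates can be very low.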


Experimental Psychology | 2006

Objects Capture Perceived Gaze Direction

Janek S. Lobmaier; Martin H. Fischer; Adrian Schwaninger

The interpretation of another person's eye gaze is a key element of social cognition. Previous research has established that this ability develops early in life and is influenced by the person's head orientation, as well as local features of the person's eyes. Here we show that the presence of objects in the attended space also has an impact on gaze interpretation. Eleven normal adults identified the fixation points of photographed faces with a mouse cursor. Their responses were systematically biased toward the locations of nearby objects. This capture of perceived gaze direction probably reflects the attribution of intentionality and has methodological implications for research on gaze perception.


International Carnahan Conference on Security Technology | 2005

The X-ray object recognition test (X-ray ORT) - a reliable and valid instrument for measuring visual abilities needed in X-ray screening

Diana Hardmeier; Franziska Hofer; Adrian Schwaninger

Aviation security screening has become very important in recent years. It was shown by Schwaninger et al. (2004) that certain image-based factors influence detection when visually inspecting X-ray images of passenger bags. Threat items are more difficult to recognize when placed in close-packed bags (effect of bag complexity), when superimposed by other objects (effect of superposition), and when rotated (effect of viewpoint). The X-ray object recognition test (X-ray ORT) was developed to measure the abilities needed to cope with these factors. In this study, we examined the reliability and validity of the X-ray ORT based on a sample of 453 aviation security screeners and 453 novices. Cronbach's alpha and split-half analysis revealed high reliability. Validity was examined using internal, convergent, discriminant and criterion-related validity estimates. The results show that the X-ray ORT is a reliable and valid instrument for measuring visual abilities needed in X-ray screening. This makes the X-ray ORT an interesting tool for competency and pre-employment assessment purposes.
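The reliability analysis mentioned here relies on Cronbach's alpha, which relates the sum of per-item score variances to the variance of total scores across respondents. A minimal sketch of the standard formula (not the authors' code):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each a list of respondent scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    where k is the number of items.
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(variance(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))
```

When items vary together across respondents (internal consistency), total-score variance dominates the item-variance sum and alpha approaches 1; uncorrelated items drive it toward 0.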


Ubiquitous Computing | 2004

Evaluating the Effects of Displaying Uncertainty in Context-Aware Applications

Stavros Antifakos; Adrian Schwaninger; Bernt Schiele

Many context-aware systems assume that the context information they use is highly accurate. In reality, however, perfect and reliable context information is hard, if not impossible, to obtain. Several researchers have therefore argued that proper feedback, such as monitor and control mechanisms, has to be employed in order to make context-aware systems applicable and usable in scenarios of realistic complexity. As of today, those feedback mechanisms are difficult to compare since they are too rarely evaluated. In this paper we propose and evaluate a simple but effective feedback mechanism for context-aware systems. The idea is to explicitly display the uncertainty inherent in the context information and to leverage the human ability to deal well with uncertain information. To evaluate the effectiveness of this feedback mechanism, the paper describes two user studies which mimic a ubiquitous memory aid. By varying the quality, and hence the uncertainty, of context recognition, the experiments show that human performance in a memory task increases when uncertainty information is explicitly displayed. Finally, we discuss implications of these experiments for today's context-aware systems.
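As a toy illustration of the core idea of surfacing uncertainty to the user (the function name, wording, and threshold are invented for this sketch, not taken from the paper), a context-aware application might annotate its output like this:

```python
def display_context(context, confidence, threshold=0.6):
    """Format an inferred context together with the system's confidence.

    Rather than hiding uncertainty, the message shows it explicitly so
    the user can decide how much to trust the inference.
    """
    percent = round(confidence * 100)
    if confidence >= threshold:
        return f"{context} (confidence: {percent}%)"
    return f"Unsure -- possibly {context} (confidence: {percent}%)"
```

The design choice mirrors the paper's hypothesis: humans handle uncertain information well, so an honest "Unsure" message can be more usable than a confidently wrong one.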

Collaboration


Dive into Adrian Schwaninger's collaboration.

Top Co-Authors
