
Publication


Featured research published by Susanne Stadler.


Automotive User Interfaces and Interactive Vehicular Applications | 2014

Towards Autonomous Cars: The Effect of Autonomy Levels on Acceptance and User Experience

Christina Rödel; Susanne Stadler; Alexander Meschtscherjakov; Manfred Tscheligi

Surveys [8] show that people generally have a positive attitude towards autonomous cars. However, these studies neglect that cars have different levels of autonomy and that User Acceptance (UA) and User Experience (UX) with autonomous systems differ with regard to the degree of system autonomy. The National Highway Traffic Safety Administration (NHTSA) defines five levels of car autonomy, which vary in the penetration of Advanced Driver Assistance Systems (ADAS) and the extent to which the car is controlled by autonomous systems. Based on these levels, we conducted an online questionnaire study (N = 336) in which we investigated how UA and UX factors, such as Perceived Ease of Use, Attitude Towards Using the System, Perceived Behavioral Control, Behavioral Intention to Use a System, Trust, and Fun, differ with regard to the degree of autonomy in cars. We show that UA and UX are highest for levels of autonomy that have already been deployed in modern cars. More specifically, perceived control and fun decrease continuously with higher autonomy. Furthermore, our results indicate that pre-experience with ADAS and demographics, such as age and gender, influence UA and UX.


Frontiers in Psychology | 2015

Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations

Manuel Giuliani; Nicole Mirnig; Gerald Stollnberger; Susanne Stadler; Roland Buchner; Manfred Tscheligi

Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example, smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider adding modules for the recognition and classification of head movements to the robot's input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.


International Conference on Social Robotics | 2015

Impact of Robot Actions on Social Signals and Reaction Times in HRI Error Situations

Nicole Mirnig; Manuel Giuliani; Gerald Stollnberger; Susanne Stadler; Roland Buchner; Manfred Tscheligi

Human-robot interaction experiments featuring error situations are often excluded from analysis. We argue that a lot of value lies hidden in this discarded data. We analyzed a corpus of 201 videos that show error situations in human-robot interaction experiments. The aim of our analysis was to investigate (a) whether and which social signals the experiment participants show in reaction to error situations, (b) how long it takes the participants to react in error situations, and (c) whether different robot actions elicit different social signals. We found that participants showed social signals in 49.3% of error situations, more often during social norm violations and less often during technical failures. Task-related actions by the robot elicited fewer social signals from the participants, while participants showed more social signals when the robot did not react. Finally, the participants had an overall reaction time of 1.64 seconds before they showed a social signal in response to a robot action. The reaction times are particularly long (4.39 seconds) for task-related actions that go wrong during execution.


Robot and Human Interactive Communication | 2014

I trained this robot: The impact of pre-experience and execution behavior on robot teachers

Susanne Stadler; Astrid Weiss; Manfred Tscheligi

The teacher-learner constellation is a special one in Human-Robot Interaction (HRI), as it can essentially improve intuitive interaction with robots. In a 2 (background: programmer vs. non-programmer) × 3 (teacher: self vs. believed other vs. other) between-participants experiment (n=48, counterbalanced for gender), participants kinesthetically taught a humanoid NAO robot a specific behavior, which the robot had to execute afterwards. Next, participants downloaded a taught behavior to the NAO and were told that the executed behavior was either (1) the one they had previously taught (self), (2) one taught by someone else, although it was actually their own (believed other), or (3) one taught by someone else (other). We were interested in two main aspects: (1) whether programmers and non-programmers show differences in their teaching behavior and in the perceived teaching workload, and (2) whether participants show greater self-extension and trust toward a robot they taught themselves than toward a robot they believed someone else taught. The study revealed that teaching style, independent of background, is reflected in the behavior execution time. Programmers showed a higher perceived workload than non-programmers. No differences in trust were found, but a self-extension effect was observed: participants showed greater self-extension toward a robot they had taught themselves. Implications for Human-Robot Interaction are discussed.


International Conference on Multimodal Interfaces | 2017

Head and shoulders: automatic error detection in human-robot interaction

Pauline Trung; Manuel Giuliani; Michael Miksch; Gerald Stollnberger; Susanne Stadler; Nicole Mirnig; Manfred Tscheligi

We describe a novel method for the automatic detection of errors in human-robot interactions. Our approach is to detect errors by classifying the head and shoulder movements of humans who are interacting with erroneous robots. We conducted a user study in which participants interacted with a robot that we programmed to make two types of errors: social norm violations and technical failures. During the interaction, we recorded the behavior of the participants with a Kinect v1 RGB-D camera. Overall, we recorded a data corpus of 237,998 frames at 25 frames per second; 83.48% of the frames showed no error situation and 16.52% showed an error situation. Furthermore, we computed six different feature sets to represent the movements of the participants and the temporal aspects of their movements. Using these data, we trained a rule learner, a Naive Bayes classifier, and a k-nearest neighbor classifier and evaluated the classifiers with 10-fold cross-validation and leave-one-out cross-validation. The results of this evaluation suggest the following: (1) the detection of an error situation works well when the robot has seen the human before; (2) the rule learner and k-nearest neighbor classifiers work well for automated error detection when the robot is interacting with a known human; (3) for unknown humans, the Naive Bayes classifier performed best; (4) the classification of social norm violations performs worst; (5) there was no large performance difference between the original data and normalized feature sets that represent the relative position of the participants.
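
A minimal sketch of the evaluation setup described in this abstract, assuming scikit-learn as the toolkit; it is not the authors' pipeline. The feature matrix, labels, and participant IDs are synthetic placeholders, a decision tree merely stands in for the rule learner (scikit-learn has no RIPPER-style learner), and leave-one-participant-out splitting is used here to approximate the "unknown human" condition, since the abstract does not state how the original leave-one-out splits were formed.

```python
# Hypothetical sketch, not the authors' code: comparing classifiers for
# per-frame error detection from head/shoulder movement features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier  # stand-in for a rule learner
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                 # 1000 frames x 12 movement features (synthetic)
y = rng.integers(0, 2, size=1000)               # 1 = error situation, 0 = no error
participants = rng.integers(0, 10, size=1000)   # which participant each frame belongs to

classifiers = {
    "rule-like learner (decision tree)": DecisionTreeClassifier(max_depth=5),
    "naive Bayes": GaussianNB(),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    # "Known human": 10-fold CV mixes every participant's frames across folds.
    known = cross_val_score(clf, X, y, cv=10)
    # "Unknown human": each fold leaves one participant out of training entirely.
    unknown = cross_val_score(clf, X, y, groups=participants, cv=LeaveOneGroupOut())
    print(f"{name}: known-human acc {known.mean():.2f}, unknown-human acc {unknown.mean():.2f}")
```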


Human-Robot Interaction | 2017

Handovers and Resumption of Control in Semi-Autonomous Vehicles: What the Automotive Domain can Learn from Human-Robot-Interaction

Alexander G. Mirnig; Susanne Stadler; Manfred Tscheligi

The operation of semi-autonomous vehicles involves so-called handovers, i.e., transitions of control from driver to vehicle or vice versa. While the initiation and signaling of such handovers is actively being researched, the transition back to the vehicle or driver after a driving task has been completed receives considerably less attention. Clarity in communicating resumption of control is important to avoid conflicts between driver and vehicle, which could result in road accidents. In this paper, we draw from the field of robotics to inform solutions for communicating resumption of control in automotive interaction design.


Human-Robot Interaction | 2017

Using Persona, Scenario, and Use Case to Develop a Human-Robot Augmented Reality Collaborative Workspace

Zdeněk Materna; Michal Kapinus; Vítězslav Beran; Pavel Smrž; Manuel Giuliani; Nicole Mirnig; Susanne Stadler; Gerald Stollnberger; Manfred Tscheligi

To date, methods from Human-Computer Interaction (HCI) have not been widely adopted in the development of Human-Robot Interaction (HRI) systems. In this paper, we describe a system prototype and a use case. The prototype is an augmented reality-based collaborative workspace. The envisioned solution is focused on small and medium-sized enterprises (SMEs), where it should enable ordinarily skilled workers to program a robot at a high level of abstraction and to perform collaborative tasks effectively and safely. The use case consists of a scenario and a persona, two methods from the field of HCI. We outline how we are going to use these methods in the near future to refine the task of the collaborating robot and human and the interface elements of the collaborative workspace.


Human-Robot Interaction | 2017

Industrial Human-Robot Interaction: Creating Personas for Augmented Reality supported Robot Control and Teaching

Susanne Stadler; Nicole Mirnig; Manuel Giuliani; Manfred Tscheligi; Zdeněk Materna; Michal Kapinus

In strong cooperation with small and medium-sized enterprises (SMEs), we research the simplification of online programming for industrial robots. To that end, we extend existing programming interfaces with augmented reality (AR) technology. We proactively use personas as a tool for human-robot interaction design, for communicating study results, and for discussing findings with our industrial partners. While personas are popular in Human-Computer Interaction (HCI), their use is not well established in Human-Robot Interaction (HRI). We conducted contextual inquiries, interviews, and questionnaires with 80 industrial robotics professionals. These qualitative and quantitative data give us a basis for describing the characteristics of people working in the industrial robotics field. This work focuses on our approach to developing and sharing basic, data-based personas for industrial robotics research. At this early stage, we propose an initial list of variables suitable for personas in the field of industrial HRI. For our purpose, we extend this variable list with AR-related person characteristics. The future aim of this work is to provide a set of basic personas for industrial robotics, usable in academic and industrial environments, that can be adapted to particular usage scenarios.


Robot and Human Interactive Communication | 2016

User requirements for a medical robotic system: Enabling doctors to remotely conduct ultrasonography and physical examination

Gerald Stollnberger; Christiane Moser; Manuel Giuliani; Susanne Stadler; Manfred Tscheligi; Dorota Szczesniak-Stanczyk; Bartlomiej Stanczyk

We report the results of a user requirements analysis for a medical robotic system that enables doctors to remotely conduct ultrasonography and physical examinations of patients. As there are three different user groups in this scenario - doctors, patients, and assistants - we collected user requirements for all of these groups. This analysis forms a basis for the technical specification of the medical robotic system. To gather the user requirements, we conducted a literature review, observed two examinations of a patient conducted by a doctor, organised four workshops with doctors and patients, and quantified the qualitative data in two online surveys. The most important findings of the requirements analysis are that doctors need accurate kinesthetic, tactile, and audiovisual feedback for a proper diagnosis, and that they need additional patient data beyond the ultrasonography and physical examination (e.g., olfactory information and skin wetness). Doctors, patients, and assistants all want a secure audiovisual communication channel during the whole examination, and patients in particular have concerns regarding the safety of the robot arm and data privacy. We present a list of requirements for doctors, patients, and assistants, and discuss their implications for the technical specifications of the system.


Frontiers in Robotics and AI | 2017

To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

Nicole Mirnig; Gerald Stollnberger; Markus Miksch; Susanne Stadler; Manuel Giuliani; Manfred Tscheligi

Collaboration


Dive into Susanne Stadler's collaborations.

Top Co-Authors

Manfred Tscheligi

Austrian Institute of Technology

Astrid Weiss

Vienna University of Technology

Michal Kapinus

Brno University of Technology
