
Publication


Featured research published by Stela H. Seo.


Human-Robot Interaction | 2015

Poor Thing! Would You Feel Sorry for a Simulated Robot?: A Comparison of Empathy Toward a Physical and a Simulated Robot

Stela H. Seo; Denise Geiskkovitch; Masayuki Nakane; Corey King; James Everett Young

In designing and evaluating human-robot interactions and interfaces, researchers often use a simulated robot due to the high cost of robots and the time required to program them. However, it is important to consider how interaction with a simulated robot differs from a real robot; that is, do simulated robots provide authentic interaction? We contribute to a growing body of work that explores this question and maps out simulated-versus-real differences by explicitly investigating empathy: how people empathize with a physical or simulated robot when something bad happens to it. Our results suggest that people may empathize more with a physical robot than a simulated one, a finding that has important implications for the generalizability and applicability of simulated HRI work. Empathy is particularly relevant to social HRI and is integral to, for example, companion and care robots. Our contribution additionally includes an original and reproducible HRI experimental design to induce empathy toward robots in laboratory settings, and an experimentally validated empathy-measuring instrument from psychology for use with HRI.


Robot and Human Interactive Communication | 2013

An interface for remote robotic manipulator control that reduces task load and fatigue

Ashish Singh; Stela H. Seo; Yasmeen Hashish; Masayuki Nakane; James Everett Young; Andrea Bunt

Remote control robots are found in an increasing number of application domains, including search and rescue, exploration, and reconnaissance. There is a large body of HRI research that investigates interface design for remote navigation, control, and sensor monitoring, aiming for interface enhancements that benefit the remote operator, such as improving ease of use, reducing operator mental load, and maximizing awareness of a robot's state and remote environment. Even though many remote control robots have multi-degree-of-freedom robotic manipulator arms for interacting with the environment, there is only limited research into easy-to-use remote control interfaces for such manipulators, and many commercial robotic products still use simplistic interface technologies such as keypads or gamepads with arbitrary mappings to arm morphology. In this paper, we present an original interface for the remote control of a multi-degree-of-freedom robotic arm. We conducted a controlled experiment comparing our interface to an existing commercial keypad interface and detail our results, which indicate that our interface was easier to use, required less cognitive task load, and enabled people to complete tasks more quickly.


International Journal of Social Robotics | 2018

Investigating People’s Rapport Building and Hindering Behaviors When Working with a Collaborative Robot

Stela H. Seo; Keelin Griffin; James Everett Young; Andrea Bunt; Susan Prentice; Verónica Loureiro-Rodríguez

Modern industrial robots are increasingly moving toward collaborating with people on complex tasks as team members, and away from working in isolated cages separated from people. Collaborative robots are programmed to use social communication techniques with people, such as speech, gestures, or gaze, enabling human team members to use their existing interpersonal skills to work with robots. Research is increasingly investigating how robots can use higher-level social structures such as team dynamics or conflict resolution. One particularly important aspect of human–human teamwork is rapport building: everyday social interactions between people that help develop professional relationships by establishing trust, confidence, and collegiality, but which are formally peripheral to the task at hand. In this paper, we report on our investigations of whether and how people apply similar rapport-building behaviors to robot collaborators. First, we synthesized existing human–human rapport knowledge into an initial human–robot interaction framework; this framework includes verbal and non-verbal behaviors, both rapport building and rapport hindering, that people can be expected to exhibit. We then developed a novel mock industrial task scenario that emphasizes ecological validity and creates the range of social interactions necessary for investigating rapport. Finally, we report on a qualitative study that investigates how people use rapport-hindering or rapport-building behaviors in our industrial scenario, reflecting how people may interact with robots in industrial settings.


Human-Robot Interaction | 2017

Movers, Shakers, and Those Who Stand Still: Visual Attention-grabbing Techniques in Robot Teleoperation

Daniel J. Rea; Stela H. Seo; Neil D. B. Bruce; James Everett Young

We designed and evaluated a series of teleoperation interface techniques that aim to draw operator attention while mitigating the negative effects of interruption. Monitoring live teleoperation video feeds, for example to search for survivors in search and rescue, can be cognitively taxing, particularly for operators driving multiple robots or monitoring multiple cameras. To reduce workload, emerging computer vision techniques can automatically identify and indicate (cue) salient points of potential interest for the operator. However, it is not clear how to cue such points to a preoccupied operator (whether cues would be distracting and a hindrance to operators), or how the design of the cue may impact operator cognitive load, attention drawn, and primary task performance. In this paper, we detail our iterative design process for creating a range of visual attention-grabbing cues grounded in the psychological literature on human attention, and two formal evaluations that measure attention-grabbing capability and impact on operator performance. Our results show that visually cueing on-screen points of interest does not distract operators, that operators perform poorly without the cues, and how particular cue design parameters impact operator cognitive load and task performance. Specifically, full-screen cues can lower cognitive load but can increase response time; animated cues may improve accuracy but increase cognitive load. Finally, from this design process we provide tested, theoretically grounded cues for attention drawing in teleoperation.


Human-Agent Interaction | 2015

Women and Men Collaborating with Robots on Assembly Lines: Designing a Novel Evaluation Scenario for Collocated Human-Robot Teamwork

Stela H. Seo; Jihyang Gu; Seongmi Jeong; Keelin Griffin; James Everett Young; Andrea Bunt; Susan Prentice

This paper presents an original scenario design created specifically for exploring gender-related issues surrounding collaborative human-robot teams on assembly lines. Our methodology is grounded squarely in the need for increased attention to gender in human-robot interaction research. As with most research in social human-robot interaction, investigating and exploring gender issues relies heavily on an evaluation methodology and scenario that maximize ecological validity, so that lab results can generalize to a real-world social scenario. In this paper, we discuss the study elements required for ecological validity in our context, present an original study design that meets these criteria, and present initial pilot results that reflect on our approach and study design.


Human-Robot Interaction | 2015

Autonomy, Embodiment, and Obedience to Robots

Denise Geiskkovitch; Stela H. Seo; James Everett Young

We conducted an HRI obedience experiment comparing an autonomous robotic authority to (i) a remote-controlled robot and (ii) robots of varying embodiment during a deterrent task. The results suggest that half of people will continue to perform a tedious task under the direction of a robot, even after expressing a desire to stop. Further, we failed to find an impact of robot embodiment or perceived robot autonomy on obedience. Rather, the robot's perceived authority status may be more strongly correlated with obedience.


Human-Robot Interaction | 2017

Picassnake: Robot Performance Art

Stela H. Seo; James Everett Young

In this video, we present an artist robot, Picassnake (Figure 2). The robot listens to music, thinks (as in processing), and draws a unique abstract painting. Art has been considered a human specialty, that is, a result of people's creativity, intention, and emotional expression. However, the robot's unique paintings prompt people to think about and discuss the meaning of art and the relationship between a robot and art: what art is, what creativity is, whether the robot's abstract paintings are art, how the robot's paintings differ from people's, and so on. Public performance is also a part of art. Many human artists express their emotions, feelings, and artistic senses through performance, for example, singing, acting, and miming. As the painting robot publicly performed its painting, this expands the discussion: is the robot an artist? Our robot's performances leave behind artifacts, the abstract paintings (Figure 1). These may serve as catalysts for thoughts, discussions, and debates, much like the masterpieces continually discussed and studied throughout human history. The discussion of robots and art can further expand to many people. After watching this short video clip, we invite you to ask yourself: what is art? Can a robot be an artist?


14th FIRA RoboWorld Congress on Next Wave in Robotics, FIRA 2011 | 2011

Learning of facial gestures using SVMs

Jacky Baltes; Stela H. Seo; Chi Tai Cheng; Meng Cheng Lau; John Anderson

This paper describes the implementation of a fast and accurate gesture recognition system. Image sequences are used to train a standard SVM to recognize Yes, No, and Neutral gestures from different users. We show that our system detects facial gestures with more than 80% accuracy, even from small input images.
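The paper's pipeline (flattened face images labelled Yes/No/Neutral fed to a standard SVM) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, image size, and scikit-learn classifier choice are all assumptions standing in for the original image sequences and SVM implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the paper's data: flattened 16x16 grayscale
# "frames", one cluster of pixel intensities per gesture class.
rng = np.random.default_rng(0)
labels = ["Yes", "No", "Neutral"]
X_parts, y = [], []
for i, label in enumerate(labels):
    X_parts.append(rng.normal(loc=i, scale=0.3, size=(100, 16 * 16)))
    y += [label] * 100
X = np.vstack(X_parts)

# Hold out a test set, keeping class proportions balanced.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A standard SVM classifier, as in the paper.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

On real face crops, one would replace the synthetic arrays with flattened image frames; the well-separated synthetic clusters here simply make the sketch runnable end to end.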


Human-Robot Interaction | 2016

Please Continue, We Need More Data: An Exploration of Obedience to Robots

Denise Geiskkovitch; Derek Cormier; Stela H. Seo; James Everett Young


Discrete Mathematics | 2012

Friendship 3-hypergraphs

Pak Ching Li; G. H. J. van Rees; Stela H. Seo; N.M. Singhi

Collaboration


Dive into Stela H. Seo's collaborations.

Top Co-Authors

Andrea Bunt

University of Manitoba

Derek Cormier

University of British Columbia
