
Publication


Featured research published by Brian K. Mok.


Human-Robot Interaction | 2015

Mechanical Ottoman: How Robotic Furniture Offers and Withdraws Support

David Sirkin; Brian K. Mok; Stephen Yang; Wendy Ju

This paper describes our approach to designing, developing behaviors for, and exploring the use of, a robotic footstool, which we named the mechanical ottoman. By approaching unsuspecting participants and attempting to get them to place their feet on the footstool, and then later attempting to break the engagement and get people to take their feet down, we sought to understand whether and how motion can be used by non-anthropomorphic robots to engage people in joint action. In several embodied design improvisation sessions, we observed a tension between people perceiving the ottoman as a living being, such as a pet, and simultaneously as a functional object, which requests that they place their feet on it—something they would not ordinarily do with a pet. In a follow-up lab study (N=20), we found that most participants did make use of the footstool, although several chose not to place their feet on it for this reason. We also found that participants who rested their feet understood a brief lift and drop movement as a request to withdraw, and formed detailed notions about the footstool’s agenda, ascribing intentions based on its movement alone.


Robot and Human Interactive Communication | 2016

Ghost driver: A field study investigating the interaction between pedestrians and driverless vehicles

Dirk Rothenbücher; Jamy Li; David Sirkin; Brian K. Mok; Wendy Ju

How will pedestrians and bicyclists interact with autonomous vehicles when there is no human driver? In this paper, we outline a novel method for performing observational field experiments to investigate interactions with driverless cars. We provide a proof-of-concept study (N=67), conducted at a crosswalk and a traffic circle, which applies this method. In the study, participants encountered a vehicle that appeared to have no driver, but which in fact was driven by a human confederate hidden inside. We constructed a car seat costume to conceal the driver, who was specially trained to emulate an autonomous system. Data included video recordings and participant responses to post-interaction questionnaires. Pedestrians who encountered the car reported that they saw no driver, yet they managed interactions smoothly, except when the car misbehaved by moving into the crosswalk just as they were about to cross. This method is the first of its kind, and we believe that it contributes a valuable technique for safely acquiring empirical data and insights about driverless vehicle interactions. These insights can then be used to design vehicle behaviors well in advance of the broad deployment of autonomous technology.


Robot and Human Interactive Communication | 2015

Experiences developing socially acceptable interactions for a robotic trash barrel

Stephen Yang; Brian K. Mok; David Sirkin; Hillary Page Ive; Rohan Maheshwari; Kerstin Fischer; Wendy Ju

Service robots in public places need to both understand environmental cues and move in ways that people can understand and predict. We developed and tested interactions with a trash barrel robot to better understand the implicit protocols for public interaction. In eight lunch-time sessions spread across two crowded campus dining destinations, we experimented with piloting our robot in Wizard of Oz fashion, initiating and responding to requests for impromptu interactions centered on collecting people's trash. Our studies progressed from open-ended experimentation to testing specific interaction strategies that seemed to evoke clear engagement and responses, both positive and negative. Observations and interviews show that a) people most welcome the robot's presence when they need its services and it actively advertises its intent through movement; b) people create mental models of the trash barrel as having intentions and desires; c) mistakes in navigation are read as indicators of autonomous control, rather than of a remote operator; and d) repeated mistakes and struggling behavior polarized responses as either ignorable or endearing.


Human-Robot Interaction | 2016

Exploring Shared Control in Automated Driving

Mishel Johns; Brian K. Mok; David Sirkin; Nikhil Gowda; Catherine Smith; Walter J. Talamonti; Wendy Ju

Automated driving systems that share control with human drivers by using haptic feedback through the steering wheel have been shown to have advantages over fully automated systems and manual driving. Here, we describe an experiment to elicit tacit expectations of behavior from such a system. A gaming steering wheel electronically coupled to the steering wheel in a full-car driving simulator allows two participants to share control of the vehicle. One participant was asked to use the gaming wheel to act as the automated driving agent while another participant acted as the car driver. The course provided different information and visuals to the driving agent and the driver to simulate possible automation failures and conflict situations between automation and the driver. The driving agent was also given prompts that specified a communicative goal at various points along the course. Both participants were interviewed before and after the drive, and vehicle data and drive video were collected. Our results suggest that drivers were able to interpret simple trajectory intentions, such as a lane change, conveyed by the driving agent. However, the driving agent was not able to effectively communicate more nuanced, higher level ideas such as availability, primarily due to the steering wheel being the control mechanism. Torque on the steering wheel without warning was seen most often as a failure of automation. Gentle and steady steering movements were viewed more favorably.


Human Factors in Computing Systems | 2017

Tunneled In: Drivers with Active Secondary Tasks Need More Time to Transition from Automation

Brian K. Mok; Mishel Johns; David Miller; Wendy Ju

In partially automated driving, rapid transitions of control present a severe hazard. How long does it take a driver to take back control of the vehicle when engaged with other non-driving tasks? In this driving simulator study, we examined the performance of participants (N=30) after an abrupt loss of automated vehicle control. We tested three transition time conditions, with an unstructured transition of control occurring 2s, 5s, or 8s before entering a curve. As participants were occupied with an active secondary task (playing a game on a tablet) while the automated driving mode was enabled, they needed to disengage from the task and regain control of the car when the transition occurred. Few drivers in the 2-second condition were able to safely negotiate the road hazard situation, while the majority of drivers in the 5- or 8-second conditions were able to navigate the hazard situation safely.


Human-Robot Interaction | 2017

Marionette: Enabling On-Road Wizard-of-Oz Autonomous Driving Studies

Peter L. Wang; Srinath Sibi; Brian K. Mok; Wendy Ju

There is a growing need to study the interactions between drivers and their increasingly autonomous vehicles. This paper describes a method of using a low-cost, portable, and versatile driver interaction system in commercial passenger vehicles to enable on-road partial and fully autonomous driving interaction studies. By conducting on-road Wizard-of-Oz studies in naturalistic settings, we can explore a range of driving conditions and scenarios far beyond what can be conducted in laboratory simulator environments. The Marionette system uses off-the-shelf components to create bidirectional communication between the driving controls of a Wizard-of-Oz vehicle operator and a driving study participant. It signals to the study participant what the car is doing and enables researchers to study participant intervention in driving activity. Marionette is designed to be easily replicated for researchers studying partially autonomous driving interaction. This paper describes the design and evaluation of this system.


Human-Robot Interaction | 2015

Adventures of an Adolescent Trash Barrel

Stephen Yang; Brian K. Mok; David Sirkin; Wendy Ju

Our demonstration presents the roving trash barrel, a robot that we developed to understand how people perceive and respond to a mobile trashcan that offers its service in public settings. In a field study, we found that considerable coordination is involved in actively collecting trash, including capturing someone's attention, signaling an intention to interact, acknowledging the willingness (or implicit signs of unwillingness) to interact, and closing the interaction. In post-interaction interviews, we discovered that people believed that the robot was intrinsically motivated to collect trash, and attributed social mishaps to higher levels of autonomy.


Automotive User Interfaces and Interactive Vehicular Applications | 2015

Ghost driver: a platform for investigating interactions between pedestrians and driverless vehicles

Dirk Rothenbücher; Jamy Li; David Sirkin; Brian K. Mok; Wendy Ju

How will pedestrians and cyclists interact with self-driving cars when there is no human driver? To find answers to this question we need a secure experimental design in which pedestrians can interact with a car that appears to drive on its own. In Ghost Driver we staged a fake autonomous car by installing LIDARs, cameras and decals on the outside of the vehicle and by covering the driver with a seat costume so that it appeared that there was no driver in the car. In initial field studies we found that this Wizard-of-Oz technique convinced more than 80% of the participants that the car was driving autonomously without a driver. Consequently the Ghost Driver methodology could become a platform for further investigation of how pedestrians or cyclists interact with driverless vehicles.


Human-Robot Interaction | 2014

Empathy: interactions with emotive robotic drawers

Brian K. Mok; Stephen Yang; David Sirkin; Wendy Ju

The role of human-robot interaction is becoming more important as everyday robotic devices begin to permeate our lives. In this study, we video-prototyped a user's interactions with a set of robotic drawers. The user and robot each displayed one of five emotional states: angry, happy, indifferent, sad, or timid. The results of our study indicated that the participants of our online questionnaire preferred empathetic drawers to neutral ones. They disliked robotic drawers that displayed emotions orthogonal to the user's emotions. This showed the importance of displaying emotions, and empathy in particular, when designing robotic devices that share our living and working spaces.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Behavioral Measurement of Trust in Automation: The Trust Fall

David Miller; Mishel Johns; Brian K. Mok; Nikhil Gowda; David Sirkin; Key Jung Lee; Wendy Ju

Stating that one trusts a system is markedly different from demonstrating that trust. To investigate trust in automation, we introduce the trust fall: a two-stage behavioral test of trust. In the trust fall paradigm, one first learns the capabilities of the system; in the second phase, the 'fall,' one's choices demonstrate trust or distrust. Our first studies using this method suggest the value of measuring behaviors that demonstrate trust, compared with self-reports of one's trust. Designing interfaces that encourage appropriate trust in automation will be critical for the safe and successful deployment of partially automated vehicles, and this will rely on a solid understanding of whether these interfaces actually inspire trust and encourage supervision.
