Jim Vaughan
FX Palo Alto Laboratory
Network
Latest external collaborations at the country level. Click on the dots to see details.
Publication
Featured research published by Jim Vaughan.
ACM Multimedia | 2007
Andreas Girgensohn; Don Kimber; Jim Vaughan; Tao Yang; Frank M. Shipman; Thea Turner; Eleanor G. Rieffel; Lynn Wilcox; Francine Chen; Tony Dunnigan
DOTS (Dynamic Object Tracking System) is an indoor, real-time, multi-camera surveillance system, deployed in a real office setting. DOTS combines video analysis and user interface components to enable security personnel to effectively monitor views of interest and to perform tasks such as tracking a person. The video analysis component performs feature-level foreground segmentation with reliable results even under complex conditions. It incorporates an efficient greedy-search approach for tracking multiple people through occlusion and combines results from individual cameras into multi-camera trajectories. The user interface draws the users' attention to important events that are indexed for easy reference at a later time. Different views within the user interface provide spatial information for easier navigation. Our system, with over twenty video cameras installed in hallways and other public spaces in our office building, has been in constant use for almost a year.
International Conference on Multimedia and Expo | 2007
Tao Yang; Francine Chen; Don Kimber; Jim Vaughan
In this paper we describe the analysis component of an indoor, real-time, multi-camera surveillance system. The analysis includes: (1) a novel feature-level foreground segmentation method which achieves efficient and reliable segmentation results even under complex conditions, (2) an efficient greedy-search-based approach for tracking multiple people through occlusion, and (3) a method for multi-camera handoff that associates individual trajectories in adjacent cameras. The analysis is used for an 18-camera surveillance system that has been running continuously in an indoor business environment over the past several months. Our experiments demonstrate that the processing method for people detection and tracking across multiple cameras is fast and robust.
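The greedy occlusion-handling tracker mentioned in the abstract can be sketched as a nearest-first assignment between existing track positions and new detections. The function name, the Euclidean cost, and the distance threshold below are illustrative assumptions, not details taken from the paper:

```python
import math

def greedy_assign(tracks, detections, max_dist=50.0):
    """Greedily match track positions to detections, cheapest pair first.

    tracks, detections: lists of (x, y) positions.
    Returns a dict mapping track index -> detection index.
    Illustrative sketch only; the paper's cost function may differ.
    """
    pairs = []
    for ti, t in enumerate(tracks):
        for di, d in enumerate(detections):
            dist = math.hypot(t[0] - d[0], t[1] - d[1])
            if dist <= max_dist:
                pairs.append((dist, ti, di))
    pairs.sort()  # consider the cheapest candidate matches first
    assigned, used_t, used_d = {}, set(), set()
    for dist, ti, di in pairs:
        if ti not in used_t and di not in used_d:
            assigned[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return assigned
```

A greedy pass like this is a common low-cost alternative to optimal (Hungarian) assignment when frame rates matter more than globally optimal matches.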
Human-Robot Interaction | 2015
Sven G. Kratz; Jim Vaughan; Ryota Mizutani; Don Kimber
Our research focuses on improving the effectiveness and usability of driving mobile telepresence robots by increasing the user's sense of immersion during the navigation task. To this end we developed a robot platform that allows immersive navigation using head-tracked stereoscopic video and an HMD. We present the results of an initial user study that compares System Usability Scale (SUS) ratings of a robot teleoperation task using head-tracked stereo vision with a baseline fixed video feed, and the effect of a low or high placement of the camera(s). Our results show significantly higher ratings for the fixed video condition and no effect of camera placement. Future work will focus on examining the reasons for the lower ratings of stereo video and on exploring further visual navigation interfaces.
International Symposium on Mixed and Augmented Reality | 2010
Jun Shingu; Eleanor G. Rieffel; Don Kimber; Jim Vaughan; Pernilla Qvarfordt; Kathleen Tuite
We propose an Augmented Reality (AR) system that helps users take a picture from a designated pose, such as the position and camera angle of an earlier photo. Repeat photography is frequently used to observe and document changes in an object. Our system uses AR technology to estimate camera poses in real time. When a user takes a photo, the camera pose is saved as a “view bookmark”. To support a user in taking a repeat photo, two simple graphics are rendered in an AR viewer on the camera's screen to guide the user to this bookmarked view. The system then uses image adjustment techniques to create an image based on the user's repeat photo that is even closer to the original.
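Guiding a user back to a bookmarked view amounts to comparing the current estimated camera pose with the saved one and rendering hints until both agree within tolerance. The pose representation, tolerances, and hint strings below are hypothetical, intended only to illustrate the idea:

```python
def pose_guidance(current, bookmark, pos_tol=0.05, ang_tol=2.0):
    """Compare the current camera pose to a saved 'view bookmark' and
    report how the user should move. Poses are (x, y, z, yaw_deg).
    All names and tolerances are illustrative, not from the paper.
    """
    dx, dy, dz = (b - c for b, c in zip(bookmark[:3], current[:3]))
    dyaw = bookmark[3] - current[3]
    hints = []
    if abs(dx) > pos_tol:
        hints.append("move right" if dx > 0 else "move left")
    if abs(dy) > pos_tol:
        hints.append("move up" if dy > 0 else "move down")
    if abs(dz) > pos_tol:
        hints.append("move forward" if dz > 0 else "move back")
    if abs(dyaw) > ang_tol:
        hints.append("turn left" if dyaw > 0 else "turn right")
    return hints or ["hold: pose matched"]
```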
Human-Computer Interaction with Mobile Devices and Services | 2015
Sven G. Kratz; Daniel Avrahami; Don Kimber; Jim Vaughan; Patrick Proppe; Don Severns
In this paper we report findings from two user studies that explore the problem of establishing a common viewpoint in the context of a wearable telepresence system. In our first study, we assessed the ability of a local person (the guide) to identify the view orientation of the remote person by looking at the physical pose of the telepresence device. In the follow-up study, we explored visual feedback methods for communicating the relative viewpoints of the remote user and the guide via a head-mounted display. Our results show that actively observing the pose of the device is useful for viewpoint estimation. However, in the case of telepresence devices without physical directional affordances, a live video feed may yield comparable results. Lastly, more abstract visualizations lead to significantly longer recognition times, but may be necessary in more complex environments.
European Conference on Computer Vision | 2014
Don Kimber; Patrick Proppe; Sven G. Kratz; Jim Vaughan; Bee Liew; Don Severns; Weiqing Su
Polly is an inexpensive, portable telepresence device based on the metaphor of a parrot riding a guide’s shoulder and acting as proxy for remote participants. Although remote users may be anyone with a desire for ‘tele-visits’, we focus on limited-mobility users. We present a series of prototypes and field tests that informed design iterations. Our current implementations utilize a smartphone on a stabilized, remotely controlled gimbal that can be hand held, placed on perches, or carried on a wearable frame. We describe findings from trials at campus, museum, and faire tours with remote users, including quadriplegics. We found guides were more comfortable using Polly than a phone and that Polly was accepted by other people. Remote participants appreciated stabilized video and having control of the camera. One challenge is negotiation of movement and view control. Our tests suggest Polly is an effective alternative to telepresence robots, phones, or fixed cameras.
User Interface Software and Technology | 2016
Jun Shingu; Patrick Chiu; Sven G. Kratz; Jim Vaughan; Don Kimber
We propose a robust pointing-detection method with a virtual shadow representation for interacting with a public display. Using a depth camera, the system generates the user's shadow from a model lit by an angled virtual sunlight and detects the shadow's nearest point as the pointer. The shadow's position rises as the user walks closer, which conveys the correct distance for controlling the pointer and offers access to higher areas of the display.
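The shadow geometry can be sketched by casting each depth-camera point along a fixed virtual light direction onto the display plane and taking the highest shadow point as the pointer; with a downward-angled light, a smaller user-to-display distance yields a higher shadow, matching the behavior described above. The light direction, coordinate convention, and tip-selection rule are illustrative assumptions:

```python
def shadow_pointer(points, light_dir=(0.0, -1.0, -1.0)):
    """Project body points along a virtual light direction onto the
    display plane z = 0 and return the highest shadow point as the pointer.

    points: iterable of (x, y, z), where z > 0 is distance from the display.
    Illustrative sketch; the paper's geometry and light model may differ.
    """
    lx, ly, lz = light_dir
    best = None
    for x, y, z in points:
        t = z / -lz                        # ray steps needed to reach z = 0
        sx, sy = x + t * lx, y + t * ly    # shadow position on the display
        if best is None or sy > best[1]:   # keep the highest shadow point
            best = (sx, sy)
    return best
```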
Robot and Human Interactive Communication | 2016
Jim Vaughan; Sven G. Kratz; Don Kimber
Two related challenges with current teleoperated robotic systems are a lack of peripheral vision and awareness, and the difficulty or tedium of navigating through remote spaces. We address these challenges by providing an interface with a focus plus context (F+C) view of the robot location, in which the user can navigate simply by looking where they want to go and clicking or drawing a path on the view to indicate the desired trajectory or destination. The F+C view provides an undistorted, perspectively correct central region surrounded by a wide field of view peripheral portion, and avoids the need for separate views. The navigation method is direct and intuitive in comparison to keyboard- or joystick-based navigation, which requires the user to remain in a control loop as the robot moves. Both the F+C views and the direct click navigation were evaluated in a preliminary user study.
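One simple way to realize a focus plus context projection is a piecewise radial mapping: view angles inside the focus region map linearly to display radius (undistorted), while the remaining field of view is compressed into the outer ring. The parameter names and the piecewise-linear form below are illustrative, not the paper's actual warp:

```python
def fplusc_radius(theta, focus=30.0, fov=90.0, r_focus=0.5):
    """Map a view angle theta (degrees from center) to a normalized
    display radius in [0, 1] for a focus-plus-context projection:
    linear (undistorted) inside the focus region, compressed outside.
    Parameters and the piecewise-linear mapping are illustrative.
    """
    if theta <= focus:
        return r_focus * theta / focus  # undistorted central region
    # squeeze the remaining field of view into the peripheral ring
    return r_focus + (1 - r_focus) * (theta - focus) / (fov - focus)
```

Here the central 30° of view occupies half the display radius, so the periphery is shown at a coarser angular scale without being cut off.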
International Conference on Multimedia and Expo | 2009
Jun Shingu; Shingo Uchihashi; Tsutomu Abe; Tetsuo Iyoda; Don Kimber; Eleanor G. Rieffel; Jim Vaughan
In this paper, we describe an automatic lighting adjustment method for browsing object images. From a set of images of an object, taken under different lighting conditions, we generate two types of illuminated images: a textural image which eliminates unwanted specular reflections of the object, and a highlight image in which specularities of the object are highly preserved. Our user interface allows viewers to digitally zoom into any region of the image, and the lighting adjusted images are automatically generated for the selected region and displayed. Switching between the textural and the highlight images helps viewers to understand characteristics of the object surface.
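A minimal stand-in for the two image types described above is per-pixel fusion across the lighting stack: a minimum suppresses specular highlights (textural image), while a maximum preserves them (highlight image). Min/max fusion is an assumption used only to illustrate the idea; the paper's actual method may differ:

```python
def texture_and_highlight(stack):
    """Given a stack of grayscale images (nested lists of pixel rows)
    taken under different lighting, compute a per-pixel 'textural'
    image (minimum, suppressing specular highlights) and a 'highlight'
    image (maximum, preserving specularities). Illustrative sketch only.
    """
    textural, highlight = [], []
    for rows in zip(*stack):  # the same row from every image in the stack
        textural.append([min(px) for px in zip(*rows)])
        highlight.append([max(px) for px in zip(*rows)])
    return textural, highlight
```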
ACM Multimedia | 2008
Don Kimber; Eleanor G. Rieffel; Jim Vaughan; John Doherty
This video shows the Virtual Physics Circus, a kind of playground for experimenting with simple physical models. The system makes it easy to create worlds with common physical objects such as swings, vehicles, ramps, and walls, and interactively play with those worlds. The system can be used as a creative art medium as well as to gain understanding and intuition about physical systems. The system can be controlled by a number of UI devices such as mouse, keyboard, joystick, and tags which are tracked in 6 degrees of freedom.