
Publications


Featured research published by John C. Barnwell.


User Interface Software and Technology | 2007

Lucid touch: a see-through mobile device

Daniel Wigdor; Clifton Forlines; Patrick Baudisch; John C. Barnwell; Chia Shen

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements they wish to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion that the mobile device itself is semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hands. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results indicating that many users found touching the back preferable to touching the front, due to reduced occlusion, higher precision, and the ability to provide multi-finger input.
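The pseudo-transparency effect described above amounts to compositing an image of the user's hands over the UI. A minimal sketch, assuming simple alpha blending (the paper's actual rendering pipeline is not specified here, and the opacity value is illustrative):

```python
import numpy as np

def pseudo_transparent_overlay(ui_frame, hand_frame, alpha=0.4):
    """Blend a rear-facing image of the user's hands onto the UI frame
    so the device appears semi-transparent.

    ui_frame, hand_frame: float RGB arrays in [0, 1], same shape.
    alpha: opacity of the hand overlay (0 = invisible, 1 = opaque).
    """
    return alpha * hand_frame + (1.0 - alpha) * ui_frame

# Tiny 1x1 "images": a white UI pixel under a black hand silhouette.
ui = np.array([[[1.0, 1.0, 1.0]]])
hand = np.array([[[0.0, 0.0, 0.0]]])
blended = pseudo_transparent_overlay(ui, hand, alpha=0.4)
print(blended[0, 0])  # hand darkens the pixel without hiding the UI
```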


International Conference on Computer Graphics and Interactive Techniques | 2007

Practical motion capture in everyday surroundings

Daniel Vlasic; Rolf Adelsberger; Giovanni Vannucci; John C. Barnwell; Markus H. Gross; Wojciech Matusik; Jovan Popović

Commercial motion-capture systems produce excellent in-studio reconstructions, but offer no comparable solution for acquisition in everyday environments. We present a system for acquiring motions almost anywhere. This wearable system gathers ultrasonic time-of-flight and inertial measurements with a set of inexpensive miniature sensors worn on the garment. After recording, the information is combined using an Extended Kalman Filter to reconstruct joint configurations of the body. Experimental results show that even motions that are traditionally difficult to acquire are recorded with ease within their natural settings. Although our prototype does not reliably recover the global transformation, we show that the resulting motions are visually similar to the original ones, and that the combined acoustic and inertial system reduces the drift commonly observed in purely inertial systems. Our final results suggest that this system could become a versatile input device for a variety of augmented-reality applications.
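The fusion step can be illustrated with a toy filter. The paper uses an Extended Kalman Filter over full joint configurations; the sketch below is instead a plain 1-D Kalman filter with illustrative noise values, showing only the core idea of correcting drift from integrated inertial data with absolute ultrasonic time-of-flight ranges:

```python
import numpy as np

dt = 0.01                            # assumed 100 Hz update rate
F = np.array([[1, dt], [0, 1]])      # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])  # control input: measured acceleration
H = np.array([[1.0, 0.0]])           # ultrasonic ranging observes position
Q = 1e-4 * np.eye(2)                 # process noise (inertial drift)
R = np.array([[1e-2]])               # ultrasonic measurement noise

x = np.zeros((2, 1))
P = np.eye(2)

def step(x, P, accel, range_meas):
    # Predict with the inertial measurement...
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # ...then correct with the absolute acoustic range.
    y = range_meas - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at a constant 1 m/s (zero acceleration).
for k in range(200):
    true_pos = (k + 1) * dt * 1.0
    x, P = step(x, P, accel=0.0, range_meas=true_pos)
print(round(float(x[0, 0]), 2))  # should land close to the true 2.0 m
```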


Applied Physics Letters | 2011

Experiments on wireless power transfer with metamaterials

Bingnan Wang; Koon Hoo Teo; Tamotsu Nishino; William S. Yerazunis; John C. Barnwell; Jinyun Zhang

In this letter, we propose the use of metamaterials to enhance the evanescent wave coupling and improve the transfer efficiency of a wireless power transfer system based on coupled resonators. A magnetic metamaterial is designed and built for a wireless power transfer system. We show with measurement results that the power transfer efficiency of the system can be improved significantly by the metamaterial. We also show that the fabricated system can be used to transfer power wirelessly to a 40 W light bulb.
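For coupled-resonator transfer systems like this one, standard coupled-mode theory gives a maximum efficiency of eta = U^2 / (1 + sqrt(1 + U^2))^2 with figure of merit U = k * sqrt(Q1 * Q2), which makes the benefit of stronger evanescent coupling concrete. A sketch with illustrative values, not ones measured in the paper:

```python
import math

def max_transfer_efficiency(k, q1, q2):
    """Maximum power-transfer efficiency of two coupled resonators
    (coupled-mode theory): eta = U^2 / (1 + sqrt(1 + U^2))^2,
    with figure of merit U = k * sqrt(Q1 * Q2)."""
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1.0 + math.sqrt(1.0 + u**2))**2

q1 = q2 = 300.0                                  # assumed quality factors
weak = max_transfer_efficiency(0.001, q1, q2)    # weakly coupled coils
strong = max_transfer_efficiency(0.01, q1, q2)   # coupling enhanced ~10x
print(f"weak: {weak:.2%}, enhanced: {strong:.2%}")
```

A tenfold increase in the coupling coefficient, as a metamaterial slab between the resonators aims to provide, moves this example from a few percent to over half the power delivered.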


User Interface Software and Technology | 2006

Under the table interaction

Daniel Wigdor; Darren Leigh; Clifton Forlines; Samuel E. Shipman; John C. Barnwell; Ravin Balakrishnan; Chia Shen

We explore the design space of a two-sided interactive touch table, designed to receive touch input from both the top and bottom surfaces of the table. By combining two registered touch surfaces, we are able to offer a new dimension of input for co-located collaborative groupware. This design accomplishes the goal of increasing the relative size of the input area of a touch table while maintaining its direct-touch input paradigm. We describe the interaction properties of this two-sided touch table, report the results of a controlled experiment examining the precision of user touches to the underside of the table, and describe a series of application scenarios we developed for use on inverted and two-sided tables. Finally, we present a list of design recommendations based on our experiences and observations with inverted and two-sided tables.


International Conference on Computer Graphics and Interactive Techniques | 2007

Prakash: lighting aware motion capture using photosensing markers and multiplexed illuminators

Ramesh Raskar; Hideaki Nii; Bert deDecker; Yuki Hashimoto; Jay W. Summet; Dylan Moore; Yong Zhao; Jonathan Westhues; Paul H. Dietz; John C. Barnwell; Shree K. Nayar; Masahiko Inami; Philippe Bekaert; Michael Noland; Vlad Branzoi; Erich Bruns

In this paper, we present a high-speed optical motion capture method that can measure three-dimensional motion, orientation, and incident illumination at tagged points in a scene. We use tracking tags that work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. Our system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Our tags also provide incident illumination data which can be used to match scene lighting when inserting synthetic elements. The technique is therefore ideal for on-set motion capture or real-time broadcasting of virtual sets. Unlike previous methods that employ high-speed cameras or scanning lasers, we capture the scene appearance using the simplest possible optical devices: a light-emitting diode (LED) with a passive binary mask used as the transmitter, and a photosensor used as the receiver. We strategically place a set of optical transmitters to spatio-temporally encode the volume of interest. Photosensors attached to scene points demultiplex the coded optical signals from multiple transmitters, allowing us to compute not only receiver location and orientation but also their incident illumination and the reflectance of the surfaces to which the photosensors are attached. We use our untethered tag system, called Prakash, to demonstrate methods of adding special effects to captured videos that cannot be accomplished using pure vision techniques that rely on camera images.
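The demultiplexing idea can be sketched with a common structured-light scheme. Assuming Gray-coded binary patterns (an illustration only; the paper's actual codes are not reproduced here), a photosensor can recover which spatial stripe it occupies purely from its on/off readings across the pattern sequence:

```python
# A photosensor decodes its stripe index from a temporal sequence of
# binary light patterns, here assumed to be Gray coded.

def gray_to_index(bits):
    """Convert a Gray-code bit sequence (MSB first) to a stripe index."""
    value = 0
    for b in bits:
        # Each decoded bit is the received bit XOR the previous decoded bit.
        value = (value << 1) | (b ^ (value & 1))
    return value

def index_to_gray_bits(index, n_bits):
    """Bit readings a sensor sitting at `index` would observe, MSB first."""
    g = index ^ (index >> 1)
    return [(g >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]

# A sensor in stripe 37 of a 64-stripe (6-pattern) code:
readings = index_to_gray_bits(37, 6)
print(readings, "->", gray_to_index(readings))
```

Gray codes are a natural fit here because adjacent stripes differ in only one pattern, so a sensor near a stripe boundary misreads its index by at most one.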


The International Journal of Robotics Research | 2010

Vision-guided Robot System for Picking Objects by Casting Shadows

Amit K. Agrawal; Yu Sun; John C. Barnwell; Ramesh Raskar

We present a complete vision-guided robot system for model-based three-dimensional (3D) pose estimation and picking of singulated 3D objects. Our system employs a novel vision sensor consisting of a video camera surrounded by eight flashes (light-emitting diodes). By capturing images under different flashes and observing the shadows, depth edges or silhouettes in the scene are obtained. The silhouettes are segmented into different objects and each silhouette is matched across a database of object silhouettes in different poses to find the coarse 3D pose. The database is pre-computed using a computer-aided design (CAD) model of the object. The pose is refined using a fully projective formulation of Lowe's model-based pose estimation algorithm. The estimated pose is transferred to a robot coordinate system utilizing the hand-eye and camera calibration parameters, which allows the robot to pick the object. Our system outperforms conventional systems using two-dimensional sensors with intensity-based features as well as 3D sensors. We handle complex ambient illumination conditions, challenging specular backgrounds, diffuse as well as specular objects, and texture-less objects, on which traditional systems usually fail. Our vision sensor is capable of computing depth edges in real time and is low cost. Our approach is simple and fast for practical implementation. We present real experimental results using our custom designed sensor mounted on a robot arm to demonstrate the effectiveness of our technique.
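The coarse-pose lookup described above can be sketched as a nearest-neighbor search over silhouette masks. This toy uses an intersection-over-union score and hand-made 4x4 masks; the system's actual matching metric and its CAD-rendered database are not reproduced here:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two binary silhouette masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def coarse_pose(query, database):
    """Return the pose label whose stored silhouette best matches."""
    return max(database, key=lambda pose: iou(query, database[pose]))

# Toy masks standing in for silhouettes rendered at two poses.
pose_a = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], bool)
pose_b = np.array([[0,0,0,0],[0,0,0,0],[0,0,1,1],[0,0,1,1]], bool)
db = {"front": pose_a, "side": pose_b}

# An observed silhouette with one pixel missing still matches "front".
query = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], bool)
print(coarse_pose(query, db))
```

In the real pipeline this coarse match only seeds the estimate; the projective refinement step then recovers the accurate pose.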


International Conference on Computer Graphics and Interactive Techniques | 2006

Submerging technologies

Paul H. Dietz; Jefferson Y. Han; Jonathan Westhues; John C. Barnwell; William S. Yerazunis

Fountains, reflecting pools, and other water displays have a long history in art and architecture. In the fountain industry, an "interactive fountain" is one that guests can touch or walk into to get wet. This runs counter to the typical definition of an interactive system, which requires sensors to detect user actions and an output that changes in response to those actions. In this work, techniques for making truly interactive water displays are presented.


Archive | 2006

Inverted direct touch sensitive input devices

Daniel Wigdor; Darren Leigh; Clifton Forlines; Chia Shen; John C. Barnwell; Samuel E. Shipman


Archive | 2009

Positioning an object based on aligned images of the object

Yuri Ivanov; John C. Barnwell; Andrea E. G. Bradshaw


European Conference on Antennas and Propagation | 2011

Wireless power transfer with metamaterials

Bingnan Wang; Koon Hoo Teo; Tamotsu Nishino; William S. Yerazunis; John C. Barnwell; Jinyun Zhang

Collaboration


Dive into John C. Barnwell's collaboration.

Top Co-Authors

- William S. Yerazunis (Mitsubishi Electric Research Laboratories)
- Joseph Katz (Mitsubishi Electric Research Laboratories)
- Dirk Brinkman (Mitsubishi Electric Research Laboratories)
- Jonathan Westhues (Mitsubishi Electric Research Laboratories)
- Paul H. Dietz (Mitsubishi Electric Research Laboratories)
- Bingnan Wang (Mitsubishi Electric Research Laboratories)
- Clifton Forlines (Mitsubishi Electric Research Laboratories)
- Ramesh Raskar (Massachusetts Institute of Technology)