Gershon Dublon
Massachusetts Institute of Technology
Publication
Featured research published by Gershon Dublon.
human factors in computing systems | 2016
Jifei Ou; Gershon Dublon; Chin-Yi Cheng; Felix Heibeck; Karl D.D. Willis; Hiroshi Ishii
This work presents a method for 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometries smaller than 100 microns. We built a software platform to let users quickly define the hair angle, thickness, density, and height. The ability to fabricate customized hair-like structures not only expands the library of 3D-printable shapes, but also enables us to design passive actuators and swipe sensors. We also present several applications that show how the 3D-printed hair can be used for designing everyday interactive objects.
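The parametric workflow the abstract describes can be pictured with a short sketch. This is not the authors' software, which is not public; the Hair fields, defaults, and square-grid layout below are hypothetical.

```python
# Hypothetical sketch of parametrically defining a dense hair array; the
# paper's actual design tool is not reproduced here.
import math
from dataclasses import dataclass

@dataclass
class Hair:
    x: float          # base position on the surface (mm)
    y: float
    angle: float      # tilt from the surface normal (degrees)
    thickness: float  # strand diameter (mm); 0.05 mm is the sub-100-micron scale
    height: float     # strand length (mm)

def hair_array(width_mm, depth_mm, density_per_mm2, angle, thickness, height):
    """Lay out hairs on a flat patch at the requested density."""
    pitch = 1.0 / math.sqrt(density_per_mm2)   # square-grid spacing
    nx, ny = round(width_mm / pitch), round(depth_mm / pitch)
    return [Hair(i * pitch, j * pitch, angle, thickness, height)
            for i in range(nx) for j in range(ny)]

patch = hair_array(10, 10, density_per_mm2=25, angle=30, thickness=0.05, height=2.0)
print(len(patch), "hairs")  # 2500 strands on a 10 x 10 mm patch
```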
ieee sensors | 2011
Gershon Dublon; Laurel S. Pardue; Brian Mayton; Noah Swartz; Nicholas Joliat; Patrick Hurst; Joseph A. Paradiso
We present DoppelLab, an immersive sensor data browser built on a 3-D game engine. DoppelLab unifies independent sensor networks and data sources within the spatial framework of a building. Animated visualizations and sonifications serve as representations of real-time data within the virtual space.
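As an illustration of the visualization-and-sonification idea (DoppelLab itself runs on a 3-D game engine and is not sketched here), a single live reading might be mapped to both a color and a pitch. The sensor type, value ranges, and octave span below are hypothetical.

```python
# Illustrative mapping of one real-time sensor reading to a visual color
# and a sonification pitch; not DoppelLab's actual code.
def lerp(a, b, t):
    return a + (b - a) * max(0.0, min(1.0, t))

def temperature_to_color(temp_c, lo=15.0, hi=30.0):
    """Cold readings render blue, hot readings red."""
    t = (temp_c - lo) / (hi - lo)
    return (int(lerp(0, 255, t)), 0, int(lerp(255, 0, t)))  # (R, G, B)

def temperature_to_pitch(temp_c, lo=15.0, hi=30.0):
    """Sonify the same reading as a pitch spanning two octaves from 220 Hz."""
    t = max(0.0, min(1.0, (temp_c - lo) / (hi - lo)))
    return 220.0 * (2.0 ** (2.0 * t))

reading = 24.5  # degrees C, e.g. polled from a building sensor network
print(temperature_to_color(reading), round(temperature_to_pitch(reading), 1))
```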
ieee sensors | 2012
Brian Mayton; Gershon Dublon; Sebastian Palacios; Joseph A. Paradiso
We present TRUSS, or Tracking Risk with Ubiquitous Smart Sensing, a novel system that infers and renders safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearables stream real-time levels of dangerous gases, dust, noise, light quality, altitude, and motion to base stations that synchronize the mobile devices, monitor the environment, and capture video. At the same time, low-power video collection and processing nodes track the workers as they move through the view of the cameras, identifying the tracks using information from the sensors. These processes together connect the context-mining wearable sensors to the video; information derived from the sensor data is used to highlight salient elements in the video stream. The augmented stream in turn provides users with better understanding of real-time risks, and supports informed decision-making. We tested our system in an initial deployment on an active construction site.
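One way to picture the sensor-to-video association step: correlate each wearable's motion signal with the image-space speed of every tracked person and keep the best match. The sketch below is a hypothetical stand-in, not the paper's actual fusion method, and the data layout is invented.

```python
# Hypothetical sketch of matching a wearable to a video track by correlating
# synchronized motion signals, in the spirit of TRUSS's sensor-video fusion.
import numpy as np

def normalized_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def match_wearable_to_track(accel_magnitude, track_speeds):
    """Return the index of the video track whose motion best matches."""
    scores = [normalized_correlation(accel_magnitude, s) for s in track_speeds]
    return int(np.argmax(scores)), scores

# Synchronized 1 Hz windows: wearable acceleration vs. two candidate tracks.
accel = np.array([0.1, 0.9, 0.8, 0.2, 0.1, 0.7])
tracks = [np.array([0.0, 0.1, 0.1, 0.0, 0.1, 0.1]),   # worker standing still
          np.array([0.2, 1.0, 0.9, 0.1, 0.2, 0.8])]   # worker walking
best, scores = match_wearable_to_track(accel, tracks)
print("wearable matches track", best, scores)
```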
human factors in computing systems | 2015
Edwina Portocarrero; Gershon Dublon; Joseph A. Paradiso; V. Michael Bove
In this paper, we present ListenTree, an audio-haptic display embedded in the natural environment. A visitor to our installation notices a faint sound appearing to emerge from a tree, and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they are able to hear sound through bone conduction. To create this effect, an audio exciter transducer is weatherproofed and attached to the tree's roots underground, transforming the tree into a living speaker that channels audio through its branches. Any source of sound can be played through the tree, including live audio or pre-recorded tracks. For example, we used the ListenTree to display live streaming sound from an outdoor ecological monitoring sensor network, bringing an urban audience into contact with a faraway wetland. Our intervention is motivated by a need for forms of display that fade into the background, inviting attention rather than requiring it. ListenTree points to a future where digital information might become a seamless part of the physical world.
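Since the transducer behaves like an ordinary audio output, driving it reduces to standard audio playback. A minimal sketch, assuming the third-party sounddevice library; in the installation the signal would come from a live stream rather than the generated tone used here.

```python
# Minimal sketch: play a low tone through whichever audio output the
# exciter transducer is attached to. Device setup is hypothetical.
import numpy as np
import sounddevice as sd

fs = 44100                                      # sample rate (Hz)
t = np.linspace(0, 2.0, int(fs * 2.0), endpoint=False)
tone = 0.3 * np.sin(2 * np.pi * 220 * t)        # low tones couple well via bone conduction
sd.play(tone, fs)                               # stream to the default output device
sd.wait()                                       # block until playback finishes
```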
wearable and implantable body sensor networks | 2015
Nan Zhao; Gershon Dublon; Nicholas Gillian; Artem Dementyev; Joseph A. Paradiso
We present a wearable system that uses ambient electromagnetic interference (EMI) as a signature to identify electronic devices and support proxemic interaction. We designed a low-cost tool, called EMI Spy, and a software environment for rapid deployment and evaluation of ambient EMI-based interactive infrastructure. EMI Spy captures electromagnetic interference and delivers the signal to a user's mobile device or PC through either the device's wired audio input or wirelessly using Bluetooth. The wireless version can be worn on the wrist, communicating with the user's mobile device in their pocket. Users are able to train the system in less than 1 second to uniquely identify displays in a 2-m radius around them, as well as to detect pointing at a distance and touching gestures on the displays in real time. The combination of a low-cost EMI logger and an open-source machine learning toolkit allows developers to quickly prototype proxemic, touch-to-connect, and gestural interaction. We demonstrate the feasibility of mobile, EMI-based device and gesture recognition with preliminary user studies in 3 scenarios, achieving 96% classification accuracy at close range for 6 digital signage displays distributed throughout a building, and 90% accuracy in classifying pointing gestures at neighboring desktop LCD displays. We were able to distinguish 1- and 2-finger touching with perfect accuracy and show indications of a way to determine the power consumption of a device via touch. Our system is particularly well-suited to temporary use in a public space, where the sensors could be distributed to support a pop-up interactive environment anywhere with electronic devices. By designing for low-cost, mobile, flexible, and infrastructure-free deployment, we aim to enable a host of new proxemic interfaces to existing appliances and displays.
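The identification pipeline can be sketched compactly: treat the magnitude spectrum of a short EMI capture as a fingerprint and match it against per-device templates. The nearest-centroid classifier and synthetic signals below are illustrative stand-ins for the open-source ML toolkit the paper used.

```python
# Hypothetical sketch of EMI-signature device identification; signals are
# synthetic and the classifier is a stand-in, not the paper's pipeline.
import numpy as np

def emi_fingerprint(samples, bins=64):
    """Magnitude spectrum of a raw EMI capture, normalized to unit energy."""
    spectrum = np.abs(np.fft.rfft(samples))[:bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def train(examples):
    """examples: {device_name: [raw_capture, ...]} -> one centroid per device."""
    return {name: np.mean([emi_fingerprint(x) for x in caps], axis=0)
            for name, caps in examples.items()}

def classify(model, samples):
    fp = emi_fingerprint(samples)
    return max(model, key=lambda name: float(fp @ model[name]))  # cosine-like score

# Synthetic stand-ins: two displays whose supplies emit at different fundamentals.
rng = np.random.default_rng(0)
t = np.arange(2048)
def capture(freq):
    return np.sin(2 * np.pi * freq * t / 2048) + 0.1 * rng.standard_normal(2048)

model = train({"display_A": [capture(20) for _ in range(5)],
               "display_B": [capture(50) for _ in range(5)]})
print(classify(model, capture(50)))  # -> display_B
```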
human factors in computing systems | 2012
Gershon Dublon; Joseph A. Paradiso
The tongue is known to have an extremely dense sensing resolution, as well as an extraordinary degree of neuroplasticity, the ability to adapt to and internalize new input. Research has shown that electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the blind or visually impaired; users quickly learn to read and navigate through natural environments, and many describe the signals as an innate sense. However, existing displays are expensive and difficult to adapt. Tongueduino is an inexpensive, vinyl-cut tongue display designed to interface with many types of sensors besides cameras. Connected to a magnetometer, for example, the system provides a user with an internal sense of direction, like a migratory bird. Piezo whiskers allow a user to sense orientation, wind, and the lightest touch. Through tongueduino, we hope to bring electro-tactile sensory substitution beyond the discourse of vision replacement, towards open-ended sensory augmentation that anyone can access.
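The magnetometer example suggests a simple mapping: convert compass heading into an activation pattern on the electrode grid, so that north is always felt at the same spot on the tongue. The grid size and mapping below are hypothetical, not taken from the Tongueduino hardware.

```python
# Hypothetical sketch of mapping compass heading to an electro-tactile
# electrode grid, illustrating the magnetometer example from the abstract.
import math

GRID_W, GRID_H = 5, 5  # hypothetical electrode grid dimensions

def heading_to_pattern(mx, my):
    """Compass heading from raw magnetometer X/Y -> one active column."""
    heading = math.degrees(math.atan2(my, mx)) % 360.0
    col = int(heading / 360.0 * GRID_W) % GRID_W
    # Activate a full column; intensity could instead encode field strength.
    return [[1 if c == col else 0 for c in range(GRID_W)] for _ in range(GRID_H)]

for row in heading_to_pattern(mx=0.0, my=1.0):  # field at 90 degrees
    print(row)  # column 1 of 5 active
```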
user interface software and technology | 2015
Jifei Ou; Chin-Yi Cheng; Liang Zhou; Gershon Dublon; Hiroshi Ishii
This work presents a method of 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometry smaller than 100 microns. We built a software platform to let one quickly define a hair's angle, thickness, density, and height. The ability to fabricate customized hair-like structures expands the library of 3D-printable shapes. We then present several applications to show how the 3D-printed hair can be used for designing toy objects.
Scientific American | 2014
Gershon Dublon; Joseph A. Paradiso
augmented human international conference | 2016
Spencer Russell; Gershon Dublon; Joseph A. Paradiso
Presence: Teleoperators and Virtual Environments | 2017
Brian Mayton; Gershon Dublon; Spencer Russell; Evan F. Lynch; Donald Derek Haddad; Vasant Ramasubramanian; Clement Duhart; Glorianna Davenport; Joseph A. Paradiso