
Publication


Featured research published by Artem Dementyev.


user interface software and technology | 2014

WristFlex: low-power gesture input with wrist-worn pressure sensors

Artem Dementyev; Joseph A. Paradiso

In this paper we present WristFlex, an always-available, on-body gestural interface. Using an array of force-sensitive resistors (FSRs) worn around the wrist, the interface can distinguish subtle finger pinch gestures with high accuracy (>80%) and speed. The system is trained to classify gestures from subtle tendon movements at the wrist. We demonstrate that WristFlex is a complete system that works wirelessly in real time. The system is simple and lightweight in terms of power consumption and computational overhead. WristFlex's sensor power consumption is 60.7 µW, allowing the prototype to potentially last more than a week on a small lithium polymer battery. WristFlex is also small and unobtrusive, and can be integrated into a wristwatch or a bracelet. We perform user studies to evaluate the accuracy, speed, and repeatability. We demonstrate that the number of gestures can be extended with orientation data from an accelerometer. We conclude by showing example applications.
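The abstract does not spell out the classification pipeline, so as an illustration only, a minimal nearest-centroid sketch over normalized FSR readings might look like the following (all function names and the unit-sum normalization are assumptions, not the paper's method):

```python
# Illustrative sketch, NOT the WristFlex implementation: classify pinch
# gestures by comparing a normalized FSR reading against per-gesture centroids.

def normalize(sample):
    """Scale an FSR reading vector to unit sum so overall strap pressure cancels out."""
    total = sum(sample) or 1.0
    return [v / total for v in sample]

def train(labeled_samples):
    """Compute one centroid per gesture label from (label, reading) pairs."""
    sums, counts = {}, {}
    for label, reading in labeled_samples:
        vec = normalize(reading)
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, reading):
    """Return the gesture label whose centroid is nearest (squared Euclidean) to the reading."""
    vec = normalize(reading)
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(vec, centroids[label])))
```

In practice the paper reports >80% accuracy with a trained classifier; this sketch only conveys the train-then-match structure such a system implies.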


human factors in computing systems | 2015

NailO: Fingernails as an Input Surface

Hsin-Liu Cindy Kao; Artem Dementyev; Joseph A. Paradiso; Chris Schmandt

We present NailO, a nail-mounted gestural input surface. Using capacitive sensing on printed electrodes, the interface can distinguish on-nail finger swipe gestures with high accuracy (>92%). NailO works in real time: we miniaturized the system to fit on the fingernail, while wirelessly transmitting the sensor data to a mobile phone or PC. NailO allows one-handed and always-available input while being unobtrusive and discreet. Inspired by commercial nail stickers, the device blends into the user's body and is customizable, fashionable, and even removable. We show example applications of using the device as a remote controller when hands are busy and of using the system to increase the input space of mobile phones.


user interface software and technology | 2015

SensorTape: Modular and Programmable 3D-Aware Dense Sensor Network on a Tape

Artem Dementyev; Hsin-Liu Cindy Kao; Joseph A. Paradiso

SensorTape is a modular and dense sensor network in the form factor of a tape. SensorTape is composed of interconnected and programmable sensor nodes on a flexible electronics substrate. Each node can sense its orientation with an inertial measurement unit, allowing deformation self-sensing of the whole tape. Nodes also sense proximity using time-of-flight infrared. We developed a network architecture that automatically determines the location of each sensor node as SensorTape is cut and rejoined, and we built an intuitive graphical interface to program the tape. Our user study suggested that SensorTape enables users with different skill sets to intuitively create and program large sensor network arrays. We developed diverse applications, ranging from wearables to home sensing, to show the low deployment effort required of the user. We showed how SensorTape could be produced at scale using current technologies, and we built a 2.3-meter-long prototype.
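The self-locating behavior after cutting and rejoining suggests a daisy-chain enumeration scheme: each node adopts the address offered by its upstream neighbor and forwards the next one downstream. The sketch below is a hypothetical illustration of that idea, not SensorTape's actual protocol:

```python
# Illustrative sketch, NOT the SensorTape firmware: sequential self-addressing
# along a daisy chain, so positions renumber automatically after a cut/rejoin.

class Node:
    def __init__(self):
        self.address = None

    def enumerate(self, incoming):
        self.address = incoming      # adopt the address offered from upstream
        return incoming + 1          # pass the next free address downstream

def enumerate_chain(nodes, first_address=0):
    """Assign sequential addresses along the chain; returns the next free address."""
    addr = first_address
    for node in nodes:
        addr = node.enumerate(addr)
    return addr

tape = [Node() for _ in range(5)]
enumerate_chain(tape)
# Cutting the tape and splicing segments onto a new strip would simply
# re-run enumeration, renumbering every node from scratch.
```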


user interface software and technology | 2016

Rovables: Miniature On-Body Robots as Mobile Wearables

Artem Dementyev; Hsin-Liu Cindy Kao; Inrak Choi; Deborah Ajilo; Maggie Xu; Joseph A. Paradiso; Chris Schmandt; Sean Follmer

We introduce Rovables, miniature robots that can move freely on unmodified clothing. The robots are held in place by magnetic wheels and can climb vertically. The robots are untethered and have an onboard battery, microcontroller, and wireless communications. They also contain a low-power localization system that uses wheel encoders and an IMU, allowing Rovables to perform limited autonomous navigation on the body. In our technical evaluations, we found that Rovables can operate continuously for 45 minutes and can carry up to 1.5 N. We propose an interaction space for mobile on-body devices spanning sensing, actuation, and interfaces, and develop application scenarios in that space. Our applications include on-body sensing, modular displays, tactile feedback, and interactive clothing and jewelry.
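Encoder-plus-IMU localization of the kind described is classically done by dead reckoning: integrate the distance from wheel-encoder ticks along the heading reported by the IMU. A minimal sketch, with an assumed encoder resolution and no reference to the robot's real firmware:

```python
# Illustrative dead-reckoning sketch, NOT Rovables' localization code.
import math

TICKS_PER_METER = 2000.0  # assumed encoder resolution, for illustration only

def dead_reckon(start_xy, steps):
    """Integrate (encoder_ticks, imu_heading_radians) steps into an (x, y) position."""
    x, y = start_xy
    for ticks, heading in steps:
        dist = ticks / TICKS_PER_METER
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y
```

Dead reckoning drifts over time, which is consistent with the paper's framing of the navigation as "limited".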


user interface software and technology | 2016

ChainFORM: A Linear Integrated Modular Hardware System for Shape Changing Interfaces

Ken Nakagaki; Artem Dementyev; Sean Follmer; Joseph A. Paradiso; Hiroshi Ishii

This paper presents ChainFORM: a linear, modular, actuated hardware system as a novel type of shape-changing interface. With rich sensing and actuation capabilities, this modular hardware system allows users to construct and customize a wide range of interactive applications. Inspired by modular and serpentine robotics, our prototype comprises identical modules that connect in a chain. Modules are equipped with rich input and output capabilities: touch detection on multiple surfaces, angular detection, visual output, and motor actuation. Each module consists of a servo motor wrapped with a flexible circuit board with an embedded microcontroller. Leveraging this modularity, we introduce novel interaction capabilities for shape-changing interfaces, such as rearranging the shape/configuration and attaching to passive objects and bodies. To demonstrate the capability and interaction design space of ChainFORM, we implemented a variety of applications for both computer interfaces and hands-on prototyping tools.


designing interactive systems | 2017

Exploring Interactions and Perceptions of Kinetic Wearables

Hsin-Liu Cindy Kao; Deborah Ajilo; Oksana Anilionyte; Artem Dementyev; Inrak Choi; Sean Follmer; Chris Schmandt

Jewelry and accessories have long been objects for decorating the human body; however, they remain static and non-interactive. This work explores opportunities for accessory-like kinetic wearables and their association with individual style. We developed Kino, a kinetic accessory system that enables both aesthetic and functional clothing-specific design possibilities. We engaged both fashion designers and everyday users to unpack envisioned use cases and perceptions of the system. Participants viewed the devices not as gadgets but as companions, due to their close proximity to the body. They envisioned a wide range of usage scenarios, highlighting the complexity of mobility in relation to personal style. We observe how mobility offers opportunities for fluid representations of self that are unachievable through static clothing and accessories. We also outline how personalized aesthetics are important for the meaning-making of novel on-body devices.


wearable and implantable body sensor networks | 2015

EMI Spy: Harnessing electromagnetic interference for low-cost, rapid prototyping of proxemic interaction

Nan Zhao; Gershon Dublon; Nicholas Gillian; Artem Dementyev; Joseph A. Paradiso

We present a wearable system that uses ambient electromagnetic interference (EMI) as a signature to identify electronic devices and support proxemic interaction. We designed a low-cost tool, called EMI Spy, and a software environment for rapid deployment and evaluation of ambient EMI-based interactive infrastructure. EMI Spy captures electromagnetic interference and delivers the signal to a user's mobile device or PC through either the device's wired audio input or wirelessly over Bluetooth. The wireless version can be worn on the wrist, communicating with the user's mobile device in their pocket. Users are able to train the system in less than 1 second to uniquely identify displays in a 2-m radius around them, as well as to detect pointing-at-a-distance and touching gestures on the displays in real time. The combination of a low-cost EMI logger and an open-source machine learning toolkit allows developers to quickly prototype proxemic, touch-to-connect, and gestural interaction. We demonstrate the feasibility of mobile, EMI-based device and gesture recognition with preliminary user studies in 3 scenarios, achieving 96% classification accuracy at close range for 6 digital signage displays distributed throughout a building, and 90% accuracy in classifying pointing gestures at neighboring desktop LCD displays. We were able to distinguish 1- and 2-finger touches with perfect accuracy and show indications of a way to determine the power consumption of a device via touch. Our system is particularly well suited to temporary use in a public space, where the sensors could be distributed to support a pop-up interactive environment anywhere with electronic devices. By designing for low-cost, mobile, flexible, and infrastructure-free deployment, we aim to enable a host of new proxemic interfaces to existing appliances and displays.
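Identifying a device by its EMI signature is essentially spectral fingerprint matching. As a hedged illustration (the feature choice and all names here are assumptions, not EMI Spy's pipeline), each known display can be stored as a magnitude spectrum and an unknown capture matched to the nearest one:

```python
# Illustrative fingerprint-matching sketch, NOT the EMI Spy implementation.
import cmath

def spectrum(signal):
    """Naive DFT magnitude spectrum (adequate for short illustrative signals)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def match(fingerprints, signal):
    """Return the device label whose stored spectrum is closest to the capture's."""
    spec = spectrum(signal)
    return min(fingerprints,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(spec, fingerprints[label])))
```

A real system would use an FFT and a trained classifier over many captures; this sketch only shows the train-once, nearest-match structure the one-second training time suggests.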


Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2017

DualBlink: A Wearable Device to Continuously Detect, Track, and Actuate Blinking For Alleviating Dry Eyes and Computer Vision Syndrome

Artem Dementyev; Christian Holz

Increased visual attention, such as during computer use, leads to less blinking, which can cause dry eyes, the leading cause of computer vision syndrome. As people spend more time looking at screens on mobile and desktop devices, computer vision syndrome is becoming epidemic in today's population, leading to blurry vision, fatigue, and a reduced quality of life. One way to alleviate dry eyes is increased blinking. In this paper, we present a series of glasses-mounted devices that track the wearer's blink rate and, upon absent blinks, trigger blinks through actuation: light flashes, physical taps, and small puffs of air near the eye. We conducted a user study to evaluate the effectiveness of our devices and found that air puff and physical tap actuations result in a 36% increase in participants' average blink rate. The air puff thereby struck the best compromise between effective blink actuation and low distraction ratings from participants. In a follow-up study, we found that high-intensity, short puffs near the eye were most effective in triggering blinks while receiving only low distraction and invasiveness ratings from participants. We conclude this paper with two miniaturized and self-contained DualBlink prototypes, one integrated into the frame of a pair of glasses and the other a clip-on for existing glasses. We believe that DualBlink can serve as an always-available and viable option to treat computer vision syndrome in the future.
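The "upon absent blinks, trigger blinks" behavior implies a simple watchdog: reset a timer on each detected blink, and fire the actuator when the gap exceeds a threshold. A minimal sketch, with the timeout value and all names assumed for illustration:

```python
# Illustrative watchdog sketch, NOT DualBlink's firmware: fire an actuator
# (light flash, tap, or air puff) whenever a blink is overdue.

class BlinkMonitor:
    def __init__(self, actuate, timeout_s=6.0):
        self.actuate = actuate          # callback that triggers a blink stimulus
        self.timeout_s = timeout_s      # assumed max tolerated gap between blinks
        self.last_blink = 0.0

    def on_blink(self, now):
        self.last_blink = now           # a detected blink resets the watchdog

    def tick(self, now):
        """Call periodically; fires the actuator when no blink arrived in time."""
        if now - self.last_blink >= self.timeout_s:
            self.actuate()
            self.last_blink = now       # avoid immediately re-triggering
```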


user interface software and technology | 2017

SkinBot: A Wearable Skin Climbing Robot

Artem Dementyev; Javier Hernandez; Sean Follmer; Inrak Choi; Joseph A. Paradiso

We introduce SkinBot, a lightweight robot that moves over the skin surface with a two-legged, suction-based locomotion mechanism and captures a wide range of body parameters with an exchangeable multipurpose sensing module. We believe that robots that live on our skin, such as SkinBot, will enable a more systematic study of the human body and will offer great opportunities to advance many areas such as telemedicine, human-computer interfaces, body care, and fashion.


tangible and embedded interaction | 2016

Towards Self-Aware Materials

Artem Dementyev

We propose a self-aware material in the form factor of a fabric. This material contains dense sensor nodes on a flexible and stretchable substrate. It is self-configurable and can be manipulated like a traditional craft material, by cutting and joining. The complete shape of this self-sensing material can be tracked by sensing its deformation and stretch. We hope to enable artists and designers to easily make sophisticated sensor networks. This work is a continuation of the SensorTape project, a sensor network in the form factor of a tape.

Collaboration

Artem Dementyev's top co-authors:

Joseph A. Paradiso (Massachusetts Institute of Technology)
Hsin-Liu Cindy Kao (Massachusetts Institute of Technology)
Chris Schmandt (Massachusetts Institute of Technology)
Deborah Ajilo (Massachusetts Institute of Technology)
Hiroshi Ishii (Massachusetts Institute of Technology)
Javier Hernandez (Massachusetts Institute of Technology)
Ken Nakagaki (Massachusetts Institute of Technology)