Sheng Shen
University of Illinois at Urbana–Champaign
Publications
Featured research published by Sheng Shen.
international conference on mobile systems, applications, and services | 2016
Sheng Shen; He Wang; Romit Roy Choudhury
This paper aims to track the 3D posture of the entire arm - both wrist and elbow - using the motion and magnetic sensors on smartwatches. We do not intend to employ machine learning to train the system on a specific set of gestures. Instead, we aim to trace the geometric motion of the arm, which can then be used as a generic platform for gesture-based applications. The problem is challenging because the arm posture is a function of both elbow and shoulder motions, whereas the watch is only a single point of (noisy) measurement from the wrist. Moreover, while other tracking systems (like indoor/outdoor localization) often benefit from maps or landmarks to occasionally reset their estimates, such opportunities are almost absent here. While this appears to be an under-constrained problem, we find that the pointing direction of the forearm is strongly coupled to the arm's posture. If the gyroscope and compass on the watch can be made to estimate this direction, the 3D search space becomes smaller; the IMU sensors can then be applied to mitigate the remaining uncertainty. We leverage this observation to design ArmTrak, a system that fuses the IMU sensors and the anatomy of arm joints into a modified hidden Markov model (HMM) to continuously estimate the state variables. Using Kinect 2.0 as ground truth, we achieve around 9.2 cm of median error for free-form postures; the error increases to 13.3 cm for a real-time version. We believe this is a step forward in posture tracking, and with some additional work, it could become a generic underlay to various practical applications.
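To illustrate the idea that a known forearm pointing direction collapses the posture search space, here is a minimal, hypothetical Python sketch: a single HMM-style filtering step over discretized elbow positions, where the transition term favors smooth elbow motion and a toy emission term encodes an anatomical plausibility check. The arm lengths, grid size, noise parameters, and the plausibility heuristic are illustrative assumptions, not ArmTrak's actual model.

```python
# A minimal, hypothetical sketch of the search-space idea: if the forearm's
# pointing direction is known, each candidate elbow position implies a unique
# wrist position, so an HMM-style filter only has to track the elbow.
import numpy as np

UPPER_ARM = 0.30   # shoulder-to-elbow length in meters (assumed)
FOREARM   = 0.27   # elbow-to-wrist length in meters (assumed)

def candidate_elbows(n=500):
    """Discretize feasible elbow positions on a sphere around the shoulder (origin)."""
    pts = np.random.default_rng(0).normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    return UPPER_ARM * pts

def filter_step(belief, elbows, pointing_dir, sigma_t=0.08, sigma_e=0.05):
    """One HMM-style update over elbow candidates.

    pointing_dir: unit forearm vector, e.g. estimated from gyro + compass.
    Transition: prefer elbows close to the currently most likely elbow.
    Emission (toy): mildly penalize wrist positions far above the shoulder.
    """
    prev_elbow = elbows[np.argmax(belief)]
    trans = np.exp(-np.sum((elbows - prev_elbow) ** 2, axis=1) / (2 * sigma_t ** 2))

    wrists = elbows + FOREARM * pointing_dir          # wrist implied by each elbow
    emis = np.exp(-np.clip(wrists[:, 2] - 0.3, 0, None) ** 2 / (2 * sigma_e ** 2))

    new_belief = trans * emis * belief
    return new_belief / new_belief.sum(), wrists

elbows = candidate_elbows()
belief = np.ones(len(elbows)) / len(elbows)
belief, wrists = filter_step(belief, elbows, pointing_dir=np.array([1.0, 0.0, 0.0]))
print("most likely wrist position:", np.round(wrists[np.argmax(belief)], 3))
```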
workshop on physical analytics | 2015
Rufeng Meng; Sheng Shen; Romit Roy Choudhury; Srihari Nelakuditi
Locations are often expressed in physical coordinates, such as an [X, Y] tuple in some coordinate system. Unfortunately, a vast majority of location-based applications desire the semantic translation of coordinates, i.e., store names like Starbucks, Macy's, and Panera. Past work has mostly focused on achieving localization accuracy, while assuming that the translation of physical to semantic coordinates will be done manually. In this paper, we explore an opportunity for automatic semantic localization -- the presence of a website corresponding to each physical store. We propose to correlate the information seen in a physical store with that found on the websites of the stores around that location, to recognize the store. Specifically, we assume a repository of crowdsourced WiFi-tagged pictures from different stores. By correlating words inside the pictures against words extracted from store websites, our proposed system can automatically label clusters of pictures, and the corresponding WiFi APs, with the store name. Later, when a user enters a store, her smartphone can scan the WiFi APs and consult a lookup table to recognize the store she is in. Our preliminary experiments with 18 stores in a shopping mall show that our prototype system could correctly match the text from the physical stores with the text extracted from the corresponding websites, and hence label WiFi APs with store names with an accuracy upwards of 90%, which encourages us to pursue this study further. Moreover, we believe the core idea of correlating physical stores and their websites has broader applications beyond semantic localization, leading to better product placement and shopping experiences, yielding benefits for both store owners and shoppers.
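As a rough illustration of the word-correlation step described above, the following Python sketch scores the overlap between OCR'd words from an in-store picture and word lists scraped from candidate store websites, then picks the best-matching store. The store names, word lists, and AP identifier are hypothetical placeholders, and the scoring is a deliberately simple stand-in for the paper's matching pipeline.

```python
# A minimal sketch (not the authors' system) of matching in-store text to
# store websites. All words, store names, and AP ids are placeholders.
from collections import Counter

def match_store(picture_words, store_pages):
    """Return the store whose website text best overlaps the picture text."""
    pic = Counter(w.lower() for w in picture_words)
    best, best_score = None, 0.0
    for store, page_words in store_pages.items():
        page = Counter(w.lower() for w in page_words)
        overlap = sum((pic & page).values())           # shared word counts
        score = overlap / max(sum(pic.values()), 1)    # fraction of picture words matched
        if score > best_score:
            best, best_score = store, score
    return best, best_score

# Hypothetical crowdsourced picture, tagged with the WiFi AP it was taken near.
picture_words = ["espresso", "latte", "pike", "place", "roast"]
store_pages = {
    "Starbucks": ["espresso", "latte", "frappuccino", "pike", "place", "roast"],
    "Panera":    ["bread", "soup", "bagel", "sandwich"],
}
store, score = match_store(picture_words, store_pages)
print(f"AP 'ap-3f:2a' labeled as {store} (score {score:.2f})")  # hypothetical AP id
```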
acm/ieee international conference on mobile computing and networking | 2018
Sheng Shen; Mahanth Gowda; Romit Roy Choudhury
A rich body of work has focused on motion tracking techniques using inertial sensors, namely accelerometers, gyroscopes, and magnetometers. Applications of these techniques include indoor localization, gesture recognition, inventory tracking, vehicular motion, and many others. This paper identifies room for improvement over today's motion tracking techniques. The core observation is that conventional systems have trusted gravity more than the magnetic North to infer the 3D orientation of the object. We find that the reverse is more effective, especially when the object is in continuous fast motion. We leverage this opportunity to design MUSE, a magnetometer-centric sensor fusion algorithm for orientation tracking. Moreover, when the object's motion is somewhat restricted (e.g., human-arm motion restricted by elbow and shoulder joints), we find new methods of sensor fusion to fully leverage the restrictions. Real experiments across a wide range of uncontrolled scenarios show consistent improvement in orientation and location accuracy, without requiring any training or machine learning. We believe this is important progress in the otherwise mature field of IMU-based motion tracking.
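The following Python sketch conveys the flavor of a magnetometer-centric correction: a gyro-integrated rotation matrix is nudged so that the measured magnetic field (body frame) re-aligns with a known world-frame field direction. The reference field, correction gain, and single-step structure are assumptions for illustration; MUSE's actual fusion algorithm is described in the paper.

```python
# A minimal, hypothetical magnetometer-centric correction step, inspired by
# (but not taken from) MUSE. Reference field and gain are assumptions.
import numpy as np

MAG_REF = np.array([0.0, 1.0, 0.0])   # assumed Earth-field direction in the world frame

def correct_with_magnetometer(R, mag_body, alpha=0.1):
    """Nudge the gyro-integrated rotation R so the measured magnetic field
    (body frame) aligns better with the known world-frame field.
    Small-error correction; assumes misalignment below 90 degrees."""
    mag_world_pred = R @ (mag_body / np.linalg.norm(mag_body))
    axis = np.cross(mag_world_pred, MAG_REF)          # rotation axis of the error
    if np.linalg.norm(axis) < 1e-9:
        return R                                      # already aligned
    angle = alpha * np.arcsin(np.clip(np.linalg.norm(axis), 0.0, 1.0))
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
    return dR @ R

R = np.eye(3)                                          # orientation after gyro integration
R = correct_with_magnetometer(R, mag_body=np.array([0.7, 0.6, 0.2]))
print(np.round(R, 3))
```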
acm special interest group on data communication | 2018
Sheng Shen; Nirupam Roy; Junfeng Guan; Haitham Hassanieh; Romit Roy Choudhury
Active Noise Cancellation (ANC) is a classical area where noise in the environment is canceled by producing anti-noise signals near the human ears (e.g., in Bose's noise cancellation headphones). This paper brings IoT to active noise cancellation by combining wireless communication with acoustics. The core idea is to place an IoT device in the environment that listens to ambient sounds and forwards the sound over its wireless radio. Since wireless signals travel much faster than sound, our ear-device receives the sound in advance of its actual arrival. This serves as a glimpse into the future, which we call lookahead, and proves crucial for real-time noise cancellation, especially for unpredictable, wide-band sounds like music and speech. Using custom IoT hardware, as well as lookahead-aware cancellation algorithms, we demonstrate MUTE, a fully functional noise cancellation prototype that outperforms Bose's latest ANC headphone. Importantly, our design does not need to block the ear - the ear canal remains open, making it comfortable (and healthier) for continuous use.
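A small Python simulation makes the lookahead arithmetic concrete: because radio propagation is effectively instantaneous at room scale while sound travels at about 343 m/s, a relay a few meters away buys the ear device several milliseconds of advance access to the noise. The distance, sample rate, and the ideal delay-and-invert cancellation below are simplifying assumptions, not the MUTE pipeline.

```python
# A minimal sketch of why wireless forwarding yields "lookahead" for noise
# cancellation. The numbers and the ideal inversion are illustrative only.
import numpy as np

FS = 44100            # audio sample rate (Hz)
SPEED_SOUND = 343.0   # m/s; RF propagation treated as instantaneous

def lookahead_samples(distance_m):
    """Samples of advance warning: sound takes this long to arrive, radio ~0."""
    return int(distance_m / SPEED_SOUND * FS)

d = 3.4                       # hypothetical distance from noise source to ear (m)
L = lookahead_samples(d)      # roughly 10 ms of lookahead

t = np.arange(FS) / FS
noise_at_source = np.sin(2 * np.pi * 440 * t)      # noise captured by the IoT relay
noise_at_ear = np.roll(noise_at_source, L)         # same noise, arriving L samples later

# With lookahead, the ear device already holds the samples it must cancel,
# so an (idealized) delay-compensated inversion removes the noise entirely.
anti_noise = -np.roll(noise_at_source, L)
residual = noise_at_ear + anti_noise
print(f"lookahead = {L} samples ({1000 * L / FS:.1f} ms), "
      f"residual power = {np.mean(residual ** 2):.2e}")
```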
GetMobile: Mobile Computing and Communications archive | 2018
Mahanth Gowda; Ashutosh Dhekne; Sheng Shen; Romit Roy Choudhury; Sharon Xue Yang; Lei Yang; Suresh V. Golwalkar; Alexander Essanian
This paper is an experience report on IoT platforms for sports analytics. In our prior work [11], we proposed iBall, a system that explores the possibility of bringing IoT to sports analytics, particularly to the game of Cricket. iBall develops solutions to track a ball's 3D trajectory and spin with inexpensive sensors and radios embedded in the ball. Towards this end, iBall fuses wireless and inertial sensory data and integrates them into physics-based motion models of a ball in flight. The median ball location error is 8 cm, while the rotational error remains below 12° even at the end of the flight. The results do not rely on training, hence we expect the core techniques to extend to other sports like baseball, with some domain-specific modifications.
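The sketch below illustrates, in a much-simplified form, what fusing a physics model with radio ranging can look like: a drag-free ballistic model generates candidate trajectories, and the launch velocity that best explains noisy ranges to a few anchors is recovered by grid search. The anchor layout, noise level, and drag-free model are illustrative assumptions and not iBall's estimator.

```python
# A minimal sketch of combining a physics model with range measurements,
# in the spirit of (but far simpler than) iBall. All parameters are assumed.
import numpy as np

G = np.array([0.0, 0.0, -9.81])
ANCHORS = np.array([[0, 0, 0], [20, 0, 0], [0, 10, 0], [20, 10, 0]], float)  # hypothetical anchors

def simulate(p0, v0, dt=0.01, steps=60):
    """Drag-free ballistic flight as the physics prior."""
    traj, p, v = [p0.copy()], p0.astype(float), v0.astype(float)
    for _ in range(steps):
        v = v + G * dt
        p = p + v * dt
        traj.append(p.copy())
    return np.array(traj)

def fit_initial_velocity(ranges, p0, dt=0.01):
    """Grid-search the launch velocity that best explains noisy anchor ranges."""
    best, best_err = None, np.inf
    for vx in np.linspace(20, 35, 16):
        for vz in np.linspace(2, 10, 9):
            traj = simulate(p0, np.array([vx, 0.0, vz]), dt, len(ranges) - 1)
            pred = np.linalg.norm(traj[:, None, :] - ANCHORS[None, :, :], axis=2)
            err = np.mean((pred - ranges) ** 2)
            if err < best_err:
                best, best_err = (vx, vz), err
    return best

p0 = np.array([0.0, 5.0, 2.0])
true_traj = simulate(p0, np.array([30.0, 0.0, 6.0]))
ranges = np.linalg.norm(true_traj[:, None, :] - ANCHORS[None, :, :], axis=2)
ranges += np.random.default_rng(1).normal(0, 0.05, ranges.shape)   # 5 cm range noise
print("recovered launch velocity (vx, vz):", fit_initial_velocity(ranges, p0))
```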
ubiquitous computing | 2016
Rufeng Meng; Sheng Shen; Romit Roy Choudhury; Srihari Nelakuditi
Most location-based services require semantic place names, such as Staples, rather than physical coordinates. Past work has mostly focused on achieving localization accuracy, while assuming that the translation of physical coordinates to semantic names will be done manually. This paper makes an effort to automate this step, by leveraging the presence of a website corresponding to each store and the availability of a repository of WiFi-tagged pictures from different stores. By correlating the text inside the pictures against the text extracted from store websites, our proposed system, called AutoLabel, can automatically label clusters of pictures, and the corresponding WiFi APs, with store names. Later, when a user enters a store, her mobile device scans the WiFi APs and consults a lookup table to recognize the store she is in. Experimental results from 40 different stores show recognition accuracy upwards of 87%, even with as few as 10 pictures from a store, offering hope that automatic large-scale semantic labeling may indeed be possible from pictures and websites of stores.
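The runtime lookup step described above is straightforward; a hypothetical Python sketch might build an AP-to-store table offline and then label a live WiFi scan by majority vote over the APs it recognizes. The MAC addresses and store labels below are placeholders.

```python
# A minimal sketch of the runtime lookup step: once APs are labeled offline,
# a phone's WiFi scan maps to a store name by majority vote.
from collections import Counter

# Offline result: AP -> store label (e.g., produced by picture/website matching).
AP_LABELS = {
    "a4:2b:8c:01": "Staples",
    "a4:2b:8c:02": "Staples",
    "f0:9e:11:7d": "Panera",
}

def recognize_store(scanned_aps):
    """Return the most common label among the APs seen in a scan, if any."""
    votes = Counter(AP_LABELS[ap] for ap in scanned_aps if ap in AP_LABELS)
    return votes.most_common(1)[0][0] if votes else None

print(recognize_store(["a4:2b:8c:01", "a4:2b:8c:02", "ff:ff:ff:ff"]))   # -> Staples
```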
Proceedings of the MobiSys 2016 PhD Forum | 2016
Sheng Shen
In this extended abstract, we present our current work on tracking arm postures using only one smartwatch. Our system fuses data from IMU sensors and observations from human kinematics into a hidden Markov model to continuously estimate the 3D arm posture. We hope that, with some additional work, this could become a useful underlay to a broad class of gesture-based applications. We also discuss potential future work along this direction.
european conference on optical communication | 2013
Sheng Shen; Wei Lu; Xiahe Liu; Long Gong; Zuqing Zhu
networked systems design and implementation | 2017
Mahanth Gowda; Ashutosh Dhekne; Sheng Shen; Romit Roy Choudhury; Lei Yang; Suresh V. Golwalkar; Alexander Essanian
networked systems design and implementation | 2018
Nirupam Roy; Sheng Shen; Haitham Hassanieh; Romit Roy Choudhury