Publication


Featured research published by Tianxing Li.


Ubiquitous Computing | 2014

StudentLife: assessing mental health, academic performance and behavioral trends of college students using smartphones

Rui Wang; Fanglin Chen; Zhenyu Chen; Tianxing Li; Gabriella M. Harari; Stefanie M. Tignor; Xia Zhou; Dror Ben-Zeev; Andrew T. Campbell

Much of the stress and strain of student life remains hidden. The StudentLife continuous sensing app assesses the day-to-day and week-by-week impact of workload on stress, sleep, activity, mood, sociability, mental well-being and academic performance of a single class of 48 students across a 10-week term at Dartmouth College using Android phones. Results from the StudentLife study show a number of significant correlations between the automatic objective sensor data from smartphones and the mental health and educational outcomes of the student body. We also identify a Dartmouth term lifecycle in the data that shows students start the term with high positive affect and conversation levels, low stress, and healthy sleep and daily activity patterns. As the term progresses and the workload increases, stress appreciably rises while positive affect, sleep, conversation and activity drop off. The StudentLife dataset is publicly available on the web.
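As a rough illustration of the kind of correlation analysis described above, the following minimal sketch computes a Pearson correlation between a sensed behavioral feature and a self-reported outcome; the feature names and numbers are hypothetical placeholders, not values from the StudentLife dataset.

# Minimal sketch: correlating a passively sensed feature with a self-reported
# outcome, in the spirit of the StudentLife analysis. Data are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical per-student averages over a term.
sleep_hours = np.array([7.1, 6.4, 5.9, 8.0, 6.7, 5.5, 7.5, 6.2])
stress_score = np.array([2.1, 2.9, 3.4, 1.8, 2.6, 3.8, 2.0, 3.1])

r, p = stats.pearsonr(sleep_hours, stress_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # expect a negative correlation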


ACM/IEEE International Conference on Mobile Computing and Networking | 2015

Human Sensing Using Visible Light Communication

Tianxing Li; Chuankai An; Zhao Tian; Andrew T. Campbell; Xia Zhou

We present LiSense, the first-of-its-kind system that enables both data communication and fine-grained, real-time human skeleton reconstruction using Visible Light Communication (VLC). LiSense uses the shadows cast by the human body as it blocks light and reconstructs 3D human skeleton postures in real time. We overcome two key challenges to realize shadow-based human sensing. First, multiple lights on the ceiling lead to diminished and complex shadow patterns on the floor. We design light beacons enabled by VLC to separate light rays from different light sources and recover the shadow pattern cast by each individual light. Second, we design an efficient inference algorithm to reconstruct user postures using 2D shadow information with a limited resolution collected by photodiodes embedded in the floor. We build a 3 m x 3 m LiSense testbed using off-the-shelf LEDs and photodiodes. Experiments show that LiSense reconstructs the 3D user skeleton at 60 Hz in real time with a 10° mean angular error for five body joints.
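The beacon-based light separation described above can be illustrated with a minimal sketch: if each LED were modulated at its own beacon frequency, a photodiode could recover each light's contribution by correlating against that frequency. The sampling rate, beacon frequencies, and amplitudes below are hypothetical, not LiSense's actual parameters.

# Minimal sketch of separating per-LED contributions at one photodiode,
# assuming each LED carries a distinct beacon frequency. All parameters here
# are illustrative placeholders.
import numpy as np

fs = 10_000                        # photodiode sampling rate (Hz), assumed
t = np.arange(0, 0.1, 1 / fs)      # 100 ms sensing window
beacon_freqs = [1000, 1300, 1700]  # hypothetical per-LED beacon frequencies

# Simulated reading: three beacons mixed together, the second LED partially
# shadowed (smaller amplitude), plus noise.
amplitudes = [1.0, 0.3, 1.0]
signal = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amplitudes, beacon_freqs))
signal = signal + 0.05 * np.random.randn(len(t))

# Recover each LED's received strength by correlating with its beacon tone.
for f in beacon_freqs:
    ref = np.exp(-2j * np.pi * f * t)
    strength = 2 * np.abs(signal @ ref) / len(t)
    print(f"LED @ {f} Hz -> received amplitude ~ {strength:.2f}")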


International Conference on Mobile Systems, Applications, and Services | 2015

Demo: Real-Time Screen-Camera Communication Behind Any Scene

Tianxing Li; Chuankai An; Xinran Xiao; Andrew T. Campbell; Xia Zhou

We present HiLight, a new form of real-time screen-camera communication without showing any coded images (e.g., barcodes) for off-the-shelf smart devices. HiLight encodes data into pixel translucency change atop any screen content, so that camera-equipped devices can fetch the data by pointing their cameras at the screen. HiLight leverages the alpha channel, a well-known concept in computer graphics, to encode bits into the pixel translucency change. By removing the need to directly modify pixel RGB values, HiLight overcomes the key bottleneck of existing designs and enables real-time unobtrusive communication while supporting any screen content. We build a HiLight prototype using off-the-shelf smart devices and demonstrate its efficacy and robustness in practical settings. By offering an unobtrusive, flexible, and lightweight communication channel between screens and cameras, HiLight opens up opportunities for new HCI and context-aware applications, e.g., smart glasses communicating with screens to realize augmented reality.
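A minimal sketch of the encoding idea follows: nudge the translucency of each screen cell by a tiny amount per frame to carry one bit, leaving the RGB content untouched. The grid size and alpha step are illustrative assumptions, not HiLight's actual parameters.

# Minimal sketch of encoding bits as small per-cell alpha (translucency)
# changes atop arbitrary screen content. Parameters are illustrative.
import numpy as np

GRID = 4           # 4 x 4 grid of screen cells, one bit per cell per frame
ALPHA_STEP = 0.02  # small translucency change, intended to be imperceptible

def encode_frame(bits):
    """Return a GRID x GRID alpha multiplier map: bit 1 -> dim slightly, bit 0 -> unchanged."""
    bits = np.asarray(bits).reshape(GRID, GRID)
    return 1.0 - ALPHA_STEP * bits

bits = np.random.randint(0, 2, GRID * GRID)
print(encode_frame(bits))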


Mobile Computing and Communications Review | 2015

HiLight: Hiding Bits in Pixel Translucency Changes

Tianxing Li; Chuankai An; Andrew T. Campbell; Xia Zhou

We present HiLight, a new form of unobtrusive screen-camera communication for off-the-shelf smart devices. HiLight hides information underlying any images shown on an LED or OLED screen, and camera-equipped smart devices can fetch the information by pointing their cameras at the screen. HiLight achieves this by leveraging the transparency (alpha) channel, a well-known concept in computer graphics, to encode bits into pixel translucency changes without modifying pixel color (RGB) values. We demonstrated HiLight's feasibility using smartphones. By offering an unobtrusive, flexible, and lightweight communication channel between screens and cameras, HiLight allows new HCI and context-aware applications to emerge.
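Complementing the encoder sketch above, a receiver could recover the bits from frame-to-frame brightness differences in each cell. The grid size, threshold, and frame handling below are simplified assumptions, not HiLight's actual receiver pipeline.

# Minimal decoder-side sketch: estimate one bit per cell from the drop in
# mean cell brightness between consecutive camera frames. Simplified.
import numpy as np

def decode(frame_prev, frame_curr, grid=4, thresh=0.01):
    """Return a grid x grid bit matrix: 1 where a cell dimmed noticeably."""
    h, w = frame_curr.shape
    ch, cw = h // grid, w // grid
    bits = np.zeros((grid, grid), dtype=int)
    for i in range(grid):
        for j in range(grid):
            prev = frame_prev[i*ch:(i+1)*ch, j*cw:(j+1)*cw].mean()
            curr = frame_curr[i*ch:(i+1)*ch, j*cw:(j+1)*cw].mean()
            bits[i, j] = int(prev - curr > thresh)
    return bits

# Hypothetical usage: a 400 x 400 grayscale frame pair where one cell dimmed.
prev = np.full((400, 400), 0.8)
curr = prev.copy()
curr[0:100, 0:100] -= 0.02   # the top-left cell carried a 1
print(decode(prev, curr))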


Proceedings of the 2nd International Workshop on Visible Light Communications Systems | 2015

Visible Light Knows Who You Are

Chuankai An; Tianxing Li; Zhao Tian; Andrew T. Campbell; Xia Zhou

We examine the feasibility of human identification using purely the ubiquitous visible light. Empowered by Visible Light Communication (VLC), the identification system consists of VLC-enabled LED lights on the ceiling emitting light beacons, and photodiodes on the floor capturing a continuous stream of shadow maps, each corresponding to an LED light. We leverage these shadow maps to localize a user's key body joints in the 3D space and recognize the user based on the estimated body parameters (e.g., shoulder width, arm length). Preliminary results with 10 participants show an 80% success rate, i.e., correctly identifying 8 participants out of 10. The mean error of the body parameter estimation is 0.03 m. To extend the system to diverse practical settings, we discuss our plan of incorporating advanced behavioral features to enhance the identification accuracy and robustness.
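Matching a user to enrolled body parameters could, in its simplest form, be a nearest-neighbor lookup, as in the following sketch; the names and measurements are hypothetical placeholders.

# Minimal sketch: identify a user by nearest-neighbor matching on estimated
# body parameters. Enrolled values are hypothetical.
import numpy as np

# Enrolled users: (shoulder width, arm length) in meters.
enrolled = {
    "alice": np.array([0.41, 0.63]),
    "bob":   np.array([0.46, 0.70]),
    "carol": np.array([0.38, 0.60]),
}

def identify(estimate):
    """Return the enrolled user whose parameters are closest to the estimate."""
    return min(enrolled, key=lambda name: np.linalg.norm(enrolled[name] - estimate))

print(identify(np.array([0.45, 0.69])))  # -> "bob"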


Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2017

Reconstructing Hand Poses Using Visible Light

Tianxing Li; Xi Xiong; Yifei Xie; George Hito; Xing-Dong Yang; Xia Zhou

Free-hand gestural input is essential for emerging user interactions. We present Aili, a table lamp reconstructing a 3D hand skeleton in real time, requiring neither cameras nor on-body sensing devices. Aili consists of an LED panel in a lampshade and a few low-cost photodiodes embedded in the lamp base. To reconstruct a hand skeleton, Aili combines 2D binary blockage maps from vantage points of different photodiodes, which describe whether a hand blocks light rays from individual LEDs to all photodiodes. Empowering a table lamp with sensing capability, Aili can be seamlessly integrated into the existing environment. Relying on such low-level cues, Aili entails lightweight computation and is inherently privacy-preserving. We build and evaluate an Aili prototype. Results show that Aili’s algorithm reconstructs a hand pose within 7.2 ms on average, with 10.2° mean angular deviation and 2.5-mm mean translation deviation in comparison to Leap Motion. We also conduct user studies to examine the privacy issues of Leap Motion and solicit feedback on Aili’s privacy protection. We conclude by demonstrating various interaction applications Aili enables.
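The binary blockage maps Aili combines could be formed by comparing each LED-to-photodiode light reading against its unobstructed baseline, as in the sketch below; the array sizes and threshold are illustrative assumptions, not Aili's actual layout.

# Minimal sketch of forming a binary blockage map from per-(LED, photodiode)
# light readings and a calibrated hand-absent baseline. Sizes are illustrative.
import numpy as np

N_LEDS, N_PDS = 16, 4
rng = np.random.default_rng(0)

baseline = rng.uniform(0.8, 1.0, size=(N_LEDS, N_PDS))  # calibrated, no hand present
current = baseline.copy()
current[5:9, 1] *= 0.3                                   # a few rays blocked by the hand

# A ray counts as blocked if its intensity falls well below the baseline.
blockage_map = (current < 0.5 * baseline).astype(int)
print(blockage_map)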


Ubiquitous Computing | 2015

Low-Power Pervasive Wi-Fi Connectivity Using WiScan

Tianxing Li; Chuankai An; Ranveer Chandra; Andrew T. Campbell; Xia Zhou

Pervasive Wi-Fi connectivity is attractive for users in places not covered by cellular services (e.g., when traveling abroad). However, the power drain of frequent Wi-Fi scans undermines the device's battery life, preventing users from staying always connected and fetching synced emails and instant message notifications (e.g., WhatsApp). We study the energy overhead of scanning and roaming in detail and refer to it as the scan tax problem. Our findings show that the main processor is the primary culprit of the energy overhead. We propose a simple and effective architectural change of offloading scans to the Wi-Fi radio. We design and build WiScan to fully exploit the gain of scan offloading. Our experiments demonstrate that WiScan achieves 90%+ of the maximal connectivity, while saving 50-62% of the energy spent seeking connectivity.
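The offloading idea can be caricatured as a simple policy loop: the Wi-Fi radio scans on its own while the main processor sleeps, and the processor is woken only when a known network appears. The function names below are hypothetical placeholders, not WiScan's actual firmware interface.

# Minimal policy sketch of scan offloading. radio_scan() stands in for a scan
# executed entirely on the Wi-Fi radio; the main processor would stay asleep
# until a known network is found.
import time

KNOWN_SSIDS = {"eduroam", "HomeWiFi"}

def radio_scan():
    """Hypothetical stand-in for a firmware-driven scan on the Wi-Fi radio."""
    return ["CoffeeShop", "eduroam"]

def main_processor_loop():
    while True:
        ssids = radio_scan()                 # runs on the radio, not the CPU
        match = KNOWN_SSIDS.intersection(ssids)
        if match:
            print(f"Wake the CPU and associate with {match.pop()}")
            break
        time.sleep(30)                       # radio re-scans periodically

main_processor_loop()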


International Conference on Embedded Networked Sensor Systems | 2017

Ultra-Low Power Gaze Tracking for Virtual Reality

Tianxing Li; Emmanuel S. Akosah; Qiang Liu; Xia Zhou

Tracking a user's eye fixation direction is crucial to virtual reality (VR): it eases the user's interaction with the virtual scene and enables intelligent rendering to improve the user's visual experience and save system energy. Existing techniques commonly rely on cameras and active infrared emitters, making them too expensive and power-hungry for VR headsets (especially mobile VR headsets). We present LiGaze, a low-cost, low-power approach to gaze tracking tailored to VR. It relies on a few low-cost photodiodes, eliminating the need for cameras and active infrared emitters. Reusing light emitted from the VR screen, LiGaze leverages photodiodes around a VR lens to measure reflected screen light in different directions. It then infers gaze direction by exploiting the pupil's light absorption property. The core of LiGaze is to deal with screen light dynamics and extract changes in reflected light related to pupil movement. LiGaze infers a 3D gaze vector on the fly using a lightweight regression algorithm. We design and fabricate a LiGaze prototype using off-the-shelf photodiodes. Our comparison to a commercial VR eye tracker (FOVE) shows that LiGaze achieves 6.3° and 10.1° mean within-user and cross-user accuracy. Its sensing and computation consume 791 μW in total and thus can be completely powered by a credit-card-sized solar cell harvesting energy from indoor lighting. LiGaze's simplicity and ultra-low power make it applicable in a wide range of VR headsets to better unleash VR's potential.
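The lightweight regression step might look like the following sketch, which maps brightness-normalized photodiode readings to a 3D gaze vector with a linear model; the data are synthetic and the model choice (ridge regression) is an assumption, not LiGaze's exact algorithm.

# Minimal sketch: regress a 3D gaze vector from normalized photodiode
# readings. Synthetic data; LiGaze's real features and model may differ.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_photodiodes = 500, 8

# Normalize out overall screen brightness so only relative changes remain.
X = rng.uniform(0, 1, size=(n_samples, n_photodiodes))
X_norm = X / X.sum(axis=1, keepdims=True)

# Synthetic ground-truth gaze vectors generated from the normalized readings.
true_W = rng.normal(size=(n_photodiodes, 3))
y = X_norm @ true_W + 0.01 * rng.normal(size=(n_samples, 3))

model = Ridge(alpha=0.1).fit(X_norm, y)
gaze = model.predict(X_norm[:1])
print(gaze / np.linalg.norm(gaze))   # unit-length 3D gaze direction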


Mobile Health - Sensors, Analytic Methods, and Applications | 2017

StudentLife: Using Smartphones to Assess Mental Health and Academic Performance of College Students

Rui Wang; Fanglin Chen; Zhenyu Chen; Tianxing Li; Gabriella M. Harari; Stefanie M. Tignor; Xia Zhou; Dror Ben-Zeev; Andrew T. Campbell

Much of the stress and strain of student life remains hidden. The StudentLife continuous sensing app assesses the day-to-day and week-by-week impact of workload on stress, sleep, activity, mood, sociability, mental well-being and academic performance of a single class of 48 students across a 10-week term at Dartmouth College using Android phones. Results from the StudentLife study show a number of significant correlations between the automatic objective sensor data from smartphones and the mental health and educational outcomes of the student body. We propose a simple model based on linear regression with lasso regularization that can accurately predict cumulative GPA. We also identify a Dartmouth term lifecycle in the data that shows students start the term with high positive affect and conversation levels, low stress, and healthy sleep and daily activity patterns. As the term progresses and the workload increases, stress appreciably rises while positive affect, sleep, conversation and activity drop off. The StudentLife dataset is publicly available on the web.
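A lasso-regularized linear model of the kind mentioned above can be sketched as follows; the behavioral features, coefficients, and GPA values are synthetic placeholders, not the StudentLife data or the paper's exact model.

# Minimal sketch: lasso-regularized linear regression predicting a GPA-like
# outcome from sensed behavioral features. All data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_students, n_features = 48, 10

X = rng.normal(size=(n_students, n_features))   # e.g., sleep, activity, conversation
coefs = np.zeros(n_features)
coefs[:3] = [0.5, -0.3, 0.2]                    # only a few features truly matter
gpa = 3.0 + X @ coefs + 0.1 * rng.normal(size=n_students)

model = Lasso(alpha=0.05).fit(X, gpa)
print("features kept by the lasso:", np.flatnonzero(model.coef_))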


User Interface Software and Technology | 2018

Self-Powered Gesture Recognition with Ambient Light

Yichen Li; Tianxing Li; Ruchir A. Patel; Xing-Dong Yang; Xia Zhou

We present a self-powered module for gesture recognition that utilizes small, low-cost photodiodes for both energy harvesting and gesture sensing. Operating in the photovoltaic mode, photodiodes harvest energy from ambient light. In the meantime, the instantaneously harvested power from individual photodiodes is monitored and exploited as clues for sensing finger gestures in proximity. Harvested power from all photodiodes is aggregated to drive the whole gesture-recognition module, including the micro-controller running the recognition algorithm. We design a robust, lightweight algorithm to recognize finger gestures in the presence of ambient light fluctuations. We fabricate two prototypes to facilitate users' interaction with smart glasses and a smartwatch. Results show 99.7%/98.3% overall precision/recall in recognizing five gestures on the glasses and 99.2%/97.5% precision/recall in recognizing seven gestures on the watch. The system consumes 34.6 µW/74.3 µW for the glasses/watch and thus can be powered by the energy harvested from ambient light. We also test the system's robustness under varying light intensities, light directions, and ambient light fluctuations, where the system maintains high recognition accuracy (> 96%) in all tested settings.
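One simple way to turn harvested-power traces into gesture labels is template matching, as in the sketch below; the templates, trace length, and gesture names are hypothetical placeholders rather than the module's actual recognition algorithm.

# Minimal sketch: classify a harvested-power trace by nearest template.
# Templates and the observed trace are synthetic placeholders.
import numpy as np

T = 50                              # samples per gesture window
rng = np.random.default_rng(2)

templates = {
    "swipe_left":  np.linspace(0.0, 1.0, T),                 # power ramps up
    "swipe_right": np.linspace(1.0, 0.0, T),                 # power ramps down
    "tap":         np.r_[np.ones(T // 2), 0.2 * np.ones(T - T // 2)],
}

def classify(trace):
    """Return the gesture whose template is closest (Euclidean) to the trace."""
    return min(templates, key=lambda g: np.linalg.norm(templates[g] - trace))

observed = templates["tap"] + 0.05 * rng.normal(size=T)   # noisy power trace
print(classify(observed))                                  # -> "tap"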

Collaboration


Tianxing Li's top co-authors include:

Zhenyu Chen

Chinese Academy of Sciences


Gabriella M. Harari

University of Texas at Austin
