
Publication


Featured research published by Luyang Liu.


Proceedings of the 2015 Workshop on Wearable Systems and Applications | 2015

Toward Detection of Unsafe Driving with Wearables

Luyang Liu; Cagdas Karatas; Hongyu Li; Sheng Tan; Marco Gruteser; Jie Yang; Yingying Chen; Richard P. Martin

This paper explores the potential for wearable devices to identify driving activity and unsafe driving without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and inputs. Identifying steering wheel usage helps a mobile device detect driving and reduce distractions, while tracking steering wheel turning angles can improve vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that distinguish steering from other confounding hand movements. Once steering wheel usage is detected, it uses wrist rotation measurements to infer steering wheel turning angles. Our preliminary experiments show that the technique is 98.9% accurate in detecting driving and can estimate turning angles with an average error within two degrees.
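
The angle-estimation step lends itself to a quick sketch: once the hand is known to be on the wheel, the wrist rotation rate can be integrated into a turning-angle estimate. The following is a minimal illustration, not the authors' implementation; the sampling rate and the assumption that the watch axis is aligned with the steering column are hypothetical simplifications.

```python
import numpy as np

def steering_angle_from_wrist_gyro(gyro_axis_dps, fs_hz=50.0):
    """Integrate wrist rotation rate (deg/s) about the steering-column
    axis into a cumulative steering-wheel angle estimate (degrees).

    Illustrative only: assumes the watch axis is already aligned with
    the steering column and that the hand stays on the wheel.
    """
    dt = 1.0 / fs_hz
    # Cumulative integration of angular rate -> angle.
    return np.cumsum(gyro_axis_dps) * dt

# Example: a synthetic 2-second left turn at 30 deg/s, then hold.
rate = np.concatenate([np.full(100, 30.0), np.zeros(50)])  # 50 Hz samples
angle = steering_angle_from_wrist_gyro(rate)
print(f"peak steering angle = {angle.max():.1f} degrees")  # 60.0 degrees
```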


IEEE International Conference on Computer Communications | 2016

Leveraging wearables for steering and driver tracking

Cagdas Karatas; Luyang Liu; Hongyu Li; Jian Liu; Yan Wang; Sheng Tan; Jie Yang; Yingying Chen; Marco Gruteser; Richard P. Martin

Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. Tracking steering wheel usage and turning angle provides fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices, and help identify unsafe driving. The approach relies on motion features that distinguish steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.
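
The detection step rests on motion features that separate steering from confounding hand movements. As a hedged sketch of what windowed feature extraction with an off-the-shelf classifier could look like (the feature set, window length, and model here are illustrative stand-ins, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, gyro, fs=50, win_s=2.0):
    """Split synchronized 3-axis accel/gyro streams (shape (n, 3)) into
    fixed windows and compute simple per-axis statistics. The feature
    choice is illustrative; the paper's actual feature set differs."""
    n = int(fs * win_s)
    feats = []
    for i in range(0, len(accel) - n + 1, n):
        a, g = accel[i:i + n], gyro[i:i + n]
        feats.append(np.hstack([a.mean(0), a.std(0), g.mean(0), g.std(0)]))
    return np.array(feats)

# Hypothetical usage with labeled steering / non-steering recordings:
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# is_steering = clf.predict(window_features(accel, gyro))
```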


IEEE Transactions on Mobile Computing | 2016

Determining Driver Phone Use by Exploiting Smartphone Integrated Sensors

Yan Wang; Yingying Jennifer Chen; Jie Yang; Marco Gruteser; Richard P. Martin; Hongbo Liu; Luyang Liu; Cagdas Karatas

This paper utilizes smartphone sensing of vehicle dynamics to determine driver phone use, which can facilitate many traffic safety applications. Our system uses sensors embedded in smartphones, i.e., accelerometers and gyroscopes, to capture differences in centripetal acceleration due to vehicle dynamics. These differences, combined with angular speed, can determine whether the phone is on the left or right side of the vehicle. Our low-infrastructure approach is flexible across different turn sizes and driving speeds. Extensive experiments conducted with two vehicles in two different cities demonstrate that our system is robust to real driving environments. Despite noisy sensor readings from smartphones, our approach achieves a classification accuracy of over 90 percent with a false positive rate of a few percent. We also find that by combining sensing results across a few turns, we can achieve better accuracy (e.g., 95 percent) with a lower false positive rate. In addition, we exploit electromagnetic field measurements inside the vehicle to complement vehicle dynamics for driver phone sensing in scenarios where little vehicle dynamics is present, for example, driving straight on a highway or standing at a roadside.
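
The left/right inference follows from turn kinematics: at yaw rate ω, a point at radius r from the turn center experiences centripetal acceleration a = ω²r, so a phone on the outer side of a turn measures a slightly larger a than one on the inner side, while the vehicle center's path radius is v/ω. A simplified sketch of that comparison follows; the noise handling and geometry in the paper are far more involved.

```python
import numpy as np

def phone_side(centripetal_acc, yaw_rate, speed):
    """Estimate whether the phone sits left or right of the vehicle
    centerline, given per-sample measurements taken inside a turn:
    centripetal_acc magnitude in m/s^2, yaw_rate in rad/s (positive =
    left turn), speed in m/s. Simplified illustration only."""
    # Radius of the phone's own circular path: r_phone = a / omega^2.
    r_phone = centripetal_acc / yaw_rate**2
    # Radius of the vehicle center's path: r_center = v / omega.
    r_center = speed / np.abs(yaw_rate)
    offset = np.median(r_phone - r_center)  # > 0: outer side of the turn
    turning_left = np.median(yaw_rate) > 0
    # The outer side of a left turn is the right side of the car.
    if offset > 0:
        return "right" if turning_left else "left"
    return "left" if turning_left else "right"
```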


International Conference on Mobile Systems, Applications, and Services | 2017

BigRoad: Scaling Road Data Acquisition for Dependable Self-Driving

Luyang Liu; Hongyu Li; Jian Liu; Cagdas Karatas; Yan Wang; Marco Gruteser; Yingying Chen; Richard P. Martin

Advanced driver assistance systems and, in particular, automated driving offer an unprecedented opportunity to transform the safety, efficiency, and comfort of road travel. Developing such safety technologies requires an understanding of not just common highway and city traffic situations but also a plethora of widely different unusual events (e.g., an object on the roadway or a pedestrian crossing a highway). While each such event may be rare, in aggregate they represent a significant risk that must be addressed to develop truly dependable automated driving and traffic safety technologies. By developing technology to scale road data acquisition to a large number of vehicles, this paper introduces a low-cost yet reliable solution, BigRoad, that can derive internal driver inputs (i.e., steering wheel angles, driving speed, and acceleration) and external perceptions of road environments (i.e., road conditions and front-view video) using a smartphone and an IMU mounted in a vehicle. We evaluate the accuracy of the collected internal and external data using over 140 real driving trips collected over a 3-month period. Results show that BigRoad can accurately estimate the steering wheel angle with a 0.69-degree median error, and derive the vehicle speed with a 0.65 km/h deviation. The system is also able to determine binary road conditions with 95% accuracy by capturing a small number of braking events. We further validate the usability of BigRoad by feeding the collected video and steering wheel angles to a deep neural network steering angle predictor, showing the potential of massive data acquisition for training self-driving systems.
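
The road-condition estimate is derived from a small number of braking events. Purely as an illustration of the event-capture step (the thresholds below are assumed values, and the paper's condition classifier is more involved), one might segment sustained decelerations from the longitudinal accelerometer axis:

```python
import numpy as np

def braking_events(long_acc, fs=50, thresh=-2.5, min_len_s=0.3):
    """Return (start, end) sample indices of sustained decelerations.
    long_acc: longitudinal acceleration in m/s^2 (negative = braking).
    thresh and min_len_s are illustrative values, not from the paper."""
    below = long_acc < thresh
    events, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i                       # deceleration begins
        elif not b and start is not None:
            if i - start >= int(min_len_s * fs):
                events.append((start, i))   # keep only sustained events
            start = None
    if start is not None and len(below) - start >= int(min_len_s * fs):
        events.append((start, len(below)))
    return events
```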


International Conference on Mobile Systems, Applications, and Services | 2018

Cutting the Cord: Designing a High-quality Untethered VR System with Low Latency Remote Rendering

Luyang Liu; Ruiguang Zhong; Wuyang Zhang; Yunxin Liu; Jiansong Zhang; Lintao Zhang; Marco Gruteser

This paper introduces an end-to-end untethered VR system design and open platform that can meet virtual reality latency and quality requirements at 4K resolution over a wireless link. High-quality VR systems generate graphics data at rates much higher than those supported by existing wireless technologies such as Wi-Fi and 60 GHz communication, and the necessary image encoding makes it challenging to maintain the stringent VR latency requirements. To achieve the required latency, our system employs a Parallel Rendering and Streaming mechanism that reduces the add-on streaming latency by pipelining the rendering, encoding, transmission, and decoding procedures. Furthermore, we introduce a Remote VSync Driven Rendering technique to minimize display latency. To evaluate the system, we implement an end-to-end remote rendering platform on commodity hardware over a 60 GHz wireless network. Results show that the system can support the current 2160x1200 VR resolution at 90 Hz with less than 16 ms end-to-end latency, and 4K resolution with 20 ms latency, while keeping the image visually lossless to the user.
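
The latency benefit of Parallel Rendering and Streaming comes from overlapping the pipeline stages across slices of a frame rather than running them back to back. A toy model with assumed per-stage times (not the paper's measurements) illustrates the effect:

```python
def pipelined_latency(stage_ms, n_slices):
    """End-to-end latency of one frame when each stage processes the
    frame in n_slices equal chunks and stages overlap across chunks.
    stage_ms: per-frame time of each stage, in order (render, encode,
    transmit, decode). Idealized model with no per-slice overhead."""
    per_slice = [t / n_slices for t in stage_ms]
    finish = [0.0] * len(per_slice)  # finish[s]: when stage s last finished
    for _ in range(n_slices):
        t = 0.0
        for s, dt in enumerate(per_slice):
            t = max(t, finish[s]) + dt  # wait for input slice and free stage
            finish[s] = t
    return finish[-1]

stages = [5.0, 4.0, 4.0, 3.0]        # ms per frame, assumed values
print(pipelined_latency(stages, 1))  # 16.0 ms: fully sequential
print(pipelined_latency(stages, 4))  # 7.75 ms: stages overlap
```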


International Conference on Embedded Networked Sensor Systems | 2018

Automatic Unusual Driving Event Identification for Dependable Self-Driving

Hongyu Li; Hairong Wang; Luyang Liu; Marco Gruteser

This paper introduces techniques to automatically detect driving corner cases from dashcam video and inertial sensors. Developing robust driver assistance and automated driving technologies requires an understanding of not just common highway and city traffic situations but also a plethora of corner cases that may be encountered in billions of miles of driving. Current approaches seek to collect such a catalog of corner cases by driving millions of miles with self-driving prototypes. In contrast, this paper introduces a low-cost yet scalable solution that collects such events from any dashcam-equipped vehicle, taking advantage of the billions of miles that humans already drive. It detects unusual events through inertial sensing of sudden human driver reactions, and rare visual events through a trained autoencoder deep neural network. We evaluate the system on more than 120 hours of real road driving data. It shows an 82% accuracy improvement over strawman solutions for sudden reaction detection and above 71% accuracy for rare visual view identification. The detection results proved useful for retraining and improving a self-steering algorithm on more complex situations. In terms of computational efficiency, the Android prototype achieves a 17 Hz frame rate on a Nexus 5X.
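
The rare-visual-event detector works because an autoencoder trained on common footage reconstructs unfamiliar frames poorly, so reconstruction error serves as a rarity score. A minimal sketch of that thresholding idea follows; the architecture, input size, and threshold below are illustrative, not the paper's network or training setup.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Tiny convolutional autoencoder for 64x64 grayscale frames
    (illustrative stand-in for the paper's trained model)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def rare_frames(model, frames, threshold):
    """Flag frames whose mean squared reconstruction error exceeds a
    threshold calibrated on common footage (e.g., a high percentile
    of training-set errors). frames: tensor of shape (N, 1, 64, 64)."""
    with torch.no_grad():
        err = ((model(frames) - frames) ** 2).mean(dim=(1, 2, 3))
    return err > threshold
```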


Proceedings of the Eighth Wireless of the Students, by the Students, and for the Students Workshop | 2016

Toward detection of unsafe driving with inertial head-mounted sensors

Cagdas Karatas; Hongyu Li; Luyang Liu

This paper explores the potential for the inertial sensors on head-mounted devices (HMDs), such as Google Glass, and mobile devices to identify driving activities and unsafe driving. In particular, we study whether the inertial sensors on HMDs can detect whether their user is operating the vehicle and infer where the driver's visual attention is focused by tracking the driver's head, without relying on cameras on the HMD or in the vehicle. Detecting a user's vehicle operation through head movements can help prevent the driver's interactive use of the HMD, which can cause distracted driving. Tracking the driver's head movements and focus of visual attention can detect unsafe driving and additionally enable many safety applications. Our approach relies on head movements that are specific to a driver. Once the system detects the vehicle's operation, head movements can further be utilized to detect distracted driving, to predict driver behaviors, and to detect inadequate surveillance cases where the driver failed to look in the appropriate place to complete a maneuver.
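
The detection hinges on head movements characteristic of drivers, such as mirror and shoulder checks. Purely as a hypothetical illustration (the paper's actual features and thresholds are not reproduced here), such checks could be flagged as brief, large yaw-rate excursions in the HMD gyroscope:

```python
import numpy as np

def head_check_count(yaw_rate_dps, fs=50, peak_dps=60.0, max_dur_s=1.0):
    """Count brief, large yaw excursions (mirror/shoulder-check-like
    head turns) in HMD gyroscope data. All thresholds are assumed
    values for illustration, not taken from the paper."""
    above = np.abs(yaw_rate_dps) > peak_dps
    # Find contiguous runs of high yaw rate and keep only short ones.
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.insert(starts, 0, 0)
    if above[-1]:
        ends = np.append(ends, len(above))
    return sum(1 for s, e in zip(starts, ends) if (e - s) / fs <= max_dur_s)
```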


Sensor, Mesh and Ad Hoc Communications and Networks | 2018

Single-Sensor Motion and Orientation Tracking in a Moving Vehicle

Cagdas Karatas; Luyang Liu; Marco Gruteser; Richard E. Howard


Proceedings of the First ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services | 2016

Towards safer texting while driving through stop time prediction

Hongyu Li; Luyang Liu; Cagdas Karatas; Jian Liu; Marco Gruteser; Yingying Chen; Yan Wang; Richard P. Martin; Jie Yang


Asia-Pacific Workshop on Systems | 2017

On Building a Programmable Wireless High-Quality Virtual Reality System Using Commodity Hardware

Ruiguang Zhong; Manni Wang; Zijian Chen; Luyang Liu; Yunxin Liu; Jiansong Zhang; Lintao Zhang; Thomas Moscibroda

Collaboration


Dive into Luyang Liu's collaborations.

Top Co-Authors

Jie Yang, Florida State University
Yan Wang, Binghamton University
Jian Liu, Stevens Institute of Technology
Ruiguang Zhong, Beijing University of Posts and Telecommunications