Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Xuan Bao is active.

Publication


Featured research published by Xuan Bao.


International Conference on Mobile Systems, Applications, and Services | 2011

TagSense: a smartphone-based approach to automatic image tagging

Chuan Qin; Xuan Bao; Romit Roy Choudhury; Srihari Nelakuditi

Mobile phones are becoming the convergent platform for personal sensing, computing, and communication. This paper attempts to exploit this convergence towards the problem of automatic image tagging. We envision TagSense, a mobile phone based collaborative system that senses the people, activity, and context in a picture, and merges them carefully to create tags on-the-fly. The main challenge pertains to discriminating phone users that are in the picture from those that are not. We deploy a prototype of TagSense on 8 Android phones, and demonstrate its effectiveness through 200 pictures, taken in various social settings. While research in face recognition continues to improve image tagging, TagSense is an attempt to embrace additional dimensions of sensing towards this end goal. Performance comparison with Apple iPhoto and Google Picasa shows that such an out-of-band approach is valuable, especially with increasing device density and greater sophistication in sensing/learning algorithms.


International Conference on Computer Communications | 2013

DataSpotting: Exploiting naturally clustered mobile devices to offload cellular traffic

Xuan Bao; Yin Lin; Uichin Lee; Ivica Rimac; Romit Roy Choudhury

The proliferation of pictures and videos in the Internet is imposing heavy demands on mobile data networks. Though emerging wireless technologies will provide more bandwidth, the increase in demand will easily consume the additional capacity. To alleviate this problem, we explore the possibility of serving user requests from other mobile devices located geographically close to the user. For instance, when Alice reaches areas with high device density - Data Spots - the cellular operator learns Alice's content request, and guides her device to nearby devices that have the requested content. Importantly, communication between the nearby devices can be mediated by servers, avoiding many of the known problems of pure ad hoc communication. This paper argues for this viability through systematic prototyping, measurements, and measurement-driven analysis.
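The server-mediated lookup the abstract describes can be sketched as an operator-side index from cells to cached content. This is an illustrative sketch only; the class and method names (`DataSpotIndex`, `register`, `lookup`) are assumptions, not the paper's API.

```python
# Hypothetical sketch: the operator tracks which devices in a "Data Spot"
# (cell) cache which content, and redirects a requester to nearby devices
# instead of serving the download over the cellular link.
from collections import defaultdict

class DataSpotIndex:
    def __init__(self):
        # cell_id -> content_id -> set of device ids caching that content
        self.index = defaultdict(lambda: defaultdict(set))

    def register(self, cell_id, device_id, content_ids):
        for cid in content_ids:
            self.index[cell_id][cid].add(device_id)

    def lookup(self, cell_id, content_id, requester):
        # Devices in the same cell holding the content, excluding the requester.
        peers = self.index[cell_id][content_id] - {requester}
        return sorted(peers) or None  # None -> fall back to cellular download

idx = DataSpotIndex()
idx.register("cell-7", "phone-A", ["video-42"])
idx.register("cell-7", "phone-B", ["video-42", "pic-9"])
print(idx.lookup("cell-7", "video-42", "phone-C"))  # ['phone-A', 'phone-B']
print(idx.lookup("cell-7", "pic-9", "phone-B"))     # None (only holder is the requester)
```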


Ubiquitous Computing | 2013

Your reactions suggest you liked the movie: automatic content rating via reaction sensing

Xuan Bao; Songchun Fan; Alexander Varshavsky; Kevin A. Li; Romit Roy Choudhury

This paper describes a system for automatically rating content - mainly movies and videos - at multiple granularities. Our key observation is that the rich set of sensors available on today's smartphones and tablets could be used to capture a wide spectrum of user reactions while users are watching movies on these devices. Examples range from acoustic signatures of laughter to detect which scenes were funny, to the stillness of the tablet indicating intense drama. Moreover, unlike in most conventional systems, these ratings need not result in just one numeric score, but could be expanded to capture the user's experience. We combine these ideas into an Android based prototype called Pulse, and test it with 11 users each of whom watched 4 to 6 movies on Samsung tablets. Encouraging results show consistent correlation between the users' actual ratings and those generated by the system.  With more rigorous testing and optimization, Pulse could be a candidate for real-world adoption.
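The per-scene rating idea above can be sketched as a weighted combination of sensed reaction cues. The cue names (`laughter`, `stillness`) and the weights are assumptions for illustration; the paper's actual feature set and model are richer.

```python
# Illustrative sketch of reaction-to-rating: each scene carries normalized
# sensor cues in [0, 1], combined into a per-scene score plus an overall score.
def rate_scenes(scenes, w_laughter=0.6, w_stillness=0.4):
    """Each scene is a dict of normalized reaction cues; returns per-scene scores."""
    return [
        w_laughter * s.get("laughter", 0.0) + w_stillness * s.get("stillness", 0.0)
        for s in scenes
    ]

scenes = [
    {"laughter": 0.9, "stillness": 0.2},   # funny scene: lots of laughter
    {"laughter": 0.0, "stillness": 0.95},  # intense drama: viewer motionless
    {"laughter": 0.1, "stillness": 0.3},   # weak reactions
]
per_scene = rate_scenes(scenes)
overall = sum(per_scene) / len(per_scene)  # one coarse score from fine-grained ones
```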


IEEE Transactions on Mobile Computing | 2014

TagSense: Leveraging Smartphones for Automatic Image Tagging

Chuan Qin; Xuan Bao; Romit Roy Choudhury; Srihari Nelakuditi

Mobile phones are becoming the convergent platform for personal sensing, computing, and communication. This paper attempts to exploit this convergence toward the problem of automatic image tagging. We envision TagSense, a mobile phone-based collaborative system that senses the people, activity, and context in a picture, and merges them carefully to create tags on-the-fly. The main challenge pertains to discriminating phone users that are in the picture from those that are not. We deploy a prototype of TagSense on eight Android phones, and demonstrate its effectiveness through 200 pictures, taken in various social settings. While research in face recognition continues to improve image tagging, TagSense is an attempt to embrace additional dimensions of sensing toward this end goal. Performance comparison with Apple iPhoto and Google Picasa shows that such an out-of-band approach is valuable, especially with increasing device density and greater sophistication in sensing and learning algorithms.


Workshop on Mobile Computing Systems and Applications | 2013

The case for psychological computing

Xuan Bao; Mahanth Gowda; Ratul Mahajan; Romit Roy Choudhury

This paper envisions a new research direction that we call psychological computing. The key observation is that, even though computing systems are missioned to satisfy human needs, there has been little attempt to bring understandings of human need/psychology into core system design. This paper makes the case that percolating psychological insights deeper into the computing layers is valuable, even essential. Through examples from content caching, vehicular systems, and network scheduling, we argue that psychological awareness can not only offer performance gains to known technological problems, but also spawn new kinds of systems that are difficult to conceive otherwise.


ACM Special Interest Group on Data Communication | 2010

VUPoints: collaborative sensing and video recording through mobile phones

Xuan Bao; Romit Roy Choudhury

Mobile phones are becoming a convergent platform for sensing, computation, and communication. This paper envisions VUPoints, a collaborative sensing and video-recording system that takes advantage of this convergence. Ideally, when multiple phones in a social gathering run VUPoints, the output is expected to be a short video-highlights of the occasion, created without human intervention. To achieve this, mobile phones must sense their surroundings and collaboratively detect events that qualify for recording. Short video-clips from different phones can be combined to produce the highlights of the occasion. This paper reports exploratory work towards this longer term project. We present a feasibility study, and show how social events can be sensed through mobile phones and used as triggers for video-recording. While false positives cause inclusion of some uninteresting videos, we believe that further research can significantly improve the efficacy of the system.


Workshop on Local and Metropolitan Area Networks | 2010

Sensor assisted wireless communication

Naveen Santhapuri; Justin Manweiler; Souvik Sen; Xuan Bao; Romit Roy Choudhury; Srihari Nelakuditi

The nature of human mobility demands that mobile devices become agile to diverse operating environments. Coping with such diversity requires the device to assess its environment, and trigger appropriate responses to each of them. While existing communication subsystems rely on in-band wireless signals for context-assessment and response, we explore a lateral approach of using out-of-band sensor information. We propose a relatively novel framework that synthesizes in-band and out-of-band information, facilitating more informed communication decisions. We believe that further research in this direction could enable a new kind of device agility, deficient in today's communication systems. Since such a framework is located at the boundaries of mobile sensing and wireless communication, we call it sensor assisted wireless communication.


International Conference on Distributed Computing Systems | 2017

PIANO: Proximity-Based User Authentication on Voice-Powered Internet-of-Things Devices

Neil Zhenqiang Gong; Altay Ozen; Yu Wu; Xiaoyu Cao; Richard Shin; Dawn Song; Hongxia Jin; Xuan Bao

Voice is envisioned to be a popular way for humans to interact with Internet-of-Things (IoT) devices. We propose a proximity-based user authentication method (called PIANO) for access control on such voice-powered IoT devices. PIANO leverages the built-in speaker, microphone, and Bluetooth that voice-powered IoT devices often already have. Specifically, we assume that a user carries a personal voice-powered device (e.g., smartphone, smartwatch, or smartglass), which serves as the user's identity. When another voice-powered IoT device of the user requires authentication, PIANO estimates the distance between the two devices by playing and detecting certain acoustic signals; PIANO grants access if the estimated distance is no larger than a user-selected threshold. We implemented a proof-of-concept prototype of PIANO. Through theoretical and empirical evaluations, we find that PIANO is secure, reliable, personalizable, and efficient.
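PIANO's decision rule can be sketched in a few lines, under the assumed model that distance is estimated from the one-way acoustic time of flight between the two devices. The function names and the one-meter default threshold are illustrative, not from the paper.

```python
# Minimal sketch of a proximity-based grant/deny rule: estimate distance from
# acoustic time of flight, then compare against a user-selected threshold.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 C

def estimated_distance_m(time_of_flight_s):
    return SPEED_OF_SOUND_M_S * time_of_flight_s

def grant_access(time_of_flight_s, threshold_m=1.0):
    # Grant access only when the estimated distance does not exceed the threshold.
    return estimated_distance_m(time_of_flight_s) <= threshold_m

print(grant_access(0.002))  # ~0.69 m -> True
print(grant_access(0.010))  # ~3.43 m -> False
```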


International Conference on Mobile Systems, Applications, and Services | 2014

Demo: Recognizing humans without face recognition

He Wang; Xuan Bao; Romit Roy Choudhury; Srihari Nelakuditi

We envision augmented-reality applications in which an individual looks at other people through her camera-enabled glass (e.g., Google Glass) and obtains information about them. While face recognition would be one approach to this problem, we believe that it may not be always possible to see a person’s face. Our technique is complementary to face recognition, and exploits the intuition that human motion patterns and clothing colors can together encode several bits of information. Treating this information as a “temporary fingerprint”, it may be feasible to recognize an individual with reasonable consistency, while allowing her to turn off the fingerprint when privacy is of concern. We develop InSight, a system implemented using Android Galaxy smartphones and videos taken from Google Glasses. Results from real world experiments involving up to 21 people show that 8 seconds of their motion patterns together with their clothing colors can discriminate them. These results suggest that face recognition may not be the only option for recognizing humans; human diversity lends itself to sensing and could also serve as an effective identifier.
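The "temporary fingerprint" matching above can be sketched as nearest-neighbor search over feature vectors that combine motion statistics and clothing colors. The feature layout, names, and cosine-similarity choice here are assumptions for illustration, not InSight's actual pipeline.

```python
# Hypothetical sketch: each enrolled person is summarized by a feature vector
# (e.g., gait statistics plus a clothing-color histogram); an observation is
# matched to the enrolled vector with the highest cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(observed, enrolled):
    """enrolled: name -> fingerprint vector; returns the best-matching name."""
    return max(enrolled, key=lambda name: cosine(observed, enrolled[name]))

enrolled = {
    "alice": [0.9, 0.1, 0.8, 0.2],  # e.g., fast gait, mostly-red clothing
    "bob":   [0.2, 0.8, 0.1, 0.9],  # e.g., slow gait, mostly-blue clothing
}
print(identify([0.85, 0.15, 0.7, 0.3], enrolled))  # alice
```

Unlike a face, this fingerprint is deliberately temporary: it expires when the person changes clothes, which is what lets a user opt out simply by not broadcasting it.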


International Conference on Mobile Systems, Applications, and Services | 2011

Demo: an out-of-band alternative to face recognition

Chuan Qin; Xuan Bao; Romit Roy Choudhury; Srihari Nelakuditi

Smartphones are becoming the convergent platform for personal sensing, computing, and communication. Our work attempts to exploit this convergence towards the problem of automatic image tagging. We envision TagSense, a smartphone-based collaborative system that senses the people/activity/context in a picture, and merges them carefully to create tags on-the-fly. The main challenge pertains to discriminating phone users that are in the picture from those that are not. Our demonstration system consists of 8 Android phones and a laptop. Phones -- with the TagSense application running -- will be randomly distributed to participants. Once a picture is taken by a participant with the phone, tags generated by TagSense will be shown on the phone screen.

Collaboration


Dive into Xuan Bao's collaborations.

Top Co-Authors

Srihari Nelakuditi

University of South Carolina

Chuan Qin

University of South Carolina