Publication


Featured research published by Tanzeem Choudhury.


IEEE Communications Magazine | 2010

A survey of mobile phone sensing

Nicholas D. Lane; Emiliano Miluzzo; Hong Lu; Daniel Peebles; Tanzeem Choudhury; Andrew T. Campbell

Mobile phones or smartphones are rapidly becoming the central computer and communication device in people's lives. Application delivery channels such as the Apple AppStore are transforming mobile phones into App Phones, capable of downloading a myriad of applications in an instant. Importantly, today's smartphones are programmable and come with a growing set of cheap powerful embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera, which are enabling the emergence of personal, group, and community-scale sensing applications. We believe that sensor-equipped mobile phones will revolutionize many sectors of our economy, including business, healthcare, social networks, environmental monitoring, and transportation. In this article we survey existing mobile phone sensing algorithms, applications, and systems. We discuss the emerging sensing paradigms, and formulate an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research.


IEEE Pervasive Computing | 2008

The Mobile Sensing Platform: An Embedded Activity Recognition System

Tanzeem Choudhury; Sunny Consolvo; Beverly L. Harrison; Jeffrey Hightower; Anthony LaMarca; Louis LeGrand; Ali Rahimi; Adam D. Rea; Gaetano Borriello; Bruce Hemingway; Predrag Klasnja; Karl Koscher; James A. Landay; Jonathan Lester; Danny Wyatt; Dirk Haehnel

Activity-aware systems have inspired novel user interfaces and new applications in smart environments, surveillance, emergency response, and military missions. Systems that recognize human activities from body-worn sensors can further open the door to a world of healthcare applications, such as fitness monitoring, eldercare support, long-term preventive and chronic care, and cognitive assistance. Wearable systems have the advantage of being with the user continuously. So, for example, a fitness application could use real-time activity information to encourage users to perform opportunistic activities. Furthermore, the general public is more likely to accept such activity recognition systems because they are usually easy to turn off or remove.


international conference on embedded networked sensor systems | 2010

The Jigsaw continuous sensing engine for mobile phone applications

Hong Lu; Jun Yang; Zhigang Liu; Nicholas D. Lane; Tanzeem Choudhury; Andrew T. Campbell

Supporting continuous sensing applications on mobile phones is challenging because of the resource demands of long-term sensing, inference and communication algorithms. We present the design, implementation and evaluation of the Jigsaw continuous sensing engine, which balances the performance needs of the application and the resource demands of continuous sensing on the phone. Jigsaw comprises a set of sensing pipelines for the accelerometer, microphone and GPS sensors, which are built in a plug and play manner to support: i) resilient accelerometer data processing, which allows inferences to be robust to different phone hardware, orientation and body positions; ii) smart admission control and on-demand processing for the microphone and accelerometer data, which adaptively throttles the depth and sophistication of sensing pipelines when the input data is low quality or uninformative; and iii) adaptive pipeline processing, which judiciously triggers power hungry pipeline stages (e.g., sampling the GPS) taking into account the mobility and behavioral patterns of the user to drive down energy costs. We implement and evaluate Jigsaw on the Nokia N95 and the Apple iPhone, two popular smartphone platforms, to demonstrate its capability to recognize user activities and perform long term GPS tracking in an energy-efficient manner.
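
To make the adaptive-pipeline idea above concrete, here is a minimal sketch of gating an expensive stage (GPS sampling) on a cheap one (accelerometer variance). The thresholds, periods, and function names are hypothetical illustrations under simple assumptions, not Jigsaw's actual pipelines.

```python
import random  # stands in for real sensor reads in this sketch

# Hypothetical thresholds; Jigsaw's real pipelines are more sophisticated.
STATIONARY_VARIANCE = 0.02   # accelerometer variance below this => "still"
GPS_PERIOD_MOVING = 60       # sample GPS every 60 s while the user moves
GPS_PERIOD_STILL = 600       # back off to every 10 min when the user is still

def accel_variance(window):
    """Variance of accelerometer magnitudes over a short window."""
    mean = sum(window) / len(window)
    return sum((x - mean) ** 2 for x in window) / len(window)

def next_gps_period(accel_window):
    """Cheap accelerometer stage decides how often to run the costly GPS stage."""
    if accel_variance(accel_window) < STATIONARY_VARIANCE:
        return GPS_PERIOD_STILL   # user likely stationary: throttle GPS
    return GPS_PERIOD_MOVING      # user moving: sample GPS more often

# Example: a noisy but near-constant "stationary" window keeps GPS throttled.
window = [1.0 + random.gauss(0, 0.05) for _ in range(50)]
print(next_gps_period(window))
```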


international conference on computer vision | 2007

A Scalable Approach to Activity Recognition based on Object Use

Jianxin Wu; Adebola Osuntogun; Tanzeem Choudhury; Matthai Philipose; James M. Rehg

We propose an approach to activity recognition based on detecting and analyzing the sequence of objects that are being manipulated by the user. In domains such as cooking, where many activities involve similar actions, object-use information can be a valuable cue. In order for this approach to scale to many activities and objects, however, it is necessary to minimize the amount of human-labeled data that is required for modeling. We describe a method for automatically acquiring object models from video without any explicit human supervision. Our approach leverages sparse and noisy readings from RFID tagged objects, along with common-sense knowledge about which objects are likely to be used during a given activity, to bootstrap the learning process. We present a dynamic Bayesian network model which combines RFID and video data to jointly infer the most likely activity and object labels. We demonstrate that our approach can achieve activity recognition rates of more than 80% on a real-world dataset consisting of 16 household activities involving 33 objects with significant background clutter. We show that the combination of visual object recognition with RFID data is significantly more effective than the RFID sensor alone. Our work demonstrates that it is possible to automatically learn object models from video of household activities and employ these models for activity recognition, without requiring any explicit human labeling.
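
As a rough illustration of inferring the most likely activity from noisy object-use observations, the sketch below performs a naive Bayes update over two hypothetical activities. The paper's dynamic Bayesian network over joint RFID and video evidence is considerably richer; the priors and likelihoods here are made up for illustration.

```python
# Hypothetical priors and object-use likelihoods (not the paper's learned model).
PRIOR = {"make_tea": 0.5, "make_sandwich": 0.5}
P_OBJ_GIVEN_ACT = {
    "make_tea":      {"kettle": 0.8, "mug": 0.7, "knife": 0.1},
    "make_sandwich": {"kettle": 0.1, "mug": 0.2, "knife": 0.8},
}

def posterior(observed_objects):
    """Naive Bayes posterior over activities given a set of observed objects."""
    scores = {}
    for act, prior in PRIOR.items():
        p = prior
        for obj in observed_objects:
            # Small floor keeps an unseen object from zeroing out an activity.
            p *= P_OBJ_GIVEN_ACT[act].get(obj, 0.05)
        scores[act] = p
    total = sum(scores.values())
    return {act: p / total for act, p in scores.items()}

print(posterior(["kettle", "mug"]))   # "make_tea" should dominate
```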


ubiquitous computing | 2006

Mobility detection using everyday GSM traces

Timothy Sohn; Alex Varshavsky; Anthony LaMarca; Mike Y. Chen; Tanzeem Choudhury; Ian E. Smith; Sunny Consolvo; Jeffrey Hightower; William G. Griswold; Eyal de Lara

Recognition of everyday physical activities is difficult due to the challenges of building informative, yet unobtrusive sensors. The most widely deployed and used mobile computing device today is the mobile phone, which presents an obvious candidate for recognizing activities. This paper explores how coarse-grained GSM data from mobile phones can be used to recognize high-level properties of user mobility as well as daily step count. We demonstrate that even without knowledge of observed cell tower locations, we can recognize mobility modes that are useful for several application domains. Our mobility detection system was evaluated with GSM traces from the everyday lives of three data collectors over a period of one month, yielding an overall average accuracy of 85% and daily step counts that reasonably approximate the numbers determined by several commercial pedometers.
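
A minimal sketch of the kind of coarse GSM features involved: the rate of serving-cell changes and the spread of signal strength over a window, with hypothetical thresholds. The paper's actual detector and feature set are not reproduced here.

```python
def mobility_mode(readings):
    """
    Classify a window of GSM readings into a coarse mobility mode.
    Each reading is (cell_id, signal_dbm). Thresholds are hypothetical.
    """
    cell_ids = [cid for cid, _ in readings]
    strengths = [dbm for _, dbm in readings]
    # Fraction of consecutive readings where the serving cell changed.
    changes = sum(a != b for a, b in zip(cell_ids, cell_ids[1:]))
    change_rate = changes / max(len(cell_ids) - 1, 1)
    # Spread of signal strength hints at small-scale movement.
    spread = max(strengths) - min(strengths)

    if change_rate > 0.3:
        return "driving"      # rapid handoffs between towers
    if spread > 10:
        return "walking"      # same towers, but fluctuating signal
    return "stationary"

print(mobility_mode([("A", -71), ("A", -73), ("A", -70), ("A", -72)]))
```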


international symposium on wearable computers | 2003

Sensing and modeling human networks using the sociometer

Tanzeem Choudhury; Alex Pentland

Knowledge of how people interact is important in many disciplines, e.g. organizational behavior, social network analysis, information diffusion and knowledge management applications. We are developing methods to automatically and unobtrusively learn the social network structures that arise within human groups based on wearable sensors. At present researchers mainly have to rely on questionnaires, surveys or diaries in order to obtain data on physical interactions between people. In this paper, we show how sensor measurements from the sociometer can be used to build computational models of group interactions. We present results on how we can learn the structure of face-to-face interactions within groups, detect when members are in face-to-face proximity and also when they are having a conversation.
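
As a simple illustration of turning proximity detections into a network structure, the sketch below aggregates hypothetical per-timestep face-to-face detections into a weighted interaction graph; the sociometer's actual sensing (IR, audio) and modeling are more involved.

```python
from collections import Counter
from itertools import combinations

def interaction_graph(detections):
    """
    Aggregate per-timestep face-to-face proximity detections into an
    undirected weighted interaction graph (edge weight = co-presence count).
    `detections` is a list of sets of person IDs detected together.
    """
    edges = Counter()
    for group in detections:
        for a, b in combinations(sorted(group), 2):
            edges[(a, b)] += 1
    return edges

# Example: three time steps of hypothetical proximity readings.
print(interaction_graph([{"alice", "bob"}, {"alice", "bob", "carol"}, {"bob", "carol"}]))
```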


acm special interest group on data communication | 2010

NeuroPhone: brain-mobile phone interface using a wireless EEG headset

Andrew T. Campbell; Tanzeem Choudhury; Shaohan Hu; Hong Lu; Matthew K. Mukerjee; Mashfiqui Rabbi; Rajeev D. S. Raizada

Neural signals are everywhere just like mobile phones. We propose to use neural signals to control mobile phones for hands-free, silent and effortless human-mobile interaction. Until recently, devices for detecting neural signals have been costly, bulky and fragile. We present the design, implementation and evaluation of the NeuroPhone system, which allows neural signals to drive mobile phone applications on the iPhone using cheap off-the-shelf wireless electroencephalography (EEG) headsets. We demonstrate a brain-controlled address book dialing app, which works on similar principles to P300-speller brain-computer interfaces: the phone flashes a sequence of photos of contacts from the address book and a P300 brain potential is elicited when the flashed photo matches the person whom the user wishes to dial. EEG signals from the headset are transmitted wirelessly to an iPhone, which natively runs a lightweight classifier to discriminate P300 signals from noise. When a person's contact-photo triggers a P300, his/her phone number is automatically dialed. NeuroPhone breaks new ground as a brain-mobile phone interface for ubiquitous pervasive computing. We discuss the challenges in making our initial prototype more practical, robust, and reliable as part of our on-going research.
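
The selection step can be illustrated with a crude sketch: average the EEG epochs time-locked to each contact's flashes and pick the contact with the largest positivity near 300 ms. NeuroPhone uses a trained lightweight classifier rather than this peak heuristic; all names and numbers below are illustrative.

```python
def pick_contact(epochs_by_contact, sample_rate_hz=128):
    """
    epochs_by_contact: {contact_name: [epoch, ...]} where each epoch is a list
    of EEG samples time-locked to that contact's photo flash.
    Returns the contact whose averaged epoch has the largest amplitude around
    300 ms post-stimulus (a crude stand-in for a trained P300 classifier).
    """
    idx = int(0.3 * sample_rate_hz)   # sample index nearest 300 ms
    best, best_score = None, float("-inf")
    for contact, epochs in epochs_by_contact.items():
        n = len(epochs)
        avg = [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]
        score = max(avg[max(idx - 5, 0): idx + 5])   # peak near 300 ms
        if score > best_score:
            best, best_score = contact, score
    return best

# Hypothetical two-contact example with a synthetic P300-like bump for "mom".
flat = [0.0] * 64
bump = flat[:]
bump[38] = 5.0                         # ~300 ms at 128 Hz
print(pick_contact({"mom": [bump, bump], "work": [flat, flat]}))
```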


ubiquitous computing | 2011

Enabling large-scale human activity inference on smartphones using community similarity networks (CSN)

Nicholas D. Lane; Ye Xu; Hong Lu; Shaohan Hu; Tanzeem Choudhury; Andrew T. Campbell; Feng Zhao

Sensor-enabled smartphones are opening a new frontier in the development of mobile sensing applications. The recognition of human activities and context from sensor-data using classification models underpins these emerging applications. However, conventional approaches to training classifiers struggle to cope with the diverse user populations routinely found in large-scale popular mobile applications. Differences between users (e.g., age, sex, behavioral patterns, lifestyle) confuse classifiers, which assume everyone is the same. To address this, we propose Community Similarity Networks (CSN), which incorporates inter-person similarity measurements into the classifier training process. Under CSN every user has a unique classifier that is tuned to their own characteristics. CSN exploits crowd-sourced sensor-data to personalize classifiers with data contributed from other similar users. This process is guided by similarity networks that measure different dimensions of inter-person similarity. Our experiments show CSN outperforms existing approaches to classifier training under the presence of population diversity.
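
A minimal sketch of the similarity-weighting idea: build a per-user training set in which other users' crowd-sourced samples are weighted by inter-person similarity. The similarity values, user IDs, and data format are hypothetical; CSN's actual similarity networks and training procedure are described in the paper.

```python
def personalized_training_set(target_user, labeled_data, similarity):
    """
    labeled_data: {user: [(features, label), ...]} crowd-sourced samples.
    similarity:   {(user_a, user_b): value in [0, 1]} inter-person similarity.
    Returns a weighted training set for `target_user`, where other users'
    samples are down-weighted according to how dissimilar those users are.
    """
    weighted = []
    for user, samples in labeled_data.items():
        w = 1.0 if user == target_user else similarity.get((target_user, user), 0.0)
        if w <= 0.0:
            continue
        weighted.extend((x, y, w) for x, y in samples)
    return weighted   # feed (features, label, sample_weight) into any weighted classifier

data = {"u1": [([0.2], "walk")], "u2": [([0.9], "run")]}
sim = {("u1", "u2"): 0.3}
print(personalized_training_set("u1", data, sim))
```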


ubiquitous computing | 2011

Passive and In-Situ assessment of mental and physical well-being using mobile sensors

Mashfiqui Rabbi; Shahid Ali; Tanzeem Choudhury; Ethan M. Berke

The idea of continuously monitoring well-being using mobile-sensing systems is gaining popularity. In-situ measurement of human behavior has the potential to overcome the shortcomings of gold-standard surveys that have been used for decades by the medical community. However, current sensing systems have mainly focused on tracking physical health; some have approximated aspects of mental health based on proximity measurements but have not been compared against medically accepted screening instruments. In this paper, we show the feasibility of a multi-modal mobile sensing system to simultaneously assess mental and physical health. By continuously capturing fine-grained motion and privacy-sensitive audio data, we are able to derive different metrics that reflect the results of surveys commonly used by the medical community to assess well-being. In addition, we present a case study that highlights how errors in assessment due to the subjective nature of the responses could potentially be avoided by continuous mobile sensing.
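
As an illustration of deriving simple behavioral metrics from continuously sensed data, the sketch below computes a physical-activity count and a speech (sociability) fraction from per-minute summaries. The thresholds and metric names are hypothetical assumptions, not the paper's validated measures.

```python
def daily_metrics(accel_minutes, speech_flags):
    """
    accel_minutes: per-minute accelerometer variance values for one day.
    speech_flags:  per-minute booleans from an on-phone speech detector.
    Returns two simple behavioral metrics of the kind a well-being system
    might track; threshold and names here are illustrative only.
    """
    ACTIVE_VARIANCE = 0.05   # hypothetical "physically active" threshold
    active_minutes = sum(v > ACTIVE_VARIANCE for v in accel_minutes)
    social_fraction = sum(speech_flags) / max(len(speech_flags), 1)
    return {"active_minutes": active_minutes, "social_fraction": social_fraction}

print(daily_metrics([0.01, 0.2, 0.3, 0.02], [True, False, True, True]))
```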


international conference on pervasive computing | 2009

Exploring Privacy Concerns about Personal Sensing

Predrag Klasnja; Sunny Consolvo; Tanzeem Choudhury; Richard Beckwith; Jeffrey Hightower

More and more personal devices such as mobile phones and multimedia players use embedded sensing. This means that people are wearing and carrying devices capable of sensing details about them such as their activity, location, and environment. In this paper, we explore privacy concerns about such personal sensing through interviews with 24 participants who took part in a three-month study that used personal sensing to detect their physical activities. Our results show that concerns often depended on what was being recorded, the context in which participants worked and lived and thus would be sensed, and the value they perceived would be provided. We suggest ways in which personal sensing can be made more privacy-sensitive to address these concerns.

Collaboration


Dive into Tanzeem Choudhury's collaborations. Top co-authors include:

Alex Pentland

Massachusetts Institute of Technology

Danny Wyatt

University of Washington
