Publication


Featured research published by Tauhidur Rahman.


international conference on mobile systems, applications, and services | 2014

BodyBeat: a mobile system for sensing non-speech body sounds

Tauhidur Rahman; Alexander Travis Adams; Mi Zhang; Erin Cherry; Bobby Zhou; Huaishu Peng; Tanzeem Choudhury

In this paper, we propose BodyBeat, a novel mobile sensing system for capturing and recognizing a diverse range of non-speech body sounds in real-life scenarios. Non-speech body sounds, such as sounds of food intake, breath, laughter, and cough contain invaluable information about our dietary behavior, respiratory physiology, and affect. The BodyBeat mobile sensing system consists of a custom-built piezoelectric microphone and a distributed computational framework that utilizes an ARM microcontroller and an Android smartphone. The custom-built microphone is designed to capture subtle body vibrations directly from the body surface without being perturbed by external sounds. The microphone is attached to a 3D printed neckpiece with a suspension mechanism. The ARM embedded system and the Android smartphone process the acoustic signal from the microphone and identify non-speech body sounds. We have extensively evaluated the BodyBeat mobile sensing system. Our results show that BodyBeat outperforms other existing solutions in capturing and recognizing different types of important non-speech body sounds.
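The pipeline the abstract describes (a body-contact microphone feeding frame-level acoustic features into a classifier) can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the frame sizes, thresholds, and the rule-based classifier are stand-ins for the trained models the paper actually evaluates.

```python
import math

def frame_features(samples, frame_len=256, hop=128):
    """Split an audio signal into overlapping frames and compute two
    lightweight features per frame: log-energy and zero-crossing rate
    (ZCR). Low-frequency body sounds (chewing, breathing) tend to show
    high energy with low ZCR compared to ambient noise."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energy = math.log(sum(s * s for s in frame) + 1e-10)
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

def classify_frames(feats, energy_thresh=-2.0, zcr_thresh=0.1):
    """Rule-based stand-in for the paper's trained classifier: label a
    frame 'body_sound' when it is energetic but low in ZCR."""
    return ['body_sound' if e > energy_thresh and z < zcr_thresh else 'other'
            for e, z in feats]

# Synthetic demo: a slow high-amplitude oscillation (body-sound-like)
# followed by faint, rapidly oscillating noise-like content.
slow = [math.sin(2 * math.pi * 4 * t / 256) for t in range(1024)]
fast = [0.01 * math.sin(2 * math.pi * 100 * t / 256) for t in range(1024)]
labels = classify_frames(frame_features(slow + fast))
```

In the real system this two-feature rule would be replaced by the richer acoustic features and learned classifier running across the ARM microcontroller and the smartphone.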


ubiquitous computing | 2015

DoppleSleep: a contactless unobtrusive sleep sensing system using short-range Doppler radar

Tauhidur Rahman; Alexander Travis Adams; Ruth Ravichandran; Mi Zhang; Shwetak N. Patel; Julie A. Kientz; Tanzeem Choudhury

In this paper, we present DoppleSleep -- a contactless sleep sensing system that continuously and unobtrusively tracks sleep quality using commercial off-the-shelf radar modules. DoppleSleep provides a single-sensor solution that tracks sleep-related physical and physiological variables, including coarse body movements as well as the subtle, fine-grained chest and heart movements due to breathing and heartbeat. By integrating vital-sign and body-movement sensing, DoppleSleep achieves 89.6% recall for Sleep vs. Wake classification and 80.2% recall for REM vs. Non-REM classification compared to EEG-based sleep sensing. It also provides several objective sleep quality measurements, including sleep onset latency, number of awakenings, and sleep efficiency. The contactless nature of DoppleSleep obviates the need to instrument the user's body with sensors. Lastly, DoppleSleep is implemented on an ARM microcontroller and a smartphone application, which are benchmarked in terms of power and resource usage.
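The core signal-processing idea (separating slow breathing motion from faster heartbeat motion in the radar's chest-displacement signal) can be sketched with a single-bin DFT scan over two frequency bands. The sampling rate, band edges, and synthetic signal below are illustrative assumptions, not values from the paper.

```python
import math

def band_peak_freq(signal, fs, f_lo, f_hi, df=0.05):
    """Scan a frequency band with single-bin DFTs and return the
    frequency with the largest magnitude: a coarse stand-in for the
    spectral step that separates breathing (~0.1-0.5 Hz) from
    heartbeat (~0.8-2 Hz) in the chest-displacement signal."""
    best_f, best_mag = f_lo, -1.0
    f = f_lo
    while f <= f_hi + 1e-9:
        re = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_f, best_mag = f, mag
        f += df
    return best_f

fs = 20.0  # Hz, assumed radar sampling rate
t = [k / fs for k in range(int(30 * fs))]  # 30-second analysis window
# Synthetic chest displacement: breathing at 0.25 Hz plus a much
# weaker heartbeat component at 1.2 Hz.
chest = [math.sin(2 * math.pi * 0.25 * x) + 0.1 * math.sin(2 * math.pi * 1.2 * x)
         for x in t]

breath_hz = band_peak_freq(chest, fs, 0.1, 0.5)   # ~0.25 Hz -> 15 breaths/min
heart_hz = band_peak_freq(chest, fs, 0.8, 2.0)    # ~1.2 Hz -> 72 beats/min
```

Restricting each search to its physiological band is what lets the weak heartbeat component be found despite the much stronger breathing motion.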


international conference on acoustics, speech, and signal processing | 2012

A personalized emotion recognition system using an unsupervised feature adaptation scheme

Tauhidur Rahman; Carlos Busso

A personalized emotion recognition system aims to tune the model to recognize the expressive behaviors of a targeted person. Such a system can play an important role in various domains including call center and health care applications. Adapting any general emotion recognition system for a particular individual requires speech samples and prior knowledge about their emotional content. These assumptions constrain the use of these techniques in many real scenarios in which no annotated data is available to train or adapt the models. To address this problem, this paper introduces an unsupervised feature adaptation scheme that aims to reduce the mismatch between the acoustic features used to train the system and the acoustic features extracted from the unknown targeted speaker. The adaptation scheme uses our recently proposed iterative feature normalization (IFN) framework. An emotion detection system is trained with the IEMOCAP database. For testing, a database was created by downloading videos from a video-sharing website, containing various interviews from a targeted subject (1.5 hours). The detection system is used to identify emotional speech with and without the proposed feature adaptation scheme. The experimental results indicate that the proposed approach improves the unweighted accuracy from 50.8% to 70.0%.
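The iterative feature normalization loop can be sketched as follows. This is a schematic reading of the IFN idea (alternate between detecting neutral frames and re-estimating normalization statistics from them), not the authors' implementation; the one-dimensional features, the `detect_neutral` callable, and its threshold are all hypothetical.

```python
def iterative_feature_normalization(feats, detect_neutral, n_iter=5):
    """Sketch of the IFN loop: (1) run a neutral-vs-emotional detector
    on the current features, (2) estimate normalization statistics
    from the frames judged neutral, (3) renormalize and repeat, so the
    normalization and the detections refine each other."""
    normed = list(feats)
    for _ in range(n_iter):
        neutral = [x for x in normed if detect_neutral(x)]
        if len(neutral) < 2:
            break
        mean = sum(neutral) / len(neutral)
        std = (sum((x - mean) ** 2 for x in neutral) / len(neutral)) ** 0.5
        if std == 0:
            break
        normed = [(x - mean) / std for x in normed]
    return normed

# Hypothetical 1-D acoustic feature (say, a pitch statistic) per
# frame; the toy detector treats low values as neutral speech.
frames = [4.8, 5.0, 5.2, 9.0]
normed = iterative_feature_normalization(frames, detect_neutral=lambda x: x < 6.0)
```

The point of iterating is that statistics estimated only from frames believed to be neutral are not skewed by the emotional frames, which is what lets the unknown target speaker be normalized without any annotated data.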


international conference on robotics and automation | 2012

Indoor robotic terrain classification via angular velocity based hierarchical classifier selection

David Tick; Tauhidur Rahman; Carlos Busso; Nicholas R. Gans

This paper proposes a novel approach to terrain classification by wheeled mobile robots that utilizes vibration data. In our approach, a mobile robot can categorize terrain types simply by driving over them. Classification of terrain is based on measurements obtained from an inertial measurement unit strapped directly to the robot's chassis. In contrast to previous approaches, we use acceleration and angular velocity measurements in all cardinal directions to extract over 800 features. Sequential Forward Floating Feature Selection is used to narrow this large group of features down to a set of 15 to 20 that are the most useful. The reduced set of features is used by a Linear Bayes Normal Classifier to classify terrain. Furthermore, different feature sets are generated for different velocity conditions, and the classifier switches based on the current robot velocity. Experimental results show the strong performance of the proposed system, including 90% accuracy over 20 continuous minutes of driving across different terrains.
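The feature-selection step (narrowing ~800 features down to 15-20) can be sketched as a greedy forward pass under a wrapper score. Note this is only the forward half: the full SFFS algorithm used in the paper also interleaves conditional backward-removal steps, and the toy per-feature utilities below are invented for illustration.

```python
def forward_select(features, score, k):
    """Greedy forward pass of wrapper feature selection: repeatedly add
    the candidate that maximizes the score of the selected subset,
    stopping at k features or when no candidate improves the score.
    (Full SFFS also tries removing previously selected features after
    each addition; omitted here for brevity.)"""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no candidate improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy wrapper score: invented per-feature utilities minus a complexity
# penalty that grows with the square of the subset size. In the paper
# the score would be cross-validated classifier accuracy.
util = {'a': 3.0, 'b': 2.0, 'c': 0.4, 'd': 0.1}
score = lambda subset: sum(util[f] for f in subset) - 0.3 * len(subset) ** 2
selected = forward_select(list(util), score, k=3)
```

With these utilities the search adds 'a' and 'b', then stops early because adding 'c' would lower the penalized score, mirroring how SFFS settles on a small subset of the 800 candidates.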


human factors in computing systems | 2015

Biogotchi!: An Exploration of Plant-Based Information Displays

Jacqueline T. Chien; François Guimbretière; Tauhidur Rahman; Mark Matthews

In this paper, we discuss the opportunity to use plants as living information displays. This work focuses on systematic plant manipulation for affective individual feedback. Building on centuries of explicit plant manipulation and recent work in HCI, we explore the combination of personal informatics and plant-mediated feedback. We argue that plant-based information displays could offer affective, multi-sensorial and sometimes ambiguous signs for users. We describe our plant manipulation system and report the results of four experiments in this novel design space. We provide guidelines and suggestions for how designers can incorporate plant-based information displays into their work and conclude by exploring specific domains where plant-based displays could be effective as information displays for personal behavior, harnessing their accepted use in everyday settings and affective affordances.


international conference on embedded networked sensor systems | 2016

Nutrilyzer: A Mobile System for Characterizing Liquid Food with Photoacoustic Effect

Tauhidur Rahman; Alexander Travis Adams; Perry Schein; Aadhar Jain; David Erickson; Tanzeem Choudhury

In this paper, we propose Nutrilyzer, a novel mobile sensing system for characterizing the nutrients and detecting adulterants in liquid food using the photoacoustic effect. By listening to the sound produced by intensity-modulated light at different wavelengths, our mobile photoacoustic sensing system captures the unique spectra produced by light transmitted and scattered while passing through various liquid foods. As liquid foods with different chemical compositions yield uniquely different spectral signatures, Nutrilyzer's signal processing and machine learning algorithms learn to map the photoacoustic signature to various liquid food characteristics, including nutrients and adulterants. We evaluated Nutrilyzer for milk nutrient prediction (i.e., milk protein) and milk adulterant detection, and also explored it for alcohol concentration prediction. The Nutrilyzer mobile system consists of an array of 16 LEDs in the ultraviolet, visible, and near-infrared regions, two piezoelectric sensors, and an ARM microcontroller unit, designed and fabricated on a printed circuit board with a 3D-printed photoacoustic housing.
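The last step of the pipeline (mapping a photoacoustic signature to a nutrient concentration) is, at its simplest, a calibration regression. The sketch below fits ordinary least squares to invented calibration points; the real system learns from multi-wavelength spectra, and every number here is a hypothetical stand-in.

```python
def fit_linear(xs, ys):
    """Ordinary least squares y = a*x + b: a minimal stand-in for the
    learned model mapping a photoacoustic spectral feature to a
    nutrient concentration (e.g., milk protein %)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration data: photoacoustic amplitude at one LED
# wavelength vs. known protein concentration of reference samples.
amps = [0.10, 0.20, 0.30, 0.40]
protein = [1.0, 2.0, 3.0, 4.0]  # percent
a, b = fit_linear(amps, protein)
predicted = a * 0.25 + b  # protein estimate for an unseen sample
```

With 16 LED wavelengths the single feature would become a 16-dimensional spectrum and the fit a multivariate regression, but the calibrate-then-predict structure is the same.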


GetMobile: Mobile Computing and Communications | 2015

BodyBeat: Eavesdropping on our Body Using a Wearable Microphone

Tauhidur Rahman; Alexander Travis Adams; Mi Zhang; Erin Cherry; Tanzeem Choudhury

From munching on a piece of toast and swallowing a sip of coffee to deep breathing after a few laps of running, our body continually makes a wide range of non-speech body sounds, which can be indicative of our dietary behaviour, respiratory physiology, and affect. A wearable system that can continuously capture and recognize different types of body sound with high fidelity can also be used for behavioural tracking and disease diagnosis. BodyBeat is such a mobile sensing system: it can detect a diverse range of non-speech body sounds in real-life scenarios. The BodyBeat mobile sensing system consists of a custom-built piezoelectric microphone and a distributed computational framework that utilizes an ARM microcontroller and an Android smartphone. The custom-built microphone is designed to capture subtle body vibrations directly from the body surface without being disturbed by external sounds. The ARM embedded system and the Android smartphone process the acoustic signal from the microphone and identify non-speech body sounds.
Speech is not the only sound generated by humans. Non-speech body sounds such as sounds of food intake, breath, laughter, yawns, and coughs contain invaluable information about people's health and wellbeing. With regard to food intake, body sounds enable us to discriminate characteristics of food and drinks [1, 2]. Longer-term tracking of eating sounds could be very useful in dietary monitoring applications. Breathing sounds, generated by the friction caused by the airflow from our lungs through the vocal organs (e.g., trachea, larynx) to the mouth or nasal cavity [3], are highly indicative of the condition of our lungs. Sounds of laughter and yawns are good indicators of people's affective states such as happiness and fatigue.
Therefore, automatically tracking these non-speech body sounds can help in early detection of negative health indicators by enabling regular dietary monitoring, pulmonary function testing, and affect sensing. We have designed, implemented, and evaluated a mobile sensing system called BodyBeat, which can continuously keep track of a diverse set of non-speech body sounds. BodyBeat consists of a custom-made piezoelectric sensor-based microphone, an ARM microcontroller, and an Android smartphone.


international symposium on wearable computers | 2015

Real time heart rate and breathing detection using commercial motion sensors

Ruth Ravichandran; Tauhidur Rahman; Alexander Travis Adams; Tanzeem Choudhury; Julie A. Kientz; Shwetak N. Patel

In this demo, we present a contactless breathing and heart rate sensing system that continuously and unobtrusively tracks physiological signals using commercial off-the-shelf radar modules. Our system provides a single-sensor solution to track physical and physiological variables, including coarse body movements as well as subtle and fine-grained chest movements due to breathing and heartbeat. Continuous tracking of these physiological variables, especially throughout the night, can be used for sleep stage mining.


international conference on pervasive computing | 2014

Towards personal stress informatics: comparing minimally invasive techniques for measuring daily stress in the wild

Phil Adams; Mashfiqui Rabbi; Tauhidur Rahman; Mark Matthews; Amy Voida; Tanzeem Choudhury; Stephen Voida


conference of the international speech communication association | 2012

Unveiling the acoustic properties that describe the valence dimension

Carlos Busso; Tauhidur Rahman

Collaboration


Dive into Tauhidur Rahman's collaborations.

Top Co-Authors

Carlos Busso

University of Texas at Dallas

Mi Zhang

Michigan State University


Stephen Voida

University of Colorado Boulder


Erin Cherry

University of Rochester
