Harishchandra Dubey
University of Texas at Dallas
Publication
Featured research published by Harishchandra Dubey.
arXiv: Computers and Society | 2015
Harishchandra Dubey; Jing Yang; Nick Constant; Amir Mohammad Amiri; Qing Yang; Kunal Makodiya
The size of multi-modal, heterogeneous data collected through various sensors is growing exponentially. It demands intelligent data reduction, data mining and analytics at edge devices. Data compression can reduce the network bandwidth and transmission power consumed by edge devices. This paper proposes, validates and evaluates Fog Data, a service-oriented architecture for Fog computing. The centerpiece of the proposed architecture is a low-power embedded computer that carries out data mining and data analytics on raw data collected from various wearable sensors used for telehealth applications. The embedded computer collects the sensed data as time series, analyzes them, and finds the similar patterns present. Patterns are stored, and only unique patterns are transmitted. The embedded computer also extracts clinically relevant information that is sent to the cloud. A working prototype of the proposed architecture was built and used to carry out case studies on telehealth big data applications. Specifically, our case studies used data from sensors worn by patients with either speech motor disorders or cardiovascular problems. We implemented and evaluated both generic and application-specific data mining techniques to show orders-of-magnitude data reduction and hence transmission power savings. Quantitative evaluations compared various data mining techniques with standard data compression techniques. The results showed substantial improvement in system efficiency using the Fog Data architecture.
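The pattern-based data reduction described above can be sketched as a toy example (a minimal illustration, not the paper's implementation; the window size, tolerance, and RMS-distance similarity test are all assumptions):

```python
import numpy as np

def reduce_time_series(signal, win=50, tol=0.5):
    """Keep only windows that are not close (in RMS distance) to a
    previously stored pattern; similar windows are replaced by the
    index of the matching pattern, saving transmission bandwidth."""
    patterns, encoded = [], []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        dists = [np.sqrt(np.mean((w - p) ** 2)) for p in patterns]
        if dists and min(dists) < tol:
            encoded.append(('ref', int(np.argmin(dists))))  # transmit index only
        else:
            patterns.append(w)
            encoded.append(('new', w))                      # transmit full window
    return patterns, encoded

# A repetitive signal compresses well: later periods match stored patterns.
sig = np.sin(2 * np.pi * np.arange(1000) / 50)
patterns, encoded = reduce_time_series(sig, win=50, tol=0.1)
```

For this periodic input only one pattern is stored and the remaining 19 windows are sent as references, which is the orders-of-magnitude reduction idea in miniature.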
ieee international conference on smart computing | 2016
Admir Monteiro; Harishchandra Dubey; Leslie Mahler; Qing Yang; Kunal Mankodiya
There is an increasing demand for smart fog-computing gateways as the size of cloud data grows. This paper presents a Fog computing interface (FIT) for processing clinical speech data. FIT builds upon our previous work on EchoWear, a wearable technology that validated the use of smartwatches for collecting clinical speech data from patients with Parkinson's disease (PD). The fog interface is a low-power embedded system that acts as a smart interface between the smartwatch and the cloud. It collects, stores, and processes the speech data before sending speech features to secure cloud storage. We developed and validated a working prototype of FIT that enabled remote processing of clinical speech data to extract clinical speech features such as loudness, short-time energy, zero-crossing rate, and spectral centroid. We used speech data from six patients with PD in their homes to validate FIT. Our results showed the efficacy of FIT as a Fog interface to translate the clinical speech processing chain (CLIP) from a cloud-based backend to a fog-based smart gateway.
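The frame-level speech features named in the abstract (short-time energy, zero-crossing rate, spectral centroid) can be computed with a few lines of NumPy. This is an illustrative sketch; the frame and hop lengths are assumptions, not FIT's actual parameters:

```python
import numpy as np

def frame_features(x, sr, frame_len=400, hop=160):
    """Per-frame short-time energy, zero-crossing rate, and spectral
    centroid -- three of the clinical speech features mentioned above."""
    feats = []
    for start in range(0, len(x) - frame_len + 1, hop):
        f = x[start:start + frame_len]
        energy = np.sum(f ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(f)))) / 2  # sign changes per sample
        mag = np.abs(np.fft.rfft(f))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        feats.append((energy, zcr, centroid))
    return np.array(feats)

# Sanity check on a 1 kHz tone: every frame's spectral centroid sits at 1 kHz.
sr = 16000
t = np.arange(sr) / sr
feats = frame_features(np.sin(2 * np.pi * 1000 * t), sr)
```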
ieee uttar pradesh section international conference on electrical computer and electronics engineering | 2016
Rabindra K. Barik; Harishchandra Dubey; Arun Baran Samaddar; R. D. Gupta; Prakash K. Ray
Cloud Geographic Information Systems (GIS) have emerged as a tool for the analysis, processing and transmission of geospatial data. Fog computing is a paradigm in which Fog devices help to increase throughput and reduce latency at the edge, close to the client. This paper develops a Fog-computing-based framework named FogGIS for mining analytics from geospatial data. A prototype has been built using the Intel Edison, an embedded microprocessor. FogGIS has been validated through preliminary analysis, including compression and overlay analysis. Results showed that Fog computing holds great promise for the analysis of geospatial data. Several open-source compression techniques have been used for reducing the transmission to the cloud.
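The kind of reduction FogGIS performs before upload can be illustrated with standard lossless compression applied to a GeoJSON-style payload (the feature geometry here is synthetic and the codec choice is an assumption; FogGIS evaluates several open-source techniques):

```python
import json
import zlib

# Hypothetical coordinate list standing in for a geospatial layer.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[round(81.6 + i * 1e-4, 4), round(25.4 + i * 1e-4, 4)]
                        for i in range(2000)],
    },
}
raw = json.dumps(feature).encode()
packed = zlib.compress(raw, level=9)   # what the fog node would send upstream
ratio = len(raw) / len(packed)
```

Coordinate-heavy text layers are highly redundant, so even generic DEFLATE typically shrinks the upstream payload by several times, cutting the edge device's transmission cost.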
international conference on e health networking application services | 2015
Harishchandra Dubey; J. Cody Goldberg; Kunal Mankodiya; Leslie Mahler
Speech-language pathologists (SLPs) frequently use vocal exercises in the treatment of patients with speech disorders. Patients receive treatment in a clinical setting and need to practice outside of the clinical setting to generalize speech goals to functional communication. In this paper, we describe the development of technology that captures mixed speech signals in a group setting and allows the SLP to analyze the speech signals relative to treatment goals. The mixed speech signals are blindly separated into individual signals that are preprocessed before computation of loudness, pitch, shimmer, jitter, semitone standard deviation and sharpness. The proposed method has been previously validated on data obtained from clinical trials of people with Parkinson disease and healthy controls.
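Two of the voice measures listed above, jitter and shimmer, have simple "local" definitions over cycle-to-cycle period and peak-amplitude sequences. A hedged sketch (it assumes the pitch cycles have already been extracted from the separated signal):

```python
import numpy as np

def jitter_shimmer(periods_s, peak_amps):
    """Local jitter (%) and local shimmer (%) from consecutive pitch-cycle
    periods (seconds) and peak amplitudes: mean absolute cycle-to-cycle
    difference, normalized by the mean value."""
    periods = np.asarray(periods_s, dtype=float)
    amps = np.asarray(peak_amps, dtype=float)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100
    shimmer = np.mean(np.abs(np.diff(amps))) / np.mean(amps) * 100
    return jitter, shimmer

# Slightly unsteady phonation yields small but nonzero jitter and shimmer.
j, s = jitter_shimmer([0.0100, 0.0102, 0.0099, 0.0101],
                      [1.00, 0.98, 1.02, 1.00])
```

A perfectly steady sequence gives 0 % for both measures, which is a useful sanity check when implementing the pipeline.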
arXiv: Distributed, Parallel, and Cluster Computing | 2017
Harishchandra Dubey; Admir Monteiro; Nicholas Constant; Mohammadreza Abtahi; Debanjan Borthakur; Leslie Mahler; Yan Sun; Qing Yang; Umer Akbar; Kunal Mankodiya
In an era when the market segment of the Internet of Things (IoT) tops the chart in various business reports, the field of medicine is envisioned to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us, acquiring and communicating unprecedented data on symptoms, medication, food intake, and daily-life activities impacting one's health and wellness. However, IoT-driven healthcare has to overcome many barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) the communication of continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog computing in the context of medical IoT. Conceptually, Fog computing is a service-oriented intermediate layer in IoT, providing the interfaces between the sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors and offers efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage and communication of various medical data such as pathological speech data of individuals with speech disorders, the phonocardiogram (PCG) signal for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection.
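The ECG-based Q, R, S detection mentioned above can be illustrated with a deliberately naive R-peak detector (a stand-in for the chapter's actual method, which is not detailed here; real deployments typically use something like the Pan-Tompkins algorithm):

```python
import numpy as np

def detect_r_peaks(ecg, fs, thresh_frac=0.6, refractory_s=0.25):
    """Naive R-peak detector: local maxima above a fraction of the global
    maximum, separated by at least a refractory period. Threshold and
    refractory values are assumptions for illustration."""
    thresh = thresh_frac * np.max(ecg)
    min_gap = int(refractory_s * fs)
    peaks, last = [], -min_gap
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thresh and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
    return np.array(peaks)

# Synthetic ECG-like train: one sharp spike per second at fs = 250 Hz.
fs = 250
sig = np.zeros(10 * fs)
sig[fs // 2::fs] = 1.0          # R peaks at 0.5 s, 1.5 s, ...
peaks = detect_r_peaks(sig, fs)
```

On a fog node, the inter-peak intervals from such a detector are all that needs to go to the cloud for heart-rate trending, rather than the raw waveform.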
cooperative and human aspects of software engineering | 2016
Harishchandra Dubey; Matthias R. Mehl; Kunal Mankodiya
This paper presents a novel big data framework, BigEAR, that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected while the user holds social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify the moods of the wearer from various activities such as laughing, singing, crying, arguing, and sighing. These annotations are based on ground truth relevant for psychologists who intend to monitor or infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to understand how their conversations affect emotional and social well-being. With state-of-the-art methods, psychologists and their teams have to listen to the audio recordings and make these inferences by subjective evaluation, which is not only time-consuming and costly but also demands manual data coding for thousands of audio files. The BigEAR framework automates this audio analysis. We computed the accuracy of BigEAR with respect to the ground truth obtained from a human rater. Our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer.
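The mood-labelling idea, mapping per-segment acoustic feature vectors to human-annotated categories, can be sketched with a nearest-centroid classifier on synthetic features (purely illustrative; BigEAR's PAPC features, labels, and classifier are not specified in this abstract):

```python
import numpy as np

# Toy stand-in: each audio segment is a small feature vector (e.g. energy,
# voicing rate), and labels come from human-annotated training segments.
# All numbers below are synthetic, chosen only to make the classes separable.
train_feats = np.array([[0.90, 0.30], [0.80, 0.28],   # class 0, e.g. 'laughing'
                        [0.20, 0.05], [0.30, 0.07]])  # class 1, e.g. 'sighing'
train_labels = np.array([0, 0, 1, 1])

centroids = np.array([train_feats[train_labels == k].mean(axis=0)
                      for k in (0, 1)])

def classify(feat):
    """Assign the label of the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - feat, axis=1)))

pred = classify(np.array([0.85, 0.29]))   # an energetic, high-voicing segment
```

The point of automating this step is the one the abstract makes: replacing subjective, per-file human coding with a repeatable mapping from features to mood labels.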
international conference on contemporary computing | 2016
Rakesh K. Lenka; Rabindra K. Barik; Noopur Gupta; Syed Mohd Ali; Amiya Kumar Rath; Harishchandra Dubey
In this digitalised world where all information is stored, data are growing exponentially; it is estimated that the volume of data doubles every two years. Geospatial data are one of the prime contributors to the big data scenario. There are numerous big data analytics tools, but not all of them are capable of handling geospatial big data. This paper discusses two recent, popular open-source geospatial big data analytics tools, SpatialHadoop and GeoSpark, which can be used to analyze and process geospatial big data efficiently. It compares the architectures of SpatialHadoop and GeoSpark and, through this architectural comparison, summarises the merits and demerits of each tool with respect to execution time and the volume of data used.
ambient intelligence | 2018
Harishchandra Dubey; Ramdas Kumaresan; Kunal Mankodiya
Wearable photoplethysmography (PPG) has recently become a common technology in heart rate (HR) monitoring. The general observation is that motion artifacts change the statistics of the acquired PPG signal, so estimating HR from such a corrupted PPG signal is challenging. However, if an accelerometer is also used to acquire the acceleration signal simultaneously, it can provide helpful information for reducing the motion artifacts in the PPG signal. Owing to the repetitive movements of the subject's hands while running, the accelerometer signal is found to be quasi-periodic. Over short time intervals, it can be modeled by a finite harmonic sum (HSUM). Using the HSUM model, we obtain an estimate of the instantaneous fundamental frequency of the accelerometer signal. Since the PPG signal is a composite of the heart rate information (which is also quasi-periodic) and the motion artifact, we fit a joint HSUM model to the PPG signal: one of the harmonic sums corresponds to the heart-beat component in the PPG and the other models the motion artifact, whose fundamental frequency has already been determined from the accelerometer signal. Subsequently, the HR is estimated from the joint HSUM model. The mean absolute error in HR estimates was 0.7359 beats per minute (BPM) with a standard deviation of 0.8328 BPM on the 2015 IEEE Signal Processing Cup data. The ground-truth HR was obtained from the simultaneously acquired ECG to validate the accuracy of the proposed method. The proposed method is compared with four recently developed methods evaluated on the same dataset.
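The harmonic-sum (HSUM) fit at the heart of the method can be sketched as a grid search over candidate fundamentals, with a least-squares fit of harmonic sinusoids at each candidate (illustrative only; the grid, harmonic count, and single-signal setting are simplifications of the paper's joint PPG model):

```python
import numpy as np

def hsum_f0(x, fs, k_harm=3, f_grid=None):
    """Estimate the fundamental frequency of a quasi-periodic frame by
    least-squares fitting a finite harmonic sum (sines and cosines at
    f0, 2*f0, ...) and picking the f0 with minimum residual energy."""
    n = len(x)
    t = np.arange(n) / fs
    if f_grid is None:
        f_grid = np.arange(0.5, 5.0, 0.01)   # plausible arm-swing range, Hz
    best_f, best_res = None, np.inf
    for f0 in f_grid:
        cols = []
        for k in range(1, k_harm + 1):
            cols.append(np.cos(2 * np.pi * k * f0 * t))
            cols.append(np.sin(2 * np.pi * k * f0 * t))
        A = np.column_stack(cols)
        coef, _, _, _ = np.linalg.lstsq(A, x, rcond=None)
        r = np.sum((x - A @ coef) ** 2)
        if r < best_res:
            best_res, best_f = r, f0
    return best_f

# Accelerometer-like signal: 2 Hz fundamental plus a weaker second harmonic.
fs = 50
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 2.0 * t) + 0.4 * np.sin(2 * np.pi * 4.0 * t)
f0_hat = hsum_f0(x, fs)
```

In the paper's joint model, a second harmonic sum with this fixed motion fundamental is fitted alongside the heart-beat component, so the HR term absorbs only what the motion term cannot explain.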
conference of the international speech communication association | 2016
Harishchandra Dubey; Lakshmish Kaushik; Abhijeet Sangwan; John H. L. Hansen
Peer-led team learning (PLTL) is a model for teaching STEM courses in which small student groups meet periodically to collaboratively discuss coursework. Automatic analysis of PLTL sessions would help education researchers gain insight into how learning outcomes are impacted by individual participation, group behavior, team dynamics, etc. Speech and language technology can help here, and speaker diarization lays the foundation for such analysis. In this study, a new corpus called CRSS-PLTL is established, containing speech data from 5 PLTL teams over a semester (10 sessions per team, with 5 to 8 participants in each team). In CRSS-PLTL, every participant wears a LENA device (a portable audio recorder), which provides multiple audio recordings of each event. Our proposed solution is unsupervised and contains a new online speaker change detection algorithm, termed the G3 algorithm, used in conjunction with Hausdorff-distance-based clustering to provide improved detection accuracy. Additionally, we exploit cross-channel information to refine our diarization hypothesis. The proposed system provides good improvements in diarization error rate (DER) over the baseline LIUM system. We also present higher-level analyses, such as the number of conversational turns taken in a session and the speaking-time duration (participation) of each speaker.
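A speaker change detector in the same spirit, though far simpler than the paper's G3 algorithm, can be sketched by comparing feature statistics of adjacent sliding windows and flagging points where the normalized distance peaks (the window size, threshold, and Gaussian toy features here are all assumptions):

```python
import numpy as np

def change_points(feats, win=50, thresh=2.0):
    """Generic distance-based change detection over a sequence of per-frame
    feature vectors: compare the means of the windows on either side of
    each frame, normalized by the pooled per-dimension spread."""
    changes = []
    for i in range(win, len(feats) - win):
        left = feats[i - win:i]
        right = feats[i:i + win]
        pooled_std = np.std(np.vstack([left, right]), axis=0) + 1e-9
        d = np.linalg.norm((left.mean(0) - right.mean(0)) / pooled_std)
        if d > thresh:
            changes.append((i, d))
    return changes

# Two synthetic 'speakers': frame features drawn around different means.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 4))
b = rng.normal(3.0, 1.0, size=(200, 4))
cps = change_points(np.vstack([a, b]))
```

Detections cluster around the true boundary at frame 200; in a full diarization system such hypothesized change points would then feed the clustering stage.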
students conference on engineering and systems | 2012
Harishchandra Dubey; Nandita; Ashutosh Kumar Tiwari
In this work, a pattern recognition system is investigated for blind automatic classification of digitally modulated communication signals. The proposed technique is able to discriminate the type of modulation scheme, which is eventually used for demodulation followed by information extraction. The proposed system is composed of two subsystems, namely a feature extraction sub-system (FESS) and a classifier sub-system (CSS). The FESS consists of the continuous wavelet transform (CWT) for feature generation and principal component analysis (PCA) for selection of the feature subset rich in discriminatory information. The CSS uses the selected features to accurately classify the modulation class of the received signal. The proposed technique uses a probabilistic neural network (PNN) and a multilayer perceptron feedforward neural network (MLPFN) for a comparative study of their recognition ability. The PNN has been found to perform better than the MLPFN in terms of classification accuracy as well as testing and training time. The proposed approach is robust to the presence of phase offset and additive Gaussian noise.
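The overall FESS-plus-CSS pipeline can be mimicked in miniature by substituting simple amplitude/phase statistics for the CWT features and a nearest-centroid rule for the PNN (a toy sketch on synthetic BPSK and 4-ASK baseband signals, not the paper's system):

```python
import numpy as np

def features(sig):
    """Stand-in features (instead of CWT magnitudes): the coefficient of
    variation of the instantaneous amplitude, plus statistics of the
    sample-to-sample phase differences of the complex baseband signal."""
    amp = np.abs(sig)
    dphase = np.angle(sig[1:] * np.conj(sig[:-1]))
    return np.array([amp.std() / (amp.mean() + 1e-12),
                     dphase.std(),
                     np.abs(dphase).mean()])

def make_signal(kind, n=1000, snr_db=20, rng=None):
    """Toy noisy baseband symbol streams for BPSK and 4-ASK."""
    if rng is None:
        rng = np.random.default_rng()
    if kind == 'bpsk':
        sym = rng.choice([-1.0, 1.0], size=n)
    else:  # '4ask'
        sym = rng.choice([-3.0, -1.0, 1.0, 3.0], size=n)
    noise_std = np.sqrt(np.mean(sym ** 2) / 10 ** (snr_db / 10) / 2)
    return sym + noise_std * (rng.normal(size=n) + 1j * rng.normal(size=n))

rng = np.random.default_rng(1)
train = {k: np.mean([features(make_signal(k, rng=rng)) for _ in range(20)], axis=0)
         for k in ('bpsk', '4ask')}

def classify(sig):
    """Nearest-centroid classifier standing in for the PNN stage."""
    return min(train, key=lambda k: np.linalg.norm(features(sig) - train[k]))

pred = classify(make_signal('4ask', rng=rng))
```

The amplitude coefficient of variation alone separates constant-envelope BPSK from multi-level 4-ASK, which is why even this crude feature set classifies reliably at moderate SNR.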