Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Seyed Ali Rokni is active.

Publication


Featured research published by Seyed Ali Rokni.


Design Automation Conference | 2016

Plug-n-learn: automatic learning of computational algorithms in human-centered internet-of-things applications

Seyed Ali Rokni; Hassan Ghasemzadeh

Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage computational and machine learning algorithms to detect events of interest such as physical activities and medical complications. A major obstacle in the large-scale utilization of current wearables is that their computational algorithms need to be rebuilt from scratch upon any change in the configuration of the network. Retraining these algorithms requires a significant amount of labeled training data, a process that is labor-intensive, time-consuming, and often infeasible. We propose an approach for automatic retraining of the machine learning algorithms in real time without the need for any labeled training data. We measure the inherent correlation between observations made by an old sensor view, for which trained algorithms exist, and the new sensor view, for which an algorithm needs to be developed. By applying our real-time multi-view autonomous learning approach, we achieve an accuracy of 80.66% in activity recognition, an improvement of 15.96% due to the automatic labeling of the data on the new sensor node. This performance is only 7.96% lower than the experimental upper bound, where labeled training data are collected with the new sensor.
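As a rough illustration of the cross-view labeling idea described in this abstract, the sketch below trains a classifier on an existing (labeled) sensor view and uses its predictions to pseudo-label time-aligned windows from a newly added sensor, which then trains the new view's own model. It uses scikit-learn and synthetic feature windows as stand-ins; the feature extraction, correlation measures, and model choices are assumptions, not the paper's implementation.

```python
# Minimal sketch of cross-view automatic labeling (illustrative, not the authors' code):
# a classifier trained on an existing sensor view labels time-aligned windows from a
# newly added sensor, and those pseudo-labels train the new view's model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for windowed features from two body-worn sensors.
n_train, n_sync, n_feat = 500, 300, 12
X_old_train = rng.normal(size=(n_train, n_feat))   # existing sensor, labeled history
y_train = rng.integers(0, 5, size=n_train)         # 5 hypothetical activity classes
X_old_sync = rng.normal(size=(n_sync, n_feat))     # existing sensor, unlabeled stream
X_new_sync = rng.normal(size=(n_sync, n_feat))     # new sensor, same time windows

# 1) "Teacher": the existing sensor's model, trained offline on labeled data.
teacher = RandomForestClassifier(n_estimators=100, random_state=0)
teacher.fit(X_old_train, y_train)

# 2) While both sensors are worn together, the teacher labels its own stream;
#    because both views observe the same activities, the labels carry over.
pseudo_labels = teacher.predict(X_old_sync)

# 3) "Learner": the new sensor's model, trained on its own features plus the
#    transferred labels, with no manually labeled data for the new sensor.
learner = RandomForestClassifier(n_estimators=100, random_state=0)
learner.fit(X_new_sync, pseudo_labels)

print("learner trained on classes:", learner.classes_)
```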


International Conference on Computer-Aided Design | 2016

Autonomous sensor-context learning in dynamic human-centered internet-of-things environments

Seyed Ali Rokni; Hassan Ghasemzadeh

Human-centered Internet-of-Things (IoT) applications utilize computational algorithms such as machine learning and signal processing techniques to infer knowledge about important events such as physical activities and medical complications. The inference is typically based on data collected with wearable sensors or sensors embedded in the environment. A major obstacle in the large-scale utilization of these systems is that the computational algorithms cannot be shared between users or reused in contexts different from the setting in which the training data were collected. For example, an activity recognition algorithm trained for a wrist-band sensor cannot be used on a smartphone worn on the waist. We propose an approach for automatic detection of physical sensor-contexts (e.g., on-body sensor location) without the need to collect new labeled training data. Our techniques enable system designers and end-users to share and reuse computational algorithms that are trained under different contexts and data collection settings. We develop a framework to autonomously identify sensor-context and propose a gating function to automatically activate the most accurate computational algorithm among a set of shared expert models. Our analysis, based on real data collected from human subjects performing 12 physical activities, demonstrates that the accuracy of our multi-view learning is only 7.9% less than the experimental upper bound for activity recognition using a dynamic sensor that constantly migrates from one on-body location to another. We also compare our approach with several mixture-of-experts models and transfer learning techniques and demonstrate that it outperforms algorithms in both categories.
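The gating idea can be sketched as follows, assuming each shared expert is a pretrained classifier for one on-body location and that the gate simply activates the expert with the highest mean prediction confidence on a buffer of recent windows. The confidence criterion and the synthetic data are illustrative simplifications, not the paper's actual sensor-context detection method.

```python
# Minimal sketch of a gating function over shared expert models (illustrative only):
# each expert is a classifier trained for one sensor-context (e.g., an on-body
# location); the gate activates the expert most confident on recent data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_feat, n_classes = 10, 4

# Pretrained experts, one per assumed on-body location (fit here on synthetic data).
locations = ["wrist", "waist", "ankle"]
experts = {}
for loc in locations:
    X = rng.normal(size=(400, n_feat))
    y = rng.integers(0, n_classes, size=400)
    experts[loc] = LogisticRegression(max_iter=1000).fit(X, y)

def gate(window_buffer, experts):
    """Activate the expert whose predictions are most confident on the buffer."""
    scores = {loc: clf.predict_proba(window_buffer).max(axis=1).mean()
              for loc, clf in experts.items()}
    best = max(scores, key=scores.get)
    return best, experts[best]

# Incoming unlabeled windows from a sensor whose on-body context is unknown.
buffer = rng.normal(size=(50, n_feat))
context, active_expert = gate(buffer, experts)
print("activated expert:", context)
print("predicted activities for first windows:", active_expert.predict(buffer)[:5])
```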


Information Processing in Sensor Networks | 2017

Synchronous dynamic view learning: a framework for autonomous training of activity recognition models using wearable sensors

Seyed Ali Rokni; Hassan Ghasemzadeh

Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage machine learning algorithms to detect events of interest such as physical activities and medical complications. These algorithms, however, need to be retrained upon any change in the configuration of the system, such as addition or removal of a sensor to or from the network, or displacement, misplacement, or mis-orientation of the physical sensors on the body. We challenge this retraining model by advancing the vision of autonomous learning, with the goal of eliminating the labor-intensive, time-consuming, and highly expensive process of collecting labeled training data in dynamic environments. We propose an approach for autonomous retraining of the machine learning algorithms in real time without the need for any new labeled training data. We focus on a dynamic setting where new sensors are added to the system and worn on various body locations. We capture the inherent correlation between observations made by a static sensor view, for which trained algorithms exist, and the new dynamic sensor views, for which an algorithm needs to be developed. By applying our real-time dynamic-view autonomous learning approach, we achieve an average accuracy of 81.1% in activity recognition across three experimental datasets, an improvement of more than 13.8% due to the automatic labeling of the sensor data in the newly added sensor. This performance is only 11.2% lower than the experimental upper bound, where labeled training data are collected with the new sensor.
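A streaming variant of the same labeling idea can be sketched as below: the static view's trained model labels each synchronized window as it arrives, and the newly added (dynamic) sensor's model is updated incrementally from the pseudo-labeled windows. The confidence filter and the incremental learner are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of retraining when a new sensor joins the network (illustrative only):
# the static view labels each synchronized window in a stream, and the dynamic view's
# model is updated incrementally from the pseudo-labeled windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
n_feat, n_classes = 8, 6

# Static view: an already-trained model (fit here on synthetic labeled data).
X_static = rng.normal(size=(600, n_feat))
y_static = rng.integers(0, n_classes, size=600)
static_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_static, y_static)

# Dynamic view: an incremental learner for the newly added sensor.
dynamic_model = SGDClassifier(random_state=0)
classes = np.arange(n_classes)

for t in range(20):  # stream of synchronized windows
    win_static = rng.normal(size=(16, n_feat))   # window from the existing sensor
    win_dynamic = rng.normal(size=(16, n_feat))  # time-aligned window from the new sensor

    proba = static_model.predict_proba(win_static)
    labels = proba.argmax(axis=1)
    confident = proba.max(axis=1) > 0.3          # keep only reasonably confident labels

    if confident.any():
        dynamic_model.partial_fit(win_dynamic[confident], labels[confident],
                                  classes=classes)

print("dynamic-view model fitted:", hasattr(dynamic_model, "coef_"))
```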


JCO Clinical Cancer Informatics | 2018

Digital Health for Geriatric Oncology

Ramin Fallahzadeh; Seyed Ali Rokni; Hassan Ghasemzadeh; Enrique Soto-Perez-de-Celis; Armin Shahrokni

In this review, we describe state-of-the-art digital health solutions for geriatric oncology and explore the potential application of emerging remote health-monitoring technologies in the context of cancer care. We also discuss the benefits of and motivations behind adopting technology for symptom monitoring of older adults with cancer. We provide an overview of common symptoms and of the digital solutions designed for remote symptom assessment, describe state-of-the-art systems for this purpose, and highlight the limitations and challenges for full-scale adoption of such solutions in geriatric oncology. With rapid advances in Internet-of-Things technologies, many remote assessment systems have been developed in recent years. Despite showing potential and reliable functionality in several health care domains, few of these solutions have been designed for or tested in older patients with cancer. As a result, the geriatric oncology community lacks a consensus understanding of a possible correlation between remote digital assessments and health-related outcomes. Although recent digital health solutions have proven reliable and effective in many health-related applications, there remains an unmet need for systems and clinical trials specifically designed for remote cancer management in older adults, including advanced remote technologies for cancer-related symptom assessment and psychological behavior monitoring at home, and outcome-oriented study protocols for the accurate evaluation of existing or emerging systems. We conclude that perhaps the clearest path to future large-scale use of remote digital health technologies in cancer research is designing and conducting collaborative studies involving computer scientists, oncologists, and patient advocates.


Information Processing in Sensor Networks | 2017

A beverage intake tracking system based on machine learning algorithms, and ultrasonic and color sensors: poster abstract

Mahdi Pedram; Seyed Ali Rokni; Ramin Fallahzadeh; Hassan Ghasemzadeh

We present a novel approach for monitoring beverage intake. Our system is composed of an ultrasonic sensor, an RGB color sensor, and machine learning algorithms. The system not only measures beverage volume but also detects the beverage type. The sensor unit is lightweight and can be mounted on the lid of any drinking bottle. Our experimental results demonstrate that the proposed approach achieves more than 97% accuracy in beverage type classification. Furthermore, our regression-based volume measurement has a nominal error of 3%.
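A minimal sketch of the two-stage inference described here is given below, assuming an RGB color reading drives beverage-type classification and an ultrasonic lid-to-liquid distance drives a linear regression of remaining volume; the sensor values, bottle geometry, and model choices are assumptions rather than the authors' setup.

```python
# Illustrative sketch only: beverage type from an RGB color reading plus a
# regression estimate of remaining volume from an ultrasonic distance reading.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Synthetic training data: (R, G, B) readings for a few hypothetical beverage types.
beverages = ["water", "orange juice", "cola"]
X_color = rng.integers(0, 256, size=(300, 3)).astype(float)
y_type = rng.integers(0, len(beverages), size=300)
type_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_color, y_type)

# Synthetic calibration for volume: ultrasonic distance from lid to liquid surface (cm)
# versus remaining volume (mL), roughly linear for a cylindrical bottle.
distance_cm = rng.uniform(1.0, 20.0, size=(200, 1))
volume_ml = 750.0 - 35.0 * distance_cm.ravel() + rng.normal(scale=5.0, size=200)
vol_reg = LinearRegression().fit(distance_cm, volume_ml)

# Inference on one new reading from the lid-mounted unit.
rgb_reading = np.array([[40.0, 90.0, 200.0]])
dist_reading = np.array([[7.5]])
print("beverage type:", beverages[type_clf.predict(rgb_reading)[0]])
print("estimated volume (mL): %.1f" % vol_reg.predict(dist_reading)[0])
```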


Archive | 2017

Smart Medication Management, Current Technologies, and Future Directions

Seyed Ali Rokni; Hassan Ghasemzadeh; Niloofar Hezarjaribi


National Conference on Artificial Intelligence | 2018

Personalized Human Activity Recognition Using Convolutional Neural Networks

Seyed Ali Rokni; Marjan Nourollahi; Hassan Ghasemzadeh


Journal of Geriatric Oncology | 2018

Gait speed and survival of older surgical patients with cancer: Prediction after machine learning

Keyvan Sasani; Helen N. Catanese; Alireza Ghods; Seyed Ali Rokni; Hassan Ghasemzadeh; Robert J. Downey; Armin Shahrokni


IEEE Transactions on Mobile Computing | 2018

Autonomous Training of Activity Recognition Algorithms in Mobile Sensors: A Transfer Learning Approach in Context-Invariant Views

Seyed Ali Rokni; Hassan Ghasemzadeh


Collaboration


Dive into Seyed Ali Rokni's collaborations.

Top Co-Authors

Hassan Ghasemzadeh
Washington State University

Armin Shahrokni
Memorial Sloan Kettering Cancer Center

Ramin Fallahzadeh
Washington State University

Mahdi Pedram
Washington State University

Alireza Ghods
Washington State University

Helen N. Catanese
Washington State University

Keyvan Sasani
Washington State University

Robert J. Downey
Memorial Sloan Kettering Cancer Center