
Publications

Featured research published by Richard A. Farneth.


Knowledge Discovery and Data Mining | 2017

A Data-driven Process Recommender Framework

Sen Yang; Xin Dong; Leilei Sun; Yichen Zhou; Richard A. Farneth; Hui Xiong; Randall S. Burd; Ivan Marsic

We present an approach for improving the performance of complex knowledge-based processes by providing data-driven step-by-step recommendations. Our framework uses the associations between similar historic process performances and contextual information to determine the prototypical way of enacting the process. We introduce a novel similarity metric for grouping traces into clusters that incorporates temporal information about activity performance and handles concurrent activities. Our data-driven recommender system selects the appropriate prototype performance of the process based on user-provided context attributes. Our approach for determining the prototypes discovers the commonly performed activities and their temporal relationships. We tested our system on data from three real-world medical processes and achieved recommendation accuracy of up to an F1 score of 0.77 (compared with 0.37 for the ZeroR baseline), with 63.2% of recommended enactments falling within the first five neighbors of the actual historic enactments in a set of 87 cases. Our framework also works as an interactive visual analytics tool for process mining. This work shows the feasibility of a data-driven decision support system for complex knowledge-based processes.
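The recommendation step can be sketched as nearest-neighbor retrieval over historic traces. This is a minimal illustration, not the paper's method: the activity names are hypothetical, and the LCS-based similarity stands in for the paper's metric, which additionally incorporates activity timing and concurrency.

```python
def trace_similarity(a, b):
    """Toy order-aware similarity: longest-common-subsequence ratio.
    (The paper's metric also uses temporal information and concurrency.)"""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / max(m, n)

def recommend_next(partial, history):
    """Recommend the next step from the most similar historic enactment."""
    best = max(history, key=lambda h: trace_similarity(partial, h[:len(partial)]))
    return best[len(partial)] if len(best) > len(partial) else None

# Hypothetical historic enactments of a resuscitation-style process.
history = [
    ["airway", "breathing", "circulation", "exposure"],
    ["airway", "circulation", "breathing", "disability"],
]
print(recommend_next(["airway", "breathing"], history))  # -> circulation
```

A real recommender would first cluster the traces and pick a prototype per context; this sketch skips straight to the retrieval step.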


Designing Interactive Systems | 2017

Exploring Design Opportunities for a Context-Adaptive Medical Checklist Through Technology Probe Approach

Leah Kulp; Aleksandra Sarcevic; Richard A. Farneth; Omar Z. Ahmed; Dung Mai; Ivan Marsic; Randall S. Burd

This paper explores the workflow and use of an interactive medical checklist for trauma resuscitation, an emerging technology developed for trauma team leaders to support decision making and task coordination among team members. We used a technology probe approach and ethnographic methods, including video review, interviews, and content analysis of checklist logs, to examine how team leaders used the checklist probe during live resuscitations. We found that team leaders of various experience levels use the technology differently: some frequently glance at the checklist and take notes during task performance, while others place the checklist on a stand and interact with it only when checking off items. Comparing checklist timestamps to task activities, we found that most items are checked off after the corresponding tasks are performed. We conclude by discussing design implications and new design opportunities for a future dynamic, adaptive checklist.


Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2017

Progress Estimation and Phase Detection for Sequential Processes

Xinyu Li; Yanyi Zhang; Jianyu Zhang; Moliang Zhou; Shuhong Chen; Yue Gu; Yueyang Chen; Ivan Marsic; Richard A. Farneth; Randall S. Burd

Process modeling and understanding are fundamental for advanced human-computer interfaces and automation systems. Most recent research has focused on activity recognition, but little has been done on sensor-based detection of process progress. We introduce a real-time, sensor-based system for modeling, recognizing, and estimating the progress of a work process. We implemented a multimodal deep learning structure to extract the relevant spatio-temporal features from multiple sensory inputs and used a novel deep regression structure for overall completeness estimation. Using process completeness estimation with a Gaussian mixture model, our system can predict the phase of sequential processes. The performance speed, calculated from completeness estimation, allows online estimation of the remaining time. To train our system, we introduced a novel rectified hyperbolic tangent (rtanh) activation function and a conditional loss. Our system was tested on data from a medical process (trauma resuscitation) and a sports event (Olympic swimming competition). It outperformed existing trauma-resuscitation phase detectors with a phase detection accuracy of over 86%, an F1 score of 0.67, a completeness estimation error of under 12.6%, and a remaining-time estimation error of less than 7.5 minutes. On the Olympic swimming dataset, our system achieved an accuracy of 88%, an F1 score of 0.58, a completeness estimation error of 6.3%, and a remaining-time estimation error of 2.9 minutes.
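The abstract does not give the exact form of rtanh; one natural rectified variant, shown here as an assumption, is tanh(max(0, x)), which keeps a completeness estimate in [0, 1). The remaining-time estimate then follows from completeness and the observed performance speed:

```python
import math

def rtanh(x):
    """Illustrative rectified hyperbolic tangent (the exact definition is
    not given in the abstract): tanh(max(0, x)) maps any input to [0, 1),
    a sensible range for a process-completeness value."""
    return math.tanh(max(0.0, x))

def remaining_time(completeness, elapsed_s):
    """Estimate remaining time from current completeness and elapsed time,
    assuming roughly constant performance speed."""
    if completeness <= 0:
        return float("inf")
    speed = completeness / elapsed_s          # completeness per second
    return (1.0 - completeness) / speed

print(rtanh(-2.0))               # 0.0
print(remaining_time(0.4, 600))  # 900.0 (seconds left)
```

The paper's system further feeds the completeness signal into a Gaussian mixture model to assign the current phase; that learned step is omitted here.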


IEEE International Conference on Healthcare Informatics | 2017

Language-Based Process Phase Detection in the Trauma Resuscitation

Yue Gu; Xinyu Li; Shuhong Chen; Huangcan Li; Richard A. Farneth; Ivan Marsic; Randall S. Burd

Process phase detection has been widely used in surgical process modeling (SPM) to track process progression. These studies have mostly used video and embedded sensor data, but spoken language also provides rich semantic information directly related to process progression. We present a long short-term memory (LSTM) deep learning model that predicts trauma resuscitation phases from verbal communication logs. We first use an LSTM to extract sentence meaning representations, and then sequentially feed them into another LSTM to extract the meaning of a sentence group within a time window. This information is ultimately used for phase prediction. We used 24 manually transcribed trauma resuscitation cases to train our model and the remaining 6 cases to test it. We achieved 79.12% accuracy and showed performance advantages over existing visual-audio systems for critical phases of the process. In addition to language information, we evaluated a multimodal phase prediction structure that also uses audio input. We finally identified the challenges of substituting manual transcription with automatic speech recognition in trauma resuscitation.
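The two-level structure (a sentence-level encoder feeding a window-level encoder) can be sketched with toy keyword scorers standing in for the learned LSTMs. The phases and keyword lists below are hypothetical, chosen only to make the pipeline concrete:

```python
# Structural sketch of the hierarchical pipeline: sentences -> sentence
# vectors -> pooled window vector -> phase. Toy scorers replace the LSTMs.
PHASE_KEYWORDS = {          # hypothetical keyword lists, for illustration
    "airway":    {"intubate", "airway", "oxygen"},
    "secondary": {"pelvis", "abdomen", "extremities"},
}

def encode_sentence(sentence):
    """Stand-in for the sentence-level LSTM: one score per phase."""
    words = set(sentence.lower().split())
    return {p: len(words & kw) for p, kw in PHASE_KEYWORDS.items()}

def predict_phase(window):
    """Stand-in for the window-level LSTM: pool sentence vectors, argmax."""
    totals = {p: 0 for p in PHASE_KEYWORDS}
    for sent in window:
        for p, s in encode_sentence(sent).items():
            totals[p] += s
    return max(totals, key=totals.get)

window = ["Let's intubate now", "Check oxygen saturation", "Airway is clear"]
print(predict_phase(window))  # -> airway
```

In the paper both levels are trained LSTMs operating on word embeddings, so the representation is learned rather than keyword-based.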


Information Processing in Sensor Networks | 2017

3D activity localization with multiple sensors: poster abstract

Xinyu Li; Yanyi Zhang; Jianyu Zhang; Shuhong Chen; Yue Gu; Richard A. Farneth; Ivan Marsic; Randall S. Burd

We present a deep learning framework for fast 3D activity localization and tracking in a dynamic and crowded real-world setting. Our training approach reverses the traditional activity localization pipeline, which first estimates the possible location of activities and then predicts their occurrence. Instead, we first trained a deep convolutional neural network for activity recognition using depth video and RFID data as input, and then used the activation maps of the network to locate the recognized activity in 3D space. Our system achieved around 20 cm average localization error (in a 4 m × 5 m room), which is comparable to the Kinect's body-skeleton tracking error (10-20 cm), but our system tracks activities rather than the locations of people.
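Reading a location off the network's activation maps can be illustrated as an activation-weighted centroid. The map values and cell size below are made up for the sketch, and the real system works on 3D maps derived from depth video:

```python
def locate_from_activation(act_map, cell_size_m=0.5):
    """Locate an activity as the activation-weighted centroid of a 2D
    activation map (the 3D version adds a depth axis). Illustrates using
    class activation maps for localization instead of a separate detector."""
    total = sum(sum(row) for row in act_map)
    x = sum(a * j for row in act_map for j, a in enumerate(row)) / total
    y = sum(a * i for i, row in enumerate(act_map) for a in row) / total
    return x * cell_size_m, y * cell_size_m

# Hypothetical activation map for one recognized activity.
act = [
    [0.0, 0.1, 0.0],
    [0.1, 0.6, 0.2],
    [0.0, 0.0, 0.0],
]
print(locate_from_activation(act))
```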


Information Processing in Sensor Networks | 2017

CAR - a deep learning structure for concurrent activity recognition: poster abstract

Yanyi Zhang; Xinyu Li; Jianyu Zhang; Shuhong Chen; Moliang Zhou; Richard A. Farneth; Ivan Marsic; Randall S. Burd

We introduce the Concurrent Activity Recognizer (CAR), an efficient deep learning structure that recognizes complex concurrent teamwork activities from multimodal data. We implemented the system in a challenging medical setting, where it recognizes 35 different activities using Kinect depth video and data from passive RFID tags on 25 types of medical objects. Our preliminary results show that the system achieved an 84% average accuracy with an F1 score of 0.20.
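Concurrent recognition is essentially multi-label classification: instead of a softmax picking a single activity per frame, each activity gets an independent score and every score above threshold is reported. A minimal sketch with hypothetical activity names and scores:

```python
def recognize_concurrent(scores, threshold=0.5):
    """Multi-label (concurrent) recognition: report every activity whose
    score clears the threshold, unlike single-label softmax classification
    which would keep only the top-scoring one."""
    return sorted(a for a, s in scores.items() if s >= threshold)

# Hypothetical per-frame scores from a multimodal network.
frame_scores = {"chest-compression": 0.91, "bag-valve-mask": 0.77, "iv-access": 0.12}
print(recognize_concurrent(frame_scores))  # -> ['bag-valve-mask', 'chest-compression']
```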


IEEE International Conference on Healthcare Informatics | 2017

Evaluation of Trace Alignment Quality and its Application in Medical Process Mining

Moliang Zhou; Sen Yang; Xinyu Li; Shuyu Lv; Shuhong Chen; Ivan Marsic; Richard A. Farneth; Randall S. Burd

Trace alignment algorithms have been used in process mining to discover consensus treatment procedures and process deviations. Different alignment algorithms, however, may produce very different results, and no widely adopted method exists for evaluating trace alignment results. Existing reference-free evaluation methods cannot adequately and comprehensively assess alignment quality. We analyzed and compared the existing evaluation methods, identified their limitations, and introduced improvements to two reference-free evaluation methods. Our approach assesses the alignment result globally rather than locally, and therefore helps the algorithm optimize overall alignment quality. We also introduced a novel metric for measuring alignment complexity, which can be used as a constraint during alignment algorithm optimization. We tested our evaluation methods on a trauma resuscitation dataset and provided medical explanations of the activities and patterns identified as deviations by our proposed evaluation methods.
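A reference-free quality score of the kind discussed here can be illustrated as per-column pairwise agreement over aligned traces (activities are single letters, "-" is a gap). This is only a local, column-wise score; the paper's improved metrics instead assess the alignment globally and add a complexity constraint:

```python
GAP = "-"

def column_agreement(alignment):
    """Reference-free quality score: for each column of the alignment, the
    fraction of trace pairs that agree (same non-gap activity), averaged
    over columns. Illustrative local metric, not the paper's."""
    n_cols = len(alignment[0])
    n = len(alignment)
    pairs = n * (n - 1) / 2
    total = 0.0
    for c in range(n_cols):
        col = [t[c] for t in alignment]
        agree = sum(
            1 for i in range(n) for j in range(i + 1, n)
            if col[i] != GAP and col[i] == col[j]
        )
        total += agree / pairs
    return total / n_cols

# Three toy traces aligned into four columns.
aligned = ["AB-C", "ABDC", "A-DC"]
print(round(column_agreement(aligned), 3))  # -> 0.667
```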


IEEE International Conference on Healthcare Informatics | 2018

Intention Mining in Medical Process: A Case Study in Trauma Resuscitation

Sen Yang; Weiqing Ni; Xin Dong; Shuhong Chen; Richard A. Farneth; Aleksandra Sarcevic; Ivan Marsic; Randall S. Burd


Journal of Biomedical Informatics | 2018

An approach to automatic process deviation detection in a time-critical clinical process

Sen Yang; Aleksandra Sarcevic; Richard A. Farneth; Shuhong Chen; Omar Z. Ahmed; Ivan Marsic; Randall S. Burd



Collaboration


Dive into Richard A. Farneth's collaborations.

Top Co-Authors

Randall S. Burd

Children's National Medical Center
