
Publication


Featured research published by Trevor Cohen.


Journal of Biomedical Informatics | 2009

Empirical Distributional Semantics: Methods and Biomedical Applications

Trevor Cohen; Dominic Widdows

Over the past 15 years, a range of methods have been developed that are able to learn human-like estimates of the semantic relatedness between terms from the way in which these terms are distributed in a corpus of unannotated natural language text. These methods have also been evaluated in a number of applications in the cognitive science, computational linguistics, and information retrieval literatures. In this paper, we review the available methodologies for derivation of semantic relatedness from free text, as well as their evaluation in a variety of biomedical and other applications. Recent methodological developments and their applicability to several existing applications are also discussed.
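The core of the distributional approach surveyed here can be illustrated with a minimal sketch (the toy corpus and term names are hypothetical, not drawn from the paper): build term vectors from co-occurrence counts in a sliding window, then estimate relatedness with cosine similarity.

```python
import numpy as np

def cooccurrence_vectors(corpus, window=2):
    """Build term vectors from co-occurrence counts in a sliding window."""
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    vecs[index[w], index[sent[j]]] += 1
    return vocab, index, vecs

def relatedness(w1, w2, index, vecs):
    """Cosine similarity between two term vectors."""
    a, b = vecs[index[w1]], vecs[index[w2]]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

corpus = [
    ["aspirin", "treats", "headache"],
    ["ibuprofen", "treats", "headache"],
    ["aspirin", "thins", "blood"],
]
vocab, index, vecs = cooccurrence_vectors(corpus)
# "aspirin" and "ibuprofen" never co-occur, yet share the context
# words "treats" and "headache", so they receive nonzero similarity.
print(relatedness("aspirin", "ibuprofen", index, vecs))
```

Real systems replace raw counts with weighting and dimension reduction, but the estimate of relatedness from shared contexts is the same in spirit.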


Journal of Biomedical Informatics | 2010

Reflective Random Indexing and indirect inference: A scalable method for discovery of implicit connections

Trevor Cohen; Roger W. Schvaneveldt; Dominic Widdows

The discovery of implicit connections between terms that do not occur together in any scientific document underlies the model of literature-based knowledge discovery first proposed by Swanson. Corpus-derived statistical models of semantic distance such as Latent Semantic Analysis (LSA) have been evaluated previously as methods for the discovery of such implicit connections. However, LSA in particular is dependent on a computationally demanding method of dimension reduction as a means to obtain meaningful indirect inference, limiting its ability to scale to large text corpora. In this paper, we evaluate the ability of Random Indexing (RI), a scalable distributional model of word associations, to draw meaningful implicit relationships between terms in general and biomedical language. Proponents of this method have achieved comparable performance to LSA on several cognitive tasks while using a simpler and less computationally demanding method of dimension reduction than LSA employs. In this paper, we demonstrate that the original implementation of RI is ineffective at inferring meaningful indirect connections, and evaluate Reflective Random Indexing (RRI), an iterative variant of the method that is better able to perform indirect inference. RRI is shown to lead to more clearly related indirect connections and to outperform existing RI implementations in the prediction of future direct co-occurrence in the MEDLINE corpus.
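The reflective step described above can be sketched as follows (an illustrative simplification, not the authors' implementation: for determinism each term's sparse index vector is given disjoint nonzero positions, whereas true Random Indexing relies on the near-orthogonality of random sparse vectors in high dimensions).

```python
import numpy as np

def index_vectors(vocab, dim=60, nonzeros=4, seed=0):
    """Sparse ternary index vectors (a few +/-1 entries, rest zero),
    here assigned disjoint positions so the sketch is deterministic."""
    rng = np.random.default_rng(seed)
    positions = rng.permutation(dim)
    out = {}
    for k, term in enumerate(vocab):
        v = np.zeros(dim)
        v[positions[k * nonzeros:(k + 1) * nonzeros]] = rng.choice([-1.0, 1.0], size=nonzeros)
        out[term] = v
    return out

docs = [["a", "b"], ["b", "c"]]  # "a" and "c" never co-occur directly
terms = index_vectors(sorted({t for d in docs for t in d}))

# Basic RI: a document vector is the sum of its terms' index vectors.
doc_vecs = [sum(terms[t] for t in d) for d in docs]

# Reflective step: retrain each term vector as the sum of the vectors of
# the documents containing it, propagating second-order associations.
reflective = {t: sum(dv for d, dv in zip(docs, doc_vecs) if t in d)
              for t in terms}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "a" and "c" are orthogonal in the basic model, but become related
# through the shared bridge term "b" after the reflective step.
print(cos(terms["a"], terms["c"]), cos(reflective["a"], reflective["c"]))
```

This is the indirect inference at issue: terms that never co-occur acquire similarity through intermediate "bridge" terms, without the costly dimension reduction LSA requires.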


Current Opinion in Critical Care | 2008

New perspectives on error in critical care.

Vimla L. Patel; Trevor Cohen

Purpose of review: Despite unprecedented attention on the issue of medical error over the last 8 years, there is little evidence of widely available improvements in patient safety. The present review addresses some alternative approaches to the study of human error, and their implications for the characterization of medical error. Recent findings: The complex nature of healthcare work has been proposed as a primary barrier to the implementation of effective safety measures. Approaches to error, based on individual accountability, cannot address this complexity. Strategies to eradicate error fail to appreciate that error detection and recovery are integral to the function of complex cognitive systems. Through investigation of the emergence of and recovery from error, one can identify new approaches for error management. Summary: The present review discusses contemporary approaches to error that are able to address the complex nature of critical care work. Instead of producing situation-specific ‘quick fixes’, they are more likely to reveal generalizable mechanisms of error that can support widely applicable solutions.


Journal of Biomedical Informatics | 2011

Toward automated workflow analysis and visualization in clinical environments

Mithra Vankipuram; Kanav Kahol; Trevor Cohen; Vimla L. Patel

Lapses in patient safety have been linked to unexpected perturbations in clinical workflow. The effectiveness of workflow analysis becomes critical to understanding the impact of these perturbations on patient outcome. The typical methods used for workflow analysis, such as ethnographic observations and interviewing, are limited in their ability to capture activities from different perspectives simultaneously. This limitation, coupled with the complexity and dynamic nature of clinical environments, makes understanding the nuances of clinical workflow difficult. The methods proposed in this research aim to provide a quantitative means of capturing and analyzing workflow. The approach taken utilizes recordings of motion and location of clinical teams that are gathered using radio identification tags and observations. These data are used to model activities in critical care environments. The detected activities can then be replayed in 3D virtual reality environments for further analysis and training. Using this approach, the proposed system augments existing methods of workflow analysis, allowing for capture of workflow in complex and dynamic environments. The system was tested with a set of 15 simulated clinical activities that, when combined, represent workflow in trauma units. A mean recognition rate of 87.5% was obtained in automatically recognizing the activities.


IEEE International Conference on Semantic Computing | 2010

The Semantic Vectors Package: New Algorithms and Public Tools for Distributional Semantics

Dominic Widdows; Trevor Cohen

Distributional semantics is the branch of natural language processing that attempts to model the meanings of words, phrases and documents from the distribution and usage of words in a corpus of text. In the past three years, research in this area has been accelerated by the availability of the Semantic Vectors package, a stable, fast, scalable, and free software package for creating and exploring concepts in distributional models. This paper introduces the broad field of distributional semantics, the role of vector models within this field, and describes some of the results that have been made possible by the Semantic Vectors package. These applications of Semantic Vectors have so far included contributions to medical informatics and knowledge discovery, analysis of scientific articles, and even Biblical scholarship. Of particular interest is the recent emergence of models that take word order and other ordered structures into account, using permutation of coordinates to model directional relationships and semantic predicates.
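The permutation idea mentioned in the last sentence can be sketched (toy vectors and vocabulary are hypothetical; this is not the Semantic Vectors API): circularly shifting a neighbor's vector by its signed offset from the focus word makes "attack after heart" distinguishable from "attack before heart".

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64
# Random elemental vectors for a tiny hypothetical vocabulary.
vec = {w: rng.standard_normal(dim) for w in ["heart", "attack", "rate"]}

def encode(sequence, focus):
    """Encode the contexts of `focus` by permuting (circularly shifting)
    each neighbor's vector by its signed offset from the focus word."""
    out = np.zeros(dim)
    anchor = sequence.index(focus)
    for pos, w in enumerate(sequence):
        if w != focus:
            out += np.roll(vec[w], pos - anchor)
    return out

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "attack" one position after "heart" vs. one position before:
after = np.roll(vec["attack"], 1)
before = np.roll(vec["attack"], -1)
ctx = encode(["heart", "attack"], focus="heart")
print(cos(ctx, after), cos(ctx, before))
```

Because a shifted random vector is nearly orthogonal to its unshifted (or oppositely shifted) self, the same superposed memory vector can be probed for directional relationships.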


International Journal of Medical Informatics | 2006

Customizing clinical narratives for the electronic medical record interface using cognitive methods

Pallav Sharda; Amar K. Das; Trevor Cohen; Vimla L. Patel

OBJECTIVE: As healthcare practice transitions from paper-based to computer-based records, there is increasing need to determine an effective electronic format for clinical narratives. Our research focuses on utilizing a cognitive science methodology to guide the conversion of medical texts to a more structured, user-customized presentation in the electronic medical record (EMR). DESIGN: We studied the use of discharge summaries by psychiatrists with varying expertise: experts, intermediates, and novices. Experts were given two hypothetical emergency care scenarios with narrative discharge summaries and asked to verbalize their clinical assessment. Based on the results, the narratives were presented in a more structured form. Intermediate and novice subjects received a narrative and a structured discharge summary, and were asked to verbalize their assessments of each. MEASUREMENTS: A qualitative comparison of the interview transcripts of all subjects was done by analysis of recall and inference made with respect to level of expertise. RESULTS: For intermediate and novice subjects, recall was greater with the structured form than with the narrative. Novices were also able to make more inferences (not always accurate) from the structured form than with the narrative. Errors occurred in assessments using the narrative form but not the structured form. CONCLUSIONS: Our cognitive methods to study discharge summary use enabled us to extract a conceptual representation of clinical narratives from end-users. This method allowed us to identify clinically relevant information that can be used to structure medical text for the EMR and potentially improve recall and reduce errors.


Journal of Biomedical Informatics | 2011

Recovery at the edge of error: Debunking the myth of the infallible expert

Vimla L. Patel; Trevor Cohen; Tripti Murarka; Joanne Olsen; Srujana Kagita; Sahiti Myneni; Timothy G. Buchman; Vafa Ghaemmaghami

The notion that human error should not be tolerated is prevalent in both the public and personal perception of the performance of clinicians. However, researchers in other safety-critical domains have long since abandoned the quest for zero defects as an impractical goal, choosing to focus instead on the development of strategies to enhance the ability to recover from error. This paper presents a cognitive framework for the study of error recovery, and the results of our empirical research into error detection and recovery in the critical care domain, using both laboratory-based and naturalistic approaches. Both attending physicians and residents were prone to commit, detect and recover from errors, but the nature of these errors was different. Experts corrected the errors as soon as they detected them and were better able to detect errors requiring integration of multiple elements in the case. Residents were more cautious in making decisions showing a slower error recovery pattern, and the detected errors were more procedural in nature with specific patient outcomes. Error detection and correction are shown to be dependent on expertise, and on the nature of the everyday tasks of the clinicians concerned. Understanding the limits and failures of human decision-making is important if we are to build robust decision-support systems to manage the boundaries of risk of error in decision-making. Detection and correction of potential error is an integral part of cognitive work in the complex, critical care workplace.


Journal of Biomedical Informatics | 2010

Reflective random indexing for semi-automatic indexing of the biomedical literature

Vidya Vasuki; Trevor Cohen

The rapid growth of biomedical literature is evident in the increasing size of the MEDLINE research database. Medical Subject Headings (MeSH), a controlled set of keywords, are used to index all the citations contained in the database to facilitate search and retrieval. This volume of citations calls for efficient tools to assist indexers at the US National Library of Medicine (NLM). Currently, the Medical Text Indexer (MTI) system provides assistance by recommending MeSH terms based on the title and abstract of an article using a combination of distributional and vocabulary-based methods. In this paper, we evaluate a novel approach toward indexer assistance by using nearest neighbor classification in combination with Reflective Random Indexing (RRI), a scalable alternative to the established methods of distributional semantics. On a test set provided by the NLM, our approach significantly outperforms the MTI system, suggesting that the RRI approach would make a useful addition to the current methodologies.
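The nearest-neighbor recommendation step can be sketched as follows (the document vectors and MeSH assignments below are hypothetical toy values; in the paper the vectors come from an RRI model): score candidate index terms by how often they label the most similar already-indexed citations.

```python
import numpy as np
from collections import Counter

def recommend_terms(query_vec, doc_vecs, doc_labels, k=2):
    """k-NN label pooling: rank candidate index terms by how often they
    label the k nearest already-indexed documents (cosine similarity)."""
    sims = [float(query_vec @ d / (np.linalg.norm(query_vec) * np.linalg.norm(d)))
            for d in doc_vecs]
    nearest = np.argsort(sims)[::-1][:k]
    votes = Counter(t for i in nearest for t in doc_labels[i])
    return [t for t, _ in votes.most_common()]

# Toy document vectors (in practice derived from a distributional model
# such as RRI) and hypothetical MeSH assignments for each citation.
doc_vecs = [np.array([1.0, 0.1, 0.0]),
            np.array([0.9, 0.2, 0.1]),
            np.array([0.0, 0.1, 1.0])]
doc_labels = [["Semantics"],
              ["Semantics", "Natural Language Processing"],
              ["Radio Frequency Identification Device"]]
query = np.array([1.0, 0.0, 0.05])
print(recommend_terms(query, doc_vecs, doc_labels))
```

The query lands nearest the two semantics-related citations, so their shared heading is ranked first and the unrelated heading is never proposed.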


Journal of Biomedical Informatics | 2011

Making sense: Sensor-based investigation of clinician activities in complex critical care environments

Thomas George Kannampallil; Zhe Li; Min Zhang; Trevor Cohen; David J. Robinson; Amy Franklin; Jiajie Zhang; Vimla L. Patel

In many respects, the critical care workplace resembles a paradigmatic complex system: on account of the dynamic and interactive nature of collaborative clinical work, these settings are characterized by non-linear, inter-dependent and emergent activities. Developing a comprehensive understanding of the work activities in critical care settings enables the development of streamlined work practices, better clinician workflow and, most importantly, helps in the avoidance of and recovery from potential errors. Sensor-based technology provides a flexible and viable way to complement human observations by providing a mechanism to capture the nuances of certain activities with greater precision and timing. In this paper, we use sensor-based technology to capture the movement and interactions of clinicians in the Trauma Center of an Emergency Department (ED). Remarkable consistency was found between sensor data and human observations in terms of clinician locations and interactions. With this validation and the greater precision afforded by sensors, the ED environment was characterized in terms of (a) the degree of randomness or entropy in the environment, (b) the movement patterns of clinicians, (c) interactions with other clinicians and, finally, (d) patterns of collaborative organization with team aggregation and dispersion. Based on our results, we propose three opportunities for the use of sensor technologies in critical care settings: as a mechanism for real-time monitoring and analysis of ED activities, for education and training of clinicians, and, perhaps most importantly, for investigating the root causes, origins and progression of errors in the ED. Lessons learned and the challenges encountered in designing and implementing the sensor technology are discussed.
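The "degree of randomness or entropy" of the environment can be quantified with Shannon entropy over an observed distribution of clinician locations; a minimal sketch with hypothetical sensor readings (the paper does not publish its computation in this form):

```python
import math
from collections import Counter

def location_entropy(observations):
    """Shannon entropy (bits) of a clinician's observed location
    distribution: higher values indicate less predictable movement."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical readings: one clinician stays at the bedside; another
# circulates among bedside, supply room, and nursing station.
stationary = ["bedside"] * 8
mobile = ["bedside", "supply", "station", "bedside",
          "supply", "station", "bedside", "station"]
print(location_entropy(stationary), location_entropy(mobile))
```

A clinician who never moves yields zero entropy, while one who circulates among several locations yields a higher value, giving a simple quantitative signature of movement patterns.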


Journal of Biomedical Informatics | 2008

Simulating expert clinical comprehension: Adapting latent semantic analysis to accurately extract clinical concepts from psychiatric narrative

Trevor Cohen; Brett Blatter; Vimla L. Patel

Cognitive studies reveal that less-than-expert clinicians are less able to recognize meaningful patterns of data in clinical narratives. Accordingly, psychiatric residents early in training fail to attend to information that is relevant to diagnosis and the assessment of dangerousness. This manuscript presents a cognitively motivated methodology for the simulation of expert ability to organize relevant findings supporting intermediate diagnostic hypotheses. Latent Semantic Analysis is used to generate a semantic space from which meaningful associations between psychiatric terms are derived. Diagnostically meaningful clusters are modeled as geometric structures within this space and compared to elements of psychiatric narrative text using semantic distance measures. A learning algorithm is defined that alters components of these geometric structures in response to labeled training data. Extraction and classification of relevant text segments is evaluated against expert annotation, with system-rater agreement approximating rater-rater agreement. A range of biomedical informatics applications for these methods is suggested.
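The LSA semantic space underlying this simulation can be sketched (the term-document counts and psychiatric terms below are hypothetical toy data): reduce a term-by-document matrix with truncated SVD and compare terms in the reduced space.

```python
import numpy as np

def lsa_term_vectors(term_doc, k=2):
    """Latent Semantic Analysis: reduce a term-by-document count matrix
    with truncated SVD; rows of U_k * S_k are term vectors in the space."""
    u, s, _ = np.linalg.svd(term_doc, full_matrices=False)
    return u[:, :k] * s[:k]

terms = ["hallucination", "delusion", "insomnia"]
# Hypothetical counts over four narrative documents: the first two terms
# occur in the same documents, the third in different ones.
m = np.array([[3.0, 2.0, 0.0, 0.0],
              [2.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 2.0]])
vecs = lsa_term_vectors(m, k=2)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(vecs[0], vecs[1]), cos(vecs[0], vecs[2]))
```

Terms with shared document contexts end up close together in the reduced space, which is what allows clusters of diagnostically related findings to be modeled as geometric structures.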

Collaboration


Dive into Trevor Cohen's collaborations.

Top Co-Authors

Vimla L. Patel
New York Academy of Medicine

Hua Xu
University of Texas Health Science Center at Houston

Sahiti Myneni
University of Texas Health Science Center at Houston

Elmer V. Bernstam
University of Texas Health Science Center at Houston

Thomas C. Rindflesch
National Institutes of Health

Nathan K. Cobb
Georgetown University Medical Center

Thomas George Kannampallil
University of Illinois at Chicago

Khalid F. Almoosa
University of Texas Health Science Center at Houston