Publications

Featured research published by Kevin Livingston.


BMC Bioinformatics | 2011

The gene normalization task in BioCreative III

Zhiyong Lu; Hung Yu Kao; Chih-Hsuan Wei; Minlie Huang; Jingchen Liu; Cheng-Ju Kuo; Chun-Nan Hsu; Richard Tzong-Han Tsai; Hong-Jie Dai; Naoaki Okazaki; Han-Cheol Cho; Martin Gerner; Illés Solt; Shashank Agarwal; Feifan Liu; Dina Vishnyakova; Patrick Ruch; Martin Romacker; Fabio Rinaldi; Sanmitra Bhattacharya; Padmini Srinivasan; Hongfang Liu; Manabu Torii; Sérgio Matos; David Campos; Karin Verspoor; Kevin Livingston; W. John Wilbur

Background: We report the Gene Normalization (GN) challenge in BioCreative III, where participating teams were asked to return a ranked list of identifiers of the genes detected in full-text articles. For training, 32 fully and 500 partially annotated articles were prepared. A total of 507 articles were selected as the test set. Due to the high annotation cost, it was not feasible to obtain gold-standard human annotations for all test articles. Instead, we developed an Expectation Maximization (EM) algorithm approach for choosing a small number of test articles for manual annotation that were most capable of differentiating team performance. Moreover, the same algorithm was subsequently used for inferring ground truth based solely on team submissions. We report team performance on both gold standard and inferred ground truth using a newly proposed metric called Threshold Average Precision (TAP-k).

Results: We received a total of 37 runs from 14 different teams for the task. When evaluated using the gold-standard annotations of the 50 articles, the highest TAP-k scores were 0.3297 (k=5), 0.3538 (k=10), and 0.3535 (k=20), respectively. Higher TAP-k scores of 0.4916 (k=5, 10, 20) were observed when evaluated using the inferred ground truth over the full test set. When combining team results using machine learning, the best composite system achieved TAP-k scores of 0.3707 (k=5), 0.4311 (k=10), and 0.4477 (k=20) on the gold standard, representing improvements of 12.4%, 21.8%, and 26.6% over the best team results, respectively.

Conclusions: By using full text and being species non-specific, the GN task in BioCreative III has moved closer to a real literature curation task than similar tasks in the past and presents additional challenges for the text mining community, as revealed in the overall team results. By evaluating teams using the gold standard, we show that the EM algorithm allows team submissions to be differentiated while keeping the manual annotation effort feasible. Using the inferred ground truth we show measures of comparative performance between teams. Finally, by comparing team rankings on gold standard vs. inferred ground truth, we further demonstrate that the inferred ground truth is as effective as the gold standard for detecting good team performance.
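A note on the reported gains: the relative improvements of the composite system over the best single-team scores follow directly from the TAP-k values quoted above. The short Python sketch below recomputes them; the scores are copied verbatim from the abstract, and the script itself is purely illustrative.

# Recompute the composite system's improvements over the best single-team
# TAP-k scores on the gold standard (values taken directly from the abstract).
best_team = {5: 0.3297, 10: 0.3538, 20: 0.3535}   # best single-team TAP-k
composite = {5: 0.3707, 10: 0.4311, 20: 0.4477}   # machine-learned composite

for k in (5, 10, 20):
    gain = (composite[k] / best_team[k] - 1.0) * 100.0
    print(f"TAP-{k}: {best_team[k]:.4f} -> {composite[k]:.4f} (+{gain:.1f}%)")

# Expected output: +12.4%, +21.8%, +26.6%, matching the reported figures.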


BMC Bioinformatics | 2015

KaBOB: ontology-based semantic integration of biomedical databases

Kevin Livingston; Michael Bada; William A. Baumgartner; Lawrence Hunter

Background: The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in establishing shared identity and shared meaning across heterogeneous biomedical data sources.

Results: We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license.

Conclusions: KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.
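As a rough illustration of the declaratively represented forward-chaining rules mentioned above, the Python sketch below lifts gene records from two hypothetical source schemas into a shared concept identifier and applies rules to a fixed point. The predicate names, record identifiers, and the single rule are invented for this example; they are not KaBOB's actual vocabulary or code, and the Sequence Ontology class used for 'gene' is an assumption.

from typing import Callable

Triple = tuple[str, str, str]

# Source facts as they might appear after loading two different databases,
# each with its own record-level vocabulary (hypothetical schemas).
facts: set[Triple] = {
    ("db1:rec42", "db1:geneSymbol", "TP53"),
    ("db2:row7",  "db2:gene_name",  "TP53"),
}

def gene_symbol_rule(kb: set[Triple]) -> set[Triple]:
    """Any record carrying a gene symbol denotes a shared gene concept."""
    new = set()
    for s, p, o in kb:
        if p in ("db1:geneSymbol", "db2:gene_name"):
            gene = f"kb:gene/{o}"                      # shared concept identifier
            new.add((s, "kb:denotes", gene))           # record kept distinct from concept
            new.add((gene, "rdf:type", "SO:0000704"))  # assumed Sequence Ontology class for 'gene'
    return new

def forward_chain(kb: set[Triple], rules: list[Callable]) -> set[Triple]:
    """Apply rules repeatedly until no new triples are derived (a fixed point)."""
    while True:
        derived = set().union(*(rule(kb) for rule in rules)) - kb
        if not derived:
            return kb
        kb |= derived

for triple in sorted(forward_chain(set(facts), [gene_symbol_rule])):
    print(triple)

Both source records end up denoting the same kb:gene/TP53 concept, which is the aggregation-of-identifiers behavior the abstract describes.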


Intelligent User Interfaces | 2003

Beyond broadcast

Kevin Livingston; Mark Dredze; Kristian J. Hammond; Lawrence Birnbaum

The work presented in this paper takes a novel approach to the task of providing information to viewers of broadcast news. Instead of considering the broadcast news as the end product, this work uses it as a starting point to dynamically build an information space for the user to explore. This information space is designed to satisfy the user's information needs by containing more breadth, depth, and points of view than the original broadcast story. The architecture and current implementation are discussed, and preliminary results from the analysis of some of its components are presented.
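To make the seed-and-expand idea concrete, here is a minimal Python sketch that extracts key terms from a broadcast story and generates one retrieval query per dimension of the information space (depth, breadth, points of view). The keyword heuristic, query templates, and example story are hypothetical stand-ins, not the system described in the paper.

from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "on", "for", "is", "was", "with"}

def key_terms(story_text: str, n: int = 5) -> list[str]:
    """Very rough keyword extraction from a story transcript."""
    words = re.findall(r"[a-z]+", story_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(n)]

def expansion_queries(story_text: str) -> dict[str, str]:
    """Map each dimension of the information space to a retrieval query."""
    terms = " ".join(key_terms(story_text))
    return {
        "depth":          f"{terms} background analysis",
        "breadth":        f"{terms} related developments",
        "points_of_view": f"{terms} opinion editorial reaction",
    }

story = ("City council approves funding for the new transit line, "
         "with construction on the transit line expected next year.")
for dimension, query in expansion_queries(story).items():
    print(f"{dimension:>15}: {query}")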


Intelligent User Interfaces | 2003

Beyond broadcast: a demo

Kevin Livingston; Mark Dredze; Kristian J. Hammond; Lawrence Birnbaum

This research discusses a method for delivering just-in-time information to television viewers to provide more depth and more breadth to television broadcasts. A novel aspect of this research is that it uses broadcast news as a starting point for gathering information regarding specific stories, as opposed to considering the broadcast version to be the end of the viewer's exploration. This work is implemented in Cronkite, a system that provides viewers with expanded coverage of broadcast news stories.


Briefings in Bioinformatics | 2016

The digital revolution in phenotyping

Anika Oellrich; Nigel Collier; Tudor Groza; Dietrich Rebholz-Schuhmann; Nigam H. Shah; Olivier Bodenreider; Mary Regina Boland; Ivo I. Georgiev; Hongfang Liu; Kevin Livingston; Augustin Luna; Ann-Marie Mallon; Prashanti Manda; Peter N. Robinson; Gabriella Rustici; Michelle Simon; Liqin Wang; Rainer Winnenburg; Michel Dumontier

Phenotypes have gained increased notoriety in the clinical and biological domain owing to their application in numerous areas such as the discovery of disease genes and drug targets, phylogenetics and pharmacogenomics. Phenotypes, defined as observable characteristics of organisms, can be seen as one of the bridges that lead to a translation of experimental findings into clinical applications and thereby support ‘bench to bedside’ efforts. However, to build this translational bridge, a common and universal understanding of phenotypes is required that goes beyond domain-specific definitions. To achieve this ambitious goal, a digital revolution is ongoing that enables the encoding of data in computer-readable formats and the data storage in specialized repositories, ready for integration, enabling translational research. While phenome research is an ongoing endeavor, the true potential hidden in the currently available data still needs to be unlocked, offering exciting opportunities for the forthcoming years. Here, we provide insights into the state-of-the-art in digital phenotyping, by means of representing, acquiring and analyzing phenotype data. In addition, we provide visions of this field for future research work that could enable better applications of phenotype data.


Journal of Biomedical Semantics | 2013

Representing annotation compositionality and provenance for the Semantic Web

Kevin Livingston; Michael Bada; Lawrence Hunter; Karin Verspoor

Background: Though the annotation of digital artifacts with metadata has a long history, the bulk of that work focuses on the association of single terms or concepts to single targets. As annotation efforts expand to capture more complex information, annotations will need to be able to refer to knowledge structures formally defined in terms of more atomic knowledge structures. Existing provenance efforts in the Semantic Web domain primarily focus on tracking provenance at the level of whole triples and do not provide enough detail to track how individual triple elements of annotations were derived from triple elements of other annotations.

Results: We present a task- and domain-independent ontological model for capturing annotations and their linkage to their denoted knowledge representations, which can be singular concepts or more complex sets of assertions. We have implemented this model as an extension of the Information Artifact Ontology in OWL and made it freely available, and we show how it can be integrated with several prominent annotation and provenance models. We present several application areas for the model, ranging from linguistic annotation of text to the annotation of disease associations in genome sequences.

Conclusions: With this model, progressively more complex annotations can be composed from other annotations, and the provenance of compositional annotations can be represented at the annotation level or at the level of individual elements of the RDF triples composing the annotations. This in turn allows for progressively richer annotations to be constructed from previous annotation efforts, the precise provenance recording of which facilitates evidence-based inference and error tracking.
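A minimal sketch of the general pattern, using Python's rdflib: an annotation is linked both to the text span it targets and to the knowledge representation it denotes, and provenance can be attached to individual elements rather than only to whole triples. The example.org namespace and all property names are illustrative placeholders, not the exact terms of the paper's Information Artifact Ontology extension.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/annotation-model/")   # hypothetical namespace

g = Graph()
ann   = EX.annotation1        # the annotation itself
span  = EX.textSpan1          # the annotated region of a document
claim = EX.assertion1         # the knowledge structure the annotation denotes

g.add((ann, RDF.type, EX.Annotation))
g.add((ann, EX.hasTarget, span))             # what was annotated
g.add((ann, EX.denotes, claim))              # what the annotation asserts
g.add((span, EX.coversText, Literal("BRCA1 is associated with breast cancer")))
g.add((claim, EX.derivedFromAnnotation, EX.geneAnnotation7))   # element-level provenance
g.add((ann, EX.createdBy, EX.curator42))

print(g.serialize(format="turtle"))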


Intelligent User Interfaces | 2007

Knowledge acquisition from simplified text

Kevin Livingston; Christopher K. Riesbeck

The problem of entering and integrating new knowledge into a logic-based knowledge base is substantial. Our solution is to provide a natural language interface that reads simplified English, enabled by a knowledge-based, memory-retrieval-driven natural language understander. This paper presents a set of tools and interfaces for interacting with such a system, and a discussion of the underlying Reader system, the reading component of the Learning Reader project. The interfaces presented provide direct feedback about what portions of the text are understood and what interpretations are being produced from it. In addition, tools are presented that, among other things, provide example sentences to help users produce simplified English text suitable for the Reader.
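The following Python sketch illustrates, in a very reduced form, the kind of behavior described: simplified English sentences are matched against a small set of templates, and the interface reports directly which sentences were understood and which were not. The templates, predicates, and example text are hypothetical and are not taken from the Learning Reader system.

import re

# Each template maps a simplified-English sentence pattern to a predicate.
TEMPLATES = [
    (re.compile(r"^(?P<a>\w+) is a (?P<b>\w+)\.$", re.I), "isa"),
    (re.compile(r"^(?P<a>\w+) is part of (?P<b>\w+)\.$", re.I), "part-of"),
]

def read_sentence(sentence: str):
    """Return (predicate, arg1, arg2) if the sentence matches a template, else None."""
    for pattern, predicate in TEMPLATES:
        m = pattern.match(sentence.strip())
        if m:
            return (predicate, m.group("a"), m.group("b"))
    return None

def read_text(text: str) -> list[tuple]:
    """Read each sentence, reporting directly which portions were understood."""
    kb = []
    for sentence in filter(None, (s.strip() for s in text.splitlines())):
        interpretation = read_sentence(sentence)
        if interpretation:
            kb.append(interpretation)
            print(f"understood:     {sentence!r} -> {interpretation}")
        else:
            print(f"not understood: {sentence!r}")
    return kb

read_text("Hemoglobin is a protein.\nIt binds oxygen in red blood cells.")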


Intelligent User Interfaces | 2005

Task aware information access for diagnosis of manufacturing problems

Lawrence Birnbaum; Wallace J. Hopp; Seyed M. R. Iravani; Kevin Livingston; Biying Shou; Thomas M. Tirpak

Pinpoint is a promising first step towards using a rich model of task context in proactive and dynamic IR systems. Pinpoint allows a user to navigate decision tree representations of problem spaces, built by domain experts, while dynamically entering annotations specific to their problem. The system then automatically generates queries to information repositories based on both the user's annotations and their location in the problem space, producing results that are both task focused and problem specific. Initial feedback from users and domain experts has been positive.
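A minimal Python sketch of the query-generation idea, assuming a toy decision tree and invented annotations: the query combines the labels along the user's navigated path with the annotations entered so far, so results are both task focused and problem specific. None of the labels or the tree come from Pinpoint itself.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: dict = field(default_factory=dict)   # answer -> child Node

# A tiny expert-built problem space for a manufacturing diagnosis task.
root = Node("solder defects", {
    "bridging": Node("solder bridging", {
        "fine-pitch component": Node("fine-pitch bridging"),
    }),
    "voids": Node("solder voids"),
})

def generate_query(path: list[str], annotations: list[str]) -> str:
    """Combine the labels along the navigated path with user annotations."""
    node, labels = root, [root.label]
    for answer in path:
        node = node.children[answer]
        labels.append(node.label)
    return " ".join(labels + annotations)

# The user has navigated two levels deep and noted a specific condition.
print(generate_query(["bridging", "fine-pitch component"],
                     ["reflow temperature 245C"]))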


National Aerospace and Electronics Conference | 2000

A data parallel implementation of an intelligent reasoning system

Kevin Livingston; Jennifer Seitzer

We present an implementation of a data parallel system. A sequential knowledge-based deductive and inductive system, INDED, is transformed into a parallel system. In this parallel system the learning algorithm, the fundamental component of the induction engine, is distributed among many processors. The parallel system is implemented with a master node and several worker nodes. The master node is responsible for coordinating the activity of the worker nodes, and organizing the overall learning process. All the worker nodes share the processing of the basic induction algorithms and report their results to the master node. The goal of the data parallel system is to produce, more efficiently, rules that are equal to or better than those produced by the serial system. In this paper, we present the architecture of the parallel version of INDED, and comparison results involving execution speeds and quality of generated rules of the new parallel system to those of the serial system.
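A minimal master/worker sketch in Python of the data parallel idea, assuming a toy induction task: the master partitions the evaluation of candidate rules among worker processes and combines their scores. The scoring function and candidate rules are hypothetical placeholders, not INDED's algorithm.

from multiprocessing import Pool

EXAMPLES = [(x, x % 3 == 0) for x in range(1000)]   # toy labeled data

def score_candidate(divisor: int) -> tuple[int, float]:
    """Worker task: evaluate one candidate rule 'label iff x % divisor == 0'."""
    correct = sum((x % divisor == 0) == label for x, label in EXAMPLES)
    return divisor, correct / len(EXAMPLES)

if __name__ == "__main__":
    candidates = [2, 3, 4, 5, 6]
    with Pool(processes=4) as pool:              # master distributes work to workers
        results = pool.map(score_candidate, candidates)
    best = max(results, key=lambda r: r[1])      # master combines worker results
    print("scores:", results)
    print("best rule: x %", best[0], "== 0 with accuracy", best[1])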


National Conference on Artificial Intelligence | 2007

Integrating natural language, knowledge representation and reasoning, and analogical processing to learn by reading

Kenneth D. Forbus; Christopher K. Riesbeck; Lawrence Birnbaum; Kevin Livingston; Abhishek B. Sharma; Leo Ureel

Collaboration


Kevin Livingston's top co-authors and their affiliations.

Top Co-Authors

Lawrence Hunter, University of Colorado Denver
Michael Bada, University of Colorado Denver
Mark Dredze, Johns Hopkins University