Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Aditya Kalyanpur is active.

Publication


Featured research published by Aditya Kalyanpur.


Journal of Web Semantics | 2007

Pellet: A practical OWL-DL reasoner

Evren Sirin; Bijan Parsia; Bernardo Cuenca Grau; Aditya Kalyanpur; Yarden Katz

In this paper, we present a brief overview of Pellet: a complete OWL-DL reasoner with acceptable to very good performance, extensive middleware, and a number of unique features. Pellet is the first sound and complete OWL-DL reasoner with extensive support for reasoning with individuals (including nominal support and conjunctive query), user-defined datatypes, and debugging support for ontologies. It implements several extensions to OWL-DL including a combination formalism for OWL-DL ontologies, a non-monotonic operator, and preliminary support for OWL/Rule hybrid reasoning. Pellet is written in Java and is open source.
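
As a hedged illustration of how a reasoner like Pellet is typically driven from Java, the sketch below loads an ontology through the OWL API, checks consistency, and retrieves the inferred instances of a class. It is not code from the paper: the file name and IRI are invented, and the PelletReasonerFactory package shown is an assumption that differs across Pellet and Openllet releases.

// Minimal usage sketch (assumed API names; adjust to your Pellet/Openllet version).
import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;   // assumed package name

public class PelletDemo {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("example.owl"));
        OWLDataFactory factory = manager.getOWLDataFactory();

        // Wrap the ontology in a Pellet-backed OWLReasoner.
        OWLReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(ontology);

        // Global consistency check.
        System.out.println("Consistent: " + reasoner.isConsistent());

        // All named individuals inferred to be instances of ex:Person (hypothetical IRI).
        OWLClass person = factory.getOWLClass(IRI.create("http://example.org/onto#Person"));
        reasoner.getInstances(person, false).getFlattened()
                .forEach(ind -> System.out.println(ind.getIRI()));
    }
}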


Journal of Web Semantics | 2006

Swoop: A Web Ontology Editing Browser

Aditya Kalyanpur; Bijan Parsia; Evren Sirin; Bernardo Cuenca Grau; James A. Hendler

In this paper, we describe Swoop, a hypermedia-inspired ontology browser and editor based on OWL, the recently standardized Web-oriented ontology language. After discussing the design rationale and architecture of Swoop, we focus mainly on its features, using illustrative examples to highlight its use. We demonstrate that with its Web metaphor, adherence to OWL recommendations, and key unique features, such as collaborative annotation using Annotea, Swoop acts as a useful and efficient Web ontology development tool. We conclude with a list of future plans for Swoop that should further increase its overall appeal and accessibility.


International Semantic Web Conference | 2007

Finding all justifications of OWL DL entailments

Aditya Kalyanpur; Bijan Parsia; Matthew Horridge; Evren Sirin

Finding the justifications of an entailment (that is, all the minimal sets of axioms sufficient to produce it) has emerged as a key inference service for the Web Ontology Language (OWL). Justifications are essential for debugging unsatisfiable classes and contradictions. The availability of justifications as explanations of entailments improves the understandability of large and complex ontologies. In this paper, we present several algorithms for computing all the justifications of an entailment in an OWL-DL ontology and show, through an empirical evaluation, that even a reasoner-independent approach works well on real ontologies.
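
The following self-contained Java sketch illustrates the reasoner-independent ("black box") idea in its simplest form: a single justification is obtained by shrinking the ontology, dropping every axiom that is not needed for the entailment to hold. Computing all justifications, as in the paper, repeats a step like this inside a hitting-set tree; the string axioms and the entailment test below are toy stand-ins, not the paper's algorithms.

import java.util.*;
import java.util.function.Predicate;

public class JustificationSketch {

    // Returns a minimal subset of 'axioms' that still satisfies the 'entails' oracle.
    static <A> Set<A> singleJustification(Set<A> axioms, Predicate<Set<A>> entails) {
        Set<A> just = new LinkedHashSet<>(axioms);
        for (A axiom : new ArrayList<>(just)) {
            just.remove(axiom);
            if (!entails.test(just)) {
                just.add(axiom);        // the axiom is necessary, put it back
            }
        }
        return just;                    // minimal: removing any member breaks the entailment
    }

    public static void main(String[] args) {
        // Toy "ontology": the entailment "A subClassOf C" holds whenever both chained axioms are present.
        Set<String> onto = new LinkedHashSet<>(List.of(
                "A subClassOf B", "B subClassOf C", "C subClassOf D", "E subClassOf F"));
        Predicate<Set<String>> entailsASubC =
                s -> s.contains("A subClassOf B") && s.contains("B subClassOf C");
        System.out.println(singleJustification(onto, entailsASubC));
        // -> [A subClassOf B, B subClassOf C]
    }
}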


International World Wide Web Conference | 2005

Debugging OWL ontologies

Bijan Parsia; Evren Sirin; Aditya Kalyanpur

As an increasingly large number of OWL ontologies become available on the Semantic Web and the descriptions in the ontologies become more complicated, finding the cause of errors becomes an extremely hard task even for experts. Existing ontology development environments provide some limited support, in conjunction with a reasoner, for detecting and diagnosing errors in OWL ontologies. Typically, these are restricted to the mere detection of, for example, unsatisfiable concepts. We have integrated a number of simple debugging cues generated from our description logic reasoner, Pellet, in our hypertextual ontology development environment, Swoop. These cues, in conjunction with extensive undo/redo and Annotea-based collaboration support in Swoop, significantly improve the OWL debugging experience, and point the way to more general improvements in the presentation of an ontology to new users.


Journal of Web Semantics | 2005

Debugging unsatisfiable classes in OWL ontologies

Aditya Kalyanpur; Bijan Parsia; Evren Sirin; James A. Hendler

As an increasingly large number of OWL ontologies become available on the Semantic Web and the descriptions in the ontologies become more complicated, finding the cause of errors becomes an extremely hard task even for experts. Existing ontology development environments provide some limited support, in conjunction with a reasoner, for reporting errors in OWL ontologies. Typically, these are restricted to the mere detection of, for example, unsatisfiable concepts. However, the diagnosis and resolution of the bug is not supported at all. For example, no explanation is given as to why the error occurs (e.g., by pinpointing the root clash, or axioms in the ontology responsible for the clash) or how dependencies between classes cause the error to propagate (i.e., by distinguishing root from derived unsatisfiable classes). In the former case, information from the internals of a description logic tableaux reasoner can be extracted and presented to the user (glass box approach); while in the latter case, the reasoner can be used as an oracle for a certain set of questions and the asserted structure of the ontology can be used to help isolate the source of the problems (black box approach). Based on the two approaches, we have integrated a number of debugging cues generated from our reasoner, Pellet, in our hypertextual ontology development environment, Swoop. A usability evaluation that we conducted demonstrates that these debugging cues significantly improve the OWL debugging experience and point the way to more general improvements in the presentation of an ontology to users.
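
One of the black-box cues mentioned above, separating root from derived unsatisfiable classes, can be pictured with the small Java sketch below: a class is treated as derived when one of its asserted parents is itself unsatisfiable, so repairing the parent is likely to repair it too. The class names and the dependency map are invented for illustration; the paper uses Pellet and the asserted structure of real ontologies.

import java.util.*;

public class RootDerivedSketch {
    public static void main(String[] args) {
        // Unsatisfiable classes as reported by a reasoner (toy data).
        Set<String> unsat = Set.of("Koala", "Quokka", "MaleStudentWith3Daughters");

        // Asserted superclasses (direct dependencies) of each class (toy data).
        Map<String, Set<String>> assertedParents = Map.of(
                "Koala", Set.of("Marsupial"),
                "Quokka", Set.of("Koala"),                     // depends on an unsatisfiable class
                "MaleStudentWith3Daughters", Set.of("Student"));

        for (String cls : unsat) {
            boolean derived = assertedParents.getOrDefault(cls, Set.of())
                                             .stream().anyMatch(unsat::contains);
            System.out.println(cls + " : " + (derived ? "derived" : "root"));
        }
        // Koala and MaleStudentWith3Daughters come out as root, Quokka as derived.
    }
}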


European Semantic Web Conference | 2006

Repairing unsatisfiable concepts in OWL ontologies

Aditya Kalyanpur; Bijan Parsia; Evren Sirin; Bernardo Cuenca Grau

In this paper, we investigate the problem of repairing unsatisfiable concepts in an OWL ontology in detail, keeping the user perspective in mind as much as possible. We focus on various aspects of the repair process: improving the explanation support to help the user better understand the cause of the error, exploring various strategies to rank erroneous axioms (with motivating use cases for each strategy), automatically generating repair plans that can be easily customized, and suggesting appropriate axiom edits to the user where possible. Based on the techniques described, we present a preliminary version of an interactive ontology repair tool and demonstrate its applicability in practice.
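
As a rough illustration of one ranking strategy the abstract alludes to, the sketch below scores each axiom by how many justifications it appears in and builds a repair plan greedily, removing the highest-scoring axiom until every justification is hit. The axiom identifiers and justification sets are invented; the paper combines several such ranking strategies and lets the user customize the resulting plan.

import java.util.*;

public class RepairSketch {
    public static void main(String[] args) {
        // Toy justifications: each is a minimal axiom set responsible for some unsatisfiability.
        List<Set<String>> justifications = new ArrayList<>(List.of(
                Set.of("ax1", "ax2"), Set.of("ax2", "ax3"), Set.of("ax3", "ax4")));

        List<String> repairPlan = new ArrayList<>();
        while (!justifications.isEmpty()) {
            // Rank axioms by frequency across the remaining justifications.
            Map<String, Long> freq = new HashMap<>();
            justifications.forEach(j -> j.forEach(a -> freq.merge(a, 1L, Long::sum)));
            String worst = Collections.max(freq.entrySet(), Map.Entry.comparingByValue()).getKey();

            repairPlan.add(worst);
            justifications.removeIf(j -> j.contains(worst));   // removing 'worst' resolves these
        }
        System.out.println("Remove: " + repairPlan);           // a small hitting set, e.g. [ax2, ax3]
    }
}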


International Journal on Semantic Web and Information Systems | 2005

A Tool for Working with Web Ontologies

Aditya Kalyanpur; Bijan Parsia; James A. Hendler

The task of building an open and scalable ontology browsing and editing tool based on OWL, the first standardized Web-oriented ontology language, requires the rethinking of critical user interface and ontological engineering issues. In this article, we describe Swoop, a browser and editor specifically tailored to OWL ontologies. Taking a "Web view" of things has proven quite instructive, and we discuss some insights into Web ontologies that we gained through our experience with Swoop, including issues related to the display, navigation, editing, and collaborative annotation of OWL ontological data.


IBM Journal of Research and Development | 2012

Automatic knowledge extraction from documents

James Fan; Aditya Kalyanpur; David Gondek; David A. Ferrucci

Access to a large amount of knowledge is critical for success at answering open-domain questions for DeepQA systems such as IBM Watson™. Formal representation of knowledge has the advantage of being easy to reason with, but acquisition of structured knowledge in open domains from unstructured data is often difficult and expensive. Our central hypothesis is that shallow syntactic knowledge and its implied semantics can be easily acquired and can be used in many areas of a question-answering system. We take a two-stage approach to extract the syntactic knowledge and implied semantics. First, shallow knowledge from large collections of documents is automatically extracted. Second, additional semantics are inferred from aggregate statistics of the automatically extracted shallow knowledge. In this paper, we describe in detail what kind of shallow knowledge is extracted, how it is automatically extracted from a large corpus, and how additional semantics are inferred from aggregate statistics. We also briefly discuss the various ways extracted knowledge is used throughout the IBM DeepQA system.
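
A hedged sketch of the second stage described above: given shallow subject-verb-object tuples already extracted from parses, aggregate counts can surface implied semantics, here the type most often filling a verb's object slot (a crude selectional preference). The tuples are toy data, and this is not the DeepQA extraction code.

import java.util.*;
import java.util.stream.Collectors;

public class AggregateSemanticsSketch {
    record Tuple(String subject, String verb, String objectType) {}

    public static void main(String[] args) {
        List<Tuple> tuples = List.of(
                new Tuple("author", "write", "Book"),
                new Tuple("novelist", "write", "Book"),
                new Tuple("journalist", "write", "Article"),
                new Tuple("poet", "write", "Book"));

        // Count object types per verb, then report the dominant type with its relative frequency.
        Map<String, Map<String, Long>> counts = tuples.stream().collect(
                Collectors.groupingBy(Tuple::verb,
                        Collectors.groupingBy(Tuple::objectType, Collectors.counting())));

        counts.forEach((verb, byType) -> {
            long total = byType.values().stream().mapToLong(Long::longValue).sum();
            var best = Collections.max(byType.entrySet(), Map.Entry.comparingByValue());
            System.out.printf("%s -> %s (%.2f)%n", verb, best.getKey(), best.getValue() / (double) total);
            // write -> Book (0.75)
        });
    }
}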


IBM Journal of Research and Development | 2012

A framework for merging and ranking of answers in DeepQA

David Gondek; Adam Lally; Aditya Kalyanpur; James W. Murdock; P. A. Duboue; Lixin Zhang; Yue Pan; Z. M. Qiu; Chris Welty

The final stage in the IBM DeepQA pipeline involves ranking all candidate answers according to their evidence scores and judging the likelihood that each candidate answer is correct. In DeepQA, this is done using a machine learning framework that is phase-based, providing capabilities for manipulating the data and applying machine learning in successive applications. We show how this design can be used to implement solutions to particular challenges that arise in applying machine learning for evidence-based hypothesis evaluation. Our approach facilitates an agile development environment for DeepQA; evidence scoring strategies can be easily introduced, revised, and reconfigured without the need for error-prone manual effort to determine how to combine the various evidence scores. We describe the framework, explain the challenges, and evaluate the gain over a baseline machine learning approach.
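
The sketch below illustrates, in deliberately simplified form, the two steps the abstract names: candidate answers naming the same entity are merged by combining their evidence features, and the merged candidates are then ranked with a logistic-regression-style score. The surface forms, feature values, weights, and max-combination rule are all assumptions, not the DeepQA implementation.

import java.util.*;

public class MergeAndRankSketch {
    public static void main(String[] args) {
        // Candidate answer -> evidence feature vector (toy data).
        Map<String, double[]> candidates = new LinkedHashMap<>();
        candidates.put("Isaac Newton", new double[]{0.9, 0.2});
        candidates.put("Sir Isaac Newton", new double[]{0.4, 0.8});   // same entity, different surface form
        candidates.put("Leibniz", new double[]{0.3, 0.1});

        // Step 1: merge surface forms of the same entity, taking the max of each feature.
        Map<String, double[]> merged = new LinkedHashMap<>();
        candidates.forEach((ans, feats) -> {
            String canonical = ans.replace("Sir ", "");               // toy canonicalization
            merged.merge(canonical, feats.clone(), (a, b) -> {
                double[] m = a.clone();
                for (int i = 0; i < m.length; i++) m[i] = Math.max(m[i], b[i]);
                return m;
            });
        });

        // Step 2: score each merged candidate with a logistic model and rank by score.
        double[] weights = {1.5, 2.0};
        double bias = -1.0;
        merged.entrySet().stream()
              .sorted((a, b) -> Double.compare(score(b.getValue(), weights, bias),
                                               score(a.getValue(), weights, bias)))
              .forEach(e -> System.out.printf("%s : %.3f%n",
                                              e.getKey(), score(e.getValue(), weights, bias)));
    }

    // Logistic score interpreted as an estimated probability that the answer is correct.
    static double score(double[] f, double[] w, double bias) {
        double z = bias;
        for (int i = 0; i < f.length; i++) z += w[i] * f[i];
        return 1.0 / (1.0 + Math.exp(-z));
    }
}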


International Semantic Web Conference | 2007

Matching patient records to clinical trials using ontologies

Chintan Patel; James J. Cimino; Julian Dolby; Achille Fokoue; Aditya Kalyanpur; Aaron Kershenbaum; Li Ma; Edith Schonberg; Kavitha Srinivas

This paper describes a large case study that explores the applicability of ontology reasoning to problems in the medical domain. We investigate whether it is possible to use such reasoning to automate common clinical tasks that are currently labor intensive and error prone, and focus our case study on improving cohort selection for clinical trials. An obstacle to automating such clinical tasks is the need to bridge the semantic gulf between raw patient data, such as laboratory tests or specific medications, and the way a clinician interprets this data. Our key insight is that matching patients to clinical trials can be formulated as a problem of semantic retrieval. We describe the technical challenges to building a realistic case study, which include problems related to scalability, the integration of large ontologies, and dealing with noisy, inconsistent data. Our solution is based on the SNOMED CT® ontology, and scales to one year of patient records (approx. 240,000 patients).
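
The semantic-retrieval formulation can be pictured with the hypothetical Java sketch below: an eligibility criterion is a concept, a patient is the set of concepts coded in their record, and the patient matches when each criterion subsumes some recorded concept. The tiny is-a hierarchy and patient records are invented stand-ins for SNOMED CT and real data.

import java.util.*;

public class CohortMatchSketch {
    // Child -> parent is-a edges (toy stand-in for SNOMED CT).
    static final Map<String, String> PARENT = Map.of(
            "Type2Diabetes", "DiabetesMellitus",
            "Type1Diabetes", "DiabetesMellitus",
            "DiabetesMellitus", "EndocrineDisorder",
            "Metformin", "Antidiabetic");

    // True when 'concept' equals 'ancestor' or lies below it in the hierarchy.
    static boolean subsumedBy(String concept, String ancestor) {
        for (String c = concept; c != null; c = PARENT.get(c)) {
            if (c.equals(ancestor)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Patient records as sets of coded concepts (invented data).
        Map<String, Set<String>> patients = Map.of(
                "patient-001", Set.of("Type2Diabetes", "Metformin"),
                "patient-002", Set.of("Type1Diabetes"),
                "patient-003", Set.of("Hypertension"));

        // Trial criteria: a diabetes mellitus diagnosis and an antidiabetic medication.
        List<String> criteria = List.of("DiabetesMellitus", "Antidiabetic");

        patients.forEach((id, codes) -> {
            boolean eligible = criteria.stream().allMatch(
                    crit -> codes.stream().anyMatch(code -> subsumedBy(code, crit)));
            System.out.println(id + " eligible: " + eligible);        // only patient-001 matches
        });
    }
}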

Collaboration


Dive into Aditya Kalyanpur's collaborations.
