Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Annette ten Teije is active.

Publication


Featured research published by Annette ten Teije.


Knowledge Acquisition, Modeling and Management | 2006

From natural language to formal proof goal: Structured goal formalisation applied to medical guidelines

Ruud Stegers; Annette ten Teije; Frank van Harmelen



Semantic Web | 2011

Comparison of reasoners for large ontologies in the OWL 2 EL profile

Kathrin Dentler; Ronald Cornet; Annette ten Teije; Nicolette F. de Keizer

This paper surveys and compares state-of-the-art Semantic Web reasoners that succeed in classifying large ontologies expressed in the tractable OWL 2 EL profile. Reasoners are characterized along several dimensions: the first dimension comprises underlying reasoning characteristics, such as the employed reasoning method and its correctness, the expressivity and worst-case computational complexity of the supported language, and whether the reasoner supports incremental classification, rules, justifications for inconsistent concepts and ABox reasoning tasks. The second dimension is practical usability: whether the reasoner implements the OWL API and can be used via OWLlink, whether it is available as a Protégé plugin, on which platforms it runs, whether its source is open or closed, and which license it comes with. The last dimension contains performance indicators that can be evaluated empirically, such as classification, concept satisfiability, subsumption checking and consistency checking performance, as well as required heap space and practical correctness, which is determined by comparing the computed concept hierarchies with each other. For the very large ontology SNOMED CT, which is released in both stated and inferred form, we test whether the computed concept hierarchies are correct by comparing them to the inferred form of the official distribution. The reasoners are categorized along the defined characteristics and benchmarked against well-known biomedical ontologies. The main conclusion from this study is that reasoners vary significantly with regard to all included characteristics, so a critical assessment and evaluation of requirements is needed before selecting a reasoner for a real-life application.
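A minimal sketch of this comparison methodology, assuming hypothetical classify_with_* stand-ins for real reasoner invocations (in practice these would go through the OWL API or OWLlink): it times classification per reasoner and checks practical correctness by pairwise comparison of the computed concept hierarchies.

    import time
    from itertools import combinations

    # Hypothetical stand-ins for real EL reasoners; each returns the computed
    # concept hierarchy as a set of (subclass, superclass) pairs.
    def classify_with_elk(ontology_path):
        return {("Pneumonia", "LungDisease"), ("LungDisease", "Disease")}

    def classify_with_snorocket(ontology_path):
        return {("Pneumonia", "LungDisease"), ("LungDisease", "Disease")}

    def benchmark(reasoners, ontology_path):
        """Time classification per reasoner; compare hierarchies pairwise."""
        results = {}
        for name, classify in reasoners.items():
            start = time.perf_counter()
            results[name] = classify(ontology_path)
            print(f"{name}: {time.perf_counter() - start:.3f}s, "
                  f"{len(results[name])} subsumptions")
        # Practical correctness here means mutual agreement of the hierarchies.
        for a, b in combinations(results, 2):
            diff = results[a] ^ results[b]
            print(f"{a} vs {b}: {'agree' if not diff else f'{len(diff)} differences'}")

    benchmark({"elk": classify_with_elk, "snorocket": classify_with_snorocket},
              "snomed_ct_stated.owl")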


Journal of Web Semantics | 2009

Marvin: Distributed reasoning over large-scale Semantic Web data

Eyal Oren; Spyros Kotoulas; George Anadiotis; Ronny Siebes; Annette ten Teije; Frank van Harmelen

Many Semantic Web problems are difficult to solve through common divide-and-conquer strategies, since they are hard to partition. We present Marvin, a parallel and distributed platform for processing large amounts of RDF data on a network of loosely coupled peers. We present our divide-conquer-swap strategy and show that this model converges towards completeness. Within this strategy, we address the problem of making distributed reasoning scalable and load-balanced. We present SpeedDate, a routing strategy that combines data clustering with random exchanges: the random exchanges ensure load balancing, while the data clustering attempts to maximise efficiency. SpeedDate is compared against random and deterministic (DHT-like) approaches on performance and load balancing. We simulate parameters such as system size, data distribution, churn rate, and network topology. The results indicate that SpeedDate is near-optimally balanced, performs in the same order of magnitude as a DHT-like approach, and has an average throughput per node that scales with i for i items in the system. We evaluate the overall Marvin system for performance, scalability, load balancing and efficiency.
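A toy, in-memory sketch of the divide-conquer-swap loop under simplifying assumptions (two peers, subClassOf transitivity as the only inference, and uniform random partner choice instead of the SpeedDate policy):

    import random

    def close(triples):
        """Stand-in for local reasoning: transitive closure of subClassOf."""
        closed = set(triples)
        while True:
            new = {(s, p, o2)
                   for (s, p, o) in closed for (s2, p2, o2) in closed
                   if p == p2 == "subClassOf" and o == s2}
            if new <= closed:
                return closed
            closed |= new

    def divide_conquer_swap(peers, rounds=20):
        """peers: list of sets of (s, p, o) triples, one set per node."""
        for _ in range(rounds):
            peers = [close(part) for part in peers]      # conquer locally
            i, j = random.sample(range(len(peers)), 2)   # pick swap partners
            # SpeedDate would bias partner choice towards peers holding
            # related data; uniform random choice is used in this sketch.
            moved = set(random.sample(sorted(peers[i]), len(peers[i]) // 2))
            peers[i], peers[j] = peers[i] - moved, peers[j] | moved
        return set().union(*peers)

    parts = [{("A", "subClassOf", "B"), ("B", "subClassOf", "C")},
             {("C", "subClassOf", "D")}]
    print(divide_conquer_swap(parts))  # converges towards the full closure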


Artificial Intelligence in Medicine | 2007

Extraction and use of linguistic patterns for modelling medical guidelines

Radu Serban; Annette ten Teije; Frank van Harmelen; Mar Marcos; Cristina Polo-Conde

OBJECTIVE: The quality of knowledge updates in evidence-based medical guidelines can be improved, and the updating effort reduced, if the knowledge underlying the guideline text is explicitly modelled using so-called linguistic guideline patterns: mappings between a text fragment and a formal representation of its corresponding medical knowledge.

METHODS AND MATERIAL: Ontology-driven extraction of linguistic patterns is a method to automatically reconstruct the control knowledge captured in guidelines, which facilitates more effective modelling and authoring of medical guidelines. We illustrate by examples the use of this method for generating and instantiating linguistic patterns in the text of a guideline for the treatment of breast cancer, and evaluate the usefulness of these patterns in the modelling of this guideline.

RESULTS: We developed a methodology for extracting and using linguistic patterns in guideline formalization, to aid human modellers and reduce the modelling effort. Using automatic transformation rules for simple linguistic patterns, a good recall (between 72% and 80%) is obtained in selecting the procedural knowledge relevant for the guideline model, even though precision is lower: the automatically generated guideline model covers only between 20% and 35% of the human-generated guideline model. These results indicate the suitability of our method as a pre-processing step in medical guideline formalization.

CONCLUSIONS: Modelling and authoring of medical texts can benefit from our proposed method. As prerequisites for automatically generating a skeleton of the guideline model from the procedural part of the guideline text, the medical terminology used by the guideline must overlap well with existing medical thesauri, and its procedural knowledge must obey linguistic regularities that can be mapped onto the control constructs of the target guideline modelling language.
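As an illustration of the pattern idea (not the paper's actual pattern set), a hedged sketch in which regular expressions map guideline sentences to control constructs and slot bindings:

    import re

    # Illustrative linguistic patterns mapping guideline text fragments to
    # control constructs of a target modelling language (Asbru-like names).
    PATTERNS = [
        (re.compile(r"\bif (?P<cond>.+?),? then (?P<action>.+)", re.I), "if-then"),
        (re.compile(r"\bfirst (?P<step1>.+?),? then (?P<step2>.+)", re.I), "sequential-plan"),
        (re.compile(r"\brepeat (?P<action>.+?) every (?P<interval>.+)", re.I), "cyclical-plan"),
    ]

    def extract(sentence):
        """Return the first matching construct and its slot bindings."""
        for pattern, construct in PATTERNS:
            match = pattern.search(sentence)
            if match:
                return construct, match.groupdict()
        return None, {}

    print(extract("If the tumour is larger than 5 cm, then start chemotherapy."))
    print(extract("First administer doxorubicin, then evaluate cardiac function."))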


Artificial Intelligence in Medicine | 2013

Automated generation of patient-tailored electronic care pathways by translating computer-interpretable guidelines into hierarchical task networks

Arturo González-Ferrer; Annette ten Teije; Juan Fdez-Olivares; Krystyna Milian

OBJECTIVE: This paper describes a methodology that enables computer-aided support for the planning, visualization and execution of personalized patient treatments in a specific healthcare process, taking into account complex temporal constraints and the allocation of institutional resources. To this end, we present a translation from a time-annotated computer-interpretable guideline (CIG) model of a clinical protocol into a temporal hierarchical task network (HTN) planning domain.

MATERIALS AND METHODS: The proposed method uses a knowledge-driven reasoning process to translate knowledge previously described in a CIG into a corresponding HTN planning and scheduling domain, taking advantage of the known ability of HTNs to (i) dynamically cope with temporal and resource constraints and (ii) automatically generate customized plans. The method, which focuses on the representation of temporal knowledge and is based on the identification of workflow and temporal patterns in a CIG, makes it possible to automatically generate time-annotated, resource-aware care pathways tailored to the needs of any possible patient profile.

RESULTS: The proposed translation is illustrated through a case study based on a 70-page clinical protocol for managing Hodgkin's disease, developed by the Spanish Society of Pediatric Oncology. We show that an HTN planning domain can be generated from the corresponding specification of the protocol in the Asbru language, providing a running example of this translation. Furthermore, we check the correctness of the translation and the handling of ten different types of temporal patterns represented in the protocol. By interpreting the automatically generated domain with a state-of-the-art HTN planner, a time-annotated care pathway is automatically obtained, customized to the patient's and the institution's needs. The generated care pathway can then be used by clinicians to plan and manage the patient's long-term care.

CONCLUSION: The described methodology makes it possible to automatically generate patient-tailored care pathways, leveraging an incremental knowledge-driven engineering process that starts from the expert knowledge of medical professionals. The approach draws on the strengths of both CIG languages and HTN planning and scheduling techniques: for the former, knowledge acquisition and representation of the original clinical protocol; for the latter, knowledge reasoning capabilities and the ability to deal with complex temporal and resource constraints. Moreover, the approach provides immediate access to technologies such as business process management (BPM) tools, which are increasingly being used to support healthcare processes.
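A loose sketch of such a translation under invented names (an Asbru-like plan dictionary in, HTN methods out); the paper's actual pattern catalogue and the Asbru schema are far richer than this:

    from dataclasses import dataclass, field

    @dataclass
    class HTNMethod:
        task: str
        subtasks: list
        ordering: list = field(default_factory=list)     # (before, after) pairs
        constraints: list = field(default_factory=list)  # temporal constraints

    def cig_to_htn(plan, methods=None):
        """Map a CIG plan and its subplans into HTN methods, one per plan."""
        methods = [] if methods is None else methods
        subs = plan.get("subplans", [])
        names = [s["name"] for s in subs]
        ordering = list(zip(names, names[1:])) if plan.get("order") == "sequential" else []
        constraints = [("duration", s["name"], s["duration"])
                       for s in subs if "duration" in s]
        constraints += [("delay-after-previous", s["name"], s["delay"])
                        for s in subs if "delay" in s]
        methods.append(HTNMethod(plan["name"], names, ordering, constraints))
        for s in subs:
            cig_to_htn(s, methods)
        return methods

    # Toy, Asbru-like fragment of a treatment protocol (illustrative names only).
    guideline = {"name": "treatment-cycle", "order": "sequential", "subplans": [
        {"name": "chemotherapy", "duration": ("14d", "14d")},
        {"name": "evaluation", "delay": "7d"},
    ]}

    for method in cig_to_htn(guideline):
        print(method)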


Adaptive Agents and Multi-Agent Systems | 2002

An analysis of multi-agent diagnosis

Nico Roos; Annette ten Teije; A. Bos; Cees Witteveen

This paper analyzes the use of a Multi-Agent System for Model-Based Diagnosis. In a large dynamic system, it is often infeasible or even impossible to maintain a model of the whole system. Instead, several incomplete models of the system have to be used to establish a diagnosis and to detect possible faults. These models may also be physically distributed. A Multi-Agent System of diagnostic agents may offer a solution for establishing a global diagnosis. If we use a separate agent for each incomplete model of the system, establishing a global diagnosis becomes a problem of cooperation and negotiation between the diagnostic agents. This raises the question of whether a set of diagnostic agents, each having an incomplete model of the system, can (efficiently) determine the same global diagnosis as an ideal single diagnostic agent having the combined knowledge of these agents.
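To make the question concrete, a small sketch of consistency-based diagnosis, assuming each agent contributes the conflict sets derivable from its own partial model; a global diagnosis is then a minimal hitting set of the pooled conflicts, which is what a single agent holding the combined model would compute:

    from itertools import chain, combinations

    def minimal_hitting_sets(conflicts):
        """Brute-force minimal hitting sets (i.e. diagnoses) of conflict sets."""
        components = sorted(set(chain.from_iterable(conflicts)))
        diagnoses = []
        for size in range(1, len(components) + 1):
            for candidate in combinations(components, size):
                if all(set(candidate) & conflict for conflict in conflicts) \
                        and not any(set(d) <= set(candidate) for d in diagnoses):
                    diagnoses.append(candidate)
        return diagnoses

    # Conflict sets derived by two agents from their own partial models
    # (component names are illustrative).
    agent_1 = [{"valve", "pump"}]        # explains an observed flow fault
    agent_2 = [{"pump", "controller"}]   # explains an observed pressure fault

    print(minimal_hitting_sets(agent_1 + agent_2))
    # -> [('pump',), ('controller', 'valve')]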


Artificial Intelligence in Medicine | 1999

A study of PROforma, a development methodology for clinical procedures

Arjen Vollebregt; Annette ten Teije; Frank van Harmelen; Johan van der Lei; Mees Mosseveld

Knowledge engineering has shown that, besides the general methodologies from software engineering, it is useful to develop special-purpose methodologies for knowledge-based systems (KBSs). PROforma is a newly developed methodology for a specific type of knowledge-based system: it is intended for decision support systems, and in particular for clinical procedures in the medical domain. This paper reports on an evaluation study of PROforma, and on the trade-off involved between general-purpose and special-purpose development methods in knowledge engineering and medical AI. Our method for evaluating PROforma is based on re-engineering a realistic system in two methodologies: the new, special-purpose KBS methodology PROforma and the widely accepted, more general KBS methodology CommonKADS. The four most important results from our study are as follows. Firstly, PROforma has some strong points, which are strongly related to the requirements of medical reasoning. Secondly, PROforma has some weak points, but none of them are in any way related to its special-purpose nature. Thirdly, a more general method like CommonKADS works better in the analysis phase than the more special-purpose method PROforma. Finally, to support a complementary use of the methodologies, we propose a mapping between their respective languages.


Artificial Intelligence in Medicine in Europe | 2007

Maintaining Formal Models of Living Guidelines Efficiently

Andreas Seyfang; Begoña Martínez-Salvador; Radu Serban; Jolanda Wittenberg; Silvia Miksch; Mar Marcos; Annette ten Teije; Kitty Rosenbrand

Translating clinical guidelines into formal models is beneficial in many ways, but expensive. The progress of medical knowledge requires clinical guidelines to be updated at relatively short intervals, which has led to the term living guideline. This causes potentially expensive, frequent updates of the corresponding formal models. When performing these updates, there are two goals: the modelling effort must be minimised, and the links between the original document and the formal model must be maintained. In this paper, we describe our solution, using tools and techniques developed during the Protocure II project.


Artificial Intelligence in Medicine in Europe | 2001

Using Critiquing for Improving Medical Protocols: Harder than It Seems

Mar Marcos; Geert Berger; Frank van Harmelen; Annette ten Teije; Hugo Roomans; Silvia Miksch

Medical protocols are widely recognised as providing clinicians with high-quality and up-to-date recommendations. A critical condition for this is, of course, that the protocols themselves are of high quality. In this paper we investigate the use of critiquing for improving the quality of medical protocols. We constructed a detailed formal model of the jaundice protocol of the American Academy of Pediatrics in the Asbru representation language. We recorded the actions performed by a pediatrician while solving a set of test cases. We then compared these expert actions with the steps recommended by the formalised protocol, and analysed the differences that we observed. Even our relatively small test set of 7 cases revealed many mismatches between the actions performed by the expert and the protocol recommendations, which suggest improvements to the protocol. A major problem in our case study was to establish a mapping between the actions performed by the expert and the steps suggested by the protocol. We discuss the reasons for this difficulty, and assess its consequences for the automation of the critiquing process.
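A hedged sketch of the comparison step, with an invented action-to-step mapping table; as the paper reports, constructing such a mapping is the hard part:

    # Invented mapping from recorded expert actions to protocol step names;
    # establishing this mapping was the main difficulty reported in the paper.
    ACTION_TO_STEP = {
        "order serum bilirubin": "measure-TSB",
        "start phototherapy": "phototherapy",
        "repeat bilirubin in 12h": "measure-TSB",
    }

    def critique(expert_actions, recommended_steps):
        """Report protocol steps the expert omitted and actions outside it."""
        performed = {ACTION_TO_STEP.get(a, "unmapped: " + a) for a in expert_actions}
        recommended = set(recommended_steps)
        return {"omitted": recommended - performed,
                "extra": performed - recommended}

    print(critique(["order serum bilirubin", "start exchange transfusion"],
                   ["measure-TSB", "phototherapy"]))
    # -> {'omitted': {'phototherapy'}, 'extra': {'unmapped: start exchange transfusion'}}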


Artificial Intelligence in Medicine in Europe | 2013

Rule-Based Formalization of Eligibility Criteria for Clinical Trials

Zhisheng Huang; Annette ten Teije; Frank van Harmelen

In this extended abstract, we propose a rule-based formalization of eligibility criteria for clinical trials, implemented in the logic programming language Prolog. Compared with existing formalizations such as pattern-based and script-based languages, the rule-based formalization has the advantages of being declarative, expressive, reusable and easy to maintain. Our formalization is based on a general framework for eligibility criteria containing three types of knowledge: (1) trial-specific knowledge, (2) domain-specific knowledge and (3) common knowledge. This framework enables the reuse of several parts of the formalization of eligibility criteria. We have implemented the proposed rule-based formalization in SemanticCT, a semantically enabled system for clinical trials, showing the feasibility of using our rule-based formalization of eligibility criteria to support patient recruitment in clinical trial systems.
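The paper's formalization is in Prolog; for illustration only, a loose Python analogue of the three-layer structure (all predicates and criteria here are invented):

    # Loose Python analogue of the paper's Prolog rules; all names invented.

    # (3) common knowledge
    def age(patient):
        return patient["age"]

    # (2) domain-specific knowledge
    def postmenopausal(patient):
        return patient.get("menopause") == "post" or age(patient) >= 55

    # (1) trial-specific eligibility criteria, one rule per criterion
    CRITERIA = [
        ("adult", lambda p: age(p) >= 18),
        ("postmenopausal", postmenopausal),
        ("no prior chemotherapy", lambda p: not p.get("prior_chemo", False)),
    ]

    def eligible(patient):
        """Return (eligible?, list of failed criteria)."""
        failed = [name for name, rule in CRITERIA if not rule(patient)]
        return not failed, failed

    print(eligible({"age": 60, "menopause": "post", "prior_chemo": False}))
    # -> (True, [])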

Collaboration


Dive into Annette ten Teije's collaborations.

Top Co-Authors

Silvia Miksch

Vienna University of Technology


David Riaño

Rovira i Virgili University


Qing Hu

VU University Amsterdam


Radu Serban

VU University Amsterdam
