
Publication


Featured research published by Peter Mork.


Journal of Biomedical Informatics | 2010

Methodological Review: Cloud computing: A new business paradigm for biomedical information sharing

Arnon Rosenthal; Peter Mork; Maya Hao Li; Jean Stanford; David Koester; Patti Reynolds

We examine how the biomedical informatics (BMI) community, especially consortia that share data and applications, can take advantage of a new resource called cloud computing. Clouds generally offer resources on demand. In most clouds, charges are pay per use, based on large farms of inexpensive, dedicated servers, sometimes supporting parallel computing. Substantial economies of scale potentially yield costs much lower than dedicated laboratory systems or even institutional data centers. Overall, even with conservative assumptions, for applications that are not I/O intensive and do not demand a fully mature environment, the numbers suggested that clouds can sometimes provide major improvements, and should be seriously considered for BMI. Methodologically, it was very advantageous to formulate analyses in terms of component technologies; focusing on these specifics enabled us to bypass the cacophony of alternative definitions (e.g., exactly what does a cloud include?) and to analyze alternatives that employ some of the component technologies (e.g., an institution's data center). Relative analyses were another great simplifier. Rather than listing the absolute strengths and weaknesses of cloud-based systems (e.g., for security or data preservation), we focus on the changes from a particular starting point, e.g., individual lab systems. We often find a rough parity (in principle), but one needs to examine individual acquisitions: is a loosely managed lab moving to a well managed cloud, or a tightly managed hospital data center moving to a poorly safeguarded cloud?
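The paper's relative, pay-per-use reasoning can be illustrated with a toy cost comparison. All numbers below are hypothetical assumptions for illustration, not figures from the paper:

```python
# Hypothetical illustration of the pay-per-use argument: compare owning a
# dedicated lab server with renting cloud capacity for a workload that is
# only active part of the time. All prices are made-up assumptions.

def dedicated_cost(server_price, years, admin_per_year):
    """Total cost of owning a lab server over its lifetime."""
    return server_price + years * admin_per_year

def cloud_cost(hourly_rate, hours_per_year, years):
    """Pay-per-use cost: billed only for hours actually consumed."""
    return hourly_rate * hours_per_year * years

# A server busy only 10% of the time: 876 hours/year out of 8760.
own = dedicated_cost(server_price=6000, years=3, admin_per_year=2000)
rent = cloud_cost(hourly_rate=0.50, hours_per_year=876, years=3)
print(own, rent)  # 12000 1314.0 -- the lightly used server costs far more
```

The break-even point shifts with utilization, which is why the paper stresses analyzing changes from a particular starting point rather than absolute costs.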


international conference on data engineering | 2004

Adapting a generic match algorithm to align ontologies of human anatomy

Peter Mork; Philip A. Bernstein

The difficulty inherent in schema matching has led to the development of several generic match algorithms. We describe how we adapted general approaches to the specific task of aligning two ontologies of human anatomy, the Foundational Model of Anatomy and the GALEN Common Reference Model. Our approach consists of three phases: lexical, structural and hierarchical, which leverage different aspects of the ontologies as they are represented in a generic meta-model. Lexical matching identifies concepts with similar names. Structural matching identifies concepts whose neighbors are similar. Finally, hierarchical matching identifies concepts with similar descendants. We conclude by reporting on the lessons we learned.
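The lexical phase described above can be sketched as matching concepts whose normalized names or synonyms coincide. This is a minimal illustration, not the authors' implementation; the concept identifiers and names are invented:

```python
# Minimal sketch of the lexical matching phase: two concepts match if their
# normalized names/synonyms overlap. Ontology contents below are hypothetical.

def normalize(name):
    return name.lower().replace("_", " ").replace("-", " ").strip()

def lexical_matches(ontology_a, ontology_b):
    """ontology_* maps concept id -> set of names/synonyms.
    Returns (a_id, b_id) pairs whose normalized names intersect."""
    index = {}
    for cid, names in ontology_a.items():
        for n in names:
            index.setdefault(normalize(n), set()).add(cid)
    matches = set()
    for cid, names in ontology_b.items():
        for n in names:
            for a_cid in index.get(normalize(n), ()):
                matches.add((a_cid, cid))
    return matches

fma = {"FMA:7088": {"Heart"}, "FMA:7101": {"Cardiac ventricle", "Ventricle"}}
galen = {"G:Heart": {"heart"}, "G:Vent": {"ventricle"}}
print(lexical_matches(fma, galen))
# {('FMA:7088', 'G:Heart'), ('FMA:7101', 'G:Vent')}
```

The structural and hierarchical phases would then extend this seed set by comparing neighbors and descendants of already-matched concepts.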


international conference on management of data | 2010

OpenII: an open source information integration toolkit

Len Seligman; Peter Mork; Alon Y. Halevy; Kenneth P. Smith; Michael J. Carey; Kuang Chen; Chris Wolf; Jayant Madhavan; Akshay Kannan; Doug Burdick

OpenII (openintegration.org) is a collaborative effort to create a suite of open-source tools for information integration (II). The project is leveraging the latest developments in II research to create a platform on which integration tools can be built and further research conducted. In addition to a scalable, extensible platform, OpenII includes industrial-strength components developed by MITRE, Google, UC-Irvine, and UC-Berkeley that interoperate through a common repository in order to solve II problems. Components of the toolkit have been successfully applied to several large-scale US government II challenges.


Journal on Data Semantics | 2008

The Harmony Integration Workbench

Peter Mork; Len Seligman; Arnon Rosenthal; Joel Korb; Chris Wolf

A key aspect of any data integration endeavor is determining the relationships between the source schemata and the target schema. This schema integration task must be tackled regardless of the integration architecture or mapping formalism. In this paper, we provide a task model for schema integration. We use this breakdown to motivate a workbench for schema integration in which multiple tools share a common knowledge repository. In particular, the workbench facilitates the interoperation of research prototypes for schema matching (which automatically identify likely semantic correspondences) with commercial schema mapping tools (which help produce instance-level transformations). Currently, each of these tools provides its own ad hoc representation of schemata and mappings; combining these tools requires aligning these representations. The workbench provides a common representation so that these tools can more rapidly be combined.


international conference on data engineering | 2006

Integration Workbench: Integrating Schema Integration Tools

Peter Mork; Arnon Rosenthal; Joel Korb; Ken Samuel

A key aspect of any data integration endeavor is establishing a transformation that translates instances of one or more source schemata into instances of a target schema. This schema integration task must be tackled regardless of the integration architecture or mapping formalism. In this paper we provide a task model for schema integration. We use this breakdown to motivate a workbench for schema integration in which multiple tools share a common knowledge repository. In particular, the workbench facilitates the interoperation of research prototypes for schema matching (which automatically identify likely semantic correspondences) with commercial schema mapping tools (which help produce instance-level transformations). Currently, each of these tools provides its own ad hoc representation of schemata and mappings; combining these tools requires aligning these representations. The workbench provides a common representation so that these tools can more rapidly be combined.


Artificial Intelligence in Medicine | 2007

Comparing two approaches for aligning representations of anatomy

Songmao Zhang; Peter Mork; Olivier Bodenreider; Philip A. Bernstein

OBJECTIVE: To analyze the comparison, through their results, of two distinct approaches applied to aligning two representations of anatomy. MATERIALS: Both approaches use a combination of lexical and structural techniques. In addition, the first approach takes advantage of domain knowledge, while the second approach treats alignment as a special case of schema matching. The same versions of FMA and GALEN were aligned by each approach. Two thousand one hundred and ninety-nine concept matches were obtained by both approaches. METHODS AND RESULTS: For matches identified by one approach only (337 and 336, respectively), we analyzed the reasons that caused the other approach to fail. CONCLUSIONS: The first approach could be improved by addressing partial lexical matches and identifying matches based solely on structural similarity. The second approach may be improved by taking into account synonyms in FMA and identifying semantic mismatches. However, only 33% of the possible one-to-one matches among anatomical concepts were identified by the two approaches together. New directions need to be explored in order to handle more complex matches.
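The comparison methodology boils down to partitioning the two match sets into shared and approach-specific matches. A sketch with invented anatomy pairs (not the paper's data):

```python
# Sketch of comparing two alignment results: partition the match sets into
# what both approaches found and what each found alone. Pairs are invented.

def compare_matches(first, second):
    return {
        "both": first & second,
        "first_only": first - second,
        "second_only": second - first,
    }

a = {("Heart", "heart"), ("Lung", "lung"), ("Femur", "femur")}
b = {("Heart", "heart"), ("Lung", "lung"), ("Liver", "liver")}
result = compare_matches(a, b)
print(len(result["both"]), len(result["first_only"]), len(result["second_only"]))
# 2 1 1
```

In the paper the analogous partition yields 2199 shared matches and 337 and 336 approach-specific ones, which are then inspected by hand.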


international conference on conceptual modeling | 2007

Teaching a schema translator to produce O/R views

Peter Mork; Philip A. Bernstein; Sergey Melnik

This paper describes a rule-based algorithm to derive a relational schema from an extended entity-relationship model. Our work is based on an approach by Atzeni and Torlone in which the source EER model is imported into a universal metamodel, a series of transformations are performed to eliminate constructs not appearing in the relational metamodel, and the result is exported. Our algorithm includes novel features that are needed for practical object-to-relational mapping systems: First, it generates forward- and reverse-views that transform instances of the source model into instances of the target and back again. These views automate the object-to-relational (O/R) mapping. Second, it supports a flexible mapping of inheritance hierarchies to flat relations that subsumes and extends prior approaches. Third, it propagates incremental updates of the source model into incremental updates of the target. We prove the algorithm's correctness and demonstrate its practicality in an implementation.
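One common strategy for mapping an inheritance hierarchy to flat relations is "table per hierarchy": collapse all classes into a single relation with a discriminator column. The sketch below illustrates that one strategy only (the paper supports a more flexible family of mappings), and the class and attribute names are invented:

```python
# Hedged sketch of one inheritance-flattening strategy ("table per
# hierarchy"): every class in the hierarchy shares a single flat relation,
# distinguished by a discriminator column. Names below are illustrative.

def flatten_hierarchy(classes):
    """classes: list of (class_name, parent_or_None, [attributes]).
    Returns the column list of a single flat relation covering them all."""
    columns = ["type_discriminator"]
    for name, parent, attrs in classes:
        for attr in attrs:
            if attr not in columns:
                columns.append(attr)
    return columns

hierarchy = [
    ("Person", None, ["id", "name"]),
    ("Student", "Person", ["gpa"]),
    ("Employee", "Person", ["salary"]),
]
print(flatten_hierarchy(hierarchy))
# ['type_discriminator', 'id', 'name', 'gpa', 'salary']
```

Alternatives such as table-per-class or table-per-concrete-class trade nullable columns for joins; a flexible mapper lets the designer choose per hierarchy.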


international conference on data engineering | 2009

Galaxy: Encouraging Data Sharing among Sources with Schema Variants

Peter Mork; Len Seligman; Arnon Rosenthal; Michael Morse; Chris Wolf; Jeffrey Hoyt; Kenneth P. Smith

This demonstration presents Galaxy, a schema manager that facilitates easy and correct data sharing among autonomous but related, evolving data sources. Galaxy reduces heterogeneity by helping database developers identify, reuse, customize, and advertise related schema components. The central idea is that as schemata are customized, Galaxy maintains a derivation graph, and exploits it for data exchange, discovery, and multi-database query over the galaxy of related data sources. Using a set of schemata from the biomedical domain, we demonstrate how Galaxy facilitates schema and data sharing.


international conference on management of data | 2010

Exploring schema similarity at multiple resolutions

Kenneth P. Smith; Craig Bonaceto; Chris Wolf; Beth Yost; Michael Morse; Peter Mork; Doug Burdick

Large, dynamic, and ad-hoc organizations must frequently initiate data integration and sharing efforts with insufficient awareness of how organizational data sources are related. Decision makers need to reason about data model interactions much as they do about data instance interactions in OLAP: at multiple levels of granularity. We demonstrate an integrated environment for exploring schema similarity across multiple resolutions. Users visualize and interact with clusters of related schemas using a tool named Affinity. Within any cluster, users may drill-down to examine the extent and content of schema overlap. Further drill down enables users to explore fine-grained element-level correspondences between two selected schemas.
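A coarse, cluster-level similarity score can be sketched as Jaccard overlap of the schemas' element names, which a user could then drill into element by element. This is an assumed illustration, not Affinity's actual algorithm, and the schemas are invented:

```python
# Assumed sketch (not Affinity's actual algorithm): coarse schema similarity
# as Jaccard overlap of element names. Schema contents are hypothetical.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def schema_similarity(schema_a, schema_b):
    """schema_* is a set of element (column/field) names."""
    return jaccard({e.lower() for e in schema_a}, {e.lower() for e in schema_b})

patients = {"patient_id", "name", "dob", "diagnosis"}
enrollees = {"patient_id", "name", "dob", "plan"}
print(round(schema_similarity(patients, enrollees), 2))  # 0.6
```

Scores like this support the coarse, cluster-level view; the shared elements themselves (here `patient_id`, `name`, `dob`) are what a drill-down would surface.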


information reuse and integration | 2011

Unity: Speeding the creation of community vocabularies for information integration and reuse

Kenneth P. Smith; Peter Mork; Len Seligman; Peter Leveille; Beth Yost; Maya Hao Li; Chris Wolf

Many data sharing communities create data standards ("hub" schemata) to speed information integration by increasing reuse of both data definitions and mappings. Unfortunately, creation of these standards and the mappings to the enterprise's implemented systems is both time consuming and expensive. This paper presents Unity, a novel tool for speeding the development of a community vocabulary, which includes both a standard schema and the necessary mappings. We present Unity's scalable algorithms for creating vocabularies and its novel human-computer interface which gives the integrator a powerful environment for refining the vocabulary. We then describe Unity's extensive reuse of data structures and algorithms from the OpenII information integration framework, which not only sped the construction of Unity but also results in reuse of the artifacts produced by Unity: vocabularies serve as the basis of information exchanges, and also can be reused as thesauri by other tools within the OpenII framework. Unity has been applied to real U.S. government information integration challenges.
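The core idea of a community vocabulary with mappings back to member systems can be sketched as clustering normalized element names across schemas into shared terms. This is a simplified illustration under assumed inputs, not Unity's algorithm:

```python
# Illustrative sketch (not Unity's algorithm): build a vocabulary by
# clustering identically normalized element names across member schemas,
# keeping a mapping from each term back to each source. Names are invented.

from collections import defaultdict

def build_vocabulary(schemas):
    """schemas: dict schema_name -> iterable of element names.
    Returns term -> {schema_name: original element name}."""
    vocab = defaultdict(dict)
    for schema, elements in schemas.items():
        for el in elements:
            term = el.lower().replace("_", " ")
            vocab[term][schema] = el
    return dict(vocab)

vocab = build_vocabulary({
    "hospital": ["Patient_ID", "Admit_Date"],
    "insurer": ["patient_id", "plan"],
})
print(sorted(vocab))  # ['admit date', 'patient id', 'plan']
```

Each term carries its per-schema provenance, so the same structure can serve both as a hub schema and as a thesaurus for other tools, echoing the reuse the paper describes.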

Collaboration


Dive into Peter Mork's collaboration.
