Publication


Featured research published by Richard Marciano.


International Conference on Management of Data | 1999

XML-based information mediation with MIX

Chaitanya K. Baru; Amarnath Gupta; Bertram Ludäscher; Richard Marciano; Yannis Papakonstantinou; Pavel Velikhov; Vincent Chu

The MIX mediator system, MIXm, is developed as part of the MIX Project at the San Diego Supercomputer Center and the University of California, San Diego. MIXm uses XML as the common model for data exchange. Mediator views are expressed in XMAS (XML Matching And Structuring Language), a declarative XML query language. To facilitate user-friendly query formulation and for optimization purposes, MIXm employs XML DTDs as a structural description (in effect, a “schema”) of the exchanged data. The novel features of the system include:

- Data exchange and integration rely solely on XML; that is, instance and schema information is represented by XML documents and XML DTDs, respectively. XML queries are denoted in XMAS, which builds upon ideas of languages like XML-QL, MSL, Yat, and UnQL. Additionally, XMAS features powerful grouping and order constructs for generating new integrated XML “objects” from existing ones.
- The graphical user interface BBQ (Blended Browsing and Querying) is driven by the mediator view DTD and integrates browsing and querying of XML data. Complex queries can be constructed in an intuitive way, resembling QBE. Because of the nested nature of XML data and DTDs, BBQ provides graphical means to specify the nesting and grouping of query results.
- Query evaluation can be demand-driven, i.e., triggered by the user's navigation into the mediated view.
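
XMAS syntax itself is not reproduced in the abstract; purely as an illustration of the underlying mediation idea, an integrated XML view assembled from heterogeneous XML sources, here is a minimal Python sketch. The sources, element names, and join rule are invented for the example and are not part of MIX.

```python
# Minimal sketch of XML-based mediation: two hypothetical XML sources are
# combined into a single integrated XML view, grouping related elements
# under one entry -- loosely analogous to what a mediator view computes.
# All element names and data here are invented for illustration.
import xml.etree.ElementTree as ET

CATALOG = """<catalog>
  <book><title>Data Grids</title><year>1999</year></book>
  <book><title>XML Queries</title><year>1998</year></book>
</catalog>"""

REVIEWS = """<reviews>
  <review><title>Data Grids</title><score>5</score></review>
</reviews>"""

def mediate(catalog_xml, reviews_xml):
    """Build an integrated view: each book grouped with its matching reviews."""
    catalog, reviews = ET.fromstring(catalog_xml), ET.fromstring(reviews_xml)
    view = ET.Element("integrated-view")
    for book in catalog.iter("book"):
        entry = ET.SubElement(view, "entry")
        entry.append(book)
        for review in reviews.iter("review"):       # join on the title element
            if review.findtext("title") == book.findtext("title"):
                entry.append(review)
    return view

print(ET.tostring(mediate(CATALOG, REVIEWS), encoding="unicode"))
```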


International Journal of Geographical Information Systems | 1996

Local interpolation using a distributed parallel supercomputer

Marc P. Armstrong; Richard Marciano

Large spatial interpolation problems present significant computational challenges even for the fastest workstations. In this paper we demonstrate how parallel processing can be used to reduce computation times to levels that are suitable for interactive interpolation analyses of large spatial databases. Though the approach developed in this paper can be used with a wide variety of interpolation algorithms, we specifically contrast the results obtained from a global ‘brute force’ inverse-distance weighted interpolation algorithm with those obtained using a much more efficient local approach. The parallel versions of both implementations are superior to their sequential counterparts. However, the local version of the parallel algorithm provides the best overall performance.
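
As a minimal sketch of the contrast drawn here (not the paper's implementation), the following compares a global ‘brute force’ inverse-distance weighted (IDW) estimate, which weights every control point, with a local variant that uses only control points within a search radius. The control points, radius, and distance power are illustrative.

```python
# Sketch of inverse-distance weighted (IDW) interpolation. The global version
# weights every control point; the local version restricts the search to a
# neighborhood, which is where the efficiency gain comes from.
import math

controls = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (5.0, 5.0, 30.0)]  # (x, y, value)

def idw(x, y, points, power=2.0):
    """Estimate the value at (x, y) from (x, y, value) control points."""
    num = den = 0.0
    for px, py, pv in points:
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return pv                      # exact hit on a control point
        w = 1.0 / d ** power
        num, den = num + w * pv, den + w
    return num / den

def idw_local(x, y, points, radius=2.0, power=2.0):
    """Local IDW: only control points within `radius` participate."""
    near = [p for p in points if math.hypot(x - p[0], y - p[1]) <= radius]
    return idw(x, y, near, power) if near else None

print(idw(0.5, 0.5, controls))        # global: all three control points
print(idw_local(0.5, 0.5, controls))  # local: the distant point is excluded
```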


Computers & Geosciences | 1997

Massively parallel strategies for local spatial interpolation

Marc P. Armstrong; Richard Marciano

An inverse distance weighted interpolation algorithm is implemented using three massively parallel SIMD computer systems. The algorithm, which is based on a strategy that reduces search for control points to the local neighborhood of each interpolated cell, attempts to exploit hardware communication paths provided by the system during the local search process. To evaluate the performance of the algorithm, a set of computational experiments was conducted in which the number of control points used to interpolate a 240 × 800 grid was increased from 1000 to 40,000 and the number of k-nearest control points used to compute a value at each grid location was increased from one to eight. The results show that the number of processing elements used in each experimental run significantly affected performance. In fact, a slower but larger processor grid outperformed a faster but smaller configuration. The results obtained, however, are roughly comparable to those obtained using a superscalar workstation. To remedy such performance shortcomings, future work should explore spatially adaptive approaches to parallelism as well as alternative parallel architectures.
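
The k-nearest strategy that the experiments vary can be sketched as follows; the grid, control points, and k are illustrative, and the paper's SIMD implementation maps grid cells onto processor communication paths rather than looping as here.

```python
# Sketch of the k-nearest-control-point strategy: each grid cell is
# interpolated from only its k nearest control points (illustrative data).
import math

def idw_knearest(x, y, points, k=4, power=2.0):
    """IDW over the k control points nearest to (x, y)."""
    nearest = sorted(points, key=lambda p: math.hypot(x - p[0], y - p[1]))[:k]
    num = den = 0.0
    for px, py, pv in nearest:
        d = math.hypot(x - px, y - py)
        if d == 0.0:
            return pv
        w = 1.0 / d ** power
        num, den = num + w * pv, den + w
    return num / den

# Interpolate every cell of a small grid (the paper used a 240 x 800 grid,
# 1000 to 40,000 control points, and k ranging from one to eight).
points = [(0.0, 0.0, 1.0), (3.0, 1.0, 2.0), (1.0, 4.0, 3.0), (4.0, 4.0, 4.0)]
grid = [[idw_knearest(i, j, points, k=2) for j in range(5)] for i in range(5)]
```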


ACM International Conference on Digital Libraries | 1999

XML-based information mediation for digital libraries

Chaitanya K. Baru; Vincent Chu; Amarnath Gupta; Bertram Ludäscher; Richard Marciano; Yannis Papakonstantinou; Pavel Velikhov

We demonstrate a prototype distributed architecture for a digital library, using technology being developed under the MIX Project at the San Diego Supercomputer Center (SDSC) and the University of California, San Diego. The architecture is based on XML-based modeling of metadata; use of an XML query language, and associated mediator middleware, to query distributed metadata sources; and the use of a storage system middleware to access distributed, archived data sets.


Computers & Geosciences | 1994

Parallel processing of spatial statistics

Marc P. Armstrong; Claire E. Pavlik; Richard Marciano

The computational intensity of spatial statistics, including measures of spatial association, has hindered their application to large empirical data sets. Computing environments using parallel processing have the potential to eliminate this problem. In this paper, we develop a method for processing a computationally intensive measure of spatial association (G) in parallel and present the performance enhancements obtained. Timing results are presented for a single processor and for 2–14 parallel processors operating on data sets containing 256–1600 point observations. The results indicate that significant improvements in processing time can be achieved using parallel architectures.
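
The abstract does not reproduce the definition; assuming the general (Getis-Ord) G statistic, G(d) = Σi Σj wij(d) xi xj / Σi Σj xi xj with j ≠ i and binary distance-band weights, the outer sum decomposes by observation and parallelizes naturally. The sketch below splits that outer sum across worker processes; the coordinates, values, and distance band are invented, and this is not the authors' implementation.

```python
# Sketch of a row-wise parallel G(d): each worker computes observation i's
# contribution to the numerator and denominator, and the results are summed.
import math
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def row_terms(i, coords, values, d):
    """Numerator/denominator contributions of observation i (all j != i)."""
    xi, yi = coords[i]
    num = den = 0.0
    for j in range(len(coords)):
        if j == i:
            continue
        xj, yj = coords[j]
        w = 1.0 if math.hypot(xi - xj, yi - yj) <= d else 0.0  # w_ij(d), binary
        num += w * values[i] * values[j]
        den += values[i] * values[j]
    return num, den

def g_statistic(coords, values, d, workers=4):
    """G(d) with the outer loop over observations split across processes."""
    f = partial(row_terms, coords=coords, values=values, d=d)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(f, range(len(coords))))
    return sum(n for n, _ in parts) / sum(m for _, m in parts)

if __name__ == "__main__":
    coords = [(0, 0), (1, 0), (0, 1), (5, 5)]
    values = [2.0, 3.0, 4.0, 1.0]
    print(g_statistic(coords, values, d=1.5))
```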


International Journal of Geographical Information Systems | 1995

Massively parallel processing of spatial statistics

Marc P. Armstrong; Richard Marciano

Statistical measures of spatial association have significant computational requirements when large data sets are analysed. In this paper, a measure of spatial association, G(d), is used to illustrate how a massively parallel computer can be used to address the computational requirements of spatial statistical analysis. The statistical algorithms were implemented using two MasPar MP-1 computers, one with 8192 and the other with 16384 processors, and an MP-2 machine with 4096 processors. The results demonstrate that substantial reductions in processing times can be achieved using massively parallel architectures: when compared to a superscalar workstation, speed-up values in excess of 20 were obtained. The design of parallel programmes, however, requires careful planning since many factors under programmer control affect the efficiency of the resulting computations.


International Workshop on Research Issues in Data Engineering | 2001

Towards self-validating knowledge-based archives

Bertram Ludäscher; Richard Marciano; Reagan Moore

Digital archives are dedicated to the long-term preservation of electronic information and have the mandate to enable sustained access despite a rapidly changing information infrastructure. Current archival approaches build upon standardized data formats and simple metadata mechanisms for collection management, but do not involve high-level conceptual models and knowledge representations. This results in serious limitations, not only for expressing various kinds of information and knowledge about the archived data, but also for creating infrastructure-independent, self-validating, and self-instantiating archives. To overcome these limitations, we first propose a scalable XML-based archival infrastructure, based on standard tools, and subsequently show how this architecture can be extended to a model-based framework, where higher-level knowledge representations become an integral part of the archive and the ingestion/migration processes. This allows us to maximize infrastructure independence by archiving generic, executable specifications of archival constraints (i.e., model validators) and of the archival transformations that are part of the ingestion process. The proposed architecture facilitates construction of self-validating and self-instantiating knowledge-based archives. We illustrate our overall approach and report on first experiences using a sample collection from a collaboration with the National Archives and Records Administration (NARA).
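
As a loose illustration of the "executable archival constraints" idea, not the authors' XML and model-based framework, the sketch below stores a declarative constraint specification alongside the records and interprets it with a generic validator, so the validation logic survives changes of infrastructure. The constraint format and records are invented.

```python
# Sketch of a self-validating collection: the constraints live in the archive
# next to the records, as data, so any future system can re-run them.
import re

ARCHIVED_CONSTRAINTS = [                        # invented, declarative spec
    {"field": "id",   "rule": "required"},
    {"field": "date", "rule": "matches", "pattern": r"\d{4}-\d{2}-\d{2}"},
]

ARCHIVED_RECORDS = [
    {"id": "rec-001", "date": "1999-04-21"},
    {"id": "rec-002", "date": "21 April 1999"},  # violates the date constraint
]

def validate(record, constraints):
    """Generic validator: interprets the archived constraint spec on a record."""
    errors = []
    for c in constraints:
        value = record.get(c["field"])
        if c["rule"] == "required" and value is None:
            errors.append(f"missing field: {c['field']}")
        elif c["rule"] == "matches" and value is not None and \
                not re.fullmatch(c["pattern"], value):
            errors.append(f"bad {c['field']}: {value!r}")
    return errors

for rec in ARCHIVED_RECORDS:
    print(rec["id"], validate(rec, ARCHIVED_CONSTRAINTS))
```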


Proceedings of the ASIS&T Annual Meeting | 2008

Identifying best practices and skills for workforce development in data curation

P. Bryan Heidorn; Helen R. Tibbo; G. Sayeed Choudhury; Christopher L. Greer; Richard Marciano

The nature of science and scholarship is being transformed by the ability to collect and integrate vast quantities of information. Some sciences, such as ecology and environmental science, are inherently integrative, requiring the combination of many types of information from many sources in order to answer more complex questions than was previously possible. This new information, and the information management tools designed to deal with this volume of data, will help us make informed decisions that will impact human health and prosperity. To enable this cross-scale, interdisciplinary integration for the coming generations of scholars, data must be managed to facilitate interoperability, preservation, and sharing. This panel will explore best practices in data curation and models of education for new data curation professionals.


Library Trends | 2005

Prototype Preservation Environments

Reagan Moore; Richard Marciano

The Persistent Archive Testbed and National Archives and Records Administration (NARA) research prototype persistent archive are examples of preservation environments. Both projects are using data grids to implement data management infrastructure that can manage technology evolution. Data grids are software systems that provide persistent names to digital entities, manage data that are distributed across multiple types of storage systems, and provide support for preservation metadata. A persistent archive federates multiple data grids to provide the fault tolerance and disaster recovery mechanisms essential for long-term preservation. The capabilities of the prototype persistent archives will be presented, along with examples of how the capabilities are used to support the preservation of email, Web crawls, office products, image collections, and electronic records.
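
As a rough sketch of the data-grid abstraction described above, not the actual SDSC software, the following models the core mapping: a persistent logical name resolves to replicas held on different storage systems, with preservation metadata attached. All names and fields are invented.

```python
# Sketch of a data-grid catalog entry: one persistent name, many replicas.
# Replication across storage systems is what provides fault tolerance and
# disaster recovery; federation would mirror entries into a second grid.
from dataclasses import dataclass, field

@dataclass
class Replica:
    storage_system: str        # e.g., tape archive, disk cache, remote grid
    physical_path: str         # may change as storage technology evolves

@dataclass
class DigitalEntity:
    persistent_name: str       # stable identifier, independent of storage
    replicas: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

    def resolve(self) -> Replica:
        """Return an available replica; having several is the safety net."""
        if not self.replicas:
            raise LookupError(f"no replica for {self.persistent_name}")
        return self.replicas[0]

entity = DigitalEntity("urn:archive:email/msg-0001")
entity.replicas.append(Replica("tape-archive", "/archive/a1/msg-0001"))
entity.replicas.append(Replica("federated-grid", "/mirror/msg-0001"))
entity.metadata["format"] = "RFC 822 email message"
print(entity.resolve().storage_system)
```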


Digital Government Research | 2006

Regionalizing integrated watershed management: a strategic vision

Keith Pezzoli; Richard Marciano; John Robertus

In addressing the conference theme, Integrating Information Technology and Social Science Research for Effective Government, this paper examines the challenges that government agencies face while trying to protect and restore water quality from a watershed management standpoint. Our geographic focus is the San Diego city-region and its neighboring jurisdictions (including Mexico). We find that there is a pressing need to develop a dynamic regional information system that can help guide and track individual development projects (micro-development) in the context of the larger-scale development (macro-development) of whole watersheds. Yet serious constraints stand in the way. Fortunately, advances taking place in certain scientific, sociotechnical and regulatory domains are promising. Three stand out: (1) the growth of sustainability science and the emergence of cyberinfrastructure for multiscalar environmental monitoring, (2) the mobilization of what the National Research Council calls knowledge-action collaboratives, including university-government-community partnerships, and (3) regulatory innovation calling for watershed-based approaches to environmental policy and planning. We need a concerted strategy to integrate and take full advantage of these trends. This paper provides a strategic vision along such lines. A case study on digital systems for environmental mitigation and tracking is also presented. Digital government research themes related to this case study include: (1) long-term preservation and archiving of government records, (2) integration of data grids and geographic information systems, and (3) citizen interactions through transparency of and universal access to digital records.

Collaboration


Dive into Richard Marciano's collaboration.

Top Co-Authors

Reagan Moore, University of North Carolina at Chapel Hill
Amarnath Gupta, University of California
Chaitan Baru, University of California
Michael Wan, San Diego Supercomputer Center
Richard Frost, San Diego Supercomputer Center