
Publications


Featured research published by Douglas Stott Parker.


Frontiers in Neuroinformatics | 2009

Efficient, distributed and interactive neuroimaging data analysis using the LONI pipeline

Ivo D. Dinov; John D. Van Horn; Kamen Lozev; Rico Magsipoc; Petros Petrosyan; Zhizhong Liu; Allan MacKenzie-Graham; Paul R. Eggert; Douglas Stott Parker; Arthur W. Toga

The LONI Pipeline is a graphical environment for construction, validation and execution of advanced neuroimaging data analysis protocols (Rex et al., 2003). It enables automated data format conversion, allows Grid utilization, facilitates data provenance, and provides a significant library of computational tools. There are two main advantages of the LONI Pipeline over other graphical analysis workflow architectures. It is built as a distributed Grid computing environment and permits efficient tool integration, protocol validation and broad resource distribution. To integrate existing data and computational tools within the LONI Pipeline environment, no modification of the resources themselves is required. The LONI Pipeline provides several types of process submissions based on the underlying server hardware infrastructure. Only workflow instructions and references to data, executable scripts and binary instructions are stored within the LONI Pipeline environment. This makes it portable, computationally efficient, distributed and independent of the individual binary processes involved in pipeline data-analysis workflows. We have expanded the LONI Pipeline (V.4.2) to include server-to-server (peer-to-peer) communication and a 3-tier failover infrastructure (Grid hardware, Sun Grid Engine/Distributed Resource Management Application API middleware, and the Pipeline server). Additionally, the LONI Pipeline provides three layers of background-server executions for all users/sites/systems. These new LONI Pipeline features facilitate resource interoperability, decentralized computing, and the construction and validation of efficient and robust neuroimaging data-analysis workflows. Using brain imaging data from the Alzheimer's Disease Neuroimaging Initiative (Mueller et al., 2005), we demonstrate integration of disparate resources, graphical construction of complex neuroimaging analysis protocols and distributed parallel computing.
The LONI Pipeline, its features, specifications, documentation and usage are available online (http://Pipeline.loni.ucla.edu).
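The idea that only workflow instructions and references to executables are stored can be sketched as a dependency graph of tool references executed in topological order (node names and commands below are hypothetical, not LONI's actual protocol format):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A workflow stored as references only: node -> (command line, dependencies).
# Hypothetical nodes; the real LONI Pipeline protocol format differs.
workflow = {
    "convert": ("convert_format input.mnc input.nii", []),
    "align":   ("align_volume input.nii aligned.nii", ["convert"]),
    "segment": ("segment_brain aligned.nii labels.nii", ["align"]),
}

def execution_order(wf):
    """Return a valid serial execution order for the workflow DAG."""
    ts = TopologicalSorter({node: deps for node, (_cmd, deps) in wf.items()})
    return list(ts.static_order())

order = execution_order(workflow)   # ["convert", "align", "segment"]
```

A Grid scheduler would dispatch independent nodes in parallel rather than serializing them as this sketch does.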


International Conference on Data Engineering | 1995

Improving SQL with generalized quantifiers

Ping-Yu Hsu; Douglas Stott Parker

Generalized quantifiers are a particular kind of operator on sets. Under increasing attention recently from linguists and logicians, they correspond to many useful natural-language phrases, including: three, Chamberlin's three, more than three, fewer than three, at most three, all but three, no more than three, not more than half the, at least two and not more than three, no students, most male and all female, etc. Reasoning about quantifiers is a source of recurring problems for most SQL users, and leads to both confusion and incorrect expression of queries. By adopting a more modern and natural model of quantification, these problems can be alleviated. We show how generalized quantifiers can be used to improve the SQL interface.
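The flavor of generalized quantifiers can be sketched outside SQL as predicates on two sets: a restriction A (e.g. students) and a scope B (e.g. those who passed). The sets and data below are invented for illustration; the paper's actual SQL syntax extensions are not reproduced here.

```python
# Generalized quantifiers as predicates on a restriction set A and a
# scope set B. Hypothetical data; the paper's SQL extensions differ.

def at_least(n):
    return lambda A, B: len(A & B) >= n          # "at least n A are B"

def all_but(n):
    return lambda A, B: len(A - B) == n          # "all but n A are B"

def most(A, B):
    return len(A & B) > len(A - B)               # "most A are B"

students = {"ann", "bob", "carl", "dina", "eve"}
passed = {"ann", "bob", "carl", "frank"}

at_least(3)(students, passed)   # True: three students passed
all_but(2)(students, passed)    # True: all but dina and eve passed
most(students, passed)          # True: 3 passed vs. 2 who did not
```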


International Conference on Data Engineering | 1989

The Tangram stream query processing system

Douglas Stott Parker; Richard R. Muntz; H.L. Chau

Tangram, a modeling environment under development at UCLA, is discussed. One of the driving concepts behind Tangram has been the combination of large-scale data access and data reduction with a powerful programming environment. The Tangram environment is based on PROLOG, extending it with a number of features, including process management, distributed database access, and generalized stream processing. The authors describe the Tangram stream processor, the part of the Tangram environment that performs query processing on large streams of data. The paradigm of transducers on streams is used throughout the system, providing a database flow computation capability.
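The transducers-on-streams paradigm can be illustrated with Python generators (Tangram itself was built on PROLOG; the transducers below are invented examples, not Tangram operators):

```python
# A transducer consumes one stream and produces another; chaining
# transducers gives a dataflow computation over the stream.

def running_sum(stream):
    """Transducer: map a stream of numbers to its running totals."""
    total = 0
    for x in stream:
        total += x
        yield total

def take_while_at_most(stream, limit):
    """Transducer: pass values through until one exceeds the limit."""
    for x in stream:
        if x > limit:
            break
        yield x

# Compose: 1, 2, 3, 4 -> running sums 1, 3, 6, 10 -> cut off after 6.
result = list(take_while_at_most(running_sum(iter([1, 2, 3, 4])), 6))
# result == [1, 3, 6]
```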


Computing in Science and Engineering | 2000

Monte Carlo arithmetic: how to gamble with floating point and win

Douglas Stott Parker; Brad Pierce; Paul R. Eggert

How sensitive to rounding errors are the results generated from a particular code running on a particular machine applied to a particular input? Monte Carlo arithmetic illustrates the potential for tools to support new kinds of a posteriori round-off error analysis.
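The gist of the approach can be imitated in a few lines: randomly perturb values at roughly the scale of the rounding error, repeat the computation, and inspect the spread of the results. This is a simplified stand-in for true Monte Carlo arithmetic, which randomizes the rounding of every operation rather than perturbing inputs:

```python
import random
import statistics

def perturb(x, eps=2**-52):
    """Perturb x by about one unit in the last place, crudely
    simulating a random rounding of the stored value."""
    return x * (1 + eps * random.uniform(-1, 1))

def unstable_diff(a, b):
    # Catastrophic cancellation: nearly equal operands lose digits.
    return perturb(a) - perturb(b)

random.seed(0)
samples = [unstable_diff(1.0 + 1e-13, 1.0) for _ in range(1000)]
spread = statistics.stdev(samples)
# The spread is a few parts in a thousand of the mean result, vastly
# larger than the ~2**-52 relative perturbation applied to the inputs:
# a sign of numerical instability in this subtraction.
```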


Very Large Data Bases | 1979

Algorithmic Applications for a New Result on Multivalued Dependencies

Douglas Stott Parker; Claude Delobel

Delobel and Parker have recently shown that multivalued dependencies (MVDs) may be represented as Boolean switching functions, in much the same way that functional dependencies (FDs) can be represented as Boolean implications. This permits all FD and MVD inferences to be made as logical (Boolean) inferences, a significant advantage because the FD/MVD inference axioms are fairly complex. This paper reviews some of the basic implications of this result and outlines new applications in FD/MVD membership testing; generation of dependency closures, covers, and keys; and testing for lossless and independent decompositions.
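The Boolean view can be sketched for the FD fragment: treat each attribute as a propositional variable and each FD X → Y as the implication x → y, so that entailment of a candidate FD becomes propositional entailment. The brute-force checker below is illustrative only; handling MVDs requires the paper's full construction:

```python
from itertools import product

def implies(antecedent, consequent, assignment):
    """Boolean reading of an FD: (all of X true) -> (all of Y true)."""
    return (not all(assignment[a] for a in antecedent)
            or all(assignment[c] for c in consequent))

def fd_entails(fds, candidate, attributes):
    """True iff `candidate` holds in every truth assignment that
    satisfies all the implications corresponding to `fds`."""
    for values in product([False, True], repeat=len(attributes)):
        assignment = dict(zip(attributes, values))
        if all(implies(x, y, assignment) for x, y in fds):
            if not implies(*candidate, assignment):
                return False
    return True

# Transitivity as Boolean entailment: A -> B and B -> C give A -> C.
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
fd_entails(fds, ({"A"}, {"C"}), ["A", "B", "C"])   # True
```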


Data and Knowledge Engineering | 1988

Formal properties of net-based knowledge representation schemes

Paolo Atzeni; Douglas Stott Parker

In the spirit of integrating database and artificial intelligence techniques, a number of concepts widely used in relational database theory are introduced into a knowledge representation scheme. A simple network model that allows the representation of types, is-a relationships, and disjointness constraints is considered. The concepts of consistency and redundancy are introduced and characterized by means of implication of constraints, systems of inference rules, and graph-theoretic concepts.


International Conference on Database Theory | 1986

Set Containment Inference

Paolo Atzeni; Douglas Stott Parker

Type hierarchies and type inclusion inference are now standard in many knowledge representation schemes. In this paper, we show how to determine consistency and inference for collections of statements of the form 'mammal isa vertebrate'.
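For the positive fragment, inference reduces to reachability in a directed graph of isa statements; a minimal sketch (the paper also treats disjointness and negative statements, which this ignores):

```python
# Set-containment ("isa") statements form a directed graph; an
# inclusion is inferable iff it lies in the transitive closure.

def isa_closure(statements):
    """All inclusions inferable from `statements` by transitivity."""
    edges = set(statements)
    changed = True
    while changed:
        changed = False
        for a, b in list(edges):
            for c, d in list(edges):
                if b == c and (a, d) not in edges:
                    edges.add((a, d))
                    changed = True
    return edges

facts = [("dog", "mammal"), ("mammal", "vertebrate")]
("dog", "vertebrate") in isa_closure(facts)   # True, by transitivity
```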


Software - Practice and Experience | 2005

Perturbing and evaluating numerical programs without recompilation: the wonglediff way

Paul R. Eggert; Douglas Stott Parker

wonglediff is a program that tests the sensitivity of arbitrary program executables or processes to changes introduced by a process running in parallel. On Unix and Linux kernels, wonglediff creates a supervisor process that runs applications and introduces the desired changes to their process state on the fly. When execution terminates, it summarizes the resulting changes in the output files. The technique has a variety of uses. This paper describes an implementation of wonglediff that checks the sensitivity of programs to random changes in the floating-point rounding modes. It runs a program several times, 'wongling' it each time: randomly toggling the IEEE-754 rounding mode of the program as it executes. By comparing the resulting output, one gets a poor man's numerical stability analysis for the program. Although the analysis does not give any kind of guarantee about a program's stability, it can reveal genuine instability, and it serves as a particularly useful and revealing 'idiot light'. In our implementation, differences among the output files from the program's multiple runs are summarized in a report. This report is in fact an HTML version of the output file, with inline mark-up summarizing individual differences among the multiple instances. When viewed with a browser, the differences can be highlighted or rendered in many different ways.
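The comparison step can be mimicked with Python's decimal module, whose contexts expose rounding modes (wonglediff itself toggles the hardware IEEE-754 mode of a running binary; this sketch only reruns a toy computation under different modes and diffs the results):

```python
from decimal import Decimal, Context, ROUND_FLOOR, ROUND_CEILING, ROUND_HALF_EVEN

def summation(rounding):
    """Accumulate 0.101 a thousand times at 5 significant digits,
    under the given rounding mode."""
    ctx = Context(prec=5, rounding=rounding)
    total = Decimal(0)
    for _ in range(1000):
        total = ctx.add(total, Decimal("0.101"))
    return total

# Run the same computation under several modes and compare the output,
# the essence of the "wongling" comparison.
results = {mode: summation(mode)
           for mode in (ROUND_FLOOR, ROUND_CEILING, ROUND_HALF_EVEN)}
sensitive = len(set(results.values())) > 1   # modes disagree: unstable
```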


Computer Science and Software Engineering | 2008

Architectural Principles of the "Streamonas" Data Stream Management System and Performance Evaluation Based on the Linear Road Benchmark

Panayiotis Adamos Michael; Douglas Stott Parker

Data stream management systems (DSMSs) incur large overheads when queries directly access the serial, non-indexed incoming stream. Our novel architecture, presented in this work, addresses this problem by indexing the incoming dataflow with a specially designed data structure. This data structure is as fundamental to our DSMS as a relation is to a relational DBMS. The architecture achieves reusability, query parallelism, and O(1) constant-time access to streamed data. The system ran the Linear Road benchmark at its maximum level of difficulty (10 expressways), demonstrating excellent performance.
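The core idea, indexing the stream as it arrives so that queries avoid scanning the raw flow, can be sketched with a hash index (Streamonas's actual data structure is not described in this abstract; the field names below are invented from the benchmark's domain):

```python
from collections import defaultdict

class StreamIndex:
    """Index incoming stream records by one key field as they arrive,
    so lookups cost expected O(1) instead of a scan of the stream."""

    def __init__(self, key):
        self.key = key
        self.index = defaultdict(list)   # key value -> list of records

    def insert(self, record):
        self.index[record[self.key]].append(record)

    def lookup(self, value):
        return self.index[value]         # constant-time expected access

idx = StreamIndex(key="expressway")
for car_id, expressway in [(1, 3), (2, 7), (3, 3)]:
    idx.insert({"car": car_id, "expressway": expressway})

[r["car"] for r in idx.lookup(3)]   # cars seen on expressway 3: [1, 3]
```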


Discovery Science | 1998

TDDA, a Data Mining Tool for Text Databases: A Case History in a Lung Cancer Text Database

Jeffrey A. Goldman; Wesley W. Chu; Douglas Stott Parker; Robert M. Goldman

In this paper, we give a case history illustrating the real-world application of a useful technique for data mining in text databases. The technique, Term Domain Distribution Analysis (TDDA), consists of tracking term frequencies over specific finite domains and announcing significant deviations from standard frequency distributions over these domains as a hypothesis. In the case study presented, the domain of terms was the pair (right, left), over which we expected a uniform distribution. In analyzing term frequencies in a thoracic lung cancer database, the TDDA technique led to the surprising discovery that primary thoracic lung cancer tumors appear in the right lung more often than the left, with a ratio of 3:2. Treating the text discovery as a hypothesis, we verified this relationship against the medical literature in which primary lung tumor sites were reported, using a standard χ2 statistic. We subsequently developed a working theoretical model of lung cancer that may explain the discovery.
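The verification step is a standard goodness-of-fit test; with hypothetical counts matching the reported 3:2 ratio (the paper's actual counts are not given here), the χ2 statistic against a uniform split works out as follows:

```python
# Goodness-of-fit: are observed term counts consistent with the
# expected (here uniform) distribution over the finite domain?

def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

right, left = 300, 200                      # hypothetical term counts, 3:2
total = right + left
stat = chi_square([right, left], [total / 2, total / 2])
# stat = (50**2 + 50**2) / 250 = 20.0, far above the 3.84 cutoff for
# one degree of freedom at the 5% level, so the right/left split is
# very unlikely to be uniform.
```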

Collaboration


Dive into Douglas Stott Parker's collaborations.

Top Co-Authors

Paul R. Eggert
University of California

Paolo Atzeni
Sapienza University of Rome

Cauligi S. Raghavendra
University of Southern California

Kamen Lozev
University of California

Rico Magsipoc
University of California

Zhizhong Liu
University of California