Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where D. Stott Parker is active.

Publication


Featured research published by D. Stott Parker.


International Conference on Management of Data | 2007

Map-reduce-merge: simplified relational data processing on large clusters

Hung-Chih Yang; Ali Dasdan; Ruey-Lung Hsiao; D. Stott Parker

Map-Reduce is a programming model that enables easy development of scalable parallel applications to process a vast amount of data on large clusters of commodity machines. Through a simple interface with two functions, map and reduce, this model facilitates parallel implementation of many real-world tasks such as data processing jobs for search engines and machine learning. However, this model does not directly support processing multiple related heterogeneous datasets. While processing relational data is a common need, this limitation causes difficulties and/or inefficiency when Map-Reduce is applied on relational operations like joins. We improve Map-Reduce into a new model called Map-Reduce-Merge. It adds to Map-Reduce a Merge phase that can efficiently merge data already partitioned and sorted (or hashed) by map and reduce modules. We also demonstrate that this new model can express relational algebra operators as well as implement several join algorithms.
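The merge idea can be illustrated with a small single-machine sketch. This is a toy reading of the model, not the paper's distributed implementation: the function names (`map_phase`, `reduce_phase`, `merge_phase`) and the employee/department data are invented for illustration.

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply a user map function, emitting (key, value) pairs."""
    out = []
    for rec in records:
        out.extend(map_fn(rec))
    return out

def reduce_phase(pairs, reduce_fn):
    """Group values by key, then reduce each group (key-sorted output)."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return {k: reduce_fn(k, vs) for k, vs in sorted(groups.items())}

def merge_phase(left, right, merge_fn):
    """Merge two reduced, key-partitioned datasets -- here, a simple join."""
    merged = []
    for k in sorted(set(left) & set(right)):
        merged.append(merge_fn(k, left[k], right[k]))
    return merged

# Toy example: join per-employee salary totals with employee names.
emp = [("e1", "sales", 100), ("e1", "sales", 50), ("e2", "eng", 200)]
people = [("e1", "Alice"), ("e2", "Bob")]

salaries = reduce_phase(map_phase(emp, lambda r: [(r[0], r[2])]),
                        lambda k, vs: sum(vs))
names = reduce_phase(map_phase(people, lambda r: [(r[0], r[1])]),
                     lambda k, vs: vs[0])
joined = merge_phase(salaries, names, lambda k, s, n: (k, n, s))
# joined == [("e1", "Alice", 150), ("e2", "Bob", 200)]
```

The point of the sketch is the third phase: because both reduce outputs arrive partitioned and sorted by key, the merge can join them in a single ordered pass rather than re-shuffling.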


Neuroscience | 2009

Phenomics: the systematic study of phenotypes on a genome-wide scale

Robert M. Bilder; Fred W. Sabb; Tyrone D. Cannon; Edythe D. London; J.D. Jentsch; D. Stott Parker; Russell A. Poldrack; Christopher J. Evans; Nelson B. Freimer

Phenomics is an emerging transdiscipline dedicated to the systematic study of phenotypes on a genome-wide scale. New methods for high-throughput genotyping have changed the priority for biomedical research to phenotyping, but the human phenome is vast and its dimensionality remains unknown. Phenomics research strategies capable of linking genetic variation to public health concerns need to prioritize development of mechanistic frameworks that relate neural systems functioning to human behavior. New approaches to phenotype definition will benefit from crossing neuropsychiatric syndromal boundaries, and defining phenotypic features across multiple levels of expression from proteome to syndrome. The demand for high throughput phenotyping may stimulate a migration from conventional laboratory to web-based assessment of behavior, and this offers the promise of dynamic phenotyping-the iterative refinement of phenotype assays based on prior genotype-phenotype associations. Phenotypes that can be studied across species may provide greatest traction, particularly given rapid development in transgenic modeling. Phenomics research demands vertically integrated research teams, novel analytic strategies and informatics infrastructure to help manage complexity. The Consortium for Neuropsychiatric Phenomics at UCLA has been supported by the National Institutes of Health Roadmap Initiative to illustrate these principles, and is developing applications that may help investigators assemble, visualize, and ultimately test multi-level phenomics hypotheses. As the transdiscipline of phenomics matures, and work is extended to large-scale international collaborations, there is promise that systematic new knowledge bases will help fulfill the promise of personalized medicine and the rational diagnosis and treatment of neuropsychiatric syndromes.


Frontiers in Neuroinformatics | 2011

The Cognitive Atlas: Toward a Knowledge Foundation for Cognitive Neuroscience

Russell A. Poldrack; Aniket Kittur; Donald J. Kalar; Eric N. Miller; Christian Seppa; Yolanda Gil; D. Stott Parker; Fred W. Sabb; Robert M. Bilder

Cognitive neuroscience aims to map mental processes onto brain function, which begs the question of what “mental processes” exist and how they relate to the tasks that are used to manipulate and measure them. This topic has been addressed informally in prior work, but we propose that cumulative progress in cognitive neuroscience requires a more systematic approach to representing the mental entities that are being mapped to brain function and the tasks used to manipulate and measure mental processes. We describe a new open collaborative project that aims to provide a knowledge base for cognitive neuroscience, called the Cognitive Atlas (accessible online at http://www.cognitiveatlas.org), and outline how this project has the potential to drive novel discoveries about both mind and brain.


PLOS ONE | 2010

Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline

Ivo D. Dinov; Kamen Lozev; Petros Petrosyan; Zhizhong Liu; Paul R. Eggert; Jonathan Pierce; Alen Zamanyan; Shruthi Chakrapani; John D. Van Horn; D. Stott Parker; Rico Magsipoc; Kelvin T. Leung; Boris A. Gutman; Roger P. Woods; Arthur W. Toga

Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large-scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu.


Software: Practice and Experience | 1996

Aesthetics-based graph layout for human consumption

Michael K. Coleman; D. Stott Parker

Automatic graph layout is an important and long-studied problem. The basic straight-edge graph layout problem is to find spatial positions for the nodes of an input graph that maximize some measure of desirability. When graph layout is intended for human consumption, we call this measure of desirability an aesthetic. We seek an algorithm that produces graph layouts of high aesthetic quality not only for general graphs, but also for specific classes of graphs, such as trees and directed acyclic graphs. The Aesthetic Graph Layout (AGLO) approach described in this paper models graph layout as a multiobjective optimization problem, where the value of a layout is determined by multiple user-controlled layout aesthetics. The current AGLO algorithm combines the power and flexibility of the simulated annealing approach of Davidson and Harel (1989) with the relative speed of the method of Fruchterman and Reingold (1991). In addition, it is more general, and incorporates several new layout aesthetics to support new layout styles. Using these aesthetics, we are able to produce pleasing displays for graphs on which these other methods flounder.
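The multiobjective formulation can be sketched in a few lines, assuming just two aesthetics and a naive annealing-style local search; the actual AGLO algorithm combines many more user-controlled aesthetics and a more refined optimizer, so everything below is an illustrative stand-in.

```python
import random

def layout_cost(pos, edges, weights=(1.0, 1.0)):
    """Weighted sum of two illustrative aesthetics:
    short edges (squared edge length) and node-node repulsion
    (penalize near-coincident nodes)."""
    w_edge, w_rep = weights
    cost = 0.0
    for u, v in edges:
        (x1, y1), (x2, y2) = pos[u], pos[v]
        cost += w_edge * ((x1 - x2) ** 2 + (y1 - y2) ** 2)
    nodes = list(pos)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            d2 = (pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2
            cost += w_rep / (d2 + 1e-9)
    return cost

def improve(pos, edges, steps=2000, temp=1.0):
    """Annealing-flavored local search: jiggle one node at a time,
    keep the move only if the combined aesthetic cost drops."""
    random.seed(0)
    cur = layout_cost(pos, edges)
    for _ in range(steps):
        n = random.choice(list(pos))
        old = pos[n]
        pos[n] = (old[0] + random.uniform(-temp, temp),
                  old[1] + random.uniform(-temp, temp))
        new = layout_cost(pos, edges)
        if new < cur:
            cur = new
        else:
            pos[n] = old
        temp *= 0.999  # cool the step size over time
    return pos, cur
```

Changing the `weights` tuple changes the layout style, which is the sense in which the aesthetics are user-controlled.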


Cognitive Neuropsychiatry | 2009

Cognitive ontologies for neuropsychiatric phenomics research

Robert M. Bilder; Fred W. Sabb; D. Stott Parker; Donald J. Kalar; Wesley W. Chu; Jared Fox; Nelson B. Freimer; Russell A. Poldrack

Now that genome-wide association studies (GWAS) are dominating the landscape of genetic research on neuropsychiatric syndromes, investigators are being faced with complexity on an unprecedented scale. It is now clear that phenomics, the systematic study of phenotypes on a genome-wide scale, comprises a rate-limiting step on the road to genomic discovery. To gain traction on the myriad paths leading from genomic variation to syndromal manifestations, informatics strategies must be deployed to navigate increasingly broad domains of knowledge and help researchers find the most important signals. The success of the Gene Ontology project suggests the potential benefits of developing schemata to represent higher levels of phenotypic expression. Challenges in cognitive ontology development include the lack of formal definitions of key concepts and relations among entities, the inconsistent use of terminology across investigators and time, and the fact that relations among cognitive concepts are not likely to be well represented by simple hierarchical “tree” structures. Because cognitive concept labels are labile, there is a need to represent empirical findings at the cognitive test indicator level. This level of description has greater consistency, and benefits from operational definitions of its concepts and relations to quantitative data. Considering cognitive test indicators as the foundation of cognitive ontologies carries several implications, including the likely utility of cognitive task taxonomies. The concept of cognitive “test speciation” is introduced to mark the evolution of paradigms sufficiently unique that their results cannot be “mated” productively with others in meta-analysis. 
Several projects have been initiated to develop cognitive ontologies at the Consortium for Neuropsychiatric Phenomics (www.phenomics.ucla.edu), in the hope that these ultimately will enable more effective collaboration, and facilitate connections of information about cognitive phenotypes to other levels of biological knowledge. Several free web applications are available already to support examination and visualisation of cognitive concepts in the literature (PubGraph, PubAtlas, PubBrain) and to aid collaborative development of cognitive ontologies (Phenowiki and the Cognitive Atlas). It is hoped that these tools will help formalise inference about cognitive concepts in behavioural and neuroimaging studies, and facilitate discovery of the genetic bases of both healthy cognition and cognitive disorders.


Knowledge Discovery and Data Mining | 2010

Topic dynamics: an alternative model of bursts in streams of topics

Dan He; D. Stott Parker

For some time there has been increasing interest in the problem of monitoring the occurrence of topics in a stream of events, such as a stream of news articles. This has led to different models of bursts in these streams, i.e., periods of elevated occurrence of events. Today there are several burst definitions and detection algorithms, and their differences can produce very different results in topic streams. These definitions also share a fundamental problem: they define bursts in terms of an arrival rate. This approach is limiting; other stream dimensions can matter. We reconsider the idea of bursts from the standpoint of a simple kind of physics. Instead of focusing on arrival rates, we reconstruct bursts as a dynamic phenomenon, using kinetics concepts from physics -- mass and velocity -- and derive momentum, acceleration, and force from these. We refer to the result as topic dynamics, permitting a hierarchical, expressive model of bursts as intervals of increasing momentum. As a sample application, we present a topic dynamics model for the large PubMed/MEDLINE database of biomedical publications, using the MeSH (Medical Subject Heading) topic hierarchy. We show our model is able to detect bursts for MeSH terms accurately as well as efficiently.
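A simplified version of the kinetics analogy can be written directly, assuming per-period occurrence counts as mass; the particular definitions of velocity and momentum below, and the "maximal interval of rising positive momentum" burst rule, are illustrative simplifications rather than the paper's exact formulation.

```python
def topic_momentum(counts, masses=None):
    """mass = per-period occurrence weight (here, raw counts),
    velocity = change in mass between periods,
    momentum = mass * velocity."""
    if masses is None:
        masses = counts
    velocity = [masses[i] - masses[i - 1] for i in range(1, len(masses))]
    return [masses[i + 1] * velocity[i] for i in range(len(velocity))]

def bursts(momentum):
    """A burst here is a maximal interval of increasing, positive momentum."""
    out, start = [], None
    for i in range(1, len(momentum)):
        rising = momentum[i] > momentum[i - 1] and momentum[i] > 0
        if rising and start is None:
            start = i
        elif not rising and start is not None:
            out.append((start, i - 1))
            start = None
    if start is not None:
        out.append((start, len(momentum) - 1))
    return out

# A topic whose counts double each period, then decay:
# momentum rises through the growth phase and goes negative on the decay.
print(bursts(topic_momentum([1, 1, 2, 4, 8, 8, 4])))
```

Because bursts are defined on momentum rather than raw arrival rate, a topic with a modest but accelerating count can register a burst that a rate threshold would miss.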


ACM SIGAda Ada Letters | 1985

Saving traces for Ada debugging

Carol LeDoux; D. Stott Parker

A trace database model for debugging concurrent Ada programs is presented. In this approach, trace information is captured in an historical database and queried using Prolog. This model was used to build a prototype debugger, called Your Own Debugger for Ada (YODA). The design of YODA is described and a trace analysis of a sample program exhibiting misuse of shared data is presented. Because the trace database model is flexible and general, it can aid diagnosis of a variety of runtime errors.
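The shared-data query can be sketched procedurally, assuming a trace of `(task, action, variable)` events; YODA itself stores the trace in a database and expresses such queries in Prolog, so this Python stand-in (and its event schema) is purely illustrative.

```python
def find_unprotected_writes(trace):
    """Scan a concurrency trace for a misuse pattern like the paper's
    shared-data example: two different tasks writing the same variable
    with no intervening rendezvous. Returns (event_i, event_j, var)
    triples identifying the conflicting write pairs."""
    last_writer = {}   # var -> (task, event index) of most recent write
    conflicts = []
    for i, (task, action, var) in enumerate(trace):
        if action == "rendezvous":
            last_writer.clear()  # synchronization point: prior writes ordered
        elif action == "write":
            if var in last_writer and last_writer[var][0] != task:
                conflicts.append((last_writer[var][1], i, var))
            last_writer[var] = (task, i)
    return conflicts
```

The declarative original has an advantage this sketch lacks: new error patterns become new queries over the same stored trace, with no changes to the trace-capture machinery.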


SIAM Journal on Computing | 1998

Conditions for Optimality of the Huffman Algorithm

D. Stott Parker

A new general formulation of Huffman tree construction is presented which has broad application. Recall that the Huffman algorithm forms a tree, in which every node has some associated weight, by specifying at every step of the construction which nodes are to be combined to form a new node with a new combined weight. We characterize a wide class of weight combination functions, the quasilinear functions, for which the Huffman algorithm produces optimal trees under correspondingly wide classes of cost criteria. In addition, known results about Huffman tree construction and related concepts from information theory and from the theory of convex functions are tied together. Suggestions for possible future applications are given.
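The generalized construction is easy to state in code: classic Huffman with the weight-combination function made a parameter. The sketch below is a minimal illustration of that idea, not the paper's formal treatment; `max(a, b) + 1`, which minimizes maximum path length, is offered as one example in the spirit of the alternative combination functions the paper characterizes.

```python
import heapq
import itertools

def huffman(weights, combine=lambda a, b: a + b):
    """Generic Huffman construction: repeatedly merge the two lightest
    nodes, combining their weights with a user-supplied function.
    combine = addition gives the classic minimum-expected-length code;
    combine = lambda a, b: max(a, b) + 1 minimizes the maximum depth."""
    counter = itertools.count()  # tie-breaker so heapq never compares nodes
    heap = [(w, next(counter), sym) for sym, w in weights.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (combine(w1, w2), next(counter), (left, right)))
    return heap[0][2]  # the tree: nested (left, right) pairs over symbols

def code_lengths(tree, depth=0):
    """Depth of each leaf in the tree = its codeword length."""
    if not isinstance(tree, tuple):
        return {tree: depth}
    out = {}
    for sub in tree:
        out.update(code_lengths(sub, depth + 1))
    return out

# With additive combination this yields the standard optimal code lengths.
print(code_lengths(huffman({"a": 5, "b": 2, "c": 1, "d": 1})))
# {"a": 1, "b": 2, "c": 3, "d": 3}
```

The paper's question is exactly which `combine` functions (and matching cost criteria) make this greedy merge provably optimal.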


International Conference on Data Engineering | 1986

Formal properties of net-based knowledge representation schemes

Paolo Atzeni; D. Stott Parker

In the spirit of integrating data base and artificial intelligence techniques, a number of concepts widely used in relational data base theory are introduced in a knowledge representation scheme. A simple network model, which allows the representation of types, is-a relationships and disjointness constraints is considered. The concepts of consistency and redundancy are introduced and characterized by means of implication of constraints and systems of inference rules, and by means of graph theoretic concepts.
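One of the paper's consistency questions can be made concrete in a short sketch: a network is inconsistent if some type is (transitively) a subtype of two types declared disjoint. The encoding below (dicts of is-a edges, pairs of disjoint types) is an invented illustration, not the paper's formalism.

```python
def inconsistent(isa, disjoint):
    """isa: dict mapping each type to a list of its direct supertypes.
    disjoint: list of (a, b) pairs declared to share no instances.
    Returns True if some type lies under both members of a disjoint pair."""
    types = set(isa) | {p for ps in isa.values() for p in ps}

    def ancestors(t, seen):
        # transitive closure of the is-a relation from t
        for p in isa.get(t, ()):
            if p not in seen:
                seen.add(p)
                ancestors(p, seen)
        return seen

    up = {t: ancestors(t, set()) for t in types}
    for a, b in disjoint:
        for t in types:
            # t "is" both a and b (counting t itself) -> contradiction
            if {a, b} <= up[t] | {t}:
                return True
    return False

# A type declared a subtype of two disjoint types is a contradiction:
print(inconsistent({"x": ["bird", "fish"]}, [("bird", "fish")]))  # True
```

Redundancy admits a similar graph-theoretic reading: an is-a edge is redundant when it is already implied by the transitive closure of the remaining edges.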

Collaboration


Dive into D. Stott Parker's collaborations.

Top Co-Authors

Arthur W. Toga

University of Southern California

Carlo Zaniolo

University of California

Hung-chih Yang

University of California
