David Socha
University of Washington
Publications
Featured research published by David Socha.
workshop on parallel & distributed debugging | 1988
David Socha; Mary L. Bailey; David Notkin
Voyeur is a prototype system that facilitates the construction of application-specific, visual views of parallel programs. These views range from textual views showing the contents of variables to graphical maps of the state of the computational domain of the program. These views have been instrumental in quickly detecting bugs that would have been difficult to detect otherwise.
acm sigplan symposium on principles and practice of parallel programming | 1988
David Notkin; Lawrence Snyder; David Socha; Mary L. Bailey; Bruce Forstall; Kevin Gates; Raymond Greenlaw; William G. Griswold; Thomas J. Holman; Richard Korry; Gemini Lasswell; Robert Mitchell; Philip A. Nelson
Experience from over five years of building non-shared-memory parallel programs using the Poker Parallel Programming Environment has positioned us to evaluate our approach to defining and developing parallel programs. This paper presents the more significant results of our evaluation of Poker. The evaluation is driving our next effort in parallel programming environments; many of the results should be sufficiently general to apply to other related efforts.
conference on computer supported cooperative work | 2016
Josh D. Tenenberg; Wolff-Michael Roth; David Socha
Awareness is one of the central concepts in Computer Supported Cooperative Work, though it has often been used in several different senses. Recently, researchers have begun to provide a clearer conceptualization of awareness that provides concrete guidance for the structuring of empirical studies of awareness and the development of tools to support awareness. Such conceptions, however, do not take into account newer understandings of shared intentionality among cooperating actors that have recently been defined by philosophers and empirically investigated by psychologists and psycholinguists. These newer conceptions highlight the common ground and socially recursive inference that underwrites cooperative behavior. And it is this inference that is often seamlessly carried out in collocated work, so easy to take for granted and hence overlook, that will require computer support if such work is to be partially automated or carried out at a distance. Ignoring the inferences required in achieving common ground may thus focus a researcher or designer on surface forms of “heeding” that miss the underlying processes of intention shared in and through activity that are critical for cooperation to succeed. Shared intentionality thus provides a basis for reconceptualizing awareness in CSCW research, building on and augmenting existing notions. In this paper, we provide a philosophically grounded conception of awareness based on shared intentionality, demonstrate how it accounts for behavior in an empirical study of two individuals in collocated, tightly-coupled work, and provide implications of this conception for the design of computational systems to support tightly-coupled collaborative work.
distributed memory computing conference | 1990
Lawrence Snyder; David Socha
We present a new polynomial-time algorithm for allocating array elements to the processor memories of parallel computers. The algorithm produces, for sufficiently large arrays, partitionings that are balanced, near-rectangular, and near-bulky. Balanced means each allocation is assigned the minimal number of elements. Near-rectangular means that each allocation is at most two off from the optimal aspect ratio in each dimension and has at most two jogs along each edge. A jog is where the boundary deviates from a straight line. Near-bulky means that each allocation has a near-maximal ratio of interior/exterior points. For an I x J array of points and a K x K array of processors, the algorithm produces balanced, near-bulky partitionings when I, J ≥ 4K and produces balanced, near-bulky, and near-rectangular allocations when I, J ≥ 8K. These bounds are not tight. A variant of the algorithm produces allocations with at most six neighbors per allocation for arbitrary stencils. Using these near-rectangular allocations incurs little additional cost for compilers for distributed memory parallel computers, and the extra run-time cost is usually offset by the advantage of balanced allocations.
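For context, here is a minimal Python sketch of the baseline that this algorithm improves upon: splitting each dimension independently into blocks whose sizes differ by at most one yields perfectly rectangular allocations, but element counts across allocations can still differ by roughly a row or column, which is exactly the imbalance that balanced, jogged partitionings eliminate. This is not the paper's algorithm; the names `block_bounds` and `rectangular_partition` are illustrative.

```python
def block_bounds(n, k, p):
    """Half-open bounds [lo, hi) of block p when n items are split
    into k contiguous blocks whose sizes differ by at most one."""
    base, extra = divmod(n, k)
    lo = p * base + min(p, extra)
    hi = lo + base + (1 if p < extra else 0)
    return lo, hi

def rectangular_partition(I, J, K):
    """Assign an I x J array to a K x K processor grid by independent
    block splits per dimension. Every allocation is a perfect
    rectangle, but element counts vary across allocations."""
    return {
        (pi, pj): (block_bounds(I, K, pi), block_bounds(J, K, pj))
        for pi in range(K)
        for pj in range(K)
    }

# A 10 x 10 array on 3 x 3 processors yields allocations of 16, 12,
# and 9 elements; a balanced partitioning would assign 11 or 12 each.
for proc, ((ilo, ihi), (jlo, jhi)) in rectangular_partition(10, 10, 3).items():
    print(proc, (ihi - ilo) * (jhi - jlo), "elements")
```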
distributed memory computing conference | 1990
David Socha
This paper proposes a scheme for compiling an important class of iterative algorithms into efficient code for distributed memory computers. The programmer provides a description of the problem in Spot, a data-parallel SIMD language that uses iterations as the unit of synchronization and is based on grids of data points. The data-parallel description is in terms of a single point of the data space, with implicit communication semantics, and a set of numerical boundary conditions. The compiler eliminates the need for multi-tasking by “expanding” the single-point code into multiple-point code that executes over rectangular regions of points. Using rectangle intersection and difference operations on these regions allows the compiler to automatically insert the required communication calls and to hide communication latency by overlapping computation and communication. The multiple-point code may be specialized, at compile time, to the size and shape of different allocations, or it may use table-driven for-loops to adapt, at run time, to the shape and size of the allocations. We show how to generalize this strategy to produce code for the near-rectangular allocations required for balanced partitionings of rectangular arrays.
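As an illustration of the rectangle operations such a compiler relies on (a Python sketch under assumed names, not Spot's actual implementation), the following computes the regions a processor must receive from its neighbors by intersecting its stencil-grown footprint with each neighbor's owned rectangle. Running the same computation from each neighbor's perspective yields the matching send regions, which is how symmetric send/receive pairs can be inserted automatically.

```python
from typing import NamedTuple, Optional

class Rect(NamedTuple):
    """Half-open rectangle [ilo, ihi) x [jlo, jhi)."""
    ilo: int
    ihi: int
    jlo: int
    jhi: int

def intersect(a: Rect, b: Rect) -> Optional[Rect]:
    """Rectangle intersection; None when the overlap is empty."""
    r = Rect(max(a.ilo, b.ilo), min(a.ihi, b.ihi),
             max(a.jlo, b.jlo), min(a.jhi, b.jhi))
    return r if r.ilo < r.ihi and r.jlo < r.jhi else None

def grow(r: Rect, h: int) -> Rect:
    """The footprint a stencil of radius h reads around allocation r."""
    return Rect(r.ilo - h, r.ihi + h, r.jlo - h, r.jhi + h)

def receive_regions(mine: Rect, owned_by: dict, halo: int) -> dict:
    """Map each neighboring processor to the sub-rectangle of its
    points that must be communicated to us before an iteration."""
    needed = grow(mine, halo)
    regions = {}
    for proc, rect in owned_by.items():
        overlap = intersect(needed, rect)
        if overlap is not None:
            regions[proc] = overlap
    return regions
```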
international symposium on software testing and analysis | 2006
Hana Ševčíková; Alan Borning; David Socha; Wolf Gideon Bleek
Automated tests can play a key role in ensuring system quality in software development. However, significant problems arise in automating tests of stochastic algorithms. Normally, developers write tests that simply check whether the actual result is equal to the expected result (perhaps within some tolerance). But for stochastic algorithms, restricting ourselves in this way severely limits the kinds of tests we can write: either to trivial tests, or to fragile and hard-to-understand tests that rely on a particular seed for a random number generator. A richer and more powerful set of tests is possible if we accommodate tests of statistical properties of the results of running an algorithm many times. The work described in this paper has been done in the context of a real-world application, a large-scale simulation of urban development designed to inform major decisions about land use and transportation. We describe our earlier experience with using automated testing for this system, in which we took a conventional approach, and the resulting difficulties. We then present an approach for testing stochastic algorithms that is based on statistical hypothesis testing. Three different ways of constructing such tests are given, which cover the most commonly used distributions. We evaluate these tests in terms of how often they fail when they should and when they should not, and conclude with guidelines and practical suggestions for implementing such unit tests for other stochastic applications.
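As a minimal sketch of the general idea (the paper gives three constructions covering the common distributions; the names and the Bernoulli example below are hypothetical), a statistically based unit test runs the algorithm many times and applies a hypothesis test to the aggregate result, so no fixed random seed is needed. By construction, such a test fails on a correct implementation with probability alpha, the trade-off that the evaluation above quantifies.

```python
import math
import random
import unittest

def mean_z_test(samples, expected_mean, expected_sd, alpha=0.01):
    """Two-sided z-test on the sample mean. Returns True when the
    observed mean lies inside the (1 - alpha) acceptance region; a
    correct implementation still fails with probability alpha."""
    n = len(samples)
    z = (sum(samples) / n - expected_mean) / (expected_sd / math.sqrt(n))
    critical = {0.05: 1.960, 0.01: 2.576}[alpha]  # two-sided normal quantiles
    return abs(z) <= critical

class TestStochasticComponent(unittest.TestCase):
    def test_event_rate(self):
        # Stand-in for a stochastic model step: a Bernoulli(0.3) event.
        # We test a statistical property of many runs rather than an
        # exact result under one particular seed.
        p = 0.3
        samples = [1 if random.random() < p else 0 for _ in range(10_000)]
        self.assertTrue(mean_z_test(samples, p, math.sqrt(p * (1 - p))))

if __name__ == "__main__":
    unittest.main()
```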
Codesign | 2017
Wolff-Michael Roth; David Socha; Josh D. Tenenberg
Codesigning tends to be identified with collaborative endeavours to produce designs. In this study, grounded in an anthropology of making, we propose a radically different use of the ‘co-’ that emphasises the continued becoming and mutual shaping of people-and-materials-becoming-design. An extended case study of a design critique presentation from a graduate course in industrial design is used to exemplify this different perspective. It expands upon the common understanding of codesigning by bringing to the fore not only the back-and-forth movement of people and evolving designs in correspondence with each other but also the transverse movement, which is the intertwining streams of perduring life. Codesign is thus understood as a process of designer, materials and designed objects coming into correspondence while corresponding (conversing) with each other, and all designing is understood as codesigning. The approach decentres common agent-centred notions of designing to focus on the continued becoming-design that shapes designers and their materials alike.
international conference on software engineering | 2013
David Socha; Josh D. Tenenberg
This paper argues that understanding how professional software developers use diagrams and sketches in their work is an underexplored terrain. We illustrate this by summarizing a number of studies on sketching and diagramming across a variety of domains, and arguing for their limited generalizability. In order to develop further insight, we describe the design of a research project we are embarking upon and its grounding theoretical assumptions.
digital government research | 2006
Paul Waddell; Alan Borning; Hana Ševčíková; David Socha
This demo will give an introduction to Opus, the Open Platform for Urban Simulation, an Open Source platform for building simulations of land use, activity-based travel demand, and dynamic traffic assignment. It is a result of an international collaboration of research teams working on integrated land use, transportation and environmental modeling. We have developed a new version of UrbanSim - a simulation system for modeling urban development, originally demonstrated at the Digital Government 2004 Conference - as a component of Opus. We will demonstrate usage of UrbanSim for different stakeholder types, from modelers to policy makers.
international conference on software engineering | 2016
David Socha; Robin Adams; Kelly Franznick; Wolff-Michael Roth; Kevin J. Sullivan; Josh D. Tenenberg; Skip Walter
This paper presents a vision of how the Internet of Things will impact the study of software engineering by 2025 and beyond. The following questions guide this inquiry. What will it mean to be able to deploy hundreds of sensors and data collectors running concurrently over months to gather very large and rich datasets of the physical, digital, and social aspects of software engineering organizations and the products and services those organizations create? How might such datasets change the types of research questions that can be addressed? What sort of tools will be needed to allow interdisciplinary communities of researchers to collaboratively analyse such datasets? How might such datasets help us understand the principles governing the interplay of physical, cyber, and social aspects of software engineering and its products, and automate aspects of such systems?