
Publication


Featured research published by David M. Ungar.


Dynamic Languages Symposium | 2009

Hosting an object heap on manycore hardware: an exploration

David M. Ungar; Sam S. Adams

In order to construct a test-bed for investigating new programming paradigms for future manycore systems (i.e., those with at least a thousand cores), we are building a Smalltalk virtual machine that attempts to efficiently use a collection of 56 on-chip caches of 64KB each to host a multi-megabyte object heap. In addition to the cost of inter-core communication, two hardware characteristics influenced our design: the absence of hardware-provided cache coherence, and the inability to move a single object from one core's cache to another's without changing its address. Our design relies on an object table, and the exploitation of a user-managed caching regime for read-mostly objects. At almost every stage of our process, we obtained measurements in order to guide the evolution of our system.

The architecture and performance characteristics of a manycore platform confound old intuitions by deviating from both traditional multicore systems and from distributed systems. The implementor confronts a wide variety of design choices, such as when to share address space, when to share memory as opposed to sending a message, and how to eke out the most performance from a memory system that is far more tightly integrated than a distributed system yet far less centralized than in a several-core system. Our system is far from complete, let alone optimal, but our experiences have helped us develop new intuitions needed to rise to the manycore software challenge.
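The object-table indirection the abstract describes can be sketched in a few lines. This is a hypothetical Python model of the idea, not the actual Smalltalk VM: references are stable table indices, so an object can migrate between core-local heaps without the reference clients hold ever changing.

```python
import itertools

class ObjectTable:
    """Toy model of an object table over per-core heaps: a reference
    is a stable table index mapped to a (core, local key) pair."""
    def __init__(self, num_cores):
        self.heaps = [dict() for _ in range(num_cores)]      # one heap per core
        self.keys = [itertools.count() for _ in range(num_cores)]
        self.table = []  # reference -> (core, local key)

    def allocate(self, core, obj):
        key = next(self.keys[core])
        self.heaps[core][key] = obj
        self.table.append((core, key))
        return len(self.table) - 1   # the stable reference clients hold

    def deref(self, ref):
        core, key = self.table[ref]
        return self.heaps[core][key]

    def migrate(self, ref, new_core):
        """Move an object to another core's heap; only the table entry
        changes, the client-visible reference is untouched."""
        core, key = self.table[ref]
        obj = self.heaps[core].pop(key)
        new_key = next(self.keys[new_core])
        self.heaps[new_core][new_key] = obj
        self.table[ref] = (new_core, new_key)
```

With direct pointers, migrating an object would invalidate every reference to it; the extra table hop on each dereference is the price paid for object mobility.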


Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA) | 2010

Harnessing emergence for manycore programming: early experience integrating ensembles, adverbs, and object-based inheritance

David M. Ungar; Sam S. Adams

We believe that embracing nondeterminism and harnessing emergence have great potential to simplify the task of programming manycore processors. To that end, we have designed and implemented Ly, pronounced Lee, a new parallel programming language built around two new concepts: (i) ensembles, which provide for parallel execution and replace all collections and iterators, and (ii) adverbs, which modify the parallel behavior of messages sent to ensembles. The broad issues around programming in this fashion still need investigation, but, after our initial Ly programming experience, we have identified some specific issues that must be addressed in integrating these concepts into an object-based language, including empty ensembles, partial message understanding, non-local returns from ensemble members, and unintended ensembles.
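The ensemble/adverb pairing can be given a rough Python analogy. This is our sketch, not Ly's syntax or semantics: a message sent to the ensemble fans out to every member, and an adverb chooses how that fan-out happens (it also shows why "empty ensembles" need an explicit policy).

```python
from concurrent.futures import ThreadPoolExecutor

class Ensemble:
    """Rough analogy to an Ly-style ensemble: sending a message to the
    ensemble forwards it to every member; an adverb modifies the
    parallel behavior of that send."""
    def __init__(self, members):
        self.members = list(members)

    def send(self, message, *args, adverb="concurrently"):
        if not self.members:            # empty ensembles need a policy too
            return []
        calls = [getattr(m, message) for m in self.members]
        if adverb == "serially":        # adverb: one member at a time
            return [call(*args) for call in calls]
        with ThreadPoolExecutor() as pool:   # default: parallel fan-out
            return list(pool.map(lambda call: call(*args), calls))
```

The point of the model is that the collection itself, not a loop written by the programmer, owns the parallelism, and the adverb is the single knob that changes it.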


SIGPLAN Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!) | 2014

Korz: Simple, Symmetric, Subjective, Context-Oriented Programming

David M. Ungar; Harold Ossher; Doug Kimelman

Korz is a new computational model that provides for context-oriented programming by combining implicit arguments and multiple dispatch in a slot-based model. This synthesis enables the writing of software that supports contextual variation along multiple dimensions, and graceful evolution of that software to support new, unexpected dimensions of variability, without the need for additional mechanisms such as layers or aspects. With Korz, a system consists of a sea of method and data slots in a multidimensional space. There is no fixed organization of slots into objects: a slot pertains to a number of objects instead of being contained by a single object, and slots can come together according to the implicit context in any given situation, yielding subjective objects. There is no dominant decomposition, and no dimension holds sway over any other. IDE support is essential for managing complexity when working with the slot space and with subjectivity, allowing the task at hand to dictate what subspaces to isolate and what dominance of dimensions to use when presenting nested views to the user. We have implemented a prototype interpreter and IDE, and used it on several examples. This early experience has revealed much that needs to be done, but has also shown considerable promise. It seems that Korz's particular combination of concepts, each well-known from the past, is indeed more powerful than the sum of its parts.
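The "sea of slots" idea can be illustrated with a toy dispatcher. This is our illustration, not Korz's actual semantics: each slot is keyed by coordinates in several dimensions, lookup consults the implicit context, and the most specific matching slot wins.

```python
# Sentinel meaning "this slot matches any coordinate in this dimension".
ANY = object()

class SlotSpace:
    """Toy model of dispatch over a multidimensional slot space:
    slots are not owned by objects; they are selected by context."""
    def __init__(self):
        self.slots = []   # list of (coordinates dict, value)

    def define(self, coords, value):
        self.slots.append((coords, value))

    def lookup(self, context, name):
        """Return the value of the most specific slot named `name`
        that is consistent with the implicit `context`."""
        best, best_rank = None, -1
        for coords, value in self.slots:
            if coords.get("name") != name:
                continue
            rank, ok = 0, True
            for dim, val in coords.items():
                if dim == "name" or val is ANY:
                    continue
                if context.get(dim) == val:
                    rank += 1          # an exact match is more specific
                else:
                    ok = False
                    break
            if ok and rank > best_rank:
                best, best_rank = value, rank
        return best
```

Context-specific slots shadow general ones, which is the mechanism the abstract credits for contextual variation without layers or aspects.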


Dynamic Languages Symposium | 2017

Dynamic atomicity: optimizing Swift memory management

David M. Ungar; David Grove; Hubertus Franke

Swift is a modern multi-paradigm programming language with an extensive developer community and open source ecosystem. Swift 3's memory management strategy is based on Automatic Reference Counting (ARC) augmented with unsafe APIs for manually-managed memory. We have seen ARC consume as much as 80% of program execution time. A significant portion of ARC's direct performance cost can be attributed to its use of atomic machine instructions to protect reference count updates from data races. Consequently, we have designed and implemented dynamic atomicity, an optimization which safely replaces atomic reference-counting operations with nonatomic ones where feasible. The optimization introduces a store barrier to detect possibly intra-thread references, compiler-generated recursive reference-tracers to find all affected objects, and a bit of state in each reference count to encode its atomicity requirements. Using a suite of 171 microbenchmarks, 9 programs from the Computer Language Benchmarks Game, and the Richards benchmark, we performed a limit study by unsafely making all reference counting operations nonatomic. We measured potential speedups of up to 220% on the microbenchmarks, 120% on the Benchmarks Game and 70% on Richards. By automatically reducing ARC overhead, our optimization both improves Swift 3's performance and reduces the temptation for performance-oriented programmers to resort to unsafe manual memory management. Furthermore, the machinery implemented for dynamic atomicity could also be employed to obtain cheaper thread-safe Swift data structures, or to augment ARC with optional cycle detection or a backup tracing garbage collector.
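The mechanism can be caricatured in a few lines of Python. This is a conceptual sketch with invented names, not the Swift runtime: a lock stands in for atomic instructions, and the recursive reference-tracing is reduced to a single hop.

```python
import threading

class Ref:
    """One reference-counted object. `shared` is the bit of state that
    records the object's atomicity requirement."""
    def __init__(self):
        self.count = 1
        self.shared = False
        self._lock = threading.Lock()  # stands in for an atomic RMW

    def retain(self):
        if self.shared:
            with self._lock:           # atomic path: safe under races
                self.count += 1
        else:
            self.count += 1            # cheap nonatomic path

def store_barrier(container, field, obj):
    """Storing into a shared container may publish `obj` to other
    threads, so mark it shared first (the real design traces every
    reachable object recursively, not just one)."""
    if container.shared:
        obj.shared = True
    setattr(container, field, obj)
```

Objects that never escape their creating thread stay on the cheap path for their whole lifetime, which is where the measured headroom comes from.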


Proceedings of the 5th International Workshop on Context-Oriented Programming | 2013

Enterprise context: a rich source of requirements for context-oriented programming

Sam S. Adams; Suparna Bhattacharya; Bob Friedlander; John K. Gerken; Doug Kimelman; Jim Kraemer; Harold Ossher; John T. Richards; David M. Ungar; Mark N. Wegman

We introduce the domain of enterprise context, as opposed to personal or execution context, and we present requirements for context-oriented programming technology arising out of this broader notion of context. We illustrate enterprise context with scenarios in which data from across an enterprise, as well as data from outside an enterprise, are all brought to bear as context in any situation where they are relevant and can factor into making better decisions and achieving better outcomes. We suggest enterprise context as a rich source of requirements for context-oriented programming models, languages, and virtual machines. In particular, we raise issues such as scale, integration, relevance, temporality, protection, privacy, provenance, policy in general, and valuation. And, for this workshop, we propose enterprise context as one perspective for discussion of new language and VM features: How do proposed features support such a domain?


Companion Proceedings of the 14th International Conference on Modularity | 2015

Subjective, multidimensional modularity with korz

Harold Ossher; David M. Ungar; Doug Kimelman

Korz is a new computational model that provides for context-oriented programming by combining implicit arguments and multiple dispatch in a slot-based model. This synthesis enables the writing of software that supports contextual variation along multiple dimensions, and graceful evolution of that software to support new, unexpected dimensions of variability, without the need for additional mechanisms such as layers or aspects. With Korz, a system consists of a sea of method and data slots in a multidimensional space. There is no fixed organization of slots into objects: a slot pertains to a number of objects instead of being contained by a single object, and slots can come together according to the implicit context in any given situation, yielding subjective objects. There is no dominant decomposition, and no dimension holds sway over any other. IDE support is essential for managing complexity when working with the slot space and with subjectivity, allowing the task at hand to dictate what subspaces to isolate and what dominance of dimensions to use when presenting nested views to the user. We have implemented a prototype interpreter and IDE, and used it on several examples. This early experience has revealed much that needs to be done, but has also shown promise. It seems that Korz's particular combination of concepts, each well-known from the past, is indeed more powerful than the sum of its parts.


ACM Conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH) | 2012

Workshop on relaxing synchronization for multicore and manycore scalability (RACES 2012)

Andrew P. Black; Theo D'Hondt; Doug Kimelman; Martin C. Rinard; David M. Ungar

Massively-parallel systems are coming: core counts keep rising, whether with conventional cores as in multicore and manycore systems, or with specialized cores as in GPUs. Conventional wisdom has been to utilize this parallelism by reducing synchronization to the minimum required to preserve determinism, in particular by eliminating data races. However, Amdahl's law implies that on highly-parallel systems even a small amount of synchronization that introduces serialization will limit scaling. Thus, we are forced to confront the trade-off between synchronization and the ability of an implementation to scale performance with the number of processors: synchronization inherently limits parallelism. This workshop focuses on harnessing parallelism by limiting synchronization, even to the point where programs will compute inconsistent or approximate rather than exact answers.
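One tiny illustration of the trade-off the workshop targets (our sketch, not a workshop artifact): a sharded counter whose writers never synchronize with each other and whose reader accepts a possibly stale, approximate total instead of a serialized exact one.

```python
import threading

class RelaxedCounter:
    """Sketch of relaxed synchronization: each thread updates only its
    own shard, with no locks, so writers never serialize; a reader
    sums the shards without synchronization, accepting a total that
    may be slightly stale while updates are in flight."""
    def __init__(self, num_threads):
        self.shards = [0] * num_threads

    def add(self, tid, n=1):
        self.shards[tid] += n        # no lock: no inter-thread serialization

    def approx_total(self):
        return sum(self.shards)      # approximate while writers are active
```

A single locked counter would give an exact total at every instant, but every increment would pass through the same serialization point, which is precisely the scaling limit Amdahl's law predicts.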


Proceedings of the 2012 ACM Workshop on Relaxing Synchronization for Multicore and Manycore Scalability | 2012

Does better throughput require worse latency?

David M. Ungar; Doug Kimelman; Sam S. Adams; Mark N. Wegman

Let throughput denote the amount of application-level work performed in unit time, normalized to the amount of work that would be accomplished with perfect linear scaling. Let latency denote the mean time required for a thread on one core to observe a change effected by a thread on another core, normalized to the best latency possible for the given platform. Might it be true that algorithms that improve application-level throughput worsen inter-core application-level latency? As techniques for improving performance have evolved from mutex-and-locks to race-and-repair, each seems to have offered more throughput at the expense of increased latency.
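The two normalizations can be stated concretely (hypothetical helper functions; the parameter names are ours, not the paper's):

```python
def normalized_throughput(work_done, elapsed, single_core_rate, cores):
    """Application-level work per unit time, normalized to perfect
    linear scaling across `cores` cores (1.0 means ideal scaling)."""
    ideal = single_core_rate * cores * elapsed
    return work_done / ideal

def normalized_latency(observed_latency, best_platform_latency):
    """Mean time for one core to observe another core's change,
    normalized to the best latency the platform can deliver
    (1.0 means the hardware minimum)."""
    return observed_latency / best_platform_latency
```

The paper's question is then whether techniques that push the first ratio toward 1.0 systematically push the second ratio away from it.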


Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA) | 2011

Multicore, manycore, and cloud computing: is a new programming language paradigm required?

S. Tucker Taft; Joshua J. Bloch; Robert L. Bocchino; Sebastian Burckhardt; Hassan Chafi; Russ Cox; Benedict R. Gaster; Guy L. Steele; David M. Ungar

Most of the mainstream programming languages in use today originated in the 70s and 80s. Even the scripting languages in growing use today tend to be based on paradigms established twenty years ago. Does the arrival of multicore, manycore, and cloud computing mean that we need to establish a new set of programming languages with new paradigms, or should we focus on adding more parallel programming features to our existing programming languages?

Consistent with the SPLASH theme of the Internet as the world-wide Virtual Machine, and the Onward! theme focused on the future of Software Language Design, this panel will discuss the role that programming languages should play in this new distributed, highly parallel computing milieu. Do we need new languages with new programming paradigms, and if so, what should these new languages look like?


Archive | 2011

Inconsistency Robustness for Scalability in Interactive Concurrent-Update In-Memory MOLAP Cubes

David M. Ungar; Doug Kimelman; Sam S. Adams

Collaboration


Dive into David M. Ungar's collaborations.

Top Co-Authors

Andrew P. Black (Portland State University)
Martin C. Rinard (Massachusetts Institute of Technology)
Theo D'Hondt (Vrije Universiteit Brussel)