Publication


Featured research published by Charlie Curtsinger.


Symposium on Operating Systems Principles | 2011

Dthreads: efficient deterministic multithreading

Tongping Liu; Charlie Curtsinger; Emery D. Berger

Multithreaded programming is notoriously difficult to get right. A key problem is non-determinism, which complicates debugging, testing, and reproducing errors. One way to simplify multithreaded programming is to enforce deterministic execution, but current deterministic systems for C/C++ are incomplete or impractical. These systems require program modification, do not ensure determinism in the presence of data races, do not work with general-purpose multithreaded programs, or run up to 8.4× slower than pthreads. This paper presents Dthreads, an efficient deterministic multithreading system for unmodified C/C++ applications that replaces the pthreads library. Dthreads enforces determinism in the face of data races and deadlocks. Dthreads works by exploding multithreaded applications into multiple processes, with private, copy-on-write mappings to shared memory. It uses standard virtual memory protection to track writes, and deterministically orders updates by each thread. By separating updates from different threads, Dthreads has the additional benefit of eliminating false sharing. Experimental results show that Dthreads substantially outperforms a state-of-the-art deterministic runtime system, and for a majority of the benchmarks evaluated here, matches and occasionally exceeds the performance of pthreads.
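
To make the mechanism concrete, here is a minimal, hypothetical C sketch (not Dthreads itself, which tracks writes with page protection and orders commits with a deterministic token rather than timers): forked processes give each worker a private copy-on-write view of memory, and publishing updates to a shared mapping in a fixed order yields the same result on every run.

```c
/* Hypothetical sketch of the Dthreads idea: each "thread" is a forked
 * process, so its ordinary writes are private (copy-on-write), and
 * updates reach shared state only at a commit point, in a fixed order.
 * Assumes POSIX. */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Committed shared state lives in a MAP_SHARED region. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *shared = 0;

    for (int worker = 0; worker < 2; worker++) {
        if (fork() == 0) {
            int local = *shared;  /* snapshot the committed value */
            local += 1;           /* private update, invisible to the other worker */
            sleep(1 + worker);    /* crude stand-in for a deterministic commit order */
            *shared = local;      /* publish at the synchronization point */
            _exit(0);
        }
    }
    while (wait(NULL) > 0) {}
    /* Both workers snapshot 0 and commit 1 in a fixed order, so the
     * result is always 1: the racy update resolves the same way every run. */
    printf("committed value: %d\n", *shared);
    return 0;
}
```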


Architectural Support for Programming Languages and Operating Systems | 2013

STABILIZER: statistically sound performance evaluation

Charlie Curtsinger; Emery D. Berger

Researchers and software developers require effective performance evaluation. Researchers must evaluate optimizations or measure overhead. Software developers use automatic performance regression tests to discover when changes improve or degrade performance. The standard methodology is to compare execution times before and after applying changes. Unfortunately, modern architectural features make this approach unsound. Statistically sound evaluation requires multiple samples to test whether one can or cannot (with high confidence) reject the null hypothesis that results are the same before and after. However, caches and branch predictors make performance dependent on machine-specific parameters and the exact layout of code, stack frames, and heap objects. A single binary constitutes just one sample from the space of program layouts, regardless of the number of runs. Since compiler optimizations and code changes also alter layout, it is currently impossible to distinguish the impact of an optimization from that of its layout effects. This paper presents Stabilizer, a system that enables the use of the powerful statistical techniques required for sound performance evaluation on modern architectures. Stabilizer forces executions to sample the space of memory configurations by repeatedly re-randomizing layouts of code, stack, and heap objects at runtime. Stabilizer thus makes it possible to control for layout effects. Re-randomization also ensures that layout effects follow a Gaussian distribution, enabling the use of statistical tests like ANOVA. We demonstrate Stabilizer's efficiency (<7% median overhead) and its effectiveness by evaluating the impact of LLVM's optimizations on the SPEC CPU2006 benchmark suite. We find that, while -O2 has a significant impact relative to -O1, the performance impact of -O3 over -O2 is indistinguishable from random noise.
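
To make one axis of that re-randomization concrete, here is a small, hypothetical C sketch (shuffled_malloc and shuffled_free are invented helpers, not Stabilizer's mechanism, which also relocates code and stacks): randomly padding each heap allocation turns object placement into a random variable, so repeated runs sample the layout space instead of re-measuring a single fixed layout.

```c
/* Hypothetical sketch of heap-layout re-randomization: each allocation
 * gets a random, alignment-preserving pad, so object placement (and the
 * cache behavior that depends on it) is resampled on every run. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pad by 0..240 bytes in 16-byte steps; stash the raw pointer in a
 * 16-byte header just before the block handed back to the caller. */
static void *shuffled_malloc(size_t size) {
    size_t pad = (size_t)(rand() % 16) * 16;
    char *raw = malloc(size + pad + 16);
    if (!raw) return NULL;
    char *user = raw + pad + 16;
    ((void **)user)[-1] = raw;    /* remember where free() must point */
    return user;
}

static void shuffled_free(void *p) {
    if (p) free(((void **)p)[-1]);
}

int main(void) {
    srand((unsigned)time(NULL));  /* a fresh layout sample per run */
    for (int i = 0; i < 4; i++) {
        void *p = shuffled_malloc(64);
        printf("allocation %d lands at %p\n", i, p);
        shuffled_free(p);
    }
    return 0;
}
```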


Design, Automation, and Test in Europe | 2013

Probabilistic timing analysis on conventional cache designs

Leonidas Kosmidis; Charlie Curtsinger; Eduardo Quiñones; Jaume Abella; Emery D. Berger; Francisco J. Cazorla

Probabilistic timing analysis (PTA), a promising alternative to traditional worst-case execution time (WCET) analyses, enables pairing time bounds (named probabilistic WCET or pWCET) with an exceedance probability (e.g., 10⁻¹⁶), resulting in far tighter bounds than conventional analyses. However, the applicability of PTA has been limited because of its dependence on relatively exotic hardware: fully-associative caches using random replacement. This paper extends the applicability of PTA to conventional cache designs via a software-only approach. We show that, by using a combination of compiler techniques and runtime system support to randomise the memory layout of both code and data, conventional caches behave as fully-associative ones with random replacement.
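
As a rough, hypothetical illustration of the software-only approach (the cache parameters below are invented constants, and the real system randomizes code and data placement through the compiler and runtime): shifting a buffer's start by a random number of cache lines per run randomizes which cache sets it occupies, approximating random placement on a conventional set-associative cache.

```c
/* Hypothetical sketch: give data a random cache-set placement each run
 * by adding a random line-granularity offset, so which lines conflict
 * becomes a random variable that probabilistic analysis can reason about. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LINE 64   /* assumed cache-line size in bytes */
#define SETS 512  /* assumed number of cache sets */

int main(void) {
    srand((unsigned)time(NULL));
    /* Over-allocate, then start the "real" buffer at a random set. */
    char *raw = malloc(4096 + SETS * LINE);
    if (!raw) return 1;
    size_t offset = (size_t)(rand() % SETS) * LINE;
    char *buf = raw + offset;
    /* buf's cache-set mapping now differs from run to run, emulating
     * random placement on a conventional set-associative cache. */
    printf("buffer starts at set offset %zu bytes (%p)\n", offset, (void *)buf);
    for (int i = 0; i < 4096; i++) buf[i] = (char)i;  /* touch the buffer */
    free(raw);
    return 0;
}
```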


International Conference on Software Engineering | 2016

DoubleTake: fast and precise error detection via evidence-based dynamic analysis

Tongping Liu; Charlie Curtsinger; Emery D. Berger

Programs written in unsafe languages like C and C++ often suffer from errors like buffer overflows, dangling pointers, and memory leaks. Dynamic analysis tools like Valgrind can detect these errors, but their overhead, primarily due to the cost of instrumenting every memory read and write, makes them too heavyweight for use in deployed applications and makes testing with them painfully slow. The result is that much deployed software remains susceptible to these bugs, which are notoriously difficult to track down. This paper presents evidence-based dynamic analysis, an approach that enables these analyses while imposing minimal overhead (under 5%), making it practical for the first time to perform them in deployed settings. The key insight of evidence-based dynamic analysis is that for a class of errors, it is possible to ensure that evidence that they happened at some point in the past remains for later detection. Evidence-based dynamic analysis allows execution to proceed at nearly full speed until the end of an epoch (e.g., a heavyweight system call). It then examines program state to check for evidence that an error occurred at some time during that epoch. If so, it rolls back execution and re-executes the code with instrumentation activated to pinpoint the error. We present DoubleTake, a prototype evidence-based dynamic analysis framework. DoubleTake is practical and easy to deploy, requiring no custom hardware, compiler, or operating system support. We demonstrate DoubleTake's generality and efficiency by building dynamic analyses that find buffer overflows, memory use-after-free errors, and memory leaks. Our evaluation shows that DoubleTake is efficient, imposing under 5% overhead on average, making it the fastest such system to date. It is also precise: DoubleTake pinpoints the location of these errors to the exact line and memory addresses where they occur, providing valuable debugging information to programmers.
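
A tiny, hypothetical C sketch of the evidence idea for one error class (guarded_malloc and epoch_check are invented names, and in the real system the epoch ends at a heavyweight system call rather than an explicit call): canary bytes planted past each object survive until an epoch-end scan, so an overflow can execute at full speed and still be detected after the fact.

```c
/* Hypothetical sketch: surround an allocation with canary bytes, run at
 * full speed, and only at an epoch boundary scan the canaries; a
 * clobbered canary is evidence that an overflow happened during the epoch. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0xABu
#define GUARD  16        /* canary bytes after each object */

static unsigned char *tracked;  /* one tracked allocation, for brevity */
static size_t tracked_size;

static void *guarded_malloc(size_t size) {
    tracked = malloc(size + GUARD);
    if (!tracked) exit(1);
    tracked_size = size;
    memset(tracked + size, CANARY, GUARD);  /* plant the evidence */
    return tracked;
}

/* Called at an epoch boundary (e.g., before a heavyweight syscall). */
static void epoch_check(void) {
    for (size_t i = 0; i < GUARD; i++)
        if (tracked[tracked_size + i] != CANARY) {
            fprintf(stderr, "overflow evidence at byte %zu past object\n", i);
            /* A real system would roll back and re-execute the epoch
             * with instrumentation to pinpoint the faulty write. */
            return;
        }
    puts("epoch clean");
}

int main(void) {
    char *buf = guarded_malloc(8);
    strcpy(buf, "too long for 8!");  /* overflow goes unnoticed for now */
    epoch_check();                   /* ...but left evidence behind */
    return 0;
}
```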


Communications of the ACM | 2016

AutoMan: a platform for integrating human-based and digital computation

Daniel W. Barowy; Charlie Curtsinger; Emery D. Berger; Andrew McGregor

Humans can perform many tasks with ease that remain difficult or impossible for computers. Crowdsourcing platforms like Amazon Mechanical Turk make it possible to harness human-based computational power at an unprecedented scale, but their utility as a general-purpose computational platform remains limited. The lack of complete automation makes it difficult to orchestrate complex or interrelated tasks. Recruiting more human workers to reduce latency costs real money, and jobs must be monitored and rescheduled when workers fail to complete their tasks. Furthermore, it is often difficult to predict the length of time and payment that should be budgeted for a given task. Finally, the results of human-based computations are not necessarily reliable, both because human skills and accuracy vary widely, and because workers have a financial incentive to minimize their effort. We introduce AutoMan, the first fully automatic crowdprogramming system. AutoMan integrates human-based computations into a standard programming language as ordinary function calls that can be intermixed freely with traditional functions. This abstraction lets AutoMan programmers focus on their programming logic. An AutoMan program specifies a confidence level for the overall computation and a budget. The AutoMan runtime system then transparently manages all details necessary for scheduling, pricing, and quality control. AutoMan automatically schedules human tasks for each computation until it achieves the desired confidence level; monitors, reprices, and restarts human tasks as necessary; and maximizes parallelism across human workers while staying under budget.
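
The scheduling and quality-control loop is the core of the abstraction. The following hypothetical sketch (in C for consistency with the other examples; AutoMan itself is embedded in Scala, and ask_worker and chance_agreement are invented, much-simplified stand-ins) shows its shape: keep paying for one more answer until the leading answer's level of agreement is too unlikely to have arisen by chance at the requested confidence.

```c
/* Hypothetical sketch of confidence-driven task scheduling with
 * simulated workers and deliberately crude statistics. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define OPTIONS 4  /* a multiple-choice task with 4 answers */

/* Stand-in for posting the task to a crowd platform. */
static int ask_worker(void) {
    return (rand() % 10 < 7) ? 0 : rand() % OPTIONS;  /* 70% pick answer 0 */
}

/* Probability that k independent random guessers all agree on one
 * preset option: (1/OPTIONS)^k. Crude, but shows the shape of the test. */
static double chance_agreement(int k) {
    double p = 1.0;
    for (int i = 0; i < k; i++) p /= OPTIONS;
    return p;
}

int main(void) {
    srand((unsigned)time(NULL));
    double confidence = 0.95;
    int votes[OPTIONS] = {0}, asked = 0, best = 0;

    do {
        votes[ask_worker()]++;  /* schedule (and pay for) one more worker */
        asked++;
        for (int i = 0; i < OPTIONS; i++)
            if (votes[i] > votes[best]) best = i;
    } while (chance_agreement(votes[best]) > 1.0 - confidence && asked < 100);

    printf("answer %d after %d workers (agreement %d)\n",
           best, asked, votes[best]);
    return 0;
}
```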


USENIX Security Symposium | 2011

ZOZZLE: fast and precise in-browser JavaScript malware detection

Charlie Curtsinger; Benjamin Livshits; Benjamin G. Zorn; Christian Seifert


Symposium on Operating Systems Principles | 2015

Coz: finding code that counts with causal profiling

Charlie Curtsinger; Emery D. Berger


Communications of the ACM | 2018

Coz: finding code that counts with causal profiling

Charlie Curtsinger; Emery D. Berger


Archive | 2012

STABILIZER: Enabling Statistically Rigorous Performance Evaluation

Charlie Curtsinger; Emery D. Berger


USENIX Annual Technical Conference | 2016

COZ: Finding Code that Counts with Causal Profiling

Charlie Curtsinger; Emery D. Berger

Collaboration


Dive into Charlie Curtsinger's collaborations.

Top Co-Authors

Emery D. Berger (University of Massachusetts Amherst)
Tongping Liu (University of Texas at San Antonio)
Andrew McGregor (University of Massachusetts Amherst)
Daniel W. Barowy (University of Massachusetts Amherst)
Eduardo Quiñones (Barcelona Supercomputing Center)
Francisco J. Cazorla (Barcelona Supercomputing Center)
Jaume Abella (Barcelona Supercomputing Center)