Featured Research

Programming Languages

Communication-Safe Web Programming in TypeScript with Routed Multiparty Session Types

Modern web programming involves coordinating interactions between browser clients and a server. Typically, the interactions in web-based distributed systems are informally described, making it hard to ensure correctness, especially communication safety, i.e. all endpoints progress without type errors or deadlocks, conforming to a specified protocol. We present STScript, a toolchain that generates TypeScript APIs for communication-safe web development over WebSockets, and RouST, a new session type theory that supports multiparty communications with routing mechanisms. STScript provides developers with TypeScript APIs generated from a communication protocol specification based on RouST. The generated APIs build upon TypeScript concurrency practices, complement the event-driven style of programming in full-stack web development, and are compatible with the Node.js runtime for server-side endpoints and the React.js framework for browser-side endpoints. RouST can express multiparty interactions routed via an intermediate participant. It supports peer-to-peer communication between browser-side endpoints by routing communication via the server in a way that avoids excessive serialisation. RouST guarantees communication safety for endpoint web applications written using STScript APIs. We evaluate the expressiveness of STScript for modern web programming using several production-ready case studies deployed as web applications.
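
As a rough illustration of the style of communication-safe TypeScript API the toolchain aims at, consider the hypothetical sketch below. The names and type shapes are assumptions made for illustration only, not the actual STScript-generated output:

```typescript
// Hypothetical sketch, not the actual STScript-generated API: a two-message
// exchange where the Server receives a Request and must answer with Ok or
// Err. Encoding the protocol as handler types makes a missing case or a
// wrongly typed payload a compile-time error.
type ServerReply =
  | { label: "Ok"; payload: { result: number } }
  | { label: "Err"; payload: { reason: string } };

type ServerHandlers = {
  // Invoked when the Client sends a Request; the return type forces the
  // handler to follow the protocol's reply labels and payload types.
  Request: (payload: { query: string }) => ServerReply;
};

const serverLogic: ServerHandlers = {
  Request: ({ query }) =>
    query.length > 0
      ? { label: "Ok", payload: { result: query.length } }
      : { label: "Err", payload: { reason: "empty query" } },
};

console.log(serverLogic.Request({ query: "hello" })); // { label: "Ok", ... }
```

In the toolchain described above, such handler types would be generated from a RouST protocol specification and wired to a WebSocket transport rather than written by hand.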

Read more
Programming Languages

Compact Native Code Generation for Dynamic Languages on Micro-core Architectures

Micro-core architectures combine many simple, low-memory, low-power CPU cores onto a single chip. By potentially providing significant performance at low power consumption, this technology is of great interest not only for embedded, edge, and IoT uses, but also as an accelerator for data-center workloads. Due to the restricted nature of such CPUs, these architectures have traditionally been challenging to program, not least because of the very constrained amounts of memory (often around 32KB) and the idiosyncrasies of the technology. However, more recently, dynamic languages such as Python have been ported to a number of micro-cores, but these ports are often delivered as interpreters, which carry an associated performance limitation. We target four objectives: performance, unlimited code size, portability between architectures, and maintaining the programmer productivity benefits of dynamic languages. The limited memory available means that classic techniques employed by dynamic language compilers, such as just-in-time (JIT) compilation, are simply not feasible. In this paper we describe a compilation approach for dynamic languages on micro-core architectures which aims to meet these four objectives, and we use Python as a vehicle for exploring its application in replacing the existing micro-core interpreter. Our experiments focus on the metrics of performance, architecture portability, minimum memory size, and programmer productivity, comparing our approach against writing native C code. The outcome of this work is the identification of a series of techniques that are not only suitable for compiling Python code, but also applicable to a wide variety of dynamic languages on micro-cores.

Read more
Programming Languages

Compilation of Coordinated Choice

Recently, we proposed coordinated choices, which are nondeterministic choices equipped with names. Their main characteristic is that they synchronize the nondeterministic decision among all choices that share the same name. This synchronization mechanism was introduced to solve a theoretical problem, so in a practical programming language we would still like to use coordinated choices like standard ones; in other words, we want to avoid synchronization. This raises two problems: (i) practically, it is somewhat involved to write a program using coordinated choices in which no synchronization ever happens during execution; and (ii) theoretically, it is unknown whether every program using standard choices can be expressed using only coordinated ones. In this paper, we define two simply typed lambda calculi, λ∥ equipped with standard choices and λ∥ω equipped with coordinated choices, and give compilation rules from the former into the latter. The challenge is to show the correctness of the compilation, because behavioral correspondence between expressions before and after compilation cannot be defined directly from the compilation rules. To address this, we give an effect system for λ∥ω that characterizes expressions in which synchronization never happens during execution. We then show that all compiled expressions can be typed by the effect system. As a result, the correctness proof becomes straightforward, since its main concern is whether synchronization happens.
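
To make the synchronization behaviour concrete, here is a small illustrative sketch written in TypeScript rather than the paper's lambda calculi; the `choose` function and its memoized decision table are assumptions for illustration only. Choices that share a name always resolve to the same branch index:

```typescript
// Illustrative sketch only: named nondeterministic choice where every choice
// with the same name resolves to the same branch index ("synchronization").
const decisions = new Map<string, number>();

function choose<T>(name: string, ...branches: Array<() => T>): T {
  // Decide once per name; reuse that decision for later choices of the name.
  if (!decisions.has(name)) {
    decisions.set(name, Math.floor(Math.random() * branches.length));
  }
  return branches[decisions.get(name)!]();
}

// Both choices are named "a", so their results are always correlated:
const first = choose("a", () => 1, () => 2);
const second = choose("a", () => 10, () => 20);
console.log(first, second); // either "1 10" or "2 20", never "1 20" or "2 10"
```

In this sketch, synchronization is avoided exactly when no two executed choices share a name; the paper's effect system plays the analogous role for λ∥ω.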

Read more
Programming Languages

Compiling ONNX Neural Network Models Using MLIR

Deep neural network models are becoming increasingly popular and have been used in various tasks such as computer vision, speech recognition, and natural language processing. Machine learning models are commonly trained in a resource-rich environment and then deployed in a distinct environment such as high-availability machines or edge devices. To assist the portability of models, the open-source community has proposed the Open Neural Network Exchange (ONNX) standard. In this paper, we present a high-level, preliminary report on our onnx-mlir compiler, which generates code for the inference of deep neural network models described in the ONNX format. Onnx-mlir is an open-source compiler implemented using the Multi-Level Intermediate Representation (MLIR) infrastructure recently integrated into the LLVM project. Onnx-mlir relies on the MLIR concept of dialects to implement its functionality. We propose two new dialects: (1) an ONNX-specific dialect that encodes the ONNX standard semantics, and (2) a loop-based dialect that provides a common lowering point for all ONNX dialect operations. These two intermediate representations facilitate graph-level and loop-based optimizations, respectively. We illustrate our approach by following several models through the proposed representations, and we include some early optimization work and performance results.

Read more
Programming Languages

Compiling a Higher-Order Smart Contract Language to LLVM

Scilla is a higher-order, polymorphic, typed intermediate-level language for implementing smart contracts. In this talk, we describe a Scilla compiler targeting LLVM, with a focus on mapping Scilla types, values, and its functional language constructs to LLVM-IR. The compiled LLVM-IR, when executed with LLVM's JIT framework, achieves a speedup of about 10x over the reference interpreter on a typical Scilla contract. This reduced latency is crucial in the setting of blockchains, where smart contracts are executed as parts of transactions, to achieve peak transactions processed per second. Experiments on the Ackermann function achieved a speedup of more than 45x. This talk abstract is aimed both at programming language researchers looking to implement an LLVM-based compiler for their functional language and at LLVM practitioners.

Read more
Programming Languages

Complete Bidirectional Typing for the Calculus of Inductive Constructions

This article presents a bidirectional type system for the Calculus of Inductive Constructions (CIC). It introduces a new judgement, dubbed constrained inference, intermediate between the usual inference and checking judgements, to handle the presence of computation in types. The key property of the system is its completeness with respect to the usual undirected one, which has been formally proven in Coq as part of the MetaCoq project. Although it plays an important role in an ongoing completeness proof for a realistic typing algorithm, the interest of bidirectionality is wider, as it gives insights and structure when trying to prove properties of CIC or to design variations and extensions.
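
As a rough sketch of the three judgement forms (the notation here is illustrative and may differ from the paper's): inference synthesizes a type, checking verifies a term against a given type, and constrained inference synthesizes a type whose head constructor is prescribed, for example a product type when typing an application.

```latex
% Illustrative rules only (notation assumed, not taken from the paper):
%   \Gamma \vdash t \triangleright T        inference (T is an output)
%   \Gamma \vdash t \triangleleft T         checking  (T is an input)
%   \Gamma \vdash t \triangleright_{\Pi} T  constrained inference of a product type
\[
\frac{\Gamma \vdash t \triangleright T \qquad
      T \;\rightsquigarrow^{*}\; \Pi x{:}A.\,B}
     {\Gamma \vdash t \triangleright_{\Pi} \Pi x{:}A.\,B}
\qquad
\frac{\Gamma \vdash t \triangleright_{\Pi} \Pi x{:}A.\,B \qquad
      \Gamma \vdash u \triangleleft A}
     {\Gamma \vdash t\,u \triangleright B[x := u]}
\]
```

The first rule is where "computation in types" enters: the inferred type must be reduced before its product structure can be exploited by the application rule.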

Read more
Programming Languages

Compositional Semantics for Probabilistic Programs with Exact Conditioning

We define a probabilistic programming language for Gaussian random variables with a first-class exact conditioning construct. We give operational, denotational and equational semantics for this language, establishing convenient properties like exchangeability of conditions. Conditioning on equality of continuous random variables is nontrivial, as the exact observation may have probability zero; this is Borel's paradox. Using categorical formulations of conditional probability, we show that the good properties of our language are not particular to Gaussians, but can be derived from universal properties, thus generalizing to wider settings. We define the Cond construction, which internalizes conditioning as a morphism, providing general compositional semantics for probabilistic programming with exact conditioning.
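
As a worked instance of the kind of measure-zero conditioning involved (this example is ours, for illustration, and not taken from the paper): condition two independent standard Gaussians on the exact value of their sum.

```latex
% x and y are independent standard Gaussians; we condition on x + y = c,
% an event of probability zero.
\[
x \sim \mathcal{N}(0,1), \qquad y \sim \mathcal{N}(0,1), \qquad
\text{observe } x + y = c .
\]
% The joint Gaussian structure still determines the conditional exactly:
\[
\mathbb{E}[x \mid x+y=c] \;=\;
\frac{\operatorname{Cov}(x,\,x+y)}{\operatorname{Var}(x+y)}\;c \;=\; \frac{c}{2},
\qquad
x \mid (x+y=c) \;\sim\; \mathcal{N}\!\left(\tfrac{c}{2},\, \tfrac{1}{2}\right).
\]
```

Borel's paradox is the observation that such conditionals can depend on how the zero-probability event is parameterized; the compositional semantics in the paper is concerned with making constructs of this kind well behaved.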

Read more
Programming Languages

ConSORT: Context- and Flow-Sensitive Ownership Refinement Types for Imperative Programs

We present ConSORT, a type system for safety verification in the presence of mutability and aliasing. Mutability requires strong updates to model changing invariants during program execution, but aliasing between pointers makes it difficult to determine which invariants must be updated in response to mutation. Our type system addresses this difficulty with a novel combination of refinement types and fractional ownership types. Fractional ownership types provide flow-sensitive and precise aliasing information for reference variables. ConSORT interprets this ownership information to soundly handle strong updates of potentially aliased references. We have proved ConSORT sound and implemented a prototype, fully automated inference tool. We evaluated our tool and found it verifies non-trivial programs including data structure implementations.
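
A minimal sketch of the difficulty, written in TypeScript for illustration (ConSORT itself targets an imperative language with references, and the ownership annotations below are comments only): mutation through one alias silently invalidates a refinement known about another.

```typescript
// Illustrative sketch only. Suppose a refinement type records "c.value > 0".
type Cell = { value: number };

function negate(q: Cell): void {
  // A fractional-ownership discipline would demand ownership 1 of q here,
  // because this write performs a strong update of the cell's invariant.
  q.value = -1;
}

const p: Cell = { value: 1 }; // refinement we would like to rely on: p.value > 0
const q = p;                  // aliasing: p and q refer to the same cell
negate(q);                    // the write through q must also update what is
                              // known about p, which holds ownership 0
console.log(p.value > 0);     // false: the old refinement on p is stale
```

Combining refinement types with fractional ownership is what lets the verifier perform this kind of update soundly and precisely.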

Read more
Programming Languages

Concentration-Bound Analysis for Probabilistic Programs and Probabilistic Recurrence Relations

The analysis of probabilistic programs and randomized algorithms is a classical problem in computer science. The first basic problem in the analysis of stochastic processes is to consider the expectation, or mean, and another basic problem is to consider concentration bounds, i.e. showing that large deviations from the mean have small probability. Similarly, in the context of probabilistic programs and randomized algorithms, the analysis of expected termination time/running time and their concentration bounds are fundamental questions. In this work, we focus on concentration bounds for probabilistic programs and probabilistic recurrences of randomized algorithms. For probabilistic programs, the basic technique to achieve concentration bounds is to consider martingales and apply the classical Azuma's inequality. For probabilistic recurrences of randomized algorithms, Karp's classical "cookbook" method, which is similar to the master theorem for recurrences, is the standard approach to obtain concentration bounds. In this work, we propose a novel approach for deriving concentration bounds for probabilistic programs and probabilistic recurrence relations through the synthesis of exponential supermartingales. For probabilistic programs, we present algorithms for the synthesis of such supermartingales in several cases. We also show that our approach can derive better concentration bounds than simply applying the classical Azuma's inequality over various probabilistic programs considered in the literature. For probabilistic recurrences, our approach can derive tighter bounds than Karp's well-established method on classical algorithms. Moreover, we show that our approach can derive bounds comparable to the optimal bound for quicksort proposed by McDiarmid and Hayward. We also present a prototype implementation that can automatically infer these bounds.
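
For reference, the classical inequality underlying the martingale-based technique mentioned above is Azuma's inequality: a martingale with bounded per-step differences concentrates around its starting value.

```latex
% Azuma's inequality: if X_0, X_1, ..., X_n is a martingale with
% |X_i - X_{i-1}| <= c_i for all i, then for every t > 0
\[
\Pr\!\left[\, X_n - X_0 \ge t \,\right]
\;\le\;
\exp\!\left( \frac{-\,t^{2}}{2 \sum_{i=1}^{n} c_i^{2}} \right).
\]
```

The exponential supermartingales synthesized in the paper aim to give bounds that beat this generic inequality on specific programs.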

Read more
Programming Languages

Concise Outlines for a Complex Logic: A Proof Outline Checker for TaDA (Full Paper)

Modern separation logics allow one to prove rich properties of intricate code, e.g. functional correctness and linearizability of non-blocking concurrent code. However, this expressiveness leads to a complexity that makes these logics difficult to apply. Manual proofs or proofs in interactive theorem provers consist of a large number of steps, often with subtle side conditions. On the other hand, automation with dedicated verifiers typically requires sophisticated proof search algorithms that are specific to the given program logic, resulting in limited tool support that makes it difficult to experiment with program logics, e.g. when learning, improving, or comparing them. Proof outline checkers fill this gap. Their input is a program annotated with the most essential proof steps, just like the proof outlines typically presented in papers. The tool then checks automatically that this outline represents a valid proof in the program logic. In this paper, we systematically develop a proof outline checker for the TaDA logic, which reduces the checking to a simpler verification problem, for which automated tools exist. Our approach leads to proof outline checkers that provide substantially more automation than interactive provers, but are much simpler to develop than custom automatic verifiers.

Read more
