Featured Research

Programming Languages

AnyHLS: High-Level Synthesis with Partial Evaluation

FPGAs excel in low-power and high-throughput computations, but they are challenging to program. Traditionally, developers rely on hardware description languages like Verilog or VHDL to specify the hardware behavior at the register-transfer level. High-Level Synthesis (HLS) raises the level of abstraction, but still requires FPGA design knowledge. Programmers usually write pragma-annotated C/C++ programs to define the hardware architecture of an application. However, each hardware vendor extends its own C dialect using its own vendor-specific set of pragmas. This prevents portability across different vendors. Furthermore, pragmas are not first-class citizens in the language, which makes it hard to use them in a modular way or to design proper abstractions. In this paper, we present AnyHLS, an approach to synthesize FPGA designs in a modular and abstract way. AnyHLS raises the abstraction level of existing HLS tools by resorting to programming language features such as types and higher-order functions: it relies on partial evaluation to specialize and optimize the user application based on a library of abstractions, and then generates vendor-specific HLS code for Intel and Xilinx FPGAs. Portability is obtained by avoiding any vendor-specific pragmas in the source code. To validate the achievable gains in productivity, a library for the domain of image processing is introduced as a case study, and its synthesis results are compared with several state-of-the-art Domain-Specific Language (DSL) approaches for this domain.
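
The abstract includes no code, so the following Python sketch only illustrates the general mechanism it describes: a library abstraction is specialized by partial evaluation into a residual, straight-line kernel, which a real system would then hand to a vendor HLS backend. The names (specialize_point_op, brighten) are invented for the illustration and are not part of AnyHLS.

```python
# Hypothetical sketch of the partial-evaluation idea: a higher-order "point operator"
# abstraction is specialised against a concrete pixel expression, producing a residual
# kernel with no abstraction overhead left (names invented, not AnyHLS code).

def specialize_point_op(op_name, op_expr):
    """'Partial evaluator': generate a residual kernel with the operation inlined."""
    src = (
        f"def {op_name}(img):\n"
        f"    out = [0] * len(img)\n"
        f"    for i in range(len(img)):\n"
        f"        px = img[i]\n"
        f"        out[i] = {op_expr}\n"
        f"    return out\n"
    )
    env = {}
    exec(src, env)            # residual program: no higher-order calls remain
    return env[op_name], src

# Specialise the generic point operator into a brightness-adjustment kernel.
brighten, residual_code = specialize_point_op("brighten", "min(px + 40, 255)")

print(residual_code)              # the specialised kernel, free of library abstractions
print(brighten([0, 100, 230]))    # [40, 140, 255]
```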

Read more
Programming Languages

Role-Based Security Analysis for Software Factories (Análise de Segurança Baseada em Roles para Fábricas de Software)

Most software factories contain applications with sensitive information that needs to be protected against breaches of confidentiality and integrity, which can have serious consequences. In the context of large factories with complex applications, it is not feasible to manually analyse accesses to sensitive information without some form of safety mechanism. This article presents a static analysis technique for software factories based on role-based security policies. We start by synthesising a graph representation of the relevant software factories, based on the security policy defined by the user. The graph model is then analysed to find accesses to information that breach the security policy, ensuring that all possible execution states are analysed. A proof of concept of our technique has been developed for the analysis of OutSystems software factories. The security reports generated by the tool allow developers to find and prioritise security breaches in their factories. The prototype was evaluated on large software factories with strong safety requirements. Several security flaws were found, some of them serious and hard to detect without our analysis.
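
The article describes building a graph of the factory and checking it against a role-based policy; as a purely illustrative sketch (not the OutSystems prototype), the Python code below uses graph reachability to report roles that can reach a sensitive entity their policy does not allow. All element, role, and entity names are made up.

```python
# Minimal, hypothetical role-based reachability check: application elements form a
# directed graph, a policy says which sensitive entities each role may access, and
# any violating reachable entity is reported.

from collections import deque

# edges: caller element -> callee elements (screens, actions, database entities)
graph = {
    "PublicSite":   ["SearchScreen", "OrderScreen"],
    "BackOffice":   ["SalaryScreen"],
    "SearchScreen": ["ProductsTable"],
    "OrderScreen":  ["CustomersTable"],
    "SalaryScreen": ["SalariesTable"],
}

entry_points = {"Anonymous": "PublicSite", "HRManager": "BackOffice"}
sensitive = {"CustomersTable", "SalariesTable"}
policy = {"Anonymous": set(), "HRManager": {"SalariesTable"}}   # role -> allowed entities

def reachable(start):
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(graph.get(node, []))
    return seen

for role, entry in entry_points.items():
    breaches = (reachable(entry) & sensitive) - policy[role]
    for entity in sorted(breaches):
        print(f"breach: role {role} can reach {entity} starting from {entry}")
# breach: role Anonymous can reach CustomersTable starting from PublicSite
```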

Read more
Programming Languages

Applying Type Oriented Programming to the PGAS Memory Model

The Partitioned Global Address Space (PGAS) memory model has been popularised by a number of languages and applications. However, this abstraction often forces the programmer to rely on built-in choices, and with such implicit parallelism, guided by little input from the programmer, the scalability and performance of the code depend heavily on the compiler and the application. We propose an approach, type oriented programming, where all aspects of parallelism are encoded via types and the type system. The type information supplied by the programmer determines, for instance, how an array is allocated, partitioned and distributed. With this rich, high-level information the compiler can generate an efficient target executable. If the programmer wishes to omit detailed type information, then the compiler relies on well-documented and safe default behaviour, which can be tuned at a later date with the addition of types. The type oriented parallel programming language Mesham, which follows the PGAS memory model, is presented. We illustrate how, if desired, types can be used to tune all parameters and options associated with this PGAS model in a clean and consistent manner, without rewriting large portions of code. An FFT case study is presented and considered both in terms of programmability and performance - the latter demonstrated by comparison with an existing FFT solver.
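
Mesham's syntax is not shown in the abstract, so the following Python sketch only conveys the flavour of type oriented programming: small descriptor objects stand in for type annotations, and a toy "compiler" derives the data distribution purely from them, falling back to a documented default when no type is given. Row, Cyclic, and allocate are invented names, not Mesham constructs.

```python
# Hypothetical sketch: the "type" attached to an array alone decides how the data is
# partitioned across PGAS processes; omitting it triggers a safe default.

from dataclasses import dataclass

@dataclass
class Row:        # partition into contiguous blocks
    pass

@dataclass
class Cyclic:     # partition elements round-robin
    pass

def allocate(n, partition, nprocs):
    """'Compiler': choose a distribution purely from the type information."""
    if isinstance(partition, Row):
        block = (n + nprocs - 1) // nprocs
        return {p: list(range(p * block, min((p + 1) * block, n))) for p in range(nprocs)}
    if isinstance(partition, Cyclic):
        return {p: list(range(p, n, nprocs)) for p in range(nprocs)}
    return {0: list(range(n))}        # default behaviour when no type is supplied

print(allocate(8, Row(), 4))     # {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7]}
print(allocate(8, Cyclic(), 4))  # {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```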

Read more
Programming Languages

AsyncTaichi: On-the-fly Inter-kernel Optimizations for Imperative and Spatially Sparse Programming

Leveraging spatial sparsity has become a popular approach to accelerate 3D computer graphics applications. Spatially sparse data structures and efficient sparse kernels (such as parallel stencil operations on active voxels) are key to achieving high performance. Existing work focuses on improving performance within a single sparse computational kernel. We show that a system that looks beyond a single kernel, plus additional domain-specific sparse data structure analysis, opens up an exciting new space for optimizing sparse computations. Specifically, we propose a domain-specific data-flow graph model of imperative and sparse computation programs, which describes kernel relationships and enables easy analysis and optimization. Combined with an asynchronous execution engine that exposes a wide window of kernels, the inter-kernel optimizer can then perform effective sparse computation optimizations, such as eliminating unnecessary voxel list generations and removing voxel activation checks. These domain-specific optimizations further make way for classical general-purpose optimizations that would otherwise be challenging to apply directly to computations on sparse data structures. Without any modification to the computational code, our new system delivers 4.02× fewer kernel launches and a 1.87× speedup on our GPU benchmarks, including computations on Eulerian grids, Lagrangian particles, meshes, and automatic differentiation.
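
None of the Taichi code appears here, so the sketch below is a deliberately simplified Python model of one optimization the abstract mentions: in a recorded task list, a voxel-list generation is dropped whenever no kernel since the previous generation may have activated new voxels. The task names and the tuple encoding are invented for the illustration.

```python
# Illustrative model (not Taichi code) of eliminating redundant voxel-list generations.
# Each task: (name, kind, may_activate), where kind is "listgen" or "compute".

tasks = [
    ("listgen_grid",   "listgen", False),
    ("advect",         "compute", False),   # touches existing voxels only
    ("listgen_grid_2", "listgen", False),   # redundant: topology unchanged
    ("project",        "compute", True),    # may activate new voxels
    ("listgen_grid_3", "listgen", False),   # needed again after activation
]

def remove_redundant_listgens(tasks):
    optimized, topology_dirty = [], True    # no valid list exists at program start
    for name, kind, may_activate in tasks:
        if kind == "listgen":
            if not topology_dirty:
                continue                    # skip: the previous list is still valid
            topology_dirty = False
        elif may_activate:
            topology_dirty = True
        optimized.append((name, kind, may_activate))
    return optimized

for name, _, _ in remove_redundant_listgens(tasks):
    print(name)
# listgen_grid, advect, project, listgen_grid_3  (listgen_grid_2 removed)
```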

Read more
Programming Languages

Asynchronous Effects

We explore asynchronous programming with algebraic effects. We complement their conventional synchronous treatment by showing how to naturally also accommodate asynchrony within them, namely, by decoupling the execution of operation calls into signalling that an operation's implementation needs to be executed, and interrupting a running computation with the operation's result, to which the computation can react by installing interrupt handlers. We formalise these ideas in a small core calculus, called λæ. We demonstrate the flexibility of λæ using examples ranging from a multi-party web application, to preemptive multi-threading, to remote function calls, to a parallel variant of runners of algebraic effects. In addition, the paper is accompanied by a formalisation of λæ's type safety proofs in Agda, and a prototype implementation of λæ in OCaml.
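
The λæ calculus and its OCaml prototype are not reproduced here; the Python sketch below only mimics the decoupling the abstract describes: a non-blocking signal requests the operation, the computation keeps running, and an interrupt later delivers the result to an installed handler. All names are invented.

```python
# Rough analogue (not λæ itself) of splitting an operation call into (1) a signal that
# the operation should run and (2) an interrupt carrying its result, handled by an
# installed interrupt handler while the main computation proceeds in between.

import queue, threading, time

interrupts = queue.Queue()

def remote_add(x, y):
    """The operation's implementation, running elsewhere; replies via an interrupt."""
    time.sleep(0.1)
    interrupts.put(("add_result", x + y))

# (1) signal: request the operation and continue immediately
threading.Thread(target=remote_add, args=(2, 3), daemon=True).start()

# the computation keeps running while the call is in flight
partial_work = sum(range(10))
print("kept computing:", partial_work)

# (2) interrupt handler: react when the result eventually arrives
handlers = {"add_result": lambda payload: print("handled interrupt, result =", payload)}

name, payload = interrupts.get()    # interrupt arrives
handlers[name](payload)
```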

Read more
Programming Languages

Automated Termination Analysis of Polynomial Probabilistic Programs

The termination behavior of probabilistic programs depends on the outcomes of random assignments. Almost sure termination (AST) is concerned with the question of whether a program terminates with probability one on all possible inputs. Positive almost sure termination (PAST) focuses on termination in a finite expected number of steps. This paper presents a fully automated approach to the termination analysis of probabilistic while-programs whose guards and expressions are polynomial expressions. As proving (positive) AST is undecidable in general, existing proof rules typically provide sufficient conditions. These conditions mostly involve constraints on supermartingales. We consider four proof rules from the literature and extend these with generalizations of existing proof rules for (P)AST. We automate the resulting set of proof rules by effectively computing asymptotic bounds on polynomials over the program variables. These bounds are used to decide the sufficient conditions - including the constraints on supermartingales - of a proof rule. Our software tool Amber can thus check AST, PAST, as well as their negations for a large class of polynomial probabilistic programs, while carrying out the termination reasoning fully with polynomial witnesses. Experimental results show the merits of our generalized proof rules and demonstrate that Amber can handle probabilistic programs that are out of reach for other state-of-the-art tools.
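
Amber automates this reasoning via asymptotic bounds on polynomials; the small sympy check below (invented loop and candidate, not Amber's algorithm) only illustrates the core supermartingale condition behind such proof rules: the candidate expression must decrease in expectation by at least a constant ε on every loop iteration (side conditions such as non-negativity are omitted).

```python
# Toy check of a ranking-supermartingale condition for a biased random walk:
#   while x > 0: with prob 3/4: x -> x - 1; with prob 1/4: x -> x + 1

import sympy as sp

x = sp.symbols('x', real=True)

branches = [(sp.Rational(3, 4), x - 1), (sp.Rational(1, 4), x + 1)]

V = x                                      # candidate ranking supermartingale
expected_next = sum(p * V.subs(x, nxt) for p, nxt in branches)
drift = sp.simplify(expected_next - V)     # E[V' | x] - V = -1/2

epsilon = sp.Rational(1, 4)
print("drift =", drift)                            # -1/2
print("decreases by at least eps:", drift <= -epsilon)   # True -> suggests PAST
```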

Read more
Programming Languages

Automated Verification of Integer Overflow

Integer overflow accounts for one of the major sources of bugs in software. Verification systems typically assume a well-defined underlying semantics for the various integer operations and do not explicitly check for integer overflow in programs. In this paper we present a specification mechanism for expressing integer overflow. We develop an automated procedure for integer overflow checking during program verification. We have implemented a prototype integer overflow checker and tested it on a benchmark consisting of already verified programs (over 14k LOC). We found 43 bugs in these programs due to integer overflow.
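
The paper's specification mechanism is not shown in the abstract, so the sketch below is only a toy stand-in for the idea of checking overflow explicitly: given interval preconditions on the operands, it decides whether a 32-bit signed addition or multiplication can leave the representable range. All names are invented for the illustration.

```python
# Toy interval-based overflow check (not the paper's actual procedure).

INT_MIN, INT_MAX = -2**31, 2**31 - 1

def add_range(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul_range(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def may_overflow(lo, hi):
    return lo < INT_MIN or hi > INT_MAX

# Precondition: 0 <= x <= 100000 and 0 <= y <= 100000
x, y = (0, 100_000), (0, 100_000)

print("x + y may overflow int32:", may_overflow(*add_range(x, y)))  # False (max 2*10^5)
print("x * y may overflow int32:", may_overflow(*mul_range(x, y)))  # True  (max 10^10)
```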

Read more
Programming Languages

Automatic Differentiation via Effects and Handlers: An Implementation in Frank

Automatic differentiation (AD) is an important family of algorithms that enables derivative-based optimization. We show that AD can be implemented simply with effects and handlers by doing so in the Frank language. By considering how our implementation behaves in Frank's operational semantics, we show how our code performs the dynamic creation of programs during evaluation.
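
The paper's implementation is in Frank; the Python sketch below is only a loose analogue of the effects-and-handlers structure: the program performs abstract arithmetic operations, and two "handlers" reinterpret them, one as plain evaluation and one as forward-mode AD on (value, derivative) pairs. The class and function names are invented.

```python
# Handler-style reinterpretation of arithmetic operations (not Frank code).

class Evaluate:
    """Handler 1: interpret operations as ordinary float arithmetic."""
    def const(self, c):   return c
    def add(self, a, b):  return a + b
    def mul(self, a, b):  return a * b

class ForwardAD:
    """Handler 2: interpret operations on (value, derivative) pairs."""
    def const(self, c):   return (c, 0.0)
    def add(self, a, b):  return (a[0] + b[0], a[1] + b[1])
    def mul(self, a, b):  return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def program(h, x):
    # f(x) = x * x + 3 * x, written only in terms of the handler's operations
    return h.add(h.mul(x, x), h.mul(h.const(3.0), x))

print(program(Evaluate(), 2.0))            # 10.0
print(program(ForwardAD(), (2.0, 1.0)))    # (10.0, 7.0): f(2) = 10, f'(2) = 7
```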

Read more
Programming Languages

Automatic Kernel Generation for Volta Tensor Cores

A commonly occurring computation idiom in neural networks is to perform some pointwise operations on the result of a matrix multiplication. Such a sequence of operations is typically represented as a computation graph in deep learning compilers. When compiling to a GPU target, these computations can be individually mapped to manually tuned implementations provided by libraries such as cuBLAS and cuDNN. These libraries also provide off-the-shelf support for targeting tensor cores in NVIDIA GPUs, which can lead to huge performance boosts through their specialized support for mixed-precision matrix math. Alternatively, tensor cores can be programmed directly using CUDA APIs or inline assembly instructions, which opens up the possibility of generating efficient CUDA kernels automatically for such computations. Automatic kernel generation is particularly crucial when it is beneficial to generate efficient code for an entire computation graph by fusing several operations into a single device function instead of invoking a separate kernel for each of them. Polyhedral compilation techniques provide a systematic approach for the analysis and transformation of a sequence of affine loop-nests. In this paper, we describe a polyhedral approach to generate efficient CUDA kernels for matrix multiplication using inline assembly instructions for programming tensor cores on NVIDIA Volta GPUs. Furthermore, we build on this approach to generate fused kernels for computation sequences involving matrix multiplication and pointwise operations such as bias addition and ReLU activation. Experimental evaluation of these techniques shows that automatically generated kernels can provide significantly better performance than manually tuned library implementations, with speedups of up to 2.55×.
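
As a hedged reference only (NumPy, not the generated CUDA), the snippet below spells out what a fused GEMM + bias + ReLU kernel computes and why fusion matters: the unfused version materializes the matmul result and applies the pointwise operations in separate passes, whereas a generated fused kernel applies them to each accumulator tile before it is written out.

```python
# NumPy reference semantics for D = ReLU(A @ B + bias); shapes are arbitrary examples.

import numpy as np

rng = np.random.default_rng(0)
M, K, N = 64, 32, 48
A = rng.standard_normal((M, K), dtype=np.float32)
B = rng.standard_normal((K, N), dtype=np.float32)
bias = rng.standard_normal(N, dtype=np.float32)

# Unfused: separate matmul, bias, and ReLU passes with intermediate tensors.
C = A @ B
D_unfused = np.maximum(C + bias, 0.0)

# "Fused" reference: one expression; a code generator would emit a single kernel
# applying the bias and ReLU to each accumulator tile before storing it.
D_fused = np.maximum(A @ B + bias, 0.0)

print(np.allclose(D_unfused, D_fused))   # True
```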

Read more
Programming Languages

Automatic Repair of Vulnerable Regular Expressions

A regular expression is called vulnerable if there exist input strings on which the usual backtracking-based matching algorithm runs in super-linear time. Software containing vulnerable regular expressions is prone to algorithmic-complexity denial-of-service attacks, in which a malicious user provides input strings that exhibit the bad behavior. Due to the prevalence of regular expressions in modern software, vulnerable regular expressions are a serious threat to software security. While there has been prior work on detecting vulnerable regular expressions, in this paper we present a first step toward repairing a possibly vulnerable regular expression. Importantly, our method handles real-world regular expressions containing extended features such as lookarounds, capturing groups, and backreferences. (The problem is actually trivial without such extensions, since any pure regular expression can be made invulnerable via a DFA conversion.) We build our approach on the recent work on example-based repair of regular expressions by Pan et al. [Pan et al. 2019], which synthesizes a regular expression that is syntactically close to the original one and correctly classifies the given set of positive and negative examples. The key new idea is the use of linear-time constraints, which disambiguate a regular expression and ensure linear-time matching. We generate the constraints using an extended nondeterministic finite automaton that supports the extended features found in real-world regular expressions. While our method is not guaranteed to produce a semantically equivalent regular expression, we empirically show that the repaired regular expressions tend to be nearly indistinguishable from the original ones.
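
The repair algorithm itself is not shown in the abstract; the short Python timing script below only demonstrates the underlying problem: the vulnerable pattern (a+)+$ backtracks super-linearly on inputs of the form a...ab, while a hand-repaired, language-equivalent pattern a+$ matches in linear time. Timings vary by machine; keep n small, since the vulnerable case grows exponentially.

```python
# Demonstration of catastrophic backtracking in a vulnerable regex vs. a repaired one.

import re, time

def time_match(pattern, text):
    start = time.perf_counter()
    re.match(pattern, text)                # returns None here; only the time matters
    return time.perf_counter() - start

for n in (14, 18, 22):
    attack = "a" * n + "b"
    slow = time_match(r"(a+)+$", attack)   # super-linear backtracking
    fast = time_match(r"a+$", attack)      # linear-time equivalent
    print(f"n={n:2d}  vulnerable: {slow:.4f}s   repaired: {fast:.6f}s")
```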

Read more
