Kees van Reeuwijk
VU University Amsterdam
Publications
Featured research published by Kees van Reeuwijk.
IEEE Computer | 2010
Henri E. Bal; Jason Maassen; Rob V. van Nieuwpoort; Niels Drost; Roelof Kemp; Timo van Kessel; Nick Palmer; Gosia Wrzesińska; Thilo Kielmann; Kees van Reeuwijk; Frank J. Seinstra; Ceriel J. H. Jacobs; Kees Verstoep
The use of parallel and distributed computing systems is essential to meet the ever-increasing computational demands of many scientific and industrial applications. Ibis allows easy programming and deployment of compute-intensive distributed applications, even for dynamic, faulty, and heterogeneous environments.
Recent Advances in Intrusion Detection | 2006
Willem de Bruijn; Asia Slowinska; Kees van Reeuwijk; Tomas Hruby; Li Xu; Herbert Bos
Current intrusion detection systems have a narrow scope. They target flow aggregates, reconstructed TCP streams, individual packets or application-level data fields, but no existing solution is capable of handling all of the above. Moreover, most systems that perform payload inspection on entire TCP streams are unable to handle gigabit link rates. We argue that network-based intrusion detection systems should consider all levels of abstraction in communication (packets, streams, layer-7 data units, and aggregates) if they are to handle gigabit link rates in the face of complex application-level attacks such as those that use evasion techniques or polymorphism. For this purpose, we developed a framework for network-based intrusion prevention at the network edge that is able to cope with all levels of abstraction and can be easily extended with new techniques. We validate our approach by making available a practical system, SafeCard, capable of reconstructing and scanning TCP streams at gigabit rates while preventing polymorphic buffer-overflow attacks, using (up to) layer-7 checks. Such performance makes it applicable in-line as an intrusion prevention system. SafeCard merges multiple solutions, some new and some known. We made specific contributions in the implementation of deep-packet inspection at high speeds and in detecting and filtering polymorphic buffer overflows.
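The stream-level inspection the abstract describes can be illustrated with a minimal sketch: reassemble a TCP stream from out-of-order segments, then scan the whole payload for signatures. This is illustrative Python, not the SafeCard implementation; the function names and the signature format are invented for the example.

```python
# Minimal sketch (not SafeCard itself): signature scanning over a
# reconstructed TCP stream rather than over individual packets.

def reassemble(segments):
    """Order TCP segments by sequence number and concatenate payloads."""
    return b"".join(payload for _, payload in sorted(segments))

def scan_stream(stream: bytes, signatures) -> list:
    """Return the names of all signatures found in the reassembled stream."""
    return [name for name, pattern in signatures if pattern in stream]

# An attack split across two out-of-order segments evades per-packet
# matching but is caught once the stream is reassembled:
segments = [(200, b"/bin/sh"), (100, b"GET /index?cmd=")]
hits = scan_stream(reassemble(segments), [("shellcode-marker", b"/bin/sh")])
print(hits)  # ['shellcode-marker']
```

The example shows why reconstruction matters: neither segment alone contains the full pattern, so a packet-granular scanner would miss it.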
Languages, Compilers, and Tools for Embedded Systems | 2001
Sidney Cadot; Frits Kuijlman; Koen Langendoen; Kees van Reeuwijk; Henk J. Sips
The ENSEMBLE communication library exploits overlapping of message aggregation (computation) and DMA transfers (communication) for embedded multi-processor systems. In contrast to traditional communication libraries, ENSEMBLE operates on n-dimensional data descriptors that can be used to specify often-occurring data access patterns in n-dimensional arrays. This allows ENSEMBLE to set up a three-stage pack-transfer-unpack pipeline, effectively overlapping message aggregation and DMA transfers. ENSEMBLE is used to support Spar/Java, a Java-based language with SPMD annotations. Measurements on a TriMedia-based multi-processor system show that ENSEMBLE increases performance by up to 39% for peer-to-peer communication, and up to 34% for all-to-all communication.
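The pack-transfer-unpack pipeline mentioned in the abstract can be sketched as follows. This is illustrative Python, not the ENSEMBLE API: the helpers are stand-ins, and a real implementation would run the stages concurrently (the point being that while chunk i is in flight, chunk i+1 can already be packed).

```python
# Sketch of a three-stage pack/transfer/unpack pipeline (hypothetical
# helpers, not the ENSEMBLE API). The loop keeps one chunk "in flight"
# so packing the next chunk can overlap the current transfer.

def pack(chunk):        # gather strided data into a contiguous buffer
    return bytes(chunk)

def transfer(buf):      # stand-in for a DMA transfer
    return buf

def unpack(buf):        # scatter the buffer back into place
    return list(buf)

def pipeline(chunks):
    results, in_flight = [], None
    for chunk in chunks:
        packed = pack(chunk)             # stage 1 for chunk i+1 ...
        if in_flight is not None:        # ... overlaps stages 2-3 for chunk i
            results.append(unpack(transfer(in_flight)))
        in_flight = packed
    if in_flight is not None:            # drain the last chunk
        results.append(unpack(transfer(in_flight)))
    return results

print(pipeline([[1, 2], [3, 4]]))  # [[1, 2], [3, 4]]
```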
Parallel Computing | 1998
Henk J. Sips; Will Denissen; Kees van Reeuwijk
In this paper, we analyze the properties and efficiency of three basic local enumeration and three storage compression schemes for cyclic(m) data distributions in High Performance Fortran (HPF). The methods are presented in a unified framework, showing the relations between the various methods. We show that for array accesses that are affine functions of the loop bounds, efficient local enumeration and storage compression schemes can be derived. Furthermore, the basic set enumeration and storage techniques are shown to be orthogonal, if the local storage compression scheme is collapsible. This allows choosing the most appropriate method in parts of the computation and communication phases of parallel loops. Performance figures of the methods show that programs with cyclic(m) data distributions can be executed efficiently even without compile-time knowledge of the relevant access, alignment, and distribution parameters.
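As background for the abstract above: under a cyclic(m) (block-cyclic) distribution, global element i is owned by processor (i // m) mod P. A naive local enumeration can then be sketched as below; this is an illustrative baseline, not one of the paper's optimized schemes, and the names are invented.

```python
# Naive local enumeration for a cyclic(m) (block-cyclic) distribution:
# blocks of m consecutive elements are dealt out round-robin over P
# processors, so global index i belongs to processor (i // m) % P.

def local_indices(n, P, p, m):
    """Global indices owned by processor p under cyclic(m) over P processors."""
    return [i for i in range(n) if (i // m) % P == p]

# 10 elements, 2 processors, block size m = 2:
print(local_indices(10, 2, 0, 2))  # [0, 1, 4, 5, 8, 9]
print(local_indices(10, 2, 1, 2))  # [2, 3, 6, 7]
```

The efficient schemes the paper analyzes avoid this O(n) scan per processor by enumerating only the owned blocks directly.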
Archive | 2012
Dick Grune; Kees van Reeuwijk; Henri E. Bal; Ceriel J. H. Jacobs; Koen Langendoen
There are two ways of doing parsing: top-down and bottom-up. For top-down parsers, one has the choice of writing them by hand or having them generated automatically, but bottom-up parsers can only be generated. In all three cases, the syntax structure to be recognized is specified using a context-free grammar; grammars were discussed in Section 1.8. Sections 3.2 and 3.5.10 detail considerations concerning error detection and error recovery in syntax analysis.
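The "written by hand" option for top-down parsing can be sketched with a tiny recursive descent parser. This is an illustrative example for an invented grammar, not code from the book: expr -> term ('+' term)*, term -> digit.

```python
# A hand-written top-down (recursive descent) parser for the tiny
# grammar  expr -> term ('+' term)* ,  term -> digit.
# Each nonterminal becomes a function; error detection happens where
# the input fails to match the grammar.

def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_term(tokens, pos):
    tok = tokens[pos]
    if not tok.isdigit():
        raise SyntaxError(f"expected digit, got {tok!r}")
    return int(tok), pos + 1

print(parse_expr(list("1+2+3")))  # (6, 5)
```

A bottom-up parser for the same grammar would instead be generated from the grammar by a tool, table-driven rather than hand-coded.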
Archive | 2012
Dick Grune; Kees van Reeuwijk; Henri E. Bal; Ceriel J. H. Jacobs; Koen Langendoen
The lexical analysis and parsing described in Chapters 2 and 3, applied to a program text, result in an abstract syntax tree (AST) with a minimal but important degree of annotation: the Token.class and Token.repr attributes supplied by the lexical analyzer as the initial attributes of the terminals in the leaf nodes of the AST. For example, a token representing an integer has the class “integer” and its value derives from the token representation; a token representing an identifier has the class “identifier”, but completion of further attributes may have to wait until the identification mechanism has done its work.
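The kind of annotation described above can be sketched as follows. This is an illustrative Python model, not code from the book; the attribute names mirror the text's Token.class and Token.repr, with `klass` standing in since `class` is a reserved word in Python.

```python
# Sketch of initial token attributes on AST leaves: each token carries
# a class and its textual representation; derived attributes (such as
# an integer's value) are computed from the representation, while an
# identifier's further attributes await the identification mechanism.

class Token:
    def __init__(self, klass, repr_):
        self.klass = klass     # e.g. "integer" or "identifier"
        self.repr = repr_      # the matched program text
        # Derived attribute: an integer's value comes from its repr.
        self.value = int(repr_) if klass == "integer" else None

tokens = [Token("integer", "42"), Token("identifier", "count")]
print(tokens[0].value, tokens[1].value)  # 42 None
```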
Archive | 2012
Dick Grune; Kees van Reeuwijk; Henri E. Bal; Ceriel J. H. Jacobs; Koen Langendoen
The front-end of a compiler starts with a stream of characters which constitute the program text, and is expected to create from it intermediate code that allows context handling and translation into target code. It does this by first recovering the syntactic structure of the program by parsing the program text according to the grammar of the language. Since the meaning of the program is defined in terms of its syntactic structure, possessing this structure allows the front-end to generate the corresponding intermediate code.
Archive | 2012
Dick Grune; Kees van Reeuwijk; Henri E. Bal; Ceriel J. H. Jacobs; Koen Langendoen
Parallel and distributed systems consist of multiple processors that can communicate with each other. Languages for programming such systems support constructs for expressing concurrency and communication. In this chapter, we will study how such languages can be implemented. As we will see, the presence of multiple processors introduces many new problems for a language implementer.
Archive | 2012
Dick Grune; Kees van Reeuwijk; Henri E. Bal; Ceriel J. H. Jacobs; Koen Langendoen
In the previous chapters we have discussed general methods for performing lexical and syntax analysis, context handling, code generation, and memory management, while disregarding the programming paradigm from which the source program originated. In doing so, we have exploited the fact that much of compiler construction is independent of the source code paradigm.
Archive | 2012
Dick Grune; Kees van Reeuwijk; Henri E. Bal; Ceriel J. H. Jacobs; Koen Langendoen
An assembler, like a compiler, is a converter from source code to target code, so many of the usual compiler construction techniques are applicable in assembler construction; they include lexical analysis, symbol table management, and backpatching. There are differences too, though, resulting from the relative simplicity of the source format and the relative complexity of the target format.
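Backpatching, one of the shared techniques mentioned, can be sketched as follows: a forward branch to a not-yet-defined label emits a placeholder operand, and a second pass fills in the address once the label's location is known. The two-instruction "ISA" here is invented for illustration.

```python
# Sketch of backpatching in a toy assembler: forward references are
# recorded as fixups and resolved after all labels have been seen.

def assemble(lines):
    code, labels, fixups = [], {}, []
    for line in lines:
        if line.endswith(":"):                  # label definition
            labels[line[:-1]] = len(code)
        elif line.startswith("jmp "):
            fixups.append((len(code), line[4:]))  # remember the hole
            code.append(("jmp", None))            # placeholder operand
        else:
            code.append((line, None))
    for index, target in fixups:                # backpatch pass
        code[index] = ("jmp", labels[target])
    return code

print(assemble(["jmp end", "nop", "end:", "nop"]))
# [('jmp', 2), ('nop', None), ('nop', None)]
```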