
Publication


Featured research published by Wim F. J. Verhaegh.


European Design Automation Conference | 1991

PHIDEO: a silicon compiler for high speed algorithms

Paul E. R. Lippens; J. van Meerbergen; A. van der Werf; Wim F. J. Verhaegh; B.T. McSweeney; J. O. Huisken; Owen Paul Mcardle

PHIDEO is a silicon compiler targeted at the design of high-performance real-time systems with high sampling frequencies, such as HDTV. It supports the complete design trajectory, from a high-level specification all the way down to layout. New techniques are used to perform global optimisations across loop boundaries in hierarchical flow graphs. The compiler is based on a new target architectural model. Apart from the datapaths, special attention is paid to memory optimisation. The new techniques are demonstrated using a progressive scan conversion algorithm.


International Conference on Computer-Aided Design | 1993

Allocation of multiport memories for hierarchical data streams

Paul E. R. Lippens; J. van Meerbergen; Wim F. J. Verhaegh; A. van der Werf

A multiport memory allocation problem for hierarchical, i.e. multi-dimensional, data streams is described. Memory allocation techniques are used in high level synthesis for foreground and background memory allocation, the design of data format converters, and the design of synchronous inter-processor communication hardware. The techniques presented in this paper differ from other approaches in the sense that data streams are considered to be design entities and are not expanded to individual samples. A formal model for hierarchical data streams is given and a memory allocation algorithm is presented. The algorithm comprises two steps: data routing and assignment of signal delays to memories. A number of sub-problems are formulated as ILP programs. In the presented form, the allocation algorithm only considers interconnect costs, but memory size and other cost factors can be taken into account. The presented work is implemented in the memory allocation tool MEDEA which is part of the PHIDEO synthesis system.


Euromicro Conference on Real-Time Systems | 2004

QoS control strategies for high-quality video processing

Clemens C. Wüst; Liesbeth Steffens; Reinder J. Bril; Wim F. J. Verhaegh

Video processing in software is often characterized by highly fluctuating, content-dependent processing times, and a limited tolerance for deadline misses. We present an approach that allows close-to-average-case resource allocation to a single video processing task, based on asynchronous, scalable processing, and QoS adaptation. The QoS adaptation balances different QoS parameters that can be tuned by user-perception experiments: picture quality, deadline misses, and quality changes. We model the balancing problem as a discrete stochastic decision problem, and propose two closely related solution strategies, for which the processing-time statistics are determined offline and at run time, respectively. We enhance both strategies with a compensation for structural (non-stochastic) load fluctuations. Finally, we validate our approach by means of simulation experiments, and conclude that both enhanced strategies perform close to the theoretical optimum.
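The balancing problem can be illustrated with a minimal value-iteration sketch for a discrete stochastic decision problem of this flavour. Everything in the model below (slack states, two quality levels, the overrun probabilities, and the reward and penalty numbers) is an invented assumption for illustration, not the paper's actual model.

```python
# Illustrative value-iteration sketch; states, quality levels, overrun
# probabilities and reward numbers are assumptions, not the paper's model.

P_OVERRUN = {"low": 0.1, "high": 0.5}   # P(frame overruns its period | quality)
REWARD = {"low": 1.0, "high": 2.0}      # revenue for picture quality
MISS_PENALTY = 10.0                     # penalty for a deadline miss
STATES = range(4)                       # slack (buffered frames), 0..3
GAMMA = 0.95                            # discount factor

def value_iteration(iters=500):
    """Compute an optimal quality level per slack state."""
    V = [0.0] * len(STATES)
    policy = [None] * len(STATES)
    for _ in range(iters):
        for s in STATES:
            best = None
            for a in P_OVERRUN:
                p = P_OVERRUN[a]
                miss = MISS_PENALTY if s == 0 else 0.0   # overrun with no slack
                s_over = max(s - 1, 0)                   # overrun: lose slack
                s_ok = min(s + 1, len(STATES) - 1)       # on time: gain slack
                q = (REWARD[a] - p * miss
                     + GAMMA * (p * V[s_over] + (1 - p) * V[s_ok]))
                if best is None or q > best:
                    best, policy[s] = q, a
            V[s] = best   # in-place (Gauss-Seidel) sweep, converges for GAMMA < 1
    return policy

policy = value_iteration()
print(policy)  # with zero slack, dropping quality avoids deadline misses
```

Even in this toy model the qualitative behaviour matches the paper's intuition: when the output buffer is empty, the expected miss penalty dominates and the optimal action falls back to low quality.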


Signal Processing Systems | 1995

PHIDEO: High-level synthesis for high throughput applications

Jef L. van Meerbergen; Paul E. R. Lippens; Wim F. J. Verhaegh; Albert Van Der Werf

This paper describes a new approach to high-level synthesis for high throughput applications. Such applications are typically found in real-time video systems such as HDTV. The method is capable of dealing with hierarchical flow graphs containing loops with manifest boundaries and linear index expressions. The algorithm is based on the model of periodic operations, which allows optimizations across loop boundaries. Processing units and storage units are minimized simultaneously. The algorithm is implemented in the PHIDEO system. The major parts of this system are the processing unit synthesis, the scheduler, and the memory synthesis including address generation.


International Conference on Computer-Aided Design | 1992

Area optimization of multi-functional processing units

A. van der Werf; M. J. H. Peek; Emile H. L. Aarts; J. van Meerbergen; Paul E. R. Lippens; Wim F. J. Verhaegh

Functions executed by a multifunctional processing unit (PU) correspond to clusters of operations in the specification, which are represented as signal flow graphs (SFGs). Because of high-throughput demands, the operations of each SFG are executed in parallel. Since operations of only one SFG are executed at a given time, operations belonging to different SFGs can be executed on the same operator. Here, the most important part of the mapping of several SFGs onto one PU is considered: the assignment of the SFGs' operations to the PU's operators, given a number of allocated operators. The problem is to find an operator assignment that minimizes the silicon area occupied by the PU's interconnection, consisting of multiplexers and wires. An approach based on local search algorithms such as iterative improvement and simulated annealing is presented. Although these algorithms are known to be generally applicable, it is shown that detailed knowledge of the operator assignment problem is required to obtain good results within acceptable CPU time limits for large problem instances.
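As a rough illustration of the local-search approach, the following is a generic simulated-annealing sketch applied to a toy operator-assignment instance. The cost function (a heavy penalty when two parallel operations of the same SFG share an operator, plus one unit per operator used, as a crude stand-in for interconnect area) and all parameters are invented for illustration, not the paper's model.

```python
import math
import random

def anneal(n_ops, n_operators, cost, steps=5000, t0=2.0, alpha=0.999, seed=1):
    """Simulated annealing over assignments of operations to operators."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_operators) for _ in range(n_ops)]
    cur = cost(assign)
    best, best_cost = list(assign), cur
    t = t0
    for _ in range(steps):
        i = rng.randrange(n_ops)
        old = assign[i]
        assign[i] = rng.randrange(n_operators)       # move one operation
        new = cost(assign)
        # accept improvements, and worsenings with Boltzmann probability
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best_cost:
                best, best_cost = list(assign), cur
        else:
            assign[i] = old                          # reject: undo the move
        t *= alpha                                   # cool down
    return best, best_cost

# Toy instance: operations in the same SFG run in parallel and may not
# share an operator; each operator used adds one unit of (area) cost.
sfg = [0, 0, 1, 1]
def cost(assign):
    clashes = sum(1 for i in range(len(sfg)) for j in range(i + 1, len(sfg))
                  if sfg[i] == sfg[j] and assign[i] == assign[j])
    return 10 * clashes + len(set(assign))

best, best_cost = anneal(len(sfg), 4, cost)
print(best_cost)  # minimum possible here is 2: two shared operators, no clash
```

The paper's point survives even at this scale: the generic move/accept loop is trivial, and all the leverage is in problem-specific choices such as the cost model and the neighbourhood.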


International Conference on Computer-Aided Design | 1992

Efficiency improvements for force-directed scheduling

Wim F. J. Verhaegh; Paul E. R. Lippens; Emile H. L. Aarts; Jan H. M. Korst; A. van der Werf; J. van Meerbergen

Force-directed scheduling is a technique that schedules operations under time constraints in order to achieve schedules with a minimum number of resources. The worst-case time complexity of the algorithm is cubic in the number of operations, due to the computation of the changes in the distribution functions needed for the force calculations. An incremental way to compute these changes, based on gradual time-frame reduction, is presented. This reduces the time complexity of the algorithm to quadratic in the number of operations, without any loss in effectiveness or generality. Implementations show a substantial CPU-time reduction for force-directed scheduling, which is illustrated by means of some industrially relevant examples.
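For context, the distribution functions and forces whose repeated recomputation dominates the cubic complexity can be sketched as follows. The [ASAP, ALAP] time frames below are invented for illustration, and the paper's incremental update itself is not reproduced here.

```python
# Minimal sketch of distribution graphs and self-forces in force-directed
# scheduling; the time frames are invented example data.

def distribution(frames):
    """Distribution graph: expected number of operations per cycle, assuming
    each operation is equally likely anywhere in its [ASAP, ALAP] frame."""
    dg = {}
    for lo, hi in frames:
        p = 1.0 / (hi - lo + 1)
        for t in range(lo, hi + 1):
            dg[t] = dg.get(t, 0.0) + p
    return dg

def self_force(frames, i, t):
    """Force of fixing operation i to cycle t: the change in its probability
    in each cycle, weighted by the distribution graph. Lower is better."""
    lo, hi = frames[i]
    p = 1.0 / (hi - lo + 1)
    dg = distribution(frames)
    return sum(dg[s] * ((1.0 if s == t else 0.0) - p)
               for s in range(lo, hi + 1))

frames = [(0, 1), (0, 1), (1, 2)]  # invented [ASAP, ALAP] frames
# cycle 0 is less crowded than cycle 1, so it attracts operation 0
print(self_force(frames, 0, 0), self_force(frames, 0, 1))  # -0.25 0.25
```

Naively, every tentative assignment recomputes these sums over whole time frames, which is what the paper's gradual time-frame reduction replaces with incremental updates.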


Gut | 2013

A microRNA panel to discriminate carcinomas from high-grade intraepithelial neoplasms in colonoscopy biopsy tissue

Shuyang Wang; Lei Wang; Nayima Bayaxi; Jian Li; Wim F. J. Verhaegh; Angel Janevski; Vinay Varadan; Yiping Ren; Dennis Merkle; Xianxin Meng; Xue Gao; Huijun Wang; Jiaqiang Ren; Winston Patrick Kuo; Nevenka Dimitrova; Ying Wu; Hongguang Zhu

Objective: It is a challenge to differentiate invasive carcinomas from high-grade intraepithelial neoplasms in colonoscopy biopsy tissues. In this study, microRNA profiles were evaluated in the transformation of colorectal carcinogenesis to discover new molecular markers for identifying a carcinoma in colonoscopy biopsy tissues where the presence of stromal invasion cells is not detectable by microscopic analysis.

Methods: The expression of 723 human microRNAs was measured in laser capture microdissected epithelial tumours from 133 snap-frozen surgical colorectal specimens. Three well-known classification algorithms were used to derive candidate biomarkers for discriminating carcinomas from adenomas. Quantitative reverse-transcriptase PCR was then used to validate the candidates in an independent cohort of macrodissected formalin-fixed paraffin-embedded colorectal tissue samples from 91 surgical resections. The biomarkers were applied to differentiate carcinomas from high-grade intraepithelial neoplasms in 58 colonoscopy biopsy tissue samples with stromal invasion cells undetectable by microscopy.

Results: One classifier of 14 microRNAs was identified with a prediction accuracy of 94.1% for discriminating carcinomas from adenomas. In formalin-fixed paraffin-embedded surgical tissue samples, a combination of miR-375, miR-424 and miR-92a yielded an accuracy of 94% (AUC=0.968) in discriminating carcinomas from adenomas. This combination was applied to differentiate carcinomas from high-grade intraepithelial neoplasms in colonoscopy biopsy tissues with an accuracy of 89% (AUC=0.918).

Conclusions: This study has found a microRNA panel that accurately discriminates carcinomas from high-grade intraepithelial neoplasms in colonoscopy biopsy tissues. This microRNA panel has considerable clinical value in the early diagnosis and optimal surgical decision-making of colorectal cancer.


International Test Conference | 1996

Optimal scan for pipelined testing: an asynchronous foundation

Marly Roncken; Emile H. L. Aarts; Wim F. J. Verhaegh

This paper addresses the problem of constructing a scan chain such that (1) the area overhead is minimal for latch-based designs, and (2) the number of pipeline scan shifts is minimal. We present an efficient heuristic algorithm to construct near-optimal scan chains. On the theoretical side, we show that part (1) of the problem can be solved in polynomial time, and that part (2) is NP-hard, thus precisely pinpointing the source of complexity and justifying our heuristic approach. Experimental results on three industrial asynchronous IC designs show (1) less than 0.1% extra scan latches for level-sensitive scan design, and (2) scan shift reductions up to 86% over traditional scan schemes.


Journal of Scheduling | 2004

Best-Case Response Times and Jitter Analysis of Real-Time Tasks

Reinder J. Bril; Elisabeth Francisca Maria Steffens; Wim F. J. Verhaegh

In this paper, we present a simple recursive equation and an iterative procedure to determine the best-case response times of periodic tasks under fixed-priority preemptive scheduling and arbitrary phasings. The approach is of a similar nature as the one used to determine worst-case response times (Joseph and Pandya, 1986) in the sense that where a critical instant is considered to determine the latter, we base our analysis on an optimal instant. Such an optimal instant occurs when all higher priority tasks have a simultaneous release that coincides with the completion of an execution of the task under consideration. The resulting recursive equation closely resembles the one for worst-case response times. The iterative procedure is illustrated by means of a small example. Next, we apply the best-case response times to analyze jitter in distributed multiprocessor systems. To this end, we discuss the effect of the best-case response times on completion jitter, as well as the effect of release jitter on the best-case response times. The newly derived best-case response times generally result in tighter bounds on jitter, in turn leading to tighter worst-case response time bounds.
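Assuming the recursive equation takes the form BR = C + Σ (⌈BR/T_j⌉ − 1)·C_j over the higher-priority tasks, which matches the description above (each higher-priority task contributes one fewer release than in the worst case), the iterative procedure can be sketched as a small downward fixed-point iteration. The task parameters below are invented for illustration.

```python
import math

def best_case_response_time(C, hp):
    """Best-case response time of a task with computation time C and
    higher-priority tasks hp given as (C_j, T_j) pairs.  Iterates
    BR = C + sum((ceil(BR / T_j) - 1) * C_j) downward from an upper
    bound until a fixed point is reached."""
    br = C + sum(cj for cj, _ in hp)   # a simple starting upper bound
    while True:
        nxt = C + sum((math.ceil(br / tj) - 1) * cj for cj, tj in hp)
        if nxt == br:
            return br
        br = nxt

# Lowest-priority task (C=4) under two higher-priority tasks: in the best
# case, the interfering releases fall just outside its execution.
print(best_case_response_time(4, [(1, 5), (2, 10)]))  # → 4
```

Note the symmetry with worst-case analysis: the worst-case iteration converges upward from a critical instant, while this one converges downward toward the optimal instant described above.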


Signal Processing Systems | 1997

MPEG2 Video Encoding in Consumer Electronics

Richard P. Kleihorst; A. van der Werf; Wilhelmus H. A. Bruls; Wim F. J. Verhaegh; E. Waterlander

Only very recently have single-chip MPEG2 video encoders been reported. They are a result of growing interest in encoding in consumer products, as opposed to broadcast encoding, where a video encoder contains several expensive chips. Only single-chip solutions are cost-effective enough to enable digital recording for the consumer. Professional broadcast encoders are expensive because they use the full MPEG toolkit to guarantee good image quality at the lowest possible bit-rate. Some MPEG tools are costly in hardware and are therefore not feasible in single-chip solutions. This results in higher bit-rates, which can be accepted because of the available channel and storage capacity of the latest consumer storage media: hard disk, digital tape (D-VHS) and Digital Versatile Disc (DVD). One such consumer product is I.McIC, a single-chip MPEG2 video encoder. It operates in ML@SP mode, which can be decoded by all MPEG2 decoders. The IC is highly integrated, as it contains motion estimation and compensation, adaptive temporal noise filtering, and buffer/bit-rate control. The high-throughput functions of the MPEG algorithm are mapped onto pipelined dedicated hardware, whereas the remaining functions are processed by an application-specific instruction-set processor. Software for this processor can be downloaded in order to suit the IC to different applications and operating conditions. The IC consists of several communicating processors which were designed using the high-level synthesis tools PHIDEO and DSP Station™.
