Michael Manzke
Trinity College, Dublin
Publications
Featured research published by Michael Manzke.
International Conference on Computer Graphics and Interactive Techniques | 2013
Michael J. Doyle; Colin Fowler; Michael Manzke
Ray-tracing algorithms are known for producing highly realistic images, but at a significant computational cost. For this reason, a large body of research exists on various techniques for accelerating these costly algorithms. One approach to achieving superior performance which has received comparatively little attention is the design of specialised ray-tracing hardware. The research that does exist on this topic has consistently demonstrated that significant performance and efficiency gains can be achieved with dedicated microarchitectures. However, previous work on hardware ray-tracing has focused almost entirely on the traversal and intersection aspects of the pipeline. As a result, the critical aspect of the management and construction of acceleration data-structures remains largely absent from the hardware literature. We propose that a specialised microarchitecture for this purpose could achieve considerable performance and efficiency improvements over programmable platforms. To this end, we have developed the first dedicated microarchitecture for the construction of binned SAH BVHs. Cycle-accurate simulations show that our design achieves significant improvements in raw performance and in the bandwidth required for construction, as well as large efficiency gains in terms of performance per clock and die area compared to manycore implementations. We conclude that such a design would be useful in the context of a heterogeneous graphics processor, and may help future graphics processor designs to reduce predicted technology-imposed utilisation limits.
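For readers unfamiliar with binned SAH construction, the following is a minimal Python sketch of the core per-node operation such a builder performs, whether in software or hardware: binning primitives by centroid and sweeping the bins to find the cheapest surface-area-heuristic split. It is an illustrative simplification, not the paper's microarchitecture; all function and parameter names are our own.

```python
import numpy as np

def binned_sah_split(centroids, bounds_min, bounds_max, n_bins=16):
    """Pick the best split plane for one BVH node via a binned SAH sweep.

    centroids:  (N, 3) primitive centroids
    bounds_min: (N, 3) per-primitive AABB minima
    bounds_max: (N, 3) per-primitive AABB maxima
    Returns (axis, bin_index, cost) of the cheapest partition found.
    """
    best = (None, None, np.inf)
    for axis in range(3):
        lo, hi = centroids[:, axis].min(), centroids[:, axis].max()
        if hi <= lo:
            continue
        # Assign each primitive to one of n_bins equal-width bins.
        bin_of = np.minimum(
            ((centroids[:, axis] - lo) / (hi - lo) * n_bins).astype(int),
            n_bins - 1)
        # Per-bin primitive counts and bounding boxes.
        counts = np.zeros(n_bins, dtype=int)
        bmin = np.full((n_bins, 3), np.inf)
        bmax = np.full((n_bins, 3), -np.inf)
        for b in range(n_bins):
            mask = bin_of == b
            counts[b] = mask.sum()
            if counts[b]:
                bmin[b] = bounds_min[mask].min(axis=0)
                bmax[b] = bounds_max[mask].max(axis=0)

        def area(mn, mx):
            d = np.maximum(mx - mn, 0)
            return 2 * (d[0] * d[1] + d[1] * d[2] + d[2] * d[0])

        # Sweep the n_bins - 1 candidate planes; SAH cost is, up to
        # constants, A_left * N_left + A_right * N_right.
        for split in range(1, n_bins):
            nl, nr = counts[:split].sum(), counts[split:].sum()
            if nl == 0 or nr == 0:
                continue
            al = area(bmin[:split].min(axis=0), bmax[:split].max(axis=0))
            ar = area(bmin[split:].min(axis=0), bmax[split:].max(axis=0))
            cost = al * nl + ar * nr
            if cost < best[2]:
                best = (axis, split, cost)
    return best
```

A full builder would apply this split recursively until each leaf holds few enough primitives; it is this regular bin-and-sweep structure that makes the algorithm a plausible candidate for a fixed-function pipeline.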
International Journal of Computer Vision | 2012
Cem Direkoglu; Rozenn Dahyot; Michael Manzke
We present a novel and effective skeletonization algorithm for binary and gray-scale images, based on an anisotropic heat diffusion analogy. We diffuse the image in the direction normal to the feature boundaries, while also allowing tangential (curvature-decreasing) diffusion to contribute slightly. The proposed anisotropic diffusion yields a high-quality medial function in the image: it removes noise and preserves the prominent curvatures of the shape along the level-sets (skeleton features). The skeleton strength map, which gives the likelihood that a point belongs to the skeleton, is defined by the mean curvature measure. Finally, a thin, binary skeleton is obtained by non-maxima suppression and hysteresis thresholding of the skeleton strength map. Our method outperforms the most closely related and the most popular skeleton-extraction methods, especially in noisy conditions. Results show that the proposed approach is better at handling noise in images and at preserving the skeleton features at the centerline of the shape.
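As a rough illustration of the skeleton strength map described above, the sketch below computes the mean curvature of the level-sets of a smoothed image. Note the simplification: it substitutes an isotropic Gaussian blur for the paper's anisotropic diffusion, and the function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def skeleton_strength(image, sigma=2.0, eps=1e-8):
    """Skeleton strength map: mean curvature of the level-sets of a
    smoothed image. NOTE: the Gaussian blur below is an isotropic
    stand-in for the paper's anisotropic heat diffusion."""
    u = gaussian_filter(image.astype(float), sigma)
    uy, ux = np.gradient(u)        # first derivatives (rows = y, cols = x)
    uxy, uxx = np.gradient(ux)     # second derivatives
    uyy, _ = np.gradient(uy)
    # Mean curvature of level-sets: div(grad u / |grad u|).
    num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
    den = (ux**2 + uy**2) ** 1.5 + eps
    return num / den
```

Non-maxima suppression and hysteresis thresholding, as familiar from Canny edge detection, would then reduce this map to a thin, binary skeleton.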
VRIPHYS | 2007
Muiris Woulfe; John Dingliana; Michael Manzke
Broad phase collision detection is a vital task in most interactive simulations, but it remains computationally expensive and is frequently an impediment to the efficient implementation of real-time graphics applications. To overcome this hurdle, we propose a novel microarchitecture for performing broad phase collision detection using Axis-Aligned Bounding Boxes (AABBs), which exploits the parallelism available in the algorithms. We have implemented our microarchitecture on a Field-Programmable Gate Array (FPGA) and our results show that this implementation achieves an acceleration of up to 1.5× over the broad phase component of the SOLID collision detection library, even when accounting for the communication overhead between the CPU and the FPGA. Our results further indicate that significantly higher accelerations are achievable using a more sophisticated FPGA or by implementing our microarchitecture on an Application-Specific Integrated Circuit (ASIC).
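To make the task concrete, here is a minimal software sketch of AABB broad phase collision detection: a brute-force all-pairs test whose per-axis overlap checks are independent, which is exactly the kind of parallelism a hardware pipeline can exploit. This illustrates the problem, not the paper's microarchitecture.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class AABB:
    min: tuple  # (x, y, z) lower corner
    max: tuple  # (x, y, z) upper corner

def overlaps(a: AABB, b: AABB) -> bool:
    # Two AABBs intersect iff their extents overlap on every axis;
    # the three axis tests are independent and can run in parallel.
    return all(a.min[i] <= b.max[i] and b.min[i] <= a.max[i]
               for i in range(3))

def broad_phase(boxes):
    """Brute-force O(n^2) broad phase: report all overlapping pairs."""
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(boxes), 2)
            if overlaps(a, b)]
```

Software broad phases typically replace the O(n²) loop with sweep-and-prune or spatial hashing; a hardware design can instead evaluate many such overlap tests concurrently.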
International Conference on Computer Graphics and Interactive Techniques | 2006
Michael Manzke; Ross Brennan; Keith O'Conor; John Dingliana; Carol O'Sullivan
Current scalable high-performance graphics systems are either constructed with special-purpose graphics acceleration hardware or built as a cluster of commodity components with a software infrastructure that exploits multiple graphics cards [Humphreys et al. 2002]. Both solutions are used in application domains where the computational demand cannot be met by a single commodity graphics card, e.g., large-scale scientific visualisation. The former approach tends to provide the highest performance but is expensive, because it requires frequent redesign of the special-purpose graphics acceleration hardware to maintain a performance advantage over the commodity graphics hardware used in the cluster approach. The latter approach, while more affordable and scalable, has intrinsic performance drawbacks due to the computationally expensive communication between the individual graphics pipelines.
Conference on Visual Media Production | 2010
Jonathan Ruttle; Michael Manzke; Rozenn Dahyot
We present a statistical framework that merges the information from silhouettes segmented in multiple view images to infer the 3D shape of an object. The approach generalises the robust but discrete modelling of the visual hull by using the concept of averaged likelihoods. One resulting advantage of our framework is that the objective function is continuous, so an iterative gradient ascent algorithm can be defined to search the space efficiently. Moreover, the method is less memory-demanding and is well suited to parallel processing architectures. Experimental results show that this approach efficiently provides a robust initial estimate of the 3D shape of the object in view.
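The idea of an averaged, continuous silhouette likelihood can be sketched as follows: project a candidate 3D point into every view, average a smooth silhouette likelihood at the projections, and ascend the gradient numerically. This is a simplified reading of the approach; the camera-matrix format, bilinear lookup, and finite-difference gradient are our assumptions.

```python
import numpy as np

def bilinear(s, u, v):
    """Bilinearly interpolate image s at continuous pixel (u, v);
    interpolation keeps the objective continuous in the 3D point."""
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    if not (0 <= x0 < s.shape[1] - 1 and 0 <= y0 < s.shape[0] - 1):
        return 0.0
    du, dv = u - x0, v - y0
    return ((1 - dv) * ((1 - du) * s[y0, x0] + du * s[y0, x0 + 1])
            + dv * ((1 - du) * s[y0 + 1, x0] + du * s[y0 + 1, x0 + 1]))

def averaged_likelihood(x, cameras, silhouettes):
    """Average over all views of a smooth silhouette likelihood
    evaluated at the projection of the 3D point x.
    cameras:     list of 3x4 projection matrices (assumed calibrated)
    silhouettes: list of 2D float arrays in [0, 1] (smoothed masks)
    """
    vals = []
    for P, s in zip(cameras, silhouettes):
        u, v, w = P @ np.append(x, 1.0)   # homogeneous projection
        vals.append(bilinear(s, u / w, v / w))
    return sum(vals) / len(vals)

def ascend(x, cameras, silhouettes, step=0.05, h=1e-3, iters=100):
    """Finite-difference gradient ascent on the averaged likelihood
    (a numerical stand-in for an analytic gradient)."""
    for _ in range(iters):
        g = np.zeros(3)
        for i in range(3):
            e = np.zeros(3)
            e[i] = h
            g[i] = (averaged_likelihood(x + e, cameras, silhouettes)
                    - averaged_likelihood(x - e, cameras, silhouettes)) / (2 * h)
        x = x + step * g
    return x
```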
Spring Conference on Computer Graphics | 2009
Muiris Woulfe; Michael Manzke
Collision detection is a vital component of applications spanning myriad fields, yet there exists no means for developers to analyse the suitability of their collision detection algorithms across the spectrum of scenarios that could be encountered. To rectify this, we propose a framework for benchmarking interactive collision detection, which consists of a single generic benchmark that can be adapted using a number of parameters to create a large range of practical benchmarks. This framework allows algorithm developers to test the validity of their algorithms across a wide test space and allows developers of interactive applications to recreate their application scenarios and quickly determine the most amenable algorithm. To demonstrate the utility of our framework, we adapted it to work with three collision detection algorithms supplied with the Bullet Physics SDK. Our results demonstrate that those algorithms conventionally believed to offer the best performance are not always the correct choice. This demonstrates that conventional wisdom cannot be relied on for selecting a collision detection algorithm and that our benchmarking framework fulfils a vital need in the collision detection community. The framework has been made open source, so that developers do not have to reprogram the framework to test their own algorithms, allowing for consistent results across different algorithms and reducing development time.
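To give a sense of what a parameterised generic benchmark looks like, here is a small hypothetical sketch: a parameter set generates a scenario, and any broad phase algorithm can be timed against it. The parameter names are invented for illustration and are not the framework's actual interface.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class BenchmarkParams:
    """Hypothetical knobs of the kind such a framework exposes; the
    published framework's actual parameter set may differ."""
    n_objects: int = 1000
    world_size: float = 100.0
    object_size: float = 1.0
    seed: int = 0

def make_scenario(p: BenchmarkParams):
    """Generate reproducible random AABBs as ((min), (max)) tuples."""
    rng = random.Random(p.seed)
    boxes = []
    for _ in range(p.n_objects):
        x, y, z = (rng.uniform(0, p.world_size) for _ in range(3))
        s = p.object_size
        boxes.append(((x, y, z), (x + s, y + s, z + s)))
    return boxes

def benchmark(algorithm, params: BenchmarkParams) -> float:
    """Time one collision detection algorithm on one scenario."""
    scene = make_scenario(params)
    t0 = time.perf_counter()
    algorithm(scene)
    return time.perf_counter() - t0
```

Sweeping the parameters (object count, density, size distribution, motion) then maps out the test space in which different algorithms win or lose.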
13th International Machine Vision and Image Processing Conference | 2009
Jonathan Ruttle; Michael Manzke; Rozenn Dahyot
Scene flow is the motion of surface points in the 3D world. A camera observes it as a 2D optical flow in the image plane. Knowing the scene flow can be very useful, as it indicates the surface geometry of the objects in the scene and how those objects are moving. This paper explores and details four methods for calculating the scene flow given multiple optical flows, along with the basic mathematics of multi-view geometry. We found that, given multiple optical flows, the scene flow can be estimated at different levels of detail, depending on the prior information available.
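The core relation behind such methods can be sketched as a linear system: each view's optical flow constrains the 3D velocity of a surface point through the Jacobian of that camera's projection. Below is a minimal least-squares sketch under that assumption; it is a generic multi-view formulation, not necessarily one of the paper's four methods.

```python
import numpy as np

def scene_flow_from_flows(jacobians, flows):
    """Least-squares 3D scene flow from multiple 2D optical flows.

    Each view v relates the 3D velocity dX (3-vector) of a surface
    point to its observed image flow u_v (2-vector) through the 2x3
    Jacobian J_v of that camera's projection at the point:
        u_v ~= J_v @ dX
    Stacking all views gives an overdetermined linear system; two or
    more views suffice in general.
    """
    A = np.vstack(jacobians)      # (2V, 3) stacked Jacobians
    b = np.concatenate(flows)     # (2V,)  stacked flow vectors
    dX, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dX
```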
Workshop on Computer Architecture Education | 2003
Ross Brennan; Michael Manzke
The introduction of reconfigurable logic devices as teaching aids in undergraduate and graduate education enables students to conduct experiments they could not otherwise perform. Furthermore, this approach gives instructors the freedom to choose architectures that are dictated by the underlying hardware to a lesser extent. Today's Field-Programmable Gate Arrays (FPGAs) can implement integer units as complex as the SPARC V8 at a reasonable hardware price. Educational boards that replace conventional CPUs with reconfigurable logic devices can be integrated into an existing syllabus with legacy hardware requirements without disruption, as long as the soft-CPU core on the reconfigurable logic device provides opcode compatibility with the superseded processing unit.
British Machine Vision Conference | 2010
Cem Direkoglu; Rozenn Dahyot; Michael Manzke
We introduce a novel skeleton extraction algorithm for binary and gray-scale images, based on an anisotropic heat diffusion analogy. We diffuse the image predominantly in the direction normal to the feature boundaries, while also allowing tangential diffusion to contribute slightly. The proposed anisotropic diffusion provides a high-quality medial function in the image, since it removes noise and preserves the prominent curvatures of the shape along the level-sets (skeleton locations). The skeleton strength map, which gives the likelihood that a point lies on the skeleton, is then obtained by computing the mean curvature of the level-sets. The overall process is completed by non-maxima suppression and hysteresis thresholding to obtain a thin, binary skeleton. Results indicate that this approach has advantages in handling noise in the image and in obtaining a smooth shape skeleton, owing to the directional averaging inherent in our new anisotropic heat flow.
IEEE Transactions on Multi-Scale Computing Systems | 2018
Michael J. Doyle; Ciaran Tuohy; Michael Manzke
The ever-increasing demands of computer graphics applications have motivated the evolution of computer graphics hardware over the last 20 years. Early commodity graphics hardware was largely based on fixed-function components offering little flexibility. The gradual replacement of fixed-function hardware with more general-purpose instruction processors has enabled GPUs to deliver visual experiences more tailored to specific applications. This trend has culminated in modern GPUs being, in essence, programmable stream processors capable of supporting a wide variety of applications within and beyond computer graphics. However, the growing concern of power efficiency in modern processors, coupled with an increasing demand for supporting next-generation graphics pipelines, has re-invigorated the debate on the use of fixed-function accelerators in these platforms. In this paper, we conduct a study of a heterogeneous, system-on-chip solution for the construction of a highly important data structure for computer graphics: the bounding volume hierarchy. This design incorporates conventional CPU cores alongside a fixed-function accelerator prototyped on a reconfigurable logic fabric. Our study supports earlier, simulation-only studies which argue for the introduction of this class of accelerator in future graphics processors.