Magnus Wrenninge
Linköping University
Publication
Featured research published by Magnus Wrenninge.
international conference on computer graphics and interactive techniques | 2017
Julian Fong; Magnus Wrenninge; Christopher D. Kulla; Ralf Habel
This document might be out of date, please check online for an updated version. With significant advances in techniques, along with increasing computational power, path tracing has now become the predominant rendering method used in movie production. Thanks to these advances, volume rendering can now take full advantage of the path tracing revolution, allowing the creation of photoreal images that would not have been feasible only a few years ago. However, volume rendering also provides its own set of unique challenges that can be daunting to path tracer developers and researchers accustomed to dealing only with surfaces. While recent texts and materials have covered some of these challenges, to the best of our knowledge none have comprehensively done so, especially when confronted with the complexity and scale demands required by production. For example, the last volume rendering course at SIGGRAPH in 2011 discussed ray marching and precomputed lighting and shadowing, none of which are techniques advisable for production purposes in 2017.
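As background to the techniques the course covers, here is a minimal sketch of the basic volumetric building block: sampling a free-flight distance and estimating transmittance. The homogeneous-medium simplification and all function names are illustrative assumptions, not code from the course notes.

```python
import math
import random

def sample_free_flight(sigma_t, rng=random.random):
    """Sample a propagation distance t with pdf sigma_t * exp(-sigma_t * t),
    i.e. the distance to the next interaction in a homogeneous medium."""
    return -math.log(1.0 - rng()) / sigma_t

def transmittance(sigma_t, t):
    """Beer-Lambert transmittance over distance t in a homogeneous medium."""
    return math.exp(-sigma_t * t)

def estimate_transmittance(sigma_t, d, n=100_000, seed=1):
    """Monte Carlo estimate of transmittance to depth d: the fraction of
    sampled free flights that pass beyond d (an unbiased estimator)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if sample_free_flight(sigma_t, rng.random) > d)
    return hits / n
```

The estimator converges to `transmittance(sigma_t, d)`; in a heterogeneous medium the same idea generalizes via techniques such as delta tracking.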
international conference on computer graphics and interactive techniques | 2003
Jonas Unger; Magnus Wrenninge; Mark Ollila
We present a system allowing real-time image based lighting based on HDR panoramic images [Debevec 1998]. The system performs time-consuming diffuse light calculations in a pre-processing step, which is key to attaining interactivity. The real-time subsystem processes an image based lighting model in software, which would be simple to implement in hardware. Rendering is handled by OpenGL, but could be replaced with another graphics API, should there be such a need. Applications for the technique presented are discussed, and include methods for realistic outdoor lighting. The system architecture is outlined, describing the algorithms used. Lastly, the ideas for future work that arose during the project are discussed.
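The pre-processing step described above amounts to a diffuse convolution of the environment map. A minimal sketch of that idea, using a toy environment represented as (direction, radiance, solid angle) tuples; the representation and function names are illustrative assumptions, not the system's actual code.

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def diffuse_irradiance(normal, env):
    """Brute-force diffuse convolution: integrate incoming radiance over
    all environment samples, weighted by the clamped cosine between the
    light direction and the surface normal. Evaluated once per normal in
    a pre-process, so the real-time step only needs a lookup."""
    return sum(max(0.0, dot(normal, d)) * L * w for d, L, w in env)

# Toy environment: a single unit-radiance sample pointing straight up.
env = [((0.0, 0.0, 1.0), 1.0, 1.0)]
```

A real implementation would bake `diffuse_irradiance` into a cube map or spherical-harmonic coefficients indexed by the surface normal.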
international conference on computer graphics and interactive techniques | 2003
Magnus Wrenninge; Doug Roble
Level set surfaces are well suited for representing the complex surface of a liquid in a fluid simulation. At Digital Domain we have developed a fluid simulation package that represents all objects, not just the liquid, as particle corrected level sets and velocity fields. This framework enables easy experimentation with all aspects of the simulation, including adding new ways of interacting with the liquid. This presentation will illustrate the power of the framework with the following new concepts: moving, deforming objects represented as level sets; level set sources and drains; a generalized way of manipulating the velocity field, based on an image compositing paradigm.
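The compositing paradigm mentioned above can be illustrated with the standard signed-distance operations; the operator names and the replace-inside rule for velocities below are illustrative assumptions, not the Digital Domain implementation.

```python
# CSG-style operations on signed distance values (negative = inside).
def sdf_union(a, b):
    return min(a, b)

def sdf_intersect(a, b):
    return max(a, b)

def sdf_subtract(a, b):
    return max(a, -b)

# An "over"-style composite for velocity fields: inside the source's
# level set (phi < 0) the source velocity wins; elsewhere the
# background velocity is kept.
def composite_velocity(phi_src, v_src, v_bg):
    return v_src if phi_src < 0.0 else v_bg
```

Treating level set and velocity operations as layered composites, like layers in image compositing, is what makes sources, drains, and velocity manipulation expressible in one uniform framework.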
international conference on computer graphics and interactive techniques | 2016
Jon Reisch; Stephen Marshall; Magnus Wrenninge; Tolga G. Goktekin; Michael Hall; Michael O'Brien; Jason Johnston; Jordan Rempel; Andy Lin
Pixar's The Good Dinosaur is a journey through nature, with the environment and all its perils playing a major role alongside Arlo and Spot. The river they travel along is both an obstacle and a guide, and serves as a key storytelling tool -- the film's yellow brick road. Because the rivers were so prominent, a linear department workflow was impractical; for the layout department to design shots they needed a reliable representation of the river, such that the features they designed the shot towards would stay consistent throughout the rest of the pipeline. In order to achieve this, a sequence-based workflow was adopted, where the river acted the same as any asset in the set. In the end, sections of river as long as a half mile were simulated in order to give the filmmakers the necessary flexibility.
international conference on computer graphics and interactive techniques | 2016
Matthew Webb; Magnus Wrenninge; Jordan Rempel; Cody Harrington
The Good Dinosaur is the first Pixar film without matte painted skies; all 800 shots with visible skies were fully modeled, dressed, lit and volume rendered. In the film, the environment served as an adversary for Arlo to struggle against. Early concept art made it clear there would be multiple sequences during thunderstorms as well as others taking place above the clouds. Volumes were well suited to those settings, but even in quieter moments we wanted to capture constantly changing weather with the perspective and parallax of a true dimensional sky. Finally, 3D clouds enabled the lighting department to treat the environment as a single whole, from foreground all the way to the horizon.
international conference on computer graphics and interactive techniques | 2016
Magnus Wrenninge; Michael Rice
In a recent paper, we introduced the Reves volume modeling algorithm [Wrenninge 2016]. Pixar's latest animated film, The Good Dinosaur, was the first production to use the system, and this submission aims to show the tool in practical use. Although Reves is designed to produce temporal volumes, it is a flexible and powerful volume modeling tool for static volumes as well. Two key aspects of Reves are its use of an intermediate rasterization representation (microvoxels), and its scalability. The microvoxel representation means that a wide variety of input primitives can be handled, with efficient SIMD execution of shaders. The scalability provides consistent behavior to the user: at low resolutions feedback is fast and small primitives antialias consistently, and at high resolutions memory use is well controlled. This, together with robust shader and coverage antialiasing, means that the system can be relied on to produce consistent results at any given output resolution. For the user, it means fast interactive feedback that closely matches final quality.
international conference on computer graphics and interactive techniques | 2015
Magnus Wrenninge
Multiple scattering is a crucial part of photorealistic rendering of high-albedo media. In the production rendering context, current techniques include wavefront tracking [Miller et al. 2012] as well as modified shadow calculations [Wrenninge et al. 2013] (hereon referred to as the contrast approximation). In order to convincingly render optically thick media, high scatter orders must be included, upwards of 100 bounces. Also, anisotropic effects must be handled.
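The contrast approximation works by attenuating the medium's parameters for successive scattering contributions, so that high scatter orders can be approximated with cheap shadow-like calculations. A sketch of per-octave parameter scaling in that spirit; the function name, default factors, and octave count are assumptions for illustration, not the published constants.

```python
def octave_parameters(sigma_t, g, a=0.5, c=0.5, octaves=8):
    """Return (extinction, anisotropy) pairs for each scattering octave.
    Each successive octave sees an attenuated extinction (a**i * sigma_t)
    and a reduced anisotropy (c**i * g), standing in for the increasingly
    diffuse look of high-order scattering in thick media."""
    return [(sigma_t * a**i, g * c**i) for i in range(octaves)]
```

Summing shadow-style contributions over these octaves approximates the softening that true multiple scattering produces, without tracking hundreds of bounces explicitly.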
international conference on computer graphics and interactive techniques | 2013
Magnus Wrenninge; Christopher D. Kulla; Viktor Lundqvist
Path traced volumes in production
Starting on The Amazing Spider-Man and Men in Black 3, Imageworks' in-house version of Arnold has supported path tracing of volumes alongside surfaces. The pipeline is built around the Field3D file format, and volume primitives support all standard light sources, including area lights and textured skydomes. We maintain reasonable render times with the importance sampling techniques detailed in [Kulla and Fajardo 2012]. To lighters, volumes were seen as just another scene graph location in Katana, with a geometric proxy representation for fast previewing and layout. Named attributes of each field automatically bind to similarly named shader parameters, so that TDs can write custom shaders easily. Under the hood, the Field3D primitive supported arbitrary numbers of fields in each .f3d file, and had full support for dense, sparse and MAC fields of mixed bit depths.
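The importance sampling referenced above includes the equiangular technique, which samples a distance along a ray proportionally to the inverse-squared distance from a point light. A minimal sketch; the variable naming is mine, but the math follows the published formulation.

```python
import math

def equiangular_sample(u, delta, D, t_min, t_max):
    """Sample a distance t along a ray with pdf proportional to
    1 / (D^2 + (t - delta)^2), the inverse-squared falloff of a point
    light.  delta is the ray parameter of the point closest to the
    light; D is the distance from the light to that closest point.
    Returns (t, pdf)."""
    theta_a = math.atan2(t_min - delta, D)
    theta_b = math.atan2(t_max - delta, D)
    t = D * math.tan(theta_a + u * (theta_b - theta_a))
    pdf = D / ((theta_b - theta_a) * (D * D + t * t))
    return delta + t, pdf
```

Because the pdf tracks the geometric falloff of the light, samples concentrate where in-scattered radiance is largest, which is what keeps render times reasonable for lights embedded in or near volumes.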
international conference on computer graphics and interactive techniques | 2008
Magnus Wrenninge; Vincent Serritella; Theo Vandernoot; Henrik Fält; Patrick Witting
In a world without electricity, fire proved to be an integral part of Beowulf's 900+ all-CG shots. Given the range of fire effects required -- from simple torches to a fire-breathing dragon -- we needed a pipeline that could support both prop-based fire as well as complex and heavily art-directed shots. Before the show was over, we had taken effects artists out of the equation for a majority of the shots and replaced the entire rendering solution in the process.
arXiv: Computer Vision and Pattern Recognition | 2017
Apostolia Tsirikoglou; Joel Kronander; Magnus Wrenninge; Jonas Unger