Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oliver Grau is active.

Publication


Featured research published by Oliver Grau.


European Conference on Computer Vision | 2016

VConv-DAE: Deep Volumetric Shape Learning Without Object Labels

Abhishek Sharma; Oliver Grau; Mario Fritz

With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes.
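
As a rough illustration of the idea (not the authors' exact VConv-DAE architecture), the following PyTorch sketch trains a volumetric denoising autoencoder: the network only ever sees corrupted and clean occupancy grids, so no object labels are required.

```python
import torch
import torch.nn as nn

class VolumetricDAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 32^3 occupancy grid -> 8^3 feature volume.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=4, stride=2, padding=1),   # 32^3 -> 16^3
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1),  # 16^3 -> 8^3
            nn.ReLU(),
        )
        # Decoder: mirrors the encoder and predicts per-voxel occupancy.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # occupancy probability per voxel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step: corrupt the input (random voxel dropout) and
# reconstruct the clean grid; the corruption itself is the supervision.
model = VolumetricDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = (torch.rand(8, 1, 32, 32, 32) > 0.5).float()    # stand-in occupancy grids
noisy = clean * (torch.rand_like(clean) > 0.1).float()  # simulate missing voxels
opt.zero_grad()
loss = nn.functional.binary_cross_entropy(model(noisy), clean)
loss.backward()
opt.step()
```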


British Machine Vision Conference | 2015

Discrete Light Source Estimation from Light Probes for Photorealistic Rendering

Farshad Einabadi; Oliver Grau

Applications such as rendering images with computer graphics methods usually require sophisticated light models to give better control. Complex scenes in computer-generated images require highly differentiated light models, typically with a large number of (virtual) light sources, to reproduce accurate shadows and shading. In the production of visual effects for movies and TV in particular, the real scene lighting needs to be captured very accurately so that virtual objects can be rendered realistically into that scene. In this context, light modeling is usually done manually by skilled artists in a time-consuming process.

This contribution describes a new technique for estimating discrete spot light sources. The method uses a consumer-grade DSLR camera equipped with a fisheye lens to capture light probe images registered to the scene. From these probe images, the geometric and radiometric properties of the dominant light sources in the scene are estimated. The first step is a robust approach to identify light sources in the light probes and to find their exact positions by triangulation. Then the light direction and radiometric fall-off properties are formulated and estimated by least-squares minimization.

Our approach has several advantages. First, the probing camera is registered using a multi-camera setup, which requires minimal amendments to the studio. Second, we are not limited to any specific probing object, since the properties of each light are estimated by processing the probe images. In addition, since the probing camera can move freely in the area of interest, there are no limits on the covered space; the large field of view of the fisheye lens is also beneficial here.

Calibration and registration of cameras: we propose a two-step calibration and registration approach. In the first step, a planar asymmetric calibration pattern is used for simultaneous calibration of the intrinsics and the pose of all witness cameras and the principal camera using a bundle adjustment module. In the next step, the parameters of the witness cameras are kept fixed and the probing camera is registered in the same coordinate system using color features of an attached calibration rig.

Position estimation: to estimate the 3D position vectors of the light sources, one shoots rays from every detected light blob in all probe images and triangulates the corresponding rays from at least two probe positions for each source. Figure 1 summarizes the required steps.
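
As a sketch of the position-estimation step described above (variable names are illustrative, not the paper's implementation), the least-squares intersection of the blob rays can be written in closed form:

```python
# Given rays (origin, direction) from detected light blobs in two or more
# registered probe images, find the 3D point minimizing the sum of squared
# distances to all rays: a linear least-squares problem.
import numpy as np

def triangulate_light(origins, directions):
    """origins, directions: (N, 3) arrays of ray origins and directions."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        # Squared distance from point p to the ray is ||(I - d d^T)(p - o)||^2,
        # so each ray contributes a projection matrix to the normal equations.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ o
    return np.linalg.solve(A, b)  # closed-form least-squares position

# Two probe positions observing the same spot light:
origins = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
directions = np.array([[1.0, 1.0, 2.0], [0.0, 1.0, 2.0]])
print(triangulate_light(origins, directions))  # ~[1, 1, 2]
```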


Federated Conference on Computer Science and Information Systems | 2014

High quality, low latency in-home streaming of multimedia applications for mobile devices

Daniel Pohl; Stefan Nickels; Ram Nalla; Oliver Grau

Today, mobile devices like smartphones and tablets are becoming more powerful and exhibit enhanced 3D graphics performance. However, the overall computing power of these devices is still limited for usage scenarios like photo-realistic gaming, immersive virtual reality experiences, or real-time processing and visualization of big data. To overcome these limitations, application streaming solutions are constantly gaining attention. The idea is to transfer the graphics output of an application running on a server or even a cluster to a mobile device, conveying the impression that the application is running locally. User inputs on the mobile client side are processed and sent back to the server. The main criteria for successful application streaming are low latency, since users want to interact with the scene in near real-time, as well as high image quality. Here, we present a novel application framework suitable for streaming applications from high-end machines to mobile devices. Using real-time ETC1 compression in combination with a distributed rendering architecture, we fully leverage recent progress in wireless computer networking standards (IEEE 802.11ac) for mobile devices, achieving much higher image quality at half the latency compared to other in-home streaming solutions.
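
The overall server loop can be pictured with the hedged sketch below. The render_frame() hook and the length-prefixed TCP protocol are assumptions for illustration, and the ETC1 compressor is stubbed with zlib so the example runs; a real implementation would emit ETC1 blocks in real time.

```python
import socket
import struct
import zlib
import numpy as np

def render_frame(i: int) -> np.ndarray:
    # Stand-in for the application's graphics output (RGB, 720p).
    return np.full((720, 1280, 3), i % 256, dtype=np.uint8)

def etc1_compress(frame: np.ndarray) -> bytes:
    # Placeholder for a real-time ETC1 texture compressor.
    return zlib.compress(frame.tobytes(), level=1)

def stream_frames(host="0.0.0.0", port=9000, n_frames=100):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()  # mobile client connects over 802.11ac Wi-Fi
    for i in range(n_frames):
        payload = etc1_compress(render_frame(i))
        # Length-prefixed framing so the client knows how much to read.
        conn.sendall(struct.pack("!I", len(payload)) + payload)
    conn.close()
```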


IEEE Virtual Reality Conference | 2015

Using astigmatism in wide angle HMDs to improve rendering

Daniel Pohl; Timo Bolkart; Stefan Nickels; Oliver Grau

Lenses in modern consumer HMDs introduce distortions like astigmatism: only the center area of the displayed content can be perceived as sharp, while with increasing distance from the center the image gets out of focus. We show with three new approaches that this undesired side effect can be used in a positive way to save calculations in blurry areas. For example, using sampling maps to lower the detail in areas where the image is blurred through astigmatism increases performance by a factor of 2 to 3. Further, we introduce a new calibration of user-specific viewing parameters that increases performance by about 20-75%.
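
A minimal sketch of such a sampling map, assuming a simple radial falloff model (the exponent and sample counts are illustrative, not the paper's calibrated values):

```python
import numpy as np

def sampling_map(width, height, center, max_samples=4, min_samples=1, falloff=2.0):
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = center
    # Normalized distance from the optical center (0 at center, 1 at the
    # farthest pixel), where astigmatism blurs the image most.
    r = np.hypot(xs - cx, ys - cy)
    r = r / r.max()
    # Spend full effort where the lens is sharp, fewer samples where blurred.
    samples = min_samples + (max_samples - min_samples) * (1.0 - r) ** falloff
    return np.rint(samples).astype(int)

# Per-eye map for a 1080x1200 panel with the sharp zone at the lens center:
smap = sampling_map(1080, 1200, center=(540, 600))
print(smap[600, 540], smap[0, 0])  # 4 samples at the center, 1 at the corner
```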


Virtual Reality Software and Technology | 2016

Concept for using eye tracking in a head-mounted display to adapt rendering to the user's current visual field

Daniel Pohl; Xucong Zhang; Andreas Bulling; Oliver Grau

With increasing spatial and temporal resolution in head-mounted displays (HMDs), using eye trackers to adapt rendering to the user is becoming important to handle the rendering workload. Besides methods like foveated rendering, we propose to adapt rendering to the user's current visual field, depending on the eye gaze. We use two effects for performance optimizations. First, we noticed a lens defect in HMDs: depending on the distance of the eye gaze to the center, certain parts of the screen towards the edges are no longer visible. Second, if the user looks up, they cannot see the lower parts of the screen anymore. For the invisible areas, we propose to skip rendering and to reuse the pixel colors from the previous frame. We provide a calibration routine to measure these two effects. We apply the current visual field to a renderer and achieve up to 2x speed-ups.
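
The following sketch illustrates the idea with made-up thresholds (the paper derives them from its calibration routine): a per-pixel visibility mask is computed from the gaze, and masked-out pixels reuse the previous frame instead of being re-rendered.

```python
import numpy as np

def visible_mask(width, height, gaze, edge_radius=0.9, vertical_cut=0.35):
    gx, gy = gaze  # gaze position in normalized [0, 1] screen coordinates
    ys, xs = np.mgrid[0:height, 0:width]
    u, v = xs / width, ys / height
    # Effect 1: screen regions far from the gaze fall outside the part of
    # the panel still visible through the lens.
    mask = np.hypot(u - gx, v - gy) < edge_radius
    # Effect 2: looking up hides the bottom of the screen (and vice versa).
    if gy < 0.5:                       # user looks up
        mask &= v < 1.0 - vertical_cut * (0.5 - gy) * 2
    else:                              # user looks down
        mask &= v > vertical_cut * (gy - 0.5) * 2
    return mask

def compose(rendered, previous, mask):
    # Render only visible pixels; reuse last frame's colors elsewhere.
    return np.where(mask[..., None], rendered, previous)
```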


Conference on Visual Media Production | 2015

Intuitive virtual production tools for set and light editing

Jonas Trottnow; Kai Götz; Stefan Seibert; Simon Spielmann; Volker Helzle; Farshad Einabadi; Clemens K. H. Sielaff; Oliver Grau

This contribution describes a set of newly developed tools for virtual production. Virtual production aims to bring the creative aspects of production together in one real-time environment, to overcome the bottlenecks of offline processing in digital content production. This paper introduces tools and an architecture to edit set assets and adjust the lighting set-up. A set of tools was designed, implemented and tested on tablet PCs, an augmented reality device, and a virtual reality device. These tools are designed to be used on a movie set by staff not necessarily familiar with 3D software. Further, an approach to harmonize light set-ups in virtual and real scenes is introduced. This approach uses an automated image-based light capture process, which models the dominant lights as discrete light sources with fall-off characteristics to provide the fine detail required for close-range light set-ups and overcome the limitations of traditional image-based light probes. The paper describes initial results of a user evaluation using the developed tools in production-like environments.
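
As a small illustration of a discrete light source with fall-off characteristics (the exact radiometric model in the paper may differ), a captured spot light might be evaluated like this when relighting virtual objects:

```python
import numpy as np

def spot_intensity(light_pos, light_dir, intensity, falloff_exp, point):
    """Radiance arriving at `point` from a captured spot light."""
    to_point = point - light_pos
    dist = np.linalg.norm(to_point)
    cos_angle = np.dot(to_point / dist, light_dir / np.linalg.norm(light_dir))
    if cos_angle <= 0.0:
        return 0.0  # point lies behind the light
    # Angular fall-off raised to a fitted exponent, plus the inverse-square law.
    return intensity * cos_angle ** falloff_exp / dist ** 2

# A light 2 m above the origin pointing straight down:
print(spot_intensity(np.array([0.0, 2.0, 0.0]), np.array([0.0, -1.0, 0.0]),
                     intensity=100.0, falloff_exp=8.0,
                     point=np.array([0.5, 0.0, 0.0])))
```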


IEEE MultiMedia | 2013

Applications of Face Analysis and Modeling in Media Production

Darren Cosker; Peter Eisert; Oliver Grau; Peter J. B. Hancock; Jonathan McKinnell; Eng-Jon Ong

Facial expressions play an important role in day-to-day communication as well as in media production. This article surveys automatic facial analysis and modeling methods using computer vision techniques and their applications in media production. The authors give a brief overview of the psychology of face perception and then describe some of the applications of computer vision and pattern recognition applied to face recognition in media production. This article also covers the automatic generation of face models, which are used in movie and TV productions for special effects in order to manipulate people's faces or combine real actors with computer graphics.


International Conference on Computer Graphics and Interactive Techniques | 2013

Presentation and communication of visual artworks in an interactive virtual environment

Jeni Maleshkova; Matthew Purver; Oliver Grau; Julien Pansiot

New forms of art have developed because of the possibilities that modern technology provides for artists. However, innovative technology can not only be used in the creation of new media art but also opens a range of new opportunities for the presentation and communication of visual art. In this paper, we introduce an approach for the presentation of visual artworks in interactive virtual environments. We showcase a project that focuses on exploring and implementing innovative approaches and technologies for displaying art in a 3D virtual environment. We wish to excite people about art by offering them an entertaining and educational virtual experience. Furthermore, we want to contribute towards escaping the common exhibition space by creating a virtual application that augments the presented artwork with an interactive audience experience.


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011

Tools for 3D-TV programme production

Oliver Grau; Marcus Müller; Josef Kluger

This contribution discusses tools for the production of 3D-TV programmes as developed and tested in the 3D4YOU project. The project looked in particular at image-plus-depth-based formats and their integration into a 3D-TV production chain. This contribution focuses on requirements and production approaches for selected programme genres and describes examples of on-set and post-production tools for the capture and generation of depth information.
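
To illustrate why image-plus-depth formats are attractive for 3D-TV delivery, the hedged sketch below synthesizes a second view at the receiver by disparity shifting; the baseline and focal length values are illustrative, and disocclusion holes are left unfilled.

```python
import numpy as np

def synthesize_view(image, depth, baseline=0.06, focal_px=1000.0):
    """image: (H, W, 3) uint8; depth: (H, W) metric depth in metres."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    # Per-pixel disparity in pixels from depth, camera baseline and focal length.
    disparity = (baseline * focal_px / depth).round().astype(int)
    for y in range(h):
        xs = np.arange(w) - disparity[y]      # shift along the stereo baseline
        valid = (xs >= 0) & (xs < w)
        out[y, xs[valid]] = image[y, valid]   # forward warp; holes remain black
    return out
```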


Archive | 2018

When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making

James R. Larus; Chris Hankin; Siri Granum Carson; Markus Christen; Silvia Crafa; Oliver Grau; Claude Kirchner; Bran Knowles; Andrew D. McGettrick; Damian Andrew Tamburri; Hannes Werthner

Over the past two decades, the ability of machines to challenge and beat humans at complex games has made "quantum" leaps, rhetorically if not in technical computing terms.

In 1997, IBM's Deep Blue supercomputer used "brute force" computing power to out-calculate Grand Master Garry Kasparov at chess. In 2011, the company's Watson employed "machine learning" (ML) techniques to beat several former Jeopardy champions at their own game. In early 2016, Google's DeepMind AlphaGo program, trained on a massive game history, repeatedly defeated the reigning European champion at Go: a game that has more possible board configurations than there are atoms in the universe [1]. It reached this milestone by employing two neural networks powered by sophisticated "automated decision making" (ADM) algorithms. And, in 2017, AlphaGo Zero became the strongest Go player on the planet, human or machine, after just a few months of game-play training alone. Incredibly, it was programmed initially only with the rules of the game [2].

Automated decision making concerns decision making by purely technological means without human involvement. Article 22(1) of the European General Data Protection Regulation (GDPR) enshrines the right of data subjects not to be subject to decisions with legal or other significant effects that are based solely on automated individual decision making. As a consequence, in this paper we consider applications of ADM other than those based on personal information, for example the game playing discussed above. We discuss other aspects of the GDPR later in the paper. Whilst the game-playing results are impressive, the consequences of machine learning and automated decision making are themselves, however, no game. As of this writing, they have progressed to enable computers to rival humans' ability at even more challenging, ambiguous, and highly skilled tasks with profound "real world" applications, such as recognizing images, understanding speech, and analysing X-rays, among many others. As these techniques continue to improve rapidly, many new and established companies are utilizing them to build applications that reliably perform activities that previously were done (and doable) only by people. Today, such systems can both augment human decision making and, in some cases, replace it with a fully autonomous system.

In this report, we review the principal implications of the coming widespread adoption of ML-driven automated decision making, with a particular emphasis on its technical, ethical, legal, economic, societal and educational ramifications. We also make a number of recommendations that policy makers might wish to consider.

Collaboration


Dive into Oliver Grau's collaboration.

Top Co-Authors

Stefan Nickels

German Cancer Research Center