Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Christof Hoppe is active.

Publication


Featured research published by Christof Hoppe.


Proceedings of the IEEE | 2014

Augmented Reality for Construction Site Monitoring and Documentation

Stefanie Zollmann; Christof Hoppe; Stefan Kluckner; Christian Poglitsch; Horst Bischof; Gerhard Reitmayr

Augmented reality (AR) allows for an on-site presentation of information that is registered to the physical environment. Applications from civil engineering, which require users to process complex information, are among those that can benefit particularly from such a presentation. In this paper, we describe how to use AR to support the monitoring and documentation of construction site progress. For these tasks, the responsible staff usually requires fast and comprehensible access to progress information, to enable comparison with the as-built status as well as with as-planned data. Instead of tediously searching for and mapping related information to the actual construction site environment, our AR system provides access to information right where it is needed. This is achieved by superimposing progress information as well as as-planned information onto the user's view of the physical environment. For this purpose, we present an approach that uses aerial 3-D reconstruction to automatically capture progress information, and a mobile AR client for on-site visualization. Within this paper, we describe in greater detail how to capture 3-D data, how to register the AR system within the physical outdoor environment, how to visualize progress information in a comprehensible way in an AR overlay, and how to interact with this kind of information. By implementing such an AR system, we are able to provide an overview of the possibilities and future applications of AR in the construction industry.


British Machine Vision Conference | 2013

Incremental Surface Extraction from Sparse Structure-from-Motion Point Clouds

Christof Hoppe; Manfred Klopschitz; Michael Donoser; Horst Bischof

In this paper we propose a new method to incrementally extract a surface from a consecutively growing Structure-from-Motion (SfM) point cloud in real-time. Our method is based on a Delaunay triangulation (DT) of the 3D points. The core idea is to robustly label all tetrahedra as free or occupied space using a random field formulation and to extract the surface as the interface between differently labeled tetrahedra. For this reason, we propose a new energy function that achieves the same accuracy as state-of-the-art methods but reduces the computational effort significantly. Furthermore, our new formulation allows us to extract the surface in an incremental manner, i.e. whenever the point cloud is updated we adapt our energy function. Instead of minimizing the updated energy with a standard graph cut, we employ the dynamic graph cut of Kohli et al. [1], which enables efficient minimization of a series of similar random fields by re-using the previous solution. In this way we are able to extract the surface from a steadily growing point cloud nearly independently of the overall scene size.

Energy Function for Surface Extraction. Our method formulates surface extraction as a binary labeling problem, with the goal of assigning each tetrahedron either a free or an occupied label. For this reason, we model the probabilities that a tetrahedron is free or occupied space by analyzing the set of rays that connect all 3D points to image features. Following the idea of the truncated signed distance function (TSDF), which is known from voxel-based surface reconstructions, a tetrahedron in front of a 3D point X has a high probability of being free space, whereas a tetrahedron behind X is presumably occupied space. We further assume that it is very unlikely that neighboring tetrahedra obtain different labels, except for pairs of tetrahedra that have a ray passing through their shared face.

Such a labeling problem can be elegantly formulated as a pairwise random field, and since our priors are submodular, we can efficiently find a globally optimal labeling, e.g. using graph cuts. In contrast to existing methods like [2], our energy depends only on the visibility information that is directly connected to the four 3D points that span the tetrahedron Vi. Hence a modification of the tetrahedral structure by inserting new points has only a limited effect on the energy function. This property enables us to easily adapt the energy function to a modified tetrahedral structure.

Incremental Surface Extraction. To enable efficient incremental surface reconstruction, our method has to consecutively integrate new scene information (3D points as well as visibility information) into the energy function and to minimize the modified energy efficiently. Integrating new visibility information, i.e. adding rays for newly available 3D points, affects only those terms of the energy function that relate
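The free/occupied labeling described above can be illustrated with a toy stand-in, assuming (hypothetically) a handful of tetrahedra with precomputed unary costs from ray visibility and a pairwise smoothness weight on shared faces; the globally optimal labeling is an s-t min cut, computed here with a small Edmonds-Karp max-flow rather than the dynamic graph cut of Kohli et al. that the paper actually uses. All names and costs are illustrative only.

```python
from collections import defaultdict, deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow on an adjacency dict; returns the flow dict."""
    flow = defaultdict(int)

    def bfs():
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if v not in parent and cap - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    queue.append(v)
        return None

    while (parent := bfs()) is not None:
        # bottleneck residual capacity along the augmenting path
        v, push = t, float("inf")
        while parent[v] is not None:
            u = parent[v]
            push = min(push, capacity[u][v] - flow[(u, v)])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            flow[(u, v)] += push
            flow[(v, u)] -= push
            v = u
    return flow

def min_cut_labels(unary_free, unary_occ, pairwise):
    """Binary free/occupied labeling of 'tetrahedra' via an s-t min cut.

    unary_free[i]/unary_occ[i]: cost of labeling tetrahedron i free/occupied.
    pairwise: (i, j) -> smoothness cost paid when i and j differ in label.
    """
    n = len(unary_free)
    s, t = "s", "t"
    capacity = defaultdict(lambda: defaultdict(int))
    for i in range(n):
        capacity[s][i] += unary_occ[i]   # cut s->i  <=>  i labeled occupied
        capacity[i][t] += unary_free[i]  # cut i->t  <=>  i labeled free
    for (i, j), w in pairwise.items():
        capacity[i][j] += w
        capacity[j][i] += w
    # ensure every edge has a reverse entry for the residual graph
    for u in list(capacity):
        for v in list(capacity[u]):
            capacity[v][u] += 0
    flow = max_flow(capacity, s, t)
    # nodes still reachable from s in the residual graph are 'free'
    reach, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v, cap in capacity[u].items():
            if v not in reach and cap - flow[(u, v)] > 0:
                reach.add(v)
                queue.append(v)
    return ["free" if i in reach else "occupied" for i in range(n)]

if __name__ == "__main__":
    # toy chain: strong free evidence on 0 and 1, weak on 2, strong occupied on 3
    print(min_cut_labels([0, 0, 1, 6], [6, 6, 3, 0],
                         {(0, 1): 2, (1, 2): 2, (2, 3): 2}))
```

On the toy chain, the weakly supported tetrahedron 2 is pulled to the free side because flipping it would pay both its unary cost and a smoothness penalty against its free neighbour.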


British Machine Vision Conference | 2012

Online Feedback for Structure-from-Motion Image Acquisition

Christof Hoppe; Manfred Klopschitz; Markus Rumpler; Andreas Wendel; Stefan Kluckner; Horst Bischof; Gerhard Reitmayr

The quality and completeness of 3D models obtained by Structure-from-Motion (SfM) heavily depend on the image acquisition process. If the user receives feedback about the reconstruction quality already during acquisition, the acquisition process can be optimized. The goal of this paper is to support a user during image acquisition by giving online feedback on the current reconstruction quality. We propose an online SfM method that integrates wide-baseline still images in an online fashion into a consistent reconstruction, and we derive a surface model from the SfM point cloud. To guide the user to scene parts that are not captured well, we colour the mesh according to redundancy and resolution information. In the experiments, we show that our approach makes the final SfM result predictable already during image acquisition. The method is suited for large-scale reconstructions, as obtained by flying micro aerial vehicles, as well as for small indoor environments. We propose a method that supports a user in the acquisition process in two ways: (a) sparse online SfM with accuracy close to offline methods and (b) surface extraction and quality visualization. The workflow of our method is shown in Figure 1.
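The redundancy colouring mentioned above can be sketched as follows, assuming (hypothetically) that each mesh face stores the set of cameras observing it; faces below a target redundancy shade towards red, well-observed faces towards green. All function names and the target value are illustrative, not taken from the paper.

```python
def redundancy_color(num_views, target=3):
    """Map a face's observation count to an RGB triple:
    red (unobserved) through yellow to green (target redundancy reached)."""
    t = min(num_views / target, 1.0)  # clamp to [0, 1]
    return (round(255 * (1.0 - t)), round(255 * t), 0)

def color_mesh(face_views, target=3):
    """face_views: one set of observing camera ids per mesh face."""
    return [redundancy_color(len(views), target) for views in face_views]
```

For example, `color_mesh([set(), {0}, {0, 1, 2}])` shades an unseen face pure red and a triply observed face pure green, so under-sampled regions stand out during acquisition.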


Computer Vision and Pattern Recognition | 2011

Efficient structure from motion with weak position and orientation priors

Arnold Irschara; Christof Hoppe; Horst Bischof; Stefan Kluckner

In this paper we present an approach that leverages prior information from global positioning systems and inertial measurement units to speed up structure-from-motion computation. We propose a view selection strategy that advances vocabulary-tree-based coarse matching by also considering the geometric configuration between weakly oriented images. Furthermore, we introduce a fast and scalable reconstruction approach that relies on global rotation registration and robust bundle adjustment. Real-world experiments are performed using data acquired by a micro aerial vehicle equipped with GPS/INS sensors. Our proposed algorithm achieves sub-pixel accurate orientation results with precision on a par with results from incremental structure-from-motion approaches. Moreover, the method is scalable and computationally more efficient than previous approaches.
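The prior-based view selection can be sketched with a toy filter, assuming (hypothetically) that each image carries a GPS position and an INS yaw angle; pairs that are too far apart or face in very different directions are discarded before any expensive feature matching. Thresholds and names are illustrative only.

```python
import math
from itertools import combinations

def select_pairs(poses, max_dist=30.0, max_yaw_diff=60.0):
    """poses: per-image (x, y, yaw_deg) priors from GPS/INS.

    Returns the index pairs worth feature-matching; pairs that are far
    apart or point in very different directions are skipped entirely."""
    keep = []
    for i, j in combinations(range(len(poses)), 2):
        xi, yi, yaw_i = poses[i]
        xj, yj, yaw_j = poses[j]
        dist = math.hypot(xi - xj, yi - yj)
        # smallest absolute difference between the two headings, in degrees
        dyaw = abs((yaw_i - yaw_j + 180.0) % 360.0 - 180.0)
        if dist <= max_dist and dyaw <= max_yaw_diff:
            keep.append((i, j))
    return keep
```

With four images at (0, 0, 0°), (10, 0, 10°), (100, 0, 0°) and (5, 0, 170°), only the first pair survives: the third image is too far away and the fourth faces the opposite direction.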


International Conference on Robotics and Automation | 2015

Building with drones: Accurate 3D facade reconstruction using MAVs

Shreyansh Daftry; Christof Hoppe; Horst Bischof

Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles (MAVs) as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of the image data, which often become sources of inaccuracy, as current 3D reconstruction pipelines do not help users determine the fidelity of the input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real-time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes and show that our interactive pipeline, combined with a multi-scale camera network approach, provides compelling accuracy in multi-view reconstruction tasks when compared against state-of-the-art methods.
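The Ground Sampling Distance reported in such feedback follows from the standard pinhole relation GSD = distance × pixel size / focal length; a minimal sketch, with hypothetical parameter values and assuming a roughly head-on view of the facade:

```python
def ground_sampling_distance(distance_m, focal_length_mm, pixel_size_um):
    """Metres on the object covered by one image pixel, for a pinhole camera
    viewing the surface roughly head-on: GSD = d * pixel_size / focal_length."""
    return distance_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)
```

A hypothetical MAV hovering 20 m from a facade with an 8 mm lens and 3 µm pixels resolves about 7.5 mm per pixel; halving the distance halves the GSD.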


International Symposium on Mixed and Augmented Reality | 2012

Interactive 4D overview and detail visualization in augmented reality

Stefanie Zollmann; Denis Kalkofen; Christof Hoppe; Stefan Kluckner; Horst Bischof; Gerhard Reitmayr

In this paper we present an approach for visualizing time-oriented data of dynamic scenes in an on-site AR view. Visualizations of time-oriented data pose special challenges compared to the visualization of arbitrary virtual objects. Usually, the 4D data occludes a large part of the real scene. Additionally, data sets from different points in time may occlude each other. Thus, it is important to design adequate visualization techniques that provide a comprehensible visualization. In this paper we introduce a visualization concept that uses overview and detail techniques to present 4D data at different levels of detail. These levels provide, first, an overview of the 4D scene; second, information about the 4D change of a single object; and third, detailed information about object appearance and geometry at specific points in time. Combining the three levels of detail with interactive transitions, such as magic lenses or distorted viewing techniques, enables the user to understand the relationship between them. Finally, we show how to apply this concept to construction site documentation and monitoring.


IEEE Transactions on Visualization and Computer Graphics | 2014

FlyAR: Augmented Reality Supported Micro Aerial Vehicle Navigation

Stefanie Zollmann; Christof Hoppe; Tobias Langlotz; Gerhard Reitmayr

Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In this context, automatic flight path planning and autonomous flying are often applied, but so far they cannot fully replace the human in the loop, who supervises the flight on-site to ensure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally transfer the aerial vehicle's position between 2D map positions and the physical environment, and the complicated depth perception of objects flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user's view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints that support the user in understanding the spatial relationship of virtual waypoints in the physical world, and we investigate the effect of these visualization techniques on spatial understanding.


Computer Vision and Image Understanding | 2017

Evaluations on multi-scale camera networks for precise and geo-accurate reconstructions from aerial and terrestrial images with user guidance

Markus Rumpler; Alexander Tscharf; Christian Mostegel; Shreyansh Daftry; Christof Hoppe; Rudolf Prettenthaler; Friedrich Fraundorfer; Gerhard Mayer; Horst Bischof

Highlights:
- Use of planar fiducial markers for automatic, accurate camera calibration.
- Online user feedback and quality visualization for image acquisition.
- Integration of ground control points and GPS measurements in the bundle adjustment.
- Accurate and easy-to-use 3D reconstruction pipeline with automatic geo-registration.
- Unified document with extensive evaluations and insights into large-scale 3D modeling.

During the last decades, photogrammetric computer vision systems have become well established in scientific and commercial applications. Recent developments in image-based 3D reconstruction systems have made it easy to create realistic, visually appealing and accurate 3D models. We present a fully automated processing pipeline for metric and geo-accurate 3D reconstructions of complex geometries, supported by an online feedback method for user guidance during image acquisition. Our approach is suited for seamlessly matching and integrating images with different scales, from different viewpoints (aerial and terrestrial), and from different cameras into one single reconstruction. We evaluate our approach on different datasets for applications in mining, archaeology and urban environments, demonstrating the flexibility and high accuracy of our approach. Our evaluation includes accuracy-related analyses investigating camera self-calibration, geo-registration and camera network configuration.


International Conference on Robotics and Automation | 2012

Geo-referenced 3D reconstruction: Fusing public geographic data and aerial imagery

Michael Maurer; Markus Rumpler; Andreas Wendel; Christof Hoppe; Arnold Irschara; Horst Bischof

We present an image-based 3D reconstruction pipeline for acquiring geo-referenced semi-dense 3D models. Multiple overlapping images captured from a micro aerial vehicle platform provide a highly redundant source for multi-view reconstructions. Publicly available geo-spatial information sources are used to obtain an approximation to a digital surface model (DSM). Models obtained by the semi-dense reconstruction are automatically aligned to the DSM to allow the integration of highly detailed models into the original DSM and to provide geographic context.
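The automatic alignment step can be illustrated in a deliberately reduced form, assuming (hypothetically) that the reconstruction is already rotationally and horizontally registered and only a vertical offset to the DSM remains; the paper's actual alignment is more general. All names are illustrative.

```python
from statistics import median

def align_to_dsm(model_points, dsm_height):
    """Vertically register reconstructed points to a digital surface model.

    model_points: list of (x, y, z); dsm_height: callable (x, y) -> z.
    The median offset keeps gross reconstruction outliers from skewing
    the registration."""
    offsets = [dsm_height(x, y) - z for x, y, z in model_points]
    dz = median(offsets)
    return [(x, y, z + dz) for x, y, z in model_points]
```

With a flat DSM at 100 m and one badly reconstructed point, the median offset ignores the outlier and the inliers land exactly on the DSM.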


Advanced Video and Signal Based Surveillance | 2011

AVSS 2011 demo session: Construction site monitoring from highly-overlapping MAV images

Stefan Kluckner; Josef-Alois Birchbauer; Claudia Windisch; Christof Hoppe; Arnold Irschara; Andreas Wendel; Stefanie Zollmann; Gerhard Reitmayr; Horst Bischof

Summary form only given.

Collaboration


Dive into Christof Hoppe's collaboration.

Top Co-Authors

Horst Bischof (Graz University of Technology)
Markus Rumpler (Graz University of Technology)
Stefan Kluckner (Graz University of Technology)
Andreas Wendel (Graz University of Technology)
Stefanie Zollmann (Graz University of Technology)
Arnold Irschara (Graz University of Technology)
Rudolf Prettenthaler (Graz University of Technology)
Shreyansh Daftry (Carnegie Mellon University)