
Publication


Featured research published by Kristian Ambrosch.


Computer Vision and Image Understanding | 2010

Accurate hardware-based stereo vision

Kristian Ambrosch; Wilfried Kubinger

To enable both accurate and fast real-time stereo vision in embedded systems, we propose a novel stereo matching algorithm that is designed for high efficiency when realized in hardware. We evaluate its accuracy using the Middlebury Stereo Evaluation, revealing its high performance even at the minimum error tolerance. To outline the resource efficiency of the algorithm, we present its realization as an Intellectual Property (IP) core that is designed for deployment in Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs).


international symposium on visual computing | 2008

An Optimized Software-Based Implementation of a Census-Based Stereo Matching Algorithm

Christian Zinner; Martin Humenberger; Kristian Ambrosch; Wilfried Kubinger

This paper presents S³E, a software implementation of a high-quality dense stereo matching algorithm. The algorithm is based on a Census transform with a large mask size. The strength of the system lies in its flexibility in terms of image dimensions, disparity levels, and frame rates. The program runs on standard PC hardware utilizing various SSE instructions. We describe the performance optimization techniques that had a considerably high impact on the run-time performance. Compared to a generic version of the source code, a speedup factor of 112 could be achieved. On input images of 320×240 and a disparity range of 30, S³E achieves 42 fps on an Intel Core 2 Duo CPU running at 2 GHz.
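For illustration, a minimal NumPy sketch of census-based matching as described above: a census transform over a square mask followed by winner-takes-all matching on Hamming-distance costs. This is not the authors' SSE-optimized S³E code; the mask size, disparity range, and function names are assumptions made for the example.

```python
import numpy as np

def census_transform(img, mask=7):
    """Census transform: for each pixel, a vector of boolean comparisons
    against every other pixel in a mask x mask neighbourhood."""
    r = mask // 2
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            # neighbour value at offset (dy, dx); borders wrap here, which a
            # real implementation would handle explicitly
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append(shifted < img)
    return np.stack(bits, axis=-1)            # shape (h, w, mask*mask - 1)

def census_disparity(left, right, max_disp=30, mask=7):
    """Winner-takes-all stereo matching on Hamming-distance costs."""
    cl, cr = census_transform(left, mask), census_transform(right, mask)
    h, w, _ = cl.shape
    costs = np.empty((h, w, max_disp + 1), dtype=np.int32)
    for d in range(max_disp + 1):
        shifted = np.roll(cr, d, axis=1)      # candidate correspondence at disparity d
        costs[:, :, d] = np.count_nonzero(cl ^ shifted, axis=-1)
    return np.argmin(costs, axis=-1)          # per-pixel disparity with the lowest cost
```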


Archive | 2009

SAD-Based Stereo Matching Using FPGAs

Kristian Ambrosch; Martin Humenberger; Wilfried Kubinger; Andreas Steininger

In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel’s Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
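For comparison with the hardware description above, block-matching SAD can be expressed compactly in software. The sketch below is an unoptimized NumPy/SciPy illustration rather than the FPGA architecture itself; block size and disparity range mirror the values mentioned in the chapter, and the function name is ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=150, block=11):
    """Winner-takes-all stereo matching with a block-wise SAD cost.
    uniform_filter returns the block mean, i.e. the SAD scaled by
    1/block**2, which leaves the argmin over disparities unchanged."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    best_cost = np.full(left.shape, np.inf, dtype=np.float32)
    disparity = np.zeros(left.shape, dtype=np.int32)
    for d in range(max_disp + 1):
        ad = np.abs(left - np.roll(right, d, axis=1))  # per-pixel absolute difference
        cost = uniform_filter(ad, size=block)          # aggregate over a block x block window
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity
```

Roughly speaking, the highly parallel and pipelined FPGA datapath described in the chapter replaces the explicit loops over disparities and block windows in this sketch.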


Eurasip Journal on Embedded Systems | 2008

Flexible hardware-based stereo matching

Kristian Ambrosch; Wilfried Kubinger; Martin Humenberger; Andreas Steininger

To enable adaptive stereo vision for hardware-based embedded stereo vision systems, we propose a novel technique for implementing a flexible block size, disparity range, and frame rate. By reusing existing resources of a static architecture, rather than dynamic reconfiguration, our technique is compatible with application specific integrated circuit (ASIC) as well as field programmable gate array (FPGA) implementations. We present the corresponding block diagrams and their implementation in our hardware-based stereo matching architecture. Furthermore, we show the impact of flexible stereo matching on the generated disparity maps for the sum of absolute differences (SADs), rank, and census transform algorithms. Finally, we discuss the resource usage and achievable performance when synthesized for an Altera Stratix II FPGA.
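Of the three cost functions named here, SAD and census appear in the sketches above; for completeness, a minimal rank transform is shown below. This is an illustrative software version, not the paper's hardware block, and the mask size is an assumption.

```python
import numpy as np

def rank_transform(img, mask=5):
    """Rank transform: replace each pixel by the number of neighbours in a
    mask x mask window whose intensity is lower than the centre pixel.
    Matching is then typically done with SAD on the transformed images."""
    r = mask // 2
    rank = np.zeros(img.shape, dtype=np.uint16)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            rank += (shifted < img).astype(np.uint16)   # count darker neighbours
    return rank
```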


computer vision and pattern recognition | 2007

Hardware implementation of an SAD based stereo vision algorithm

Kristian Ambrosch; Martin Humenberger; Wilfried Kubinger; Andreas Steininger

This paper presents the hardware implementation of a stereo vision core algorithm that runs in real-time and is targeted at automotive applications. The algorithm is based on the sum of absolute differences (SAD) and computes the disparity map using 320×240 input images with a maximum disparity of 100 pixels. The hardware operates at a frequency of 65 MHz and achieves a frame rate of 425 fps by calculating the data in a highly parallel and pipelined fashion. Thus, a basically optimized software implementation running on an Intel Pentium 4 with a 3 GHz clock frequency is outperformed by a factor of 166.
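As a quick plausibility check of the reported figures (our arithmetic, not from the paper), the frame rate and clock frequency imply roughly two clock cycles per output disparity value in the pipeline:

```python
# Back-of-the-envelope check of the reported numbers (illustrative only).
pixels_per_frame = 320 * 240                 # input resolution
frames_per_second = 425                      # reported frame rate
clock_hz = 65e6                              # reported clock frequency

pixel_rate = pixels_per_frame * frames_per_second     # ~32.6 Mpixel/s
clocks_per_pixel = clock_hz / pixel_rate              # ~2.0 clock cycles per disparity value
print(f"{pixel_rate / 1e6:.1f} Mpixel/s, {clocks_per_pixel:.1f} clocks per output pixel")
```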


convention of electrical and electronics engineers in israel | 2010

A miniature embedded stereo vision system for automotive applications

Kristian Ambrosch; Christian Zinner; Helmut Leopold

Dependable 3D perception modules are essential for the safe operation of autonomous systems. We therefore present a highly compact stereo vision system that requires no dedicated processing platform, since the DSPs are integrated in the cameras. To enable the computation of dense and accurate depth maps, we implement a Sparse Census Transform, reducing the complexity of the stereo matching procedure by a factor of four while still ensuring highly accurate results. In addition to the detection of false positives, wrong matches are greatly reduced through the computation and analysis of a dedicated confidence value. Furthermore, the algorithm can process camera images with up to 16-bit resolution, with only a minor increase in computation time.
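The abstract does not spell out the sparse sampling pattern, so the sketch below should be read as an assumption: it evaluates the census comparisons only on every second row and column of the mask, which reduces the number of comparisons by roughly the stated factor of four.

```python
import numpy as np

def sparse_census_transform(img, mask=9, step=2):
    """Census transform over a sparse subset of the mask: only every
    `step`-th neighbour in each direction is compared, cutting the number
    of comparisons by roughly step**2 (about a factor of four for step=2)."""
    r = mask // 2
    bits = []
    for dy in range(-r, r + 1, step):
        for dx in range(-r, r + 1, step):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append(shifted < img)
    return np.stack(bits, axis=-1)
```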


international symposium on visual computing | 2010

Gradient-based modified census transform for optical flow

Philipp Puxbaum; Kristian Ambrosch

To enable the precise detection of persons walking or running on the ground using unmanned Micro Aerial Vehicles (MAVs), we present an evaluation of the MCT algorithm for optical flow, based on intensity as well as gradient images, focusing on accuracy and on low computational complexity to enable real-time implementation in light-weight embedded systems. We give a detailed analysis of this algorithm on four optical flow datasets from the Middlebury database and show the algorithm's performance compared to other optical flow algorithms. Furthermore, different approaches for sub-pixel refinement and occlusion detection are discussed.
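As a reference for the cost function being evaluated, the sketch below gives a minimal modified census transform (MCT), which compares each pixel in a small window against the window mean rather than the centre pixel, plus a gradient-based variant that applies the MCT to gradient-magnitude images. The latter reflects our reading of the title, and all parameters and names are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def modified_census_transform(img, mask=3):
    """Modified census transform (MCT): compare every pixel of a
    mask x mask window against the mean intensity of that window."""
    r = mask // 2
    img = img.astype(np.float32)
    mean = uniform_filter(img, size=mask)
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append(shifted < mean)
    return np.stack(bits, axis=-1)            # mask*mask bits per pixel

def gradient_mct(img, mask=3):
    """Gradient-based variant: run the MCT on the gradient magnitude
    instead of the raw intensities."""
    img = img.astype(np.float32)
    grad = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
    return modified_census_transform(grad, mask=mask)
```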


Archive | 2009

Benchmarks of Low-Level Vision Algorithms for DSP, FPGA, and Mobile PC Processors

Daniel Baumgartner; Peter Roessler; Wilfried Kubinger; Christian Zinner; Kristian Ambrosch

We present recent results of a performance benchmark of selected low-level vision algorithms implemented on different high-speed embedded platforms. The algorithms were implemented on a digital signal processor (DSP) (Texas Instruments TMS320C6414), a field-programmable gate array (FPGA) (Altera Stratix-I and II families), as well as on a mobile PC processor (Intel Mobile Core 2 Duo T7200). These implementations are evaluated, compared, and discussed in detail. The DSP and the mobile PC implementations, both making heavy use of processor-specific acceleration techniques (intrinsics and resource-optimized slicing direct memory access on the DSP, or the Intel Integrated Performance Primitives library on the mobile PC processor), outperform the FPGA implementations, but at the cost of devoting all of their resources to these tasks. FPGAs, however, are very well suited to algorithms that benefit from parallel execution.


computer vision and pattern recognition | 2008

Extending two non-parametric transforms for FPGA based stereo matching using bayer filtered cameras

Kristian Ambrosch; Martin Humenberger; Wilfried Kubinger; Andreas Steininger

Stereo vision has become a very interesting sensing technology for robotic platforms. It offers various advantages, but the drawback is a very high algorithmic effort. Due to the aptitude of certain non-parametric techniques for field programmable gate array (FPGA) based stereo matching, these algorithms can be implemented in a highly parallel design while offering adequate real-time behavior. To enable the provision of color images by the stereo sensor for object classification tasks, we propose a technique for extending the rank and the census transform for increased robustness on gray-scaled Bayer-patterned images. Furthermore, we analyze the extended and the original algorithms' behavior on image sets created in controlled environments as well as on real-world images, and compare their resource usage when implemented on our FPGA based stereo matching architecture.


international symposium on visual computing | 2011

Distortion compensation for movement detection based on dense optical flow

Josef Maier; Kristian Ambrosch

This paper presents a method for detecting moving objects in two temporally succeeding images by calculating the fundamental matrix and the radial distortion and, from these, the distances from points to their epipolar lines. In static scenes, these distances result from noise and/or the inaccuracy of the computed epipolar geometry and lens distortion. Hence, we use these distances with an adaptive threshold to detect moving objects in the views of a camera mounted on a Micro Unmanned Aerial Vehicle (UAV). Our approach uses a dense optical flow calculation and estimates the epipolar geometry and radial distortion. In addition, a dedicated approach for selecting point correspondences that suits dense optical flow computations and an optimization algorithm that corrects the radial distortion parameter are introduced. Furthermore, the results on distorted ground-truth datasets show good accuracy, which is further illustrated by the performance on real-world scenes captured by a UAV.
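The geometric test at the core of this method, the distance from a point to its epipolar line, is compact to write down. The sketch below is a simplified illustration with our own names; it omits the radial-distortion correction and the adaptive threshold estimation described in the paper.

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance of each point in pts2 (image 2) to the epipolar line of its
    correspondence in pts1 (image 1), given a 3x3 fundamental matrix F with
    the convention x2^T F x1 = 0. pts1, pts2: (N, 2) pixel coordinates."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coordinates
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = x1 @ F.T                                   # epipolar lines in image 2, shape (N, 3)
    num = np.abs(np.sum(lines * x2, axis=1))           # |l . x2|
    den = np.hypot(lines[:, 0], lines[:, 1])           # sqrt(a^2 + b^2)
    return num / den

# Points whose distance exceeds an (adaptive) threshold are flagged as
# belonging to independently moving objects rather than to noise.
```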

Collaboration


Dive into Kristian Ambrosch's collaborations.

Top Co-Authors

Martin Humenberger, Austrian Institute of Technology
Andreas Steininger, Vienna University of Technology
Christian Zinner, Austrian Institute of Technology
Helmut Leopold, Austrian Institute of Technology
Josef Maier, Austrian Institute of Technology
Peter Roessler, University of Applied Sciences Technikum Wien
Philipp Puxbaum, Austrian Institute of Technology
Stephan Schraml, Austrian Institute of Technology
Sven Olufs, Vienna University of Technology