Oliver Zendel
Austrian Institute of Technology
Publications
Featured research published by Oliver Zendel.
International Conference on Computer Vision | 2015
Oliver Zendel; Markus Murschitz; Martin Humenberger; Wolfgang Herzner
Test data plays an important role in computer vision (CV) but is plagued by two questions: Which situations should be covered by the test data, and have we tested enough to reach a conclusion? In this paper we propose a new solution to these questions using a standard procedure devised by the safety community to validate complex systems: the Hazard and Operability Analysis (HAZOP). It is designed to systematically search for and identify difficult, performance-decreasing situations and aspects. We introduce a generic CV model that creates the basis for the hazard analysis and, for the first time, apply an extensive HAZOP to the CV domain. The result is a publicly available checklist with more than 900 identified individual hazards. This checklist can be used to evaluate existing test datasets by quantifying the number of covered hazards. We evaluate our approach by first analyzing and annotating the popular stereo vision test datasets Middlebury and KITTI. Second, we compare the performance of six popular stereo matching algorithms on the hazards identified in our checklist with their average performance and show, as expected, a clear negative influence of the hazards. The presented approach is a useful tool to evaluate and improve test datasets and creates a common basis for future dataset designs.
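The checklist-based coverage idea can be made concrete with a small script. The following is a minimal sketch, assuming per-frame annotations that map each test image to the checklist hazard IDs it contains; all identifiers and counts are illustrative, not taken from the published checklist.

```python
# Sketch: quantify how many checklist hazards a test dataset covers.
def hazard_coverage(checklist_ids, frame_annotations):
    """Fraction of checklist hazards that occur in at least one annotated frame.

    checklist_ids     -- iterable of hazard identifiers from the checklist
    frame_annotations -- dict mapping frame name -> set of hazard identifiers
    """
    checklist = set(checklist_ids)
    covered = set()
    for hazards in frame_annotations.values():
        covered |= set(hazards) & checklist
    return len(covered) / len(checklist) if checklist else 0.0


# Illustrative (hypothetical) hazard IDs and annotations:
checklist = {"H042", "H107", "H388", "H512"}
annotations = {
    "frame_0001": {"H042"},
    "frame_0002": {"H042", "H388"},
    "frame_0003": set(),
}
print(f"coverage: {hazard_coverage(checklist, annotations):.2%}")  # 50.00%
```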
European Conference on Computer Vision | 2016
Josef Maier; Martin Humenberger; Markus Murschitz; Oliver Zendel; Markus Vincze
In this paper, we present a novel algorithm for reliable and fast feature matching. Inspired by recent efforts in optimizing the matching process using geometric and statistical properties, we developed an approach that constrains the search space by utilizing spatial statistics from a small subset of matched and filtered correspondences. We call this method Guided Matching based on Statistical Optical Flow (GMbSOF). To ensure broad applicability, our approach works not only on high-dimensional descriptors such as SIFT but also on binary descriptors such as FREAK. To evaluate our algorithm, we developed a novel method for determining ground truth matches, including true negatives, using the spatial ground truth information of well-known datasets. This allows us to evaluate not only with precision and recall but also with accuracy and fall-out. We compare our approach in detail to several relevant state-of-the-art algorithms using these metrics. Our experiments show that our method outperforms all other tested solutions in terms of processing time while retaining a comparable level of matching quality.
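The four metrics named in the abstract follow directly from the match confusion matrix once true negatives are available. A minimal sketch, assuming the ground-truth matcher yields counts of true/false positives and negatives (the numbers below are illustrative only):

```python
# Sketch: precision, recall, accuracy and fall-out from a match confusion matrix.
def matching_metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / total if total else 0.0
    fall_out = fp / (fp + tn) if (fp + tn) else 0.0  # false positive rate
    return precision, recall, accuracy, fall_out


p, r, a, f = matching_metrics(tp=850, fp=50, tn=900, fn=100)  # illustrative counts
print(f"precision={p:.3f} recall={r:.3f} accuracy={a:.3f} fall-out={f:.3f}")
```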
International Symposium on Robotics | 2013
Oliver Zendel; Wolfgang Herzner; Markus Murschitz
This paper introduces a model-based approach for testing the robustness of computer vision solutions with respect to a given task or application. Assessing the robustness of essential CV components is crucial to ensure safe coexistence of robots and humans. Currently, this is mostly a manual and heuristic task lacking reliable metrics for determining the completeness and strength of a given test set. Our novel approach enables the generation of test data with a measurable coverage of optical situations, both typical and critical, for a given application. Typical situations are defined using a specific domain model, while critical circumstances can be selected from a list of predefined hazards created using a proven hazard analysis procedure. Furthermore, the framework allows the automatic reduction of redundancy over the entire set of test images by using clustering. Finally, the required oracle (ground truth) is automatically generated and is correct by construction.
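The redundancy-reduction step via clustering could look roughly like the sketch below: image feature vectors are clustered and only the sample closest to each centroid is kept. The feature representation, cluster count, and the choice of k-means are assumptions for illustration; the abstract does not prescribe them.

```python
# Sketch: keep one representative test image per cluster to reduce redundancy.
import numpy as np
from sklearn.cluster import KMeans


def select_representatives(feature_vectors, n_clusters=20):
    """Return indices of the images closest to each cluster centroid."""
    X = np.asarray(feature_vectors, dtype=float)
    km = KMeans(n_clusters=min(n_clusters, len(X)), n_init=10, random_state=0).fit(X)
    reps = []
    for c, center in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == c)[0]
        closest = members[np.argmin(np.linalg.norm(X[members] - center, axis=1))]
        reps.append(int(closest))
    return sorted(reps)
```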
Proceedings of the 13th International Symposium on Open Collaboration | 2017
Oliver Zendel; Matthias Schörghuber; Michela Vignoli
Peer review is a crucial step for quality assurance in scientific publishing. The task is time consuming and error-prone due to conflicts of interest, subjective opinions, and differing educational backgrounds. Open Peer Review (OPR) can solve many of these problems and is already applied in journal publishing workflows. The poster visualizes the efforts undertaken in the EU project OpenUP to evaluate the usefulness of OPR for conference submissions. Two conference venues will try out specific versions of OPR. The conference management software (CMS) needed to facilitate this process is summarized. The CMS solution HotCRP was chosen among the evaluated options for the pilots. The poster introduces the individual open peer review processes at the two venues and how they are supported in HotCRP. This gives conference organizers an insight into what is possible and allows for discussions with the OpenUP team about the selected approaches.
European Conference on Computer Vision | 2018
Oliver Zendel; Katrin Honauer; Markus Murschitz; Daniel Steininger; Gustavo Fernández Domínguez
Test datasets should contain many different challenging aspects so that the robustness and real-world applicability of algorithms can be assessed. In this work, we present a new test dataset for semantic and instance segmentation for the automotive domain. We have conducted a thorough risk analysis to identify situations and aspects that can reduce the output performance for these tasks. Based on this analysis we have designed our new dataset. Meta-information is supplied to mark which individual visual hazards are present in each test case. Furthermore, a new benchmark evaluation method is presented that uses the meta-information to calculate the robustness of a given algorithm with respect to the individual hazards. We show how this new approach allows for a more expressive characterization of algorithm robustness by comparing three baseline algorithms.
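One way the per-test-case meta-information can be turned into a per-hazard robustness figure is sketched below: relate the mean quality score on frames containing a hazard to the overall mean. The ratio definition and the data layout are illustrative assumptions, not the benchmark's exact formula.

```python
# Sketch: per-hazard robustness from per-frame scores and hazard meta-information.
from collections import defaultdict


def per_hazard_robustness(scores, hazards_per_frame):
    """scores: frame -> quality score (e.g. IoU); hazards_per_frame: frame -> set of hazards."""
    overall = sum(scores.values()) / len(scores)
    by_hazard = defaultdict(list)
    for frame, hazard_set in hazards_per_frame.items():
        for h in hazard_set:
            by_hazard[h].append(scores[frame])
    # Values below 1.0 indicate hazards that degrade the algorithm's performance.
    return {h: (sum(v) / len(v)) / overall for h, v in by_hazard.items()}
```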
Computer Vision and Pattern Recognition | 2017
Oliver Zendel; Katrin Honauer; Markus Murschitz; Martin Humenberger; Gustavo Fernández Domínguez
In recent years, a great number of datasets were published to train and evaluate computer vision (CV) algorithms. These valuable contributions helped to push CV solutions to a level where they can be used for safety-relevant applications, such as autonomous driving. However, major questions concerning quality and usefulness of test data for CV evaluation are still unanswered. Researchers and engineers try to cover all test cases by using as much test data as possible. In this paper, we propose a different solution for this challenge. We introduce a method for dataset analysis which builds upon an improved version of the CV-HAZOP checklist, a list of potential hazards within the CV domain. Picking stereo vision as an example, we provide an extensive survey of 28 datasets covering the last two decades. We create a tailored checklist and apply it to the datasets Middlebury, KITTI, Sintel, Freiburg, and HCI to present a thorough characterization and quantitative comparison. We confirm the usability of our checklist for identification of challenging stereo situations by applying nine state-of-the-art stereo matching algorithms on the analyzed datasets, showing that hazard frames correlate with difficult frames. We show that challenging datasets still allow a meaningful algorithm evaluation even for small subsets. Finally, we provide a list of missing test cases that are still not covered by current datasets as inspiration for researchers who want to participate in future dataset creation.
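The reported link between hazard frames and difficult frames can be checked with a simple statistic. A minimal sketch, assuming a binary per-frame hazard flag and a per-frame error value (e.g. a bad-pixel rate); point-biserial correlation is one reasonable choice, not necessarily the statistic used in the paper, and all numbers are illustrative.

```python
# Sketch: do frames annotated with hazards coincide with high stereo errors?
from scipy.stats import pointbiserialr

has_hazard = [1, 1, 0, 0, 1, 0, 0, 1]                          # hypothetical hazard flags
error_rate = [0.31, 0.27, 0.08, 0.11, 0.35, 0.09, 0.14, 0.29]  # hypothetical error rates
r, p = pointbiserialr(has_hazard, error_rate)
print(f"r={r:.2f}, p={p:.3f}")  # positive r: hazard frames tend to be harder
```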
Computer Vision and Pattern Recognition | 2017
Josef Maier; Martin Humenberger; Oliver Zendel; Markus Vincze
Feature matching quality strongly influences the accuracy of most computer vision tasks. This has led to impressive advances in keypoint detection, descriptor calculation, and feature matching itself. To compare different approaches and evaluate their quality, datasets from related tasks are used. Unfortunately, none of these datasets actually provides ground truth (GT) feature matches. Thus, matches can only be approximated due to repeatability errors of keypoint detectors and inaccuracies of the GT. In this paper, we introduce ground truth matches (GTM) for several well-known datasets. Based on the provided spatial ground truth, we automatically generate them using popular feature types. Currently, feature matching evaluation is typically performed using precision and recall. The introduced GTM additionally enable evaluation with accuracy and fall-out. The datasets were manually annotated, on the one hand to evaluate the precision and unambiguousness of the GTM, and on the other hand to determine the accuracy of the ground truth provided with the datasets. Using the GTM, we present an evaluation of multiple state-of-the-art keypoint-descriptor combinations as well as matching algorithms.
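For datasets whose spatial ground truth is a homography, deriving ground truth matches can be sketched as follows: project each keypoint from the first image into the second and accept the nearest keypoint within a small tolerance. The OpenCV calls are real, but the threshold and the overall recipe are illustrative, not the paper's exact GTM procedure (which also handles ambiguities and true negatives).

```python
# Sketch: ground-truth matches via a known homography between two images.
import cv2
import numpy as np


def ground_truth_matches(kp1, kp2, H, tol=2.5):
    """Return index pairs (i, j) where keypoint i of image 1 maps onto keypoint j of image 2."""
    pts1 = np.float32([k.pt for k in kp1]).reshape(-1, 1, 2)
    proj = cv2.perspectiveTransform(pts1, H).reshape(-1, 2)
    pts2 = np.float32([k.pt for k in kp2])
    gtm = []
    for i, p in enumerate(proj):
        d = np.linalg.norm(pts2 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < tol:
            gtm.append((i, j))
    return gtm
```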
International Journal of Computer Vision | 2017
Oliver Zendel; Markus Murschitz; Martin Humenberger; Wolfgang Herzner
Good test data is crucial for driving new developments in computer vision (CV), but two questions remain unanswered: which situations should be covered by the test data, and how much testing is enough to reach a conclusion? In this paper we propose a new answer to these questions using a standard procedure devised by the safety community to validate complex systems: the hazard and operability analysis (HAZOP). It is designed to systematically identify possible causes of system failure or performance loss. We introduce a generic CV model that creates the basis for the hazard analysis and—for the first time—apply an extensive HAZOP to the CV domain. The result is a publicly available checklist with more than 900 identified individual hazards. This checklist can be utilized to evaluate existing test datasets by quantifying the covered hazards. We evaluate our approach by first analyzing and annotating the popular stereo vision test datasets Middlebury and KITTI. Second, we demonstrate a clearly negative influence of the hazards in the checklist on the performance of six popular stereo matching algorithms. The presented approach is a useful tool to evaluate and improve test datasets and creates a common basis for future dataset designs.
Septentrio Conference Series | 2016
Michela Vignoli; Oliver Zendel; Matthias Schörghuber
The review-disseminate-assess cycle is a multifaceted process involving different stakeholders: researchers, publishers, research institutions and funders, private companies, industry, and citizens. The H2020 project OpenUP aspires to bring all these stakeholders into an open dialogue to consensually identify and spread the review-disseminate-assess mechanisms advancing evolving practices of RRI in an Open Science context. OpenUP will actively engage research communities and implement a series of hands-on pilots to validate OpenUP's proposed (open) peer review, innovative dissemination, and impact indicator frameworks. The pilots will be carried out in close cooperation with selected, dedicated research communities from four scientific areas: arts and humanities, social sciences, life sciences, and energy. This poster visualises OpenUP's pilot design, exemplified by two of the seven Open Science Pilots to be conducted by the project: 1) Open Peer Review for Conferences, and 2) Addressing and Reaching Businesses and the Public with Research Output. The first pilot will evaluate the feasibility and acceptance of open peer review in a conference setting. The specific implementation of the applied scheme will be determined by the results of the state-of-the-art study as well as the user questionnaire answers from other work packages of OpenUP. In comparison to the traditional approach (double-blind evaluation of submitted papers by reviewers assigned by the conference organisers), the new scheme should allow for a more open and fair process as well as give additional incentives to reviewers. The actual pilot study will be conducted at a medium-sized conference in consultation with the conference organisers. Follow-up questionnaires and interviews will show whether the stakeholders prefer the new process and can provide constructive feedback for improvements and policy decisions. The second pilot will test existing and potential alternative forms, formats, and channels of open science communication, and explore how the targeted audiences, in particular businesses and the public, can best be reached via these channels. In a preparatory phase the team will map open science communication formats and channels and their targeted audiences. The team will conduct a workshop to elicit the targeted stakeholders' requirements and expectations towards a useful and appealing communication of scientific content. This will be the basis of the second pilot presented here. It will actively involve one or more energy research community projects beyond the OpenUP consortium. The goals are to test the previously established communication standards and channels for the energy area and to evaluate the impact and resonance with the targeted audiences. By actively involving research communities and other relevant stakeholders in the pilots, OpenUP will not only evaluate the practicability of the identified open peer review, innovative dissemination, and impact measurement methods in particular settings. It also intends to create and disseminate success stories, best practices, and policy recommendations, which will help further communities to implement working Open Science approaches in their research evaluation and communication strategies.
International Conference on Consumer Electronics Berlin | 2014
Daniel Moldovan; Oliver Zendel; Christian Zinner
In this paper, we propose a practical system for detecting 3D volumetric intrusion into a predefined restricted area using depth images provided by a range camera. The system can be employed for the protection of valuable objects displayed in public areas, as well as for monitoring the space around private property assets. It defines a virtual 3D shield around the asset to be protected, thus delimiting the protected boundaries in all three dimensions. Experiments performed with both a passive stereo camera and IR depth sensors confirmed that the proposed method effectively confines intrusion detection to the volume around the monitored object.
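A minimal sketch of the underlying geometry: back-project the depth image with a pinhole model and flag an intrusion when enough 3D points fall inside the shield volume. The intrinsics, the axis-aligned box shield, and the point threshold are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: depth-based volumetric intrusion detection against a 3D shield box.
import numpy as np


def intrusion_detected(depth, fx, fy, cx, cy, shield_min, shield_max, min_points=50):
    """depth: HxW array in metres; shield_min/shield_max: (x, y, z) corners of the shield box."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                      # discard invalid (zero) depth readings
    inside = np.all((pts >= np.asarray(shield_min)) & (pts <= np.asarray(shield_max)), axis=1)
    return int(inside.sum()) >= min_points        # enough points breach the shield volume
```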