Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Martin Godec is active.

Publication


Featured research published by Martin Godec.


International Conference on Computer Vision | 2009

On-line Random Forests

Amir Saffari; Christian Leistner; Jakob Santner; Martin Godec; Horst Bischof

Random Forests (RFs) are frequently used in many computer vision and machine learning applications. Their popularity is mainly driven by their high computational efficiency during both training and evaluation while achieving state-of-the-art results. However, in most applications RFs are used off-line. This limits their usability for many practical problems, for instance, when training data arrives sequentially or the underlying distribution is continuously changing. In this paper, we propose a novel on-line random forest algorithm. We combine ideas from on-line bagging and extremely randomized forests, and propose an on-line decision-tree growing procedure. Additionally, we add a temporal weighting scheme that adaptively discards trees based on their out-of-bag error in given time intervals and consequently grows new trees. Experiments on common machine learning data sets show that our algorithm converges to the performance of the off-line RF. Additionally, we conduct experiments for visual tracking, where we demonstrate real-time state-of-the-art performance on well-known scenarios and show good performance in the case of occlusions and appearance changes, outperforming trackers based on on-line boosting. Finally, we demonstrate the usability of on-line RFs on the task of interactive real-time segmentation.
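
To make the on-line update loop concrete, below is a minimal Python sketch of the two mechanisms highlighted in the abstract: on-line bagging via Poisson(1) sample weights and temporal replacement of trees with high out-of-bag error. All class names and defaults are illustrative, and a depth-1 stump stands in for the paper's on-line grown trees.

```python
import numpy as np

class OnlineStump:
    """Depth-1 stand-in for an on-line grown tree: splits on one random
    feature at a random threshold (assumes roughly standardized features)
    and keeps class counts per side."""
    def __init__(self, n_features, n_classes, rng):
        self.f = rng.integers(n_features)
        self.t = rng.normal()
        self.counts = np.ones((2, n_classes))   # Laplace-smoothed class counts
        self.oob_wrong, self.oob_seen = 1.0, 2.0

    def side(self, x):
        return int(x[self.f] > self.t)

    def update(self, x, y, weight=1.0):
        self.counts[self.side(x), y] += weight

    def predict(self, x):
        return int(np.argmax(self.counts[self.side(x)]))

    def update_oob(self, x, y):
        self.oob_seen += 1.0
        self.oob_wrong += float(self.predict(x) != y)

    def oob_error(self):
        return self.oob_wrong / self.oob_seen


class OnlineRandomForest:
    """Sketch of the on-line update loop: on-line bagging via Poisson(1)
    weights plus periodic replacement of the trees with the worst
    out-of-bag error."""
    def __init__(self, n_features, n_classes, n_trees=25, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_features, self.n_classes = n_features, n_classes
        self.trees = [OnlineStump(n_features, n_classes, self.rng)
                      for _ in range(n_trees)]

    def partial_fit(self, x, y):
        for tree in self.trees:
            k = self.rng.poisson(1.0)           # on-line bagging
            if k > 0:
                tree.update(x, y, weight=k)
            else:
                tree.update_oob(x, y)           # sample is out-of-bag for this tree

    def temporal_replacement(self, fraction=0.1):
        # periodically discard the worst trees (by OOB error) and grow new ones
        self.trees.sort(key=lambda t: t.oob_error())
        n_drop = max(1, int(fraction * len(self.trees)))
        self.trees[-n_drop:] = [OnlineStump(self.n_features, self.n_classes, self.rng)
                                for _ in range(n_drop)]

    def predict(self, x):
        votes = np.zeros(self.n_classes)
        for tree in self.trees:
            votes[tree.predict(x)] += 1
        return int(np.argmax(votes))            # majority vote over the ensemble
```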


International Conference on Computer Vision | 2011

Hough-based tracking of non-rigid objects

Martin Godec; Peter M. Roth; Horst Bischof

Online learning has been shown to be successful for tracking previously unknown objects. However, most approaches are limited to a bounding-box representation with a fixed aspect ratio. Thus, they provide a less accurate foreground/background separation and cannot handle highly non-rigid and articulated objects. This, in turn, increases the amount of noise introduced during online self-training.


Computer Vision and Pattern Recognition | 2010

Online multi-class LPBoost

Amir Saffari; Martin Godec; Thomas Pock; Christian Leistner; Horst Bischof

Online boosting is one of the most successful online learning algorithms in computer vision. While many challenging online learning problems are inherently multi-class, online boosting and its variants are only able to solve binary tasks. In this paper, we present Online Multi-Class LPBoost (OMCLP), which is directly applicable to multi-class problems. From a theoretical point of view, our algorithm tries to maximize the multi-class soft margin of the samples. In order to solve the LP problem in online settings, we perform an efficient variant of online convex programming, which is based on primal-dual gradient descent-ascent update strategies. We conduct an extensive set of experiments on machine learning benchmark datasets as well as on the Caltech 101 category recognition dataset, and show that our method is able to outperform other online multi-class methods. We also apply our method to tracking, where we present an intuitive way to convert the binary tracking-by-detection problem into a multi-class problem in which background patterns that are similar to the target class become virtual classes. Applying our novel model, we match or outperform state-of-the-art results on benchmark tracking videos.
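
As a rough illustration of the objective only, the sketch below performs online sub-gradient steps on a multi-class soft-margin (Crammer-Singer hinge) loss over a fixed pool of weak-learner responses. This is an assumption-laden simplification: the paper solves the actual LP with a primal-dual gradient descent-ascent scheme, which is not reproduced here, and the regularization term is illustrative.

```python
import numpy as np

def multiclass_hinge_grad(W, h, y):
    """Sub-gradient of the multi-class soft-margin (Crammer-Singer hinge)
    loss for one sample. W: (n_classes, n_weak) weights, h: weak-learner
    responses, y: true class index."""
    scores = W @ h
    margins = scores - scores[y] + 1.0
    margins[y] = 0.0
    r = int(np.argmax(margins))          # most violating competitor class
    grad = np.zeros_like(W)
    if margins[r] > 0:
        grad[r] += h
        grad[y] -= h
    return grad

def online_multiclass_soft_margin(stream, n_classes, n_weak, lr=0.1, reg=1e-3):
    """Simplified online optimisation of a multi-class soft margin over a
    fixed pool of weak-learner responses. A small L2 term keeps the weights
    bounded (the paper's LP instead constrains the weights explicitly)."""
    W = np.zeros((n_classes, n_weak))    # one weight row per class
    for h, y in stream:                  # h: vector of weak responses in [-1, 1]
        W -= lr * (multiclass_hinge_grad(W, h, y) + reg * W)
    return W
```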


International Conference on Pattern Recognition | 2010

On-Line Random Naive Bayes for Tracking

Martin Godec; Christian Leistner; Amir Saffari; Horst Bischof

Randomized learning methods (i.e., Forests or Ferns) have shown excellent capabilities for various computer vision applications. However, it has been shown that the tree structure in Forests can be replaced by even simpler structures, e.g., Random Naive Bayes classifiers, yielding similar performance. The goal of this paper is to benefit from these findings to develop an efficient on-line learner. Based on the principles of on-line Random Forests, we adapt the Random Naive Bayes classifier to the on-line domain. For that purpose, we propose to use on-line histograms as weak learners, which yield much better performance than simple decision stumps. Experimentally, we show that the approach is applicable to incremental learning on machine learning datasets. Additionally, we propose an IIR-filter-like forgetting function for the weak learners to enable adaptivity, and evaluate our classifier on the task of tracking by detection.
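
A minimal sketch of such a classifier, assuming feature values in [0, 1): per-class on-line histograms over a random feature subset, combined naively, with an exponential (IIR-like) forgetting factor for adaptivity. All names and defaults are illustrative, not taken from the paper.

```python
import numpy as np

class OnlineHistogramNB:
    """Random-Naive-Bayes-style on-line classifier: per-class histograms
    over a random subset of features, combined under a naive independence
    assumption, with exponential forgetting for adaptivity."""
    def __init__(self, n_features, n_classes, n_bins=16, n_subset=10,
                 forget=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.features = rng.choice(n_features, size=n_subset, replace=False)
        self.hist = np.ones((n_subset, n_classes, n_bins))  # Laplace prior
        self.n_bins = n_bins
        self.forget = forget

    def _bins(self, x):
        # map each selected feature value in [0, 1) to a histogram bin
        return np.clip((x[self.features] * self.n_bins).astype(int),
                       0, self.n_bins - 1)

    def update(self, x, y):
        self.hist *= self.forget          # IIR-like forgetting of old evidence
        self.hist[np.arange(len(self.features)), y, self._bins(x)] += 1.0

    def predict(self, x):
        b = self._bins(x)
        p = self.hist[np.arange(len(self.features)), :, b]   # (subset, classes)
        p = p / p.sum(axis=1, keepdims=True)
        return int(np.argmax(np.log(p).sum(axis=0)))          # naive combination
```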


Computer Vision and Pattern Recognition | 2011

Improving classifiers with unlabeled weakly-related videos

Christian Leistner; Martin Godec; Samuel Schulter; Amir Saffari; Manuel Werlberger; Horst Bischof

Current state-of-the-art object classification systems are trained using large amounts of hand-labeled images. In this paper, we present an approach that shows how to use unlabeled video sequences, containing object categories only weakly related to the target class, to learn better classifiers for tracking and detection. The underlying idea is to exploit the space-time consistency of moving objects to learn classifiers that are robust to local transformations. In particular, we use dense optical flow to find moving objects in videos in order to train part-based random forests that are insensitive to natural transformations. Our method, which is called Video Forests, can be used in two settings: first, labeled training data can be regularized to force the trained classifier to generalize better towards small local transformations. Second, as part of a tracking-by-detection approach, it can be used to train a general codebook solely on pair-wise data that can then be applied to tracking of instances of a priori unknown object categories. In the experimental part, we show on benchmark datasets for both tracking and detection that incorporating unlabeled videos into the learning of visual classifiers leads to improved results.
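
The first stage of this pipeline (finding moving regions via dense optical flow) can be sketched with off-the-shelf tools. The snippet below uses OpenCV's Farneback flow as a readily available stand-in for the dense flow used in the paper and thresholds the flow magnitude to obtain a moving-object mask; all parameters are illustrative defaults, not the settings from the paper.

```python
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, mag_thresh=1.0):
    """Return a boolean mask of moving pixels between two consecutive
    8-bit grayscale frames, from which unlabeled training patches could
    be cropped."""
    # positional args: pyr_scale, levels, winsize, iterations,
    #                  poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel flow length
    return magnitude > mag_thresh              # True where motion is significant
```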


DAGM Conference on Pattern Recognition | 2010

On-line multi-view forests for tracking

Christian Leistner; Martin Godec; Amir Saffari; Horst Bischof

A successful approach to tracking is to on-line learn discriminative classifiers for the target objects. Although these tracking-by-detection approaches are usually fast and accurate, they easily drift in the case of putative and self-enforced wrong updates. Recent work has shown that classifier-based trackers can be significantly stabilized by applying semi-supervised instead of supervised learning methods. In this paper, we propose a novel on-line multi-view learning algorithm based on random forests. The main idea of our approach is to incorporate multi-view learning inside random forests and to update each tree with individual label estimates for the unlabeled data. Our method is fast, easy to implement, benefits from parallel computing architectures, and inherently exploits multiple views for learning from unlabeled data. In the tracking experiments, we outperform state-of-the-art methods based on boosting and random forests.


IEEE Intelligent Systems | 2010

Autonomous Audio-Supported Learning of Visual Classifiers for Traffic Monitoring

Horst Bischof; Martin Godec; Christian Leistner; Bernhard Rinner; Andreas Starzacher

In this paper, the proposed autonomous self-learning framework uses acoustic detection and classification of vehicles to generate scene-adaptive visual vehicle classifiers without the need to hand-label any video data.


Computer Vision and Pattern Recognition | 2010

Context-driven clustering by multi-class classification in an active learning framework

Martin Godec; Sabine Sternig; Peter M. Roth; Horst Bischof

Tracking and detection of objects often require complex models to cope with the large intra-class variability of both the foreground and the background class. In this work, we reduce the complexity of a binary classification problem by a context-driven approach. The main idea is to use a hidden multi-class representation to capture multi-modalities in the data while finally providing a binary classifier. We introduce virtual classes generated by context-driven clustering, which are updated using an active learning strategy. By further using an on-line learner, the classifier can easily be adapted to changing environmental conditions. Moreover, by adding further virtual classes, more complex scenarios can be handled. We demonstrate the approach for both tracking and detection on different scenarios, reaching state-of-the-art results.


Computer Vision and Pattern Recognition | 2010

TransientBoost: On-line boosting with transient data

Sabine Sternig; Martin Godec; Peter M. Roth; Horst Bischof

For on-line learning algorithms, which are applied in many vision tasks such as detection or tracking, robust integration of unlabeled samples is a crucial point. Various strategies such as self-training, semi-supervised learning, and multiple-instance learning have been proposed. However, these methods are either too adaptive, which causes drifting, or biased by a prior, which hinders the incorporation of new (orthogonal) information. Therefore, we propose a new on-line learning algorithm (TransientBoost) that is highly adaptive but still robust. This is realized by using an internal multi-class representation and modeling reliable and unreliable data in separate classes. Unreliable data is considered transient; hence we use highly adaptive learning parameters to adapt to fast changes in the scene while errors fade out quickly. In contrast, the reliable data is preserved completely and not harmed by wrong updates. We demonstrate our algorithm on two different tasks, i.e., object detection and object tracking, showing that we can handle typical problems considerably better than existing approaches. To demonstrate the stability and robustness, we show long-term experiments for both tasks.
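
A minimal sketch of the reliable/transient split described above, with a toy per-class prototype model standing in for the paper's on-line boosting classifier: self-labeled (unreliable) data lands in transient classes whose evidence decays quickly, while hand-labeled (reliable) data accumulates in stable classes. Class names, the forgetting factor, and the prototype model are all illustrative assumptions.

```python
import numpy as np

class TransientMultiClassModel:
    """Internal multi-class model with reliable and transient classes.
    Transient classes use a strong forgetting factor so that wrong
    self-updates fade out fast; reliable classes are never decayed."""
    RELIABLE_POS, RELIABLE_NEG, TRANSIENT_POS, TRANSIENT_NEG = range(4)

    def __init__(self, dim, transient_forget=0.9):
        self.proto = np.zeros((4, dim))   # one running prototype per class
        self.count = np.zeros(4)
        self.transient_forget = transient_forget

    def update(self, x, cls):
        if cls in (self.TRANSIENT_POS, self.TRANSIENT_NEG):
            # highly adaptive: old transient evidence decays before the update
            self.proto[cls] *= self.transient_forget
            self.count[cls] *= self.transient_forget
        self.proto[cls] += x
        self.count[cls] += 1.0

    def predict_binary(self, x):
        # binary decision: nearest class prototype, mapped back to
        # foreground (positive) vs. background (negative)
        means = self.proto / np.maximum(self.count, 1e-9)[:, None]
        cls = int(np.argmin(np.linalg.norm(means - x, axis=1)))
        return cls in (self.RELIABLE_POS, self.TRANSIENT_POS)
```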


Advanced Video and Signal Based Surveillance | 2010

Audio-Visual Co-Training for Vehicle Classification

Martin Godec; Christian Leistner; Horst Bischof; Andreas Starzacher; Bernhard Rinner

In this paper, we introduce a fully autonomous vehicle classification system that continuously learns from large amounts of unlabeled data. For that purpose, we propose a novel on-line co-training method based on visual and acoustic information. Our system does not need complicated microphone arrays or video calibration and automatically adapts to specific traffic scenes. These specialized detectors are more accurate and more compact than general classifiers, which allows for light-weight usage in low-cost and portable embedded systems. Hence, we implemented our system on an off-the-shelf embedded platform. In the experimental part, we show that the proposed method is able to cover the desired task and outperforms single-cue systems. Furthermore, our co-training framework minimizes the labeling effort without degrading the overall system performance.
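
The co-training loop can be sketched as below, with a toy running-centroid classifier per modality standing in for the on-line detectors used in the paper; the confidence threshold and feature inputs are illustrative assumptions rather than values from the paper.

```python
import numpy as np

class OnlineCentroidClassifier:
    """Toy per-modality classifier (running class centroids); a stand-in
    for the on-line learned audio and visual detectors."""
    def __init__(self, dim, n_classes=2):
        self.sums = np.zeros((n_classes, dim))
        self.counts = np.ones(n_classes)

    def update(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1.0

    def predict_proba(self, x):
        # softmax over negative distances to the class centroids
        d = np.linalg.norm(self.sums / self.counts[:, None] - x, axis=1)
        s = np.exp(-d)
        return s / s.sum()


def cotrain_step(audio_clf, visual_clf, audio_feat, visual_feat, thresh=0.8):
    """One co-training update on an unlabeled detection: whichever modality
    is confident labels the sample for the *other* modality."""
    pa = audio_clf.predict_proba(audio_feat)
    pv = visual_clf.predict_proba(visual_feat)
    if pa.max() >= thresh:
        visual_clf.update(visual_feat, int(pa.argmax()))
    if pv.max() >= thresh:
        audio_clf.update(audio_feat, int(pv.argmax()))
```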

Collaboration


Dive into Martin Godec's collaborations.

Top Co-Authors

Horst Bischof (Graz University of Technology)
Christian Leistner (Graz University of Technology)
Amir Saffari (Graz University of Technology)
Peter M. Roth (Graz University of Technology)
Bernhard Rinner (Alpen-Adria-Universität Klagenfurt)
Sabine Sternig (Graz University of Technology)
Arnold Maier (Graz University of Technology)
Jakob Santner (Graz University of Technology)
Manuel Werlberger (Graz University of Technology)