
Publication


Featured research published by Jörgen Ahlberg.


Systems, Man and Cybernetics | 2004

Fast and reliable active appearance model search for 3-D face tracking

Fadi Dornaika; Jörgen Ahlberg

This paper addresses the three-dimensional (3D) tracking of pose and animation of the human face in monocular image sequences using active appearance models. The major problem of the classical appearance-based adaptation is the high computational time resulting from the inclusion of a synthesis step in the iterative optimization. Whenever the dimension of the face space is large, real-time performance cannot be achieved. In this paper, we aim at designing a fast and stable active appearance model search for 3D face tracking. The main contribution is a search algorithm whose CPU-time is not dependent on the dimension of the face space. Using this algorithm, we show that both the CPU-time and the likelihood of inaccurate tracking are reduced. Experiments evaluating the effectiveness of the proposed algorithm are reported, as well as a comparison with other methods and tracking results on synthetic and real image sequences.
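The central claim above, a search whose per-iteration cost does not depend on the face-space dimension because no synthesis step runs inside the loop, can be illustrated with a minimal sketch. Everything here (the function names, the toy residual, the damped update matrix) is a hypothetical illustration, not the authors' implementation.

```python
# Hypothetical sketch: iterative model search driven by a fixed,
# precomputed update matrix. Each iteration costs one residual
# evaluation plus one matrix-vector product; no synthesis step
# is performed inside the loop.

def fixed_update_search(residual_fn, update_matrix, params, iterations=30):
    """Refine `params` by repeatedly applying a fixed update matrix.

    residual_fn(params) -> list of residual values
    update_matrix[i][j] -> fixed gain mapping residual j onto parameter i
    """
    for _ in range(iterations):
        r = residual_fn(params)
        # delta p = -R * r: the only per-iteration linear-algebra cost.
        params = [
            p - sum(update_matrix[i][j] * r[j] for j in range(len(r)))
            for i, p in enumerate(params)
        ]
    return params

# Toy problem: the residual is simply (params - target), and a damped
# diagonal gain makes the error shrink geometrically.
target = [1.0, -2.0]
residual = lambda p: [p[0] - target[0], p[1] - target[1]]
gain = [[0.5, 0.0], [0.0, 0.5]]
est = fixed_update_search(residual, gain, [0.0, 0.0])
```

On the toy problem the parameters converge to the target geometrically; the point of the sketch is only that the loop body contains no model synthesis.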


Pattern Recognition | 2014

Eye pupil localization with an ensemble of randomized trees

Nenad Markuš; Miroslav Frljak; Igor S. Pandžić; Jörgen Ahlberg; Robert Forchheimer

We describe a method for eye pupil localization based on an ensemble of randomized regression trees and use several publicly available datasets for its quantitative and qualitative evaluation. The method compares well with reported state-of-the-art and runs in real-time on hardware with limited processing power, such as mobile devices.

Highlights:
- A framework for eye pupil localization that compares well with state-of-the-art.
- Randomization during runtime improves performance.
- The developed system works in real-time on mobile devices.
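As a rough illustration of the ensemble principle (averaging the predictions of many small randomized trees, optionally subsampling which trees vote at runtime), here is a toy sketch. The depth-1 "trees" and all numbers are assumptions for illustration, not the published method.

```python
import random

# Toy sketch of ensemble prediction: each "tree" is a depth-1 stump
# comparing one intensity feature against a threshold; the ensemble
# output is the average of the per-tree regression outputs.

def make_stump(threshold, left_out, right_out):
    return lambda feature: left_out if feature < threshold else right_out

def ensemble_predict(trees, feature, subsample=None, rng=random):
    """Average the outputs of all trees, or of a random subset
    (runtime randomization) when `subsample` is given."""
    voters = trees if subsample is None else rng.sample(trees, subsample)
    return sum(t(feature) for t in voters) / len(voters)

# Three stumps assumed to predict a horizontal pupil offset in pixels.
trees = [make_stump(0.5, -1.0, 1.0),
         make_stump(0.4, -2.0, 2.0),
         make_stump(0.6, 0.0, 1.0)]
offset = ensemble_predict(trees, 0.7)  # all stumps vote "right of center"
```

Averaging keeps the per-query cost proportional to the number of (shallow) trees evaluated, which is why such ensembles can run in real time on weak hardware.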


International Conference on Computer Vision | 2001

Using the active appearance algorithm for face and facial feature tracking

Jörgen Ahlberg

This paper describes a system for tracking a face and its facial features in an input video sequence using the active appearance algorithm. The algorithm adapts a wireframe model to the face in each frame, and the adaptation parameters are converted to MPEG-4 facial animation parameters. The results are promising, and we conclude that this approach is worth pursuing further in our work toward a real-time model-based coder.


Image and Vision Computing | 2006

Fitting 3D face models for tracking and active appearance model training

Fadi Dornaika; Jörgen Ahlberg

In this paper, we consider fitting a 3D deformable face model to continuous video sequences for the tasks of tracking and training. We propose two appearance-based methods that only require a simple statistical facial texture model and do not require any information about an empirical or analytical gradient matrix, since the best search directions are estimated on the fly. The first method computes the fitting using a locally exhaustive and directed search where the 3D head pose and the facial actions are simultaneously estimated. The second method decouples the estimation of these parameters. It computes the 3D head pose using a robust feature-based pose estimator incorporating a facial texture consistency measure. Then, it estimates the facial actions with an exhaustive and directed search. Fitting and tracking experiments demonstrate the feasibility and usefulness of the developed methods. A performance evaluation also shows that the proposed methods can outperform the fitting based on an active appearance model search adopting a pre-computed gradient matrix. Although the proposed schemes are not as fast as the schemes adopting a directed continuous search, they can tackle many disadvantages associated with such approaches.
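The "locally exhaustive and directed search" idea above (probing each parameter in both directions and estimating the best search direction on the fly, with no precomputed gradient matrix) can be sketched as follows. The cost function and step schedule are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a locally exhaustive, directed search: the best
# direction is estimated on the fly by probing each parameter both ways,
# so no empirical or analytical gradient matrix is required.

def directed_search(cost, params, step=0.25, iterations=50):
    params = list(params)
    for _ in range(iterations):
        best_cost, best_params = cost(params), params
        for i in range(len(params)):
            for delta in (-step, step):      # probe both directions
                trial = list(params)
                trial[i] += delta
                c = cost(trial)
                if c < best_cost:
                    best_cost, best_params = c, trial
        if best_params is params:            # no improving move found:
            step /= 2                        # refine the local search
        params = best_params
    return params

# Toy stand-in for a texture-consistency cost, minimized at (1, -1).
cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 1.0) ** 2
fit = directed_search(cost, [0.0, 0.0])
```

Halving the step when no probe improves the cost is one simple way to trade the speed of a continuous directed search for robustness to bad gradient estimates.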


International Journal of Imaging Systems and Technology | 2003

Face tracking for model-based coding and face animation

Jörgen Ahlberg; Robert Forchheimer

We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real-time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness are discussed, and the system is evaluated in terms of accuracy.


International Conference on Information Fusion | 2010

Fusion of acoustic and optical sensor data for automatic fight detection in urban environments

Maria Andersson; Stavros Ntalampiras; Todor Ganchev; Joakim Rydell; Jörgen Ahlberg; Nikos Fakotakis

We propose a two-stage method for detection of abnormal behaviours, such as aggression and fights in urban environments, which is applicable to operator support in surveillance applications. The proposed method is based on fusion of evidence from audio and optical sensors. In the first stage, a number of modality-specific detectors perform recognition of low-level events. Their outputs act as input to the second stage, which performs fusion and disambiguation of the first-stage detections. Experimental evaluation on scenes from the outdoor part of the PROMETHEUS database demonstrated the practical viability of the proposed approach. We report a fight detection rate of 81% when both audio and optical information are used. Reduced performance is observed when evidence from audio data is excluded from the fusion process. Finally, when only evidence from one camera is used for detecting fights, the recognition performance is poor.
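A minimal sketch of the two-stage structure, assuming a simple weighted-sum fusion rule; the detector names, weights, and threshold are invented for illustration and are not taken from the paper.

```python
# Stage 1 (assumed): modality-specific detectors emit low-level event
# scores in [0, 1]. Stage 2: fuse the available scores into a single
# fight / no-fight decision.

def fuse(detections, weights, threshold=0.5):
    """Weighted-sum fusion, normalized over the modalities present."""
    total = sum(weights[m] * score for m, score in detections.items())
    norm = sum(weights[m] for m in detections)
    return total / norm >= threshold

weights = {"audio_scream": 0.4, "audio_impact": 0.2, "optical_motion": 0.4}

# Both modalities available: strong combined evidence.
both = fuse({"audio_scream": 0.9, "audio_impact": 0.7,
             "optical_motion": 0.8}, weights)
# Audio excluded: a weak optical score alone falls below the threshold.
video_only = fuse({"optical_motion": 0.45}, weights)
```

Normalizing over the modalities actually present lets the same rule degrade gracefully when a sensor stream is missing, mirroring the reduced-performance cases the abstract reports.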


International Geoscience and Remote Sensing Symposium | 2011

A shadow detection method for remote sensing images using VHR hyperspectral and LIDAR data

Gustav Tolt; Michal Shimoni; Jörgen Ahlberg

In this paper, a shadow detection method combining hyperspectral and LIDAR data analysis is presented. First, a rough shadow image is computed through line-of-sight analysis on a Digital Surface Model (DSM), using an estimate of the position of the sun at the time of image acquisition. Then, large shadow and non-shadow areas in that image are detected and used for training a supervised classifier (a Support Vector Machine, SVM) that classifies every pixel in the hyperspectral image as shadow or non-shadow. Finally, small holes are filled through image morphological analysis. The method was tested on data including a 24 band hyperspectral image in the VIS/NIR domain (50 cm spatial resolution) and a DSM of 25 cm resolution. The results were in good accordance with visual interpretation. As the line-of-sight analysis step is only used for training, geometric mismatches (about 2 m) between LIDAR and hyperspectral data did not affect the results significantly, nor did uncertainties regarding the position of the sun.
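The first step, computing a rough shadow image through line-of-sight analysis on the DSM, can be sketched in a few lines. The grid, sun geometry, and all numbers below are hypothetical, and the subsequent SVM training and morphological hole filling are omitted.

```python
import math

# Hypothetical sketch of line-of-sight shadow casting on a DSM: a cell
# is marked shadow if any cell along the ray toward the sun rises above
# the (climbing) height of the sun ray.

def rough_shadow_mask(dsm, sun_step, sun_elevation_deg):
    """dsm: 2D list of heights; sun_step: integer (drow, dcol) toward
    the sun; sun_elevation_deg: sun elevation above the horizon."""
    rows, cols = len(dsm), len(dsm[0])
    rise = math.hypot(*sun_step) * math.tan(math.radians(sun_elevation_deg))
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ray_h, rr, cc = dsm[r][c], r, c
            while True:
                rr, cc = rr + sun_step[0], cc + sun_step[1]
                ray_h += rise                    # sun ray climbs per step
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break
                if dsm[rr][cc] > ray_h:          # terrain blocks the sun
                    mask[r][c] = True
                    break
    return mask

# Toy DSM: a 5 m wall in column 2; sun low in the west (toward -columns),
# so cells east of the wall fall in its shadow.
dsm = [[0, 0, 5, 0, 0]]
mask = rough_shadow_mask(dsm, (0, -1), 45)
```

Because such a mask is only used to harvest training pixels for the classifier, moderate geometric mismatch between the DSM and the hyperspectral image is tolerable, consistent with the abstract's observation about the 2 m misregistration.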


International Conference on Computer Vision | 2015

The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results

Michael Felsberg; Amanda Berg; Gustav Häger; Jörgen Ahlberg; Matej Kristan; Jiri Matas; Aleš Leonardis; Luka Cehovin; Gustavo Fernández; Tomas Vojir; Georg Nebehay; Roman P. Pflugfelder

The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the following novelties: (i) the newly collected LTIR (Linköping TIR) dataset is used, (ii) the VOT2013 attributes are adapted to TIR data, (iii) the evaluation is performed using insights gained during VOT2013 and VOT2014 and is similar to VOT2015.


Advanced Video and Signal Based Surveillance | 2015

A Thermal Object Tracking Benchmark

Amanda Berg; Jörgen Ahlberg; Michael Felsberg

Short-term single-object (STSO) tracking in thermal images is a challenging problem relevant in a growing number of applications. In order to evaluate STSO tracking algorithms on visual imagery, there are de facto standard benchmarks. However, we argue that tracking in thermal imagery is different from tracking in visual imagery, and that a separate benchmark is needed. The available thermal infrared datasets are few and the existing ones are not challenging for modern tracking algorithms. Therefore, we hereby propose a thermal infrared benchmark according to the Visual Object Tracking (VOT) protocol for evaluation of STSO tracking methods. The benchmark includes the new LTIR dataset containing 20 thermal image sequences which have been collected from multiple sources and annotated in the format used in the VOT Challenge. In addition, we show that the ranking of different tracking principles differs between the visual and thermal benchmarks, confirming the need for the new benchmark.


International SOI Conference | 2003

Efficient active appearance model for real-time head and facial feature tracking

Fadi Dornaika; Jörgen Ahlberg

We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. The classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and a method comparison demonstrate the robustness and superior performance of the developed framework.

Collaboration


Dive into Jörgen Ahlberg's collaborations.

Top Co-Authors

Fadi Dornaika

University of the Basque Country

Ingmar Renhorn

Swedish Defence Research Agency

Lena M. Klasen

Swedish Defence Research Agency

Niclas Wadströmer

Swedish Defence Research Agency
