
Publication


Featured research published by Esa Rahtu.


Advanced Concepts for Intelligent Vision Systems | 2017

Relative Camera Pose Estimation Using Convolutional Neural Networks

Iaroslav Melekhov; Juha Ylioinas; Juho Kannala; Esa Rahtu

This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.
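The regression targets for such a network, the relative rotation and translation between the two cameras, can be derived from a pair of absolute camera poses. A minimal sketch with NumPy (assuming the common world-to-camera convention x_cam = R @ x_world + t; the function name is ours, not from the paper):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Relative pose mapping points from camera 1's frame to camera 2's.

    Assumes world-to-camera convention: x_cam = R @ x_world + t.
    """
    R_rel = R2 @ R1.T          # rotation from frame 1 to frame 2
    t_rel = t2 - R_rel @ t1    # translation expressed in frame 2
    return R_rel, t_rel
```

A point mapped through camera 1 and then through the relative pose lands at the same place as mapping it directly through camera 2, which is the consistency the training labels rely on.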


Neurocomputing | 2017

Exploiting inter-image similarity and ensemble of extreme learners for fixation prediction using deep features

Hamed Rezazadegan Tavakoli; Ali Borji; Jorma Laaksonen; Esa Rahtu

This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and an ensemble of Extreme Learning Machines (ELM). The framework is inspired by two observations: (1) the contextual information of a scene, along with low-level visual cues, modulates attention, and (2) scene memorability influences eye movement patterns when a scene resembles a former visual experience. Motivated by these observations, we develop a framework that estimates the saliency of a given image using an ensemble of extreme learners, each trained on an image similar to the input. That is, after retrieving a set of similar images for a given image, a saliency predictor is learnt from each image in the retrieved set using an ELM, resulting in an ensemble. The saliency of the given image is then measured as the mean of the saliency values predicted by the ensemble members.
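The ensemble itself is lightweight: an Extreme Learning Machine fixes random hidden-layer weights and solves only the linear output layer by least squares, and the final prediction is the mean over members. A minimal NumPy sketch of that idea (feature extraction and image retrieval omitted; names and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=50):
    # Random input weights and biases stay fixed; only the
    # output weights beta are solved, via least squares.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def ensemble_predict(models, X):
    # Mean of the member predictions, mirroring the paper's
    # mean-over-ensemble saliency estimate.
    return np.mean([predict_elm(m, X) for m in models], axis=0)
```

Because each member draws its own random hidden weights, the members differ even on identical training data; in the paper each member is additionally trained on a different retrieved image.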


Scientific Reports | 2018

Automatic knee osteoarthritis diagnosis from plain radiographs: a deep learning-based approach

Aleksei Tiulpin; Jérôme Thevenot; Esa Rahtu; Petri Lehenkari; Simo Saarakkala

Knee osteoarthritis (OA) is the most common musculoskeletal disorder. OA diagnosis is currently conducted by assessing symptoms and evaluating plain radiographs, but this process suffers from subjectivity. In this study, we present a new transparent computer-aided diagnosis method based on a Deep Siamese Convolutional Neural Network to automatically score knee OA severity according to the Kellgren-Lawrence grading scale. We trained our method using data solely from the Multicenter Osteoarthritis Study and validated it on 3,000 randomly selected subjects (5,960 knees) from the Osteoarthritis Initiative dataset. Our method yielded a quadratic Kappa coefficient of 0.83 and an average multiclass accuracy of 66.71% compared to the annotations given by a committee of clinical experts. Here, we also report a radiological OA diagnosis area under the ROC curve of 0.93. In addition, we present attention maps highlighting the radiological features affecting the network's decision. Such information makes the decision process transparent for the practitioner, which builds better trust toward automatic methods. We believe that our model is useful for clinical decision making and for OA research; therefore, we openly release our training codes and the data set created in this study.
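The quadratic Kappa used for evaluation weights each disagreement by the squared distance between grades, so a near-miss prediction is penalized far less than a distant one. A minimal sketch of that metric (assuming Kellgren-Lawrence grades 0-4, hence five classes; a generic implementation, not the authors' evaluation code):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    # Observed agreement as a confusion matrix.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: zero on the diagonal,
    # growing with the squared grade distance.
    i, j = np.indices((n_classes, n_classes))
    W = (i - j) ** 2 / (n_classes - 1) ** 2
    # Expected agreement under independent marginals.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement gives a Kappa of 1, while chance-level agreement gives roughly 0; the paper's 0.83 indicates strong agreement with the expert committee.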


International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | 2018

Real-time Human Pose Estimation with Convolutional Neural Networks

Marko Linna; Juho Kannala; Esa Rahtu

In this paper, we present a method for real-time multi-person human pose estimation from video using convolutional neural networks. Our method is aimed at use-case-specific applications, where good accuracy is essential and variation in backgrounds and poses is limited. This enables us to use a generic network architecture that is both accurate and fast. We divide the problem into two phases: (1) pre-training and (2) fine-tuning. In pre-training, the network is trained with highly diverse input data from publicly available datasets, while in fine-tuning we train with application-specific data, which we record with a Kinect. Our method differs from most state-of-the-art methods in that we consider the whole system, including the person detector, the pose estimator, and an automatic way to record application-specific training material for fine-tuning. Our method is considerably faster than many state-of-the-art methods and can be thought of as a replacement for Kinect in restricted environments. It can be used for tasks such as gesture control, games, person tracking, action recognition, and action tracking. We achieved an accuracy of 96.8% (PCK@0.2) with application-specific data.


European Conference on Computer Vision | 2018

ADVIO: An Authentic Dataset for Visual-Inertial Odometry

Santiago Cortes; Arno Solin; Esa Rahtu; Juho Kannala

The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences between published methods. Existing datasets either lack a full six-degree-of-freedom ground truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone, together with a high-quality ground-truth track. We also compare the resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The datasets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and a metro station.


International Conference on Information Fusion | 2018

Inertial Odometry on Handheld Smartphones

Arno Solin; Santiago Cortes; Esa Rahtu; Juho Kannala


International Conference on Computer Vision | 2017

Image-Based Localization Using Hourglass Networks

Iaroslav Melekhov; Juha Ylioinas; Juho Kannala; Esa Rahtu


IEEE Transactions on Multimedia | 2018

Summarization of User-Generated Sports Video by Using Deep Action Recognition Features

Antonio Tejero-de-Pablos; Yuta Nakashima; Tomokazu Sato; Naokazu Yokoya; Marko Linna; Esa Rahtu


Archive | 2017

Magnetic Positioning Management

Janne Haverinen; Esa Rahtu; Juho Kannala


Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | 2016

Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

Iman Alikhani; Hamed Rezazadegan Tavakoli; Esa Rahtu; Jorma Laaksonen

Collaboration


Dive into Esa Rahtu's collaborations.

Top Co-Authors

Subhransu Maji

University of Massachusetts Amherst
