Vasileios Zografos
Linköping University
Publications
Featured research published by Vasileios Zografos.
Computer Vision and Pattern Recognition | 2013
Vasileios Zografos; Liam F. Ellis; Rudolf Mester
We present a novel method for clustering data drawn from a union of arbitrary-dimensional subspaces, called Discriminative Subspace Clustering (DiSC). DiSC solves the subspace clustering problem by using a quadratic classifier trained from unlabeled data (clustering by classification). We generate labels by exploiting the locality of points from the same subspace and a basic affinity criterion. A number of classifiers are then diversely trained from different partitions of the data, and their results are combined in an ensemble to obtain the final clustering result. We have tested our method on 4 challenging datasets and compared against 8 state-of-the-art methods from the literature. Our results show that DiSC is a very strong performer in both accuracy and robustness, while also having low computational complexity.
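The clustering-by-classification idea can be illustrated with a minimal sketch: pseudo-labels from a locality-based affinity, a quadratic classifier trained on random partitions of the data, and an ensemble combination of the predictions. This is an assumed simplification in Python/scikit-learn, not the authors' implementation; all parameter values are illustrative.

    # Minimal sketch of "clustering by classification" (assumed simplification of DiSC).
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.neighbors import kneighbors_graph

    def disc_like_clustering(X, n_clusters, n_rounds=10, subset_frac=0.7, seed=0):
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        # Locality-based affinity: symmetrised k-nearest-neighbour graph.
        knn = kneighbors_graph(X, n_neighbors=10, include_self=False)
        affinity = 0.5 * (knn + knn.T).toarray()
        base_labels = SpectralClustering(n_clusters=n_clusters, affinity='precomputed',
                                         random_state=seed).fit_predict(affinity)
        coassoc = np.zeros((n, n))
        for _ in range(n_rounds):
            idx = rng.choice(n, size=int(subset_frac * n), replace=False)
            clf = QuadraticDiscriminantAnalysis(reg_param=0.1)
            clf.fit(X[idx], base_labels[idx])          # train on one partition
            pred = clf.predict(X)                      # classify every point
            coassoc += (pred[:, None] == pred[None, :])
        # Ensemble combination: cluster the co-association matrix.
        return SpectralClustering(n_clusters=n_clusters, affinity='precomputed',
                                  random_state=seed).fit_predict(coassoc / n_rounds)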
British Machine Vision Conference | 2011
Vasileios Zografos; Klas Nordberg
We introduce a simple and efficient procedure for the segmentation of rigidly moving objects imaged under an affine camera model. For this purpose we revisit the theory of “linear combination of views” (LCV), proposed by Ullman and Basri [20], which states that the set of 2D views of an object undergoing 3D rigid transformations is embedded in a low-dimensional linear subspace spanned by a small number of basis views. Our work shows that one may use this theory for motion segmentation and cluster the trajectories of 3D objects using only two 2D basis views. We therefore propose a practical motion segmentation method, built around LCV, that is very simple to implement and use, and in addition is very fast, making it well suited for real-time SfM and tracking applications. We have experimented on real image sequences, where we show good segmentation results, comparable to the state of the art in the literature. When computational complexity is also considered, our proposed method is one of the best performers in combined speed and accuracy.
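A minimal sketch of the LCV fitting residual that such a segmentation can build on, assuming an affine camera and trajectories stacked as a NumPy array; the choice of basis frames and the clustering loop around it are illustrative, not the paper's exact algorithm.

    # LCV residual: image coordinates in any frame are modelled as a linear
    # combination of the coordinates in two basis views (plus a constant).
    import numpy as np

    def lcv_residuals(trajs, basis=(0, -1)):
        """trajs: array (P, F, 2) of P point trajectories over F frames."""
        P, F, _ = trajs.shape
        b1, b2 = trajs[:, basis[0], :], trajs[:, basis[1], :]
        A = np.hstack([np.ones((P, 1)), b1, b2])       # basis views + constant, (P, 5)
        res = np.zeros(P)
        for f in range(F):
            target = trajs[:, f, :]                    # coordinates in frame f, (P, 2)
            coeff, *_ = np.linalg.lstsq(A, target, rcond=None)
            res += np.linalg.norm(target - A @ coeff, axis=1)
        return res / F    # small mean residual = consistent with one rigid motion

    # In a segmentation loop, the coefficients would be fitted from each candidate
    # cluster's members only, and every trajectory assigned to the cluster whose
    # LCV model gives it the smallest residual.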
Asian Conference on Computer Vision | 2014
Vasileios Zografos; Reiner Lenz; Erik Ringaby; Michael Felsberg; Klas Nordberg
We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities and, with the use of a local sampling scheme and spectral clustering, they can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach and compared against state-of-the-art competing methods from the literature. Our results show that our approach outperforms all competing methods while being robust to perspective distortions and degenerate configurations.
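As a hypothetical illustration of why QR factorizations of small matrices can yield rigid-motion invariants: for points on one rigid object, the R factor of the QR decomposition of their centred 3D coordinate matrix is unchanged (up to sign) by rotation and translation, so its variation across frames can serve as a rough motion affinity. The paper's exact invariants and affinities are not reproduced here.

    import numpy as np

    def rigidity_score(traj_group):
        """traj_group: array (F, 3, k) with k >= 3 3D points tracked over F frames."""
        Rs = []
        for X in traj_group:                        # 3 x k point matrix in one frame
            Xc = X - X.mean(axis=1, keepdims=True)  # remove translation
            _, R = np.linalg.qr(Xc)                 # QR of a small 3 x k matrix
            R = R * np.sign(np.diag(R))[:, None]    # fix the sign ambiguity
            Rs.append(R)
        Rs = np.stack(Rs)
        # Low variation of R across frames suggests the k points share one rigid motion;
        # such scores over sampled groups can be turned into affinities for spectral clustering.
        return float(np.mean(np.var(Rs, axis=0)))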
Image and Vision Computing | 2013
Vasileios Zografos; Reiner Lenz; Michael Felsberg
In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. As a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple, implicitly constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from the literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.
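The first two steps can be sketched as follows, with scipy standing in for the paper's own fitting and manifold optimisation; the scale-based focus criterion at the end is a naive assumption, not the geodesic cost functions proposed in the paper.

    import numpy as np
    from scipy.stats import weibull_min

    def weibull_coordinates(image):
        """Filter with a simple difference filter and fit a 2-parameter Weibull,
        giving the image's coordinates (shape, scale) on the Weibull manifold."""
        response = np.abs(np.diff(image.astype(float), axis=1)).ravel()
        response = response[response > 0]                    # Weibull support is x > 0
        shape, _, scale = weibull_min.fit(response, floc=0)  # location fixed at zero
        return shape, scale

    # Naive autofocus under these assumptions: pick the frame of a focus stack
    # whose fitted scale (spread of edge responses) is largest.
    def autofocus(stack):
        return max(range(len(stack)), key=lambda i: weibull_coordinates(stack[i])[1])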
International Symposium on Visual Computing | 2009
Vasileios Zografos
In this work we examine in detail the use of optimisation algorithms on deformable template matching problems. We start with the examination of simple, direct-search methods and move on to more complicated evolutionary approaches. Our goal is twofold: first, to evaluate a number of methods under different template matching settings and to introduce certain novel evolutionary optimisation algorithms to computer vision; and second, to explore and analyse any additional advantages of using a hybrid approach over existing methods. We show that in computer vision tasks, evolutionary strategies provide very good choices for optimisation. Our experiments have also indicated that we can improve the convergence speed and results of existing algorithms by using a hybrid approach.
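A small sketch of the basic setup, pairing a template-matching cost with an off-the-shelf evolutionary optimiser; scipy's differential evolution and the translation-plus-scale parameterisation are stand-ins for the specific strategies and deformable templates examined in the paper.

    import numpy as np
    from scipy.ndimage import affine_transform
    from scipy.optimize import differential_evolution

    def match_cost(params, template, image):
        tx, ty, s = params
        # Warp the template by an isotropic scale and a translation, then compare.
        warped = affine_transform(template, np.diag([1.0 / s, 1.0 / s]),
                                  offset=[-ty / s, -tx / s],
                                  output_shape=image.shape, order=1)
        return np.mean((image - warped) ** 2)          # sum-of-squares dissimilarity

    def fit_template(template, image):
        h, w = image.shape
        bounds = [(-w / 2, w / 2), (-h / 2, h / 2), (0.5, 2.0)]   # tx, ty, scale
        result = differential_evolution(match_cost, bounds,
                                        args=(template, image), seed=0)
        return result.x, result.fun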
International Conference on Computing Theory and Applications | 2007
Vasileios Zografos; Bernard F. Buxton
In this work, we present a method for model-based recognition of 3D objects from a small number of 2D intensity images taken from nearby, but otherwise arbitrary, viewpoints. Our method works by linearly combining images from two (or more) viewpoints of a 3D object to synthesise novel views of the object. The object is recognised in a target image by matching to such a synthesised, novel view. All that is required is the recovery of the linear combination parameters, and since we are working directly with pixel intensities, we suggest searching the parameter space using an evolutionary optimisation algorithm in order to efficiently recover the optimal parameters and thus recognise the object in the scene.
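A sketch of the linear-combination-of-views synthesis and an intensity-based matching cost, under assumed inputs (landmark correspondences between the two stored views and their reference intensities); the coefficient bounds and the cost are illustrative choices rather than the paper's exact formulation.

    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.optimize import differential_evolution

    def synthesize(coeffs, pts1, pts2):
        """pts1, pts2: (N, 2) landmark coordinates (x, y) in the two basis views."""
        A = np.hstack([np.ones((len(pts1), 1)), pts1, pts2])   # (N, 5)
        C = np.asarray(coeffs).reshape(2, 5)                   # one row per output coordinate
        return A @ C.T                                         # synthesized (N, 2) points

    def recognition_cost(coeffs, pts1, pts2, ref_intensities, target_image):
        synth = synthesize(coeffs, pts1, pts2)
        # Sample the target image at the synthesized positions (rows = y, cols = x).
        sampled = map_coordinates(target_image, [synth[:, 1], synth[:, 0]], order=1)
        return np.mean((sampled - ref_intensities) ** 2)

    def recognize(pts1, pts2, ref_intensities, target_image, coeff_range=200.0):
        # Evolutionary search over the 10 combination coefficients, as suggested above.
        bounds = [(-coeff_range, coeff_range)] * 10
        result = differential_evolution(recognition_cost, bounds,
                                        args=(pts1, pts2, ref_intensities, target_image),
                                        seed=0)
        return result.x, result.fun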
International Conference on Image Analysis and Recognition | 2005
Vasileios Zografos; Bernard F. Buxton
We revisit the problem of model-based object recognition for intensity images and attempt to address some of the shortcomings of existing Bayesian methods, such as unsuitable priors and the treatment of residuals with a non-robust error norm. We do so by using a reformulation of the Huber metric and carefully chosen prior distributions. Our proposed method is invariant to 2-dimensional affine transformations and, because it is relatively easy to train and use, it is suited for general object matching problems.
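The robust treatment of residuals can be illustrated with the Huber metric itself: quadratic for small residuals, linear for large ones, so occlusions and clutter do not dominate the match score. The threshold and scaling below are illustrative choices, not the paper's reformulation.

    import numpy as np

    def huber(residuals, k=1.345):
        r = np.abs(residuals)
        quadratic = 0.5 * r ** 2            # inlier region: least-squares behaviour
        linear = k * (r - 0.5 * k)          # outlier region: grows only linearly
        return np.where(r <= k, quadratic, linear)

    def robust_match_score(model_intensities, image_intensities, scale=1.0):
        return np.sum(huber((model_intensities - image_intensities) / scale))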
International MICCAI Workshop on Medical Computer Vision | 2015
Vasileios Zografos; Alexander Valentinitsch; Markus Rempfler; Federico Tombari; Bjoern H. Menze
We present a novel framework for the segmentation of multiple organs in 3D abdominal CT images, which does not require registration with an atlas. Instead we use discriminative classifiers that have been trained on an array of 3D volumetric features and implicitly model the appearance of the organs of interest. We fully leverage all the available data and extract the features from inside supervoxels at multiple levels of detail. In parallel, we employ a hierarchical auto-context classification scheme, where the trained classifier at each level is applied back onto the image to provide additional features for the next level. The final segmentation is obtained using a hierarchical conditional random field fusion step. We have tested our approach on 20 contrast-enhanced CT images of 8 organs from the VISCERAL dataset and obtained results comparable to those of state-of-the-art methods that require very costly registration steps and a much larger corpus of training data. Our method is accurate, fast and general enough that it may be applied to a variety of realistic clinical applications and to any number of organs.
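The auto-context loop can be sketched schematically, with a random forest standing in for the discriminative classifiers and the supervoxel feature extraction and CRF fusion omitted; function and parameter names are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def auto_context_train(features, labels, n_levels=3, seed=0):
        """features: (n_supervoxels, d) array; labels: (n_supervoxels,) organ ids."""
        models, context = [], None
        for level in range(n_levels):
            X = features if context is None else np.hstack([features, context])
            clf = RandomForestClassifier(n_estimators=100, random_state=seed)
            clf.fit(X, labels)
            models.append(clf)
            # Apply the classifier back onto the data: its class-probability maps
            # become additional "context" features for the next level.
            context = clf.predict_proba(X)
        return models

In practice the context probabilities would be produced with cross-validation or held-out data, so that the next level is not trained on overfitted predictions.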
Scandinavian Conference on Image Analysis | 2011
Vasileios Zografos; Reiner Lenz
We use the theory of group representations to construct very fast image descriptors that split the vector space of local RGB distributions into small group-invariant subspaces. These descriptors are group theoretical generalizations of the Fourier Transform and can be computed with algorithms similar to the FFT. Because of their computational efficiency they are especially suitable for retrieval, recognition and classification in very large image datasets. We also show that the statistical properties of these descriptors are governed by the principles of the Extreme Value Theory (EVT). This enables us to work directly with parametric probability distribution models, which offer a much lower dimensionality and higher resolution and flexibility than histogram representations. We explore the connection to EVT and analyse the characteristics of these descriptors from a probabilistic viewpoint with the help of large image databases.
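The statistical modelling step can be roughly illustrated as follows; the plain FFT over a local RGB histogram is only a stand-in for the group-theoretical transforms of the paper, and the fitted Weibull parameters serve as the compact parametric signature described above.

    import numpy as np
    from scipy.stats import weibull_min

    def weibull_signature(image_rgb, bins=16):
        """image_rgb: (H, W, 3) uint8 image. Returns a 2-parameter Weibull signature."""
        hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                                 bins=(bins, bins, bins), range=[(0, 256)] * 3)
        coeffs = np.abs(np.fft.fftn(hist)).ravel()          # transform-domain magnitudes
        coeffs = coeffs[coeffs > 0]
        shape, _, scale = weibull_min.fit(coeffs, floc=0)   # 2-parameter Weibull fit
        return np.array([shape, scale])    # low-dimensional, comparable signature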
International Conference on Pattern Recognition | 2010
Klas Nordberg; Vasileios Zografos
We propose a method for segmenting an arbitrary number of moving objects using the geometry of 6 points in 2D images to infer motion consistency. This geometry allows us to determine whether or not observations of 6 points over several frames are consistent with a rigid 3D motion. The matching between observations of the 6 points and an estimated model of their configuration in 3D space is quantified in terms of a geometric error derived from distances between the points and 6 corresponding lines in the image. This leads to a simple motion inconsistency score, derived from the geometric errors of the 6 points, which in the ideal case is zero when the motion of the points can be explained by a rigid 3D motion. Initial clusters are determined in the spatial domain and merged in the motion trajectory domain based on the score. Each point is then assigned to the cluster with the lowest score. Our algorithm has been tested on real image sequences from the Hopkins155 database with very good results, competing with state-of-the-art methods and performing particularly well on degenerate motion sequences. In contrast to motion segmentation methods based on multi-body factorization, which assume an affine camera model, the proposed method allows the mapping from 3D space to the 2D image to be fully projective.
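The scoring and assignment step can be sketched generically: given, for each candidate cluster, one line per image point (the paper derives these from the 6-point configuration, which is not reproduced here), the inconsistency score is the point-to-line distance, and each point goes to the cluster with the lowest score.

    import numpy as np

    def point_line_distances(points, lines):
        """points: (N, 2) image points; lines: (N, 3) homogeneous lines ax + by + c = 0."""
        p_h = np.hstack([points, np.ones((len(points), 1))])
        return np.abs(np.sum(p_h * lines, axis=1)) / np.linalg.norm(lines[:, :2], axis=1)

    def assign_points(points, lines_per_cluster):
        """lines_per_cluster: list of (N, 3) arrays, one set of lines per candidate cluster."""
        scores = np.stack([point_line_distances(points, L) for L in lines_per_cluster])
        return np.argmin(scores, axis=0)      # index of the lowest-score cluster per point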