
Publications


Featured research published by Ioannis Marras.


International Conference on Multimedia and Expo | 2006

Virtual Dental Patient: a System for Virtual Teeth Drilling

Ioannis Marras; Leontios Papaleontiou; Nikolaos Nikolaidis; Kleoniki Lyroudia; Ioannis Pitas

This paper introduces a virtual teeth drilling system, named Virtual Dental Patient, designed to help dentists become acquainted with tooth anatomy, the handling of drilling instruments and the challenges of the drilling procedure. The primary aim of the system is the training of dental students. The application features a 3D model of the face and the oral cavity that can be adapted to the characteristics of a specific person and animated. Drilling with a haptic device is performed on realistic tooth models (constructed from real data) within the oral cavity. Results and intermediate steps of the drilling procedure can be saved for future use.
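
At its core, the drilling interaction removes tooth material around the instrument tip. As a rough illustration of that single operation, the sketch below removes occupied voxels within a sphere around a drill-tip position; the voxel representation, function name and units are assumptions for illustration, not the system's actual surface-model and haptics code.

```python
# Toy voxel-based illustration of a drilling step (not the paper's system,
# which operates on realistic tooth surface models driven by a haptic device).
import numpy as np

def drill(tooth_voxels, tip, radius):
    """Remove occupied voxels within `radius` (voxel units) of the drill tip.

    tooth_voxels : 3D boolean occupancy grid of the tooth
    tip          : (z, y, x) drill-tip position in grid coordinates
    """
    zz, yy, xx = np.indices(tooth_voxels.shape)
    dist2 = (zz - tip[0]) ** 2 + (yy - tip[1]) ** 2 + (xx - tip[2]) ** 2
    removed = tooth_voxels & (dist2 <= radius ** 2)
    tooth_voxels[removed] = False          # carve out the drilled material
    return tooth_voxels, int(removed.sum())
```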


Image and Vision Computing | 2017

Statistical non-rigid ICP algorithm and its application to 3D face alignment

Shiyang Cheng; Ioannis Marras; Stefanos Zafeiriou; Maja Pantic

The problem of fitting a 3D facial model to a 3D mesh has received a lot of attention over the past 15-20 years. The majority of techniques fit a general model consisting of a simple parameterisable surface or a mean 3D facial shape. The drawback of this approach is that it is rather difficult to describe the non-rigid aspects of the face using just a single facial model. One way to capture 3D facial deformations is by means of a statistical 3D model of the face or its parts. This is particularly evident when we want to capture the deformations of the mouth region. Even though statistical models of the face are generally applied to modelling facial intensity, few approaches fit a statistical model of 3D faces. In this paper, in order to capture and describe the non-rigid nature of facial surfaces, we build a part-based statistical model of the 3D facial surface and combine it with non-rigid iterative closest point (ICP) algorithms. We show that the proposed algorithm largely outperforms state-of-the-art algorithms for 3D face fitting and alignment, especially when it comes to the description of the mouth region.

Highlights: a statistical non-rigid ICP method for 3D face alignment is proposed; local fitting in a dynamic subdivision framework helps capture subtle facial features; 2D point-driven mesh deformation in a pre-processing step helps improve performance.
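
As a minimal sketch of the core idea, constraining the non-rigid fit with a PCA shape model inside an ICP loop, the snippet below alternates closest-point correspondences with a regularised least-squares solve for the statistical coefficients. It assumes a precomputed shape basis and omits the part-based modelling, rigid alignment and stiffness terms of the paper; all names and the regularisation weight are illustrative.

```python
# Minimal sketch of a PCA-constrained ("statistical") non-rigid ICP loop.
# Illustrative assumptions only; not the authors' implementation.
import numpy as np
from scipy.spatial import cKDTree

def statistical_nricp(target_pts, mean_shape, basis, n_iters=20, reg=1e-2):
    """Fit shape = mean + basis @ c to a target point cloud.

    target_pts : (M, 3) scanned mesh vertices
    mean_shape : (3N,)  mean facial shape, stacked xyz
    basis      : (3N, K) PCA basis of facial deformations
    """
    tree = cKDTree(target_pts)
    coeffs = np.zeros(basis.shape[1])
    for _ in range(n_iters):
        shape = (mean_shape + basis @ coeffs).reshape(-1, 3)
        # Closest-point correspondences (the "ICP" step).
        _, idx = tree.query(shape)
        corr = target_pts[idx]
        # Ridge-regularised least squares for the statistical coefficients
        # (the "non-rigid" step); rigid alignment is omitted for brevity.
        residual = (corr - mean_shape.reshape(-1, 3)).ravel()
        A = basis.T @ basis + reg * np.eye(basis.shape[1])
        coeffs = np.linalg.solve(A, basis.T @ residual)
    return (mean_shape + basis @ coeffs).reshape(-1, 3), coeffs
```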


International Conference on Computer Vision | 2012

Robust learning from normals for 3D face recognition

Ioannis Marras; Stefanos Zafeiriou; Georgios Tzimiropoulos

We introduce novel subspace-based methods for learning from the azimuth angle of surface normals for 3D face recognition. We show that the normal azimuth angles, combined with Principal Component Analysis (PCA) using a cosine-based distance measure, can be used for robust face recognition from facial surfaces. The proposed algorithms are well suited to all types of 3D facial data, including data produced by range cameras (depth images), photometric stereo (PS) and shape-from-X (SfX) algorithms. We demonstrate the robustness of the proposed algorithms both in 3D face reconstruction from synthetically occluded samples and in face recognition using the FRGC v2 3D face database and the recently collected Photoface database, where the proposed method achieves state-of-the-art results. An important aspect of our method is that it achieves good face recognition/verification performance using raw 3D scans without any heavy preprocessing (e.g., model fitting, surface smoothing).
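
The snippet below is a simplified sketch of this kind of pipeline, assuming precomputed normal maps: azimuth angles are mapped to the unit circle, a PCA subspace is learned, and matching uses cosine similarity. Function names and normalisation choices are illustrative, not the authors' implementation.

```python
# Illustrative sketch: PCA on azimuth angles of surface normals, with
# recognition by a cosine-similarity nearest neighbour in the subspace.
import numpy as np

def azimuth_features(normals):
    """normals: (H, W, 3) unit surface normals -> flattened azimuth features."""
    phi = np.arctan2(normals[..., 1], normals[..., 0])   # azimuth angle
    # Mapping angles to the unit circle avoids the 2*pi wrap-around.
    return np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()])

def fit_pca(X, k):
    """X: (n_samples, d). Returns the mean and top-k principal directions."""
    mu = X.mean(axis=0)
    U, _, _ = np.linalg.svd((X - mu).T, full_matrices=False)
    return mu, U[:, :k]

def cosine_match(probe, gallery, mu, W):
    """Index of the nearest gallery sample under cosine similarity."""
    p = W.T @ (probe - mu)                 # project probe into the subspace
    G = (gallery - mu) @ W                 # project gallery samples
    sims = (G @ p) / (np.linalg.norm(G, axis=1) * np.linalg.norm(p) + 1e-12)
    return int(np.argmax(sims))
```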


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Active nonrigid ICP algorithm

Shiyang Cheng; Ioannis Marras; Stefanos Zafeiriou; Maja Pantic

The problem of fitting a 3D facial model to a 3D mesh has received a lot of attention over the past 15-20 years. The majority of techniques fit a general model consisting of a simple parameterisable surface or a mean 3D facial shape. The drawback of this approach is that it is rather difficult to describe the non-rigid aspects of the face using just a single facial model. One way to capture 3D facial deformations is by means of a statistical 3D model of the face or its parts. This is particularly evident when we want to capture the deformations of the mouth region. Even though statistical models of the face are generally applied to modelling facial intensity, few approaches fit a statistical model of 3D faces. In this paper, in order to capture and describe the non-rigid nature of facial surfaces, we build a part-based statistical model of the 3D facial surface and combine it with non-rigid iterative closest point algorithms. We show that the proposed algorithm largely outperforms state-of-the-art algorithms for 3D face fitting and alignment, especially when it comes to the description of the mouth region.


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Online learning and fusion of orientation appearance models for robust rigid object tracking

Ioannis Marras; Joan Alabort Medina; Georgios Tzimiropoulos; Stefanos Zafeiriou; Maja Pantic

We present a robust framework for learning and fusing different modalities for rigid object tracking. Our method fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our method combines image gradient orientations, as extracted from intensity images, with the directions of surface normals computed from the dense depth fields provided by the Kinect. To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles. This kernel enables us to cope with gross measurement errors and missing data, as well as typical problems in visual tracking such as illumination changes and occlusions. Additionally, the employed kernel can be efficiently implemented online. Finally, we propose to capture the correlations between the obtained orientation appearance models using a fusion approach motivated by the original Active Appearance Model (AAM). Thus, the proposed learning and fusion framework is robust, exact, computationally efficient and does not require off-line training. By combining the proposed models with a particle filter, the proposed tracking framework achieves robust performance in very difficult tracking scenarios, including extreme pose variations.
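
A minimal sketch of the Euler-representation idea, assuming orientation images (gradient orientations or normal azimuths) are already available: each angle is mapped to a complex number on the unit circle, and the similarity between two orientation maps averages the cosine of their angular differences, so grossly wrong orientations contribute roughly zero. The exact kernel, online subspace update and fusion scheme of the paper are not reproduced.

```python
# Sketch of the Euler representation of angles used to compare orientation
# features robustly; names and normalisation are illustrative assumptions.
import numpy as np

def euler_map(angles):
    """Map an angle image to a unit-magnitude complex feature vector."""
    return np.exp(1j * angles).ravel() / np.sqrt(angles.size)

def orientation_similarity(angles_a, angles_b):
    """Average of cos(a - b) over all pixels: near 1 for matching
    orientations, near 0 for uniformly random (corrupted) ones."""
    za, zb = euler_map(angles_a), euler_map(angles_b)
    return np.real(np.vdot(za, zb))

def fuse_modalities(grad_orient, normal_azimuth):
    """Concatenate the Euler features of the two modalities (intensity
    gradient orientations and depth normal azimuths) into one descriptor."""
    return np.concatenate([euler_map(grad_orient), euler_map(normal_azimuth)])
```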


Biomedical Signal Processing and Control | 2013

FISH image analysis using a modified radial basis function network

Christos Sagonas; Ioannis Marras; Ioannis N. Kasampalidis; Ioannis Pitas; Kleoniki Lyroudia; Georgia Karayannopoulou

Fluorescent in situ hybridization (FISH) is a valuable method for determining Her-2/neu status in breast carcinoma samples, an important prognostic indicator. Visual evaluation of FISH images is a difficult task that involves manual counting of dots in multiple images, a procedure which is both time consuming and prone to human error. A number of algorithms have recently been developed for (semi-)automated analysis of FISH images. These algorithms are quite promising, but further improvement of their accuracy is required. Here, we present a novel method for analyzing FISH images based on the statistical properties of radial basis functions. Our method was evaluated on a data set of 100 breast carcinoma cases provided by the Aristotle University of Thessaloniki and the University of Pisa, with promising results.
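
For orientation, the sketch below shows a generic radial basis function network, Gaussian hidden units followed by a least-squares output layer, of the kind such dot-classification pipelines build on; the paper's specific modification, features and centre selection are not reproduced, and all names are illustrative.

```python
# Generic RBF network sketch for classifying candidate dots / pixels in FISH
# images; standard textbook choices, not the paper's modified network.
import numpy as np

class RBFNetwork:
    def __init__(self, centers, sigma):
        self.centers = centers            # (k, d) prototype feature vectors
        self.sigma = sigma                # shared Gaussian width

    def _hidden(self, X):
        # Gaussian activation of each sample with respect to each centre.
        d2 = ((X[:, None, :] - self.centers[None]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def fit(self, X, Y):
        """X: (n, d) features, Y: (n, c) one-hot labels (e.g. signal / noise)."""
        H = self._hidden(X)
        self.W, *_ = np.linalg.lstsq(H, Y, rcond=None)   # linear output layer
        return self

    def predict(self, X):
        return self._hidden(X) @ self.W
```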


Multimedia Signal Processing | 2009

3D head pose estimation in monocular video sequences by sequential camera self-calibration

Ioannis Marras; Nikos Nikolaidis; Ioannis Pitas

This paper presents a novel approach for estimating 3D head pose in single-view video sequences acquired by an uncalibrated camera. Following initialization by a face detector, a tracking technique localizes the face in each frame of the video sequence. Head pose estimation is performed by applying a structure-from-motion and self-calibration technique in a sequential way. The proposed method was applied to the IDIAP database, which contains head pose ground truth data. The obtained results demonstrate that the method can estimate the head pose with satisfactory accuracy.
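
As a simplified illustration of sequential pose recovery from tracked facial points, the sketch below estimates the relative rotation between consecutive frames from point correspondences via an essential matrix (using OpenCV) and chains these rotations over time. Unlike the paper, the camera intrinsics K are assumed known here rather than recovered by self-calibration, and the face detection and tracking front end is omitted.

```python
# Simplified sequential rotation estimation from tracked facial points.
# Assumes known intrinsics K; the paper additionally self-calibrates the camera.
import cv2
import numpy as np

def relative_head_rotation(pts_prev, pts_curr, K):
    """pts_prev, pts_curr: (N, 2) float arrays of tracked face points."""
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Relative rigid rotation between the two views of the (rigid) head.
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R

def accumulate_pose(rotations):
    """Chain per-frame rotations to obtain head pose relative to frame 0."""
    pose = np.eye(3)
    poses = []
    for R in rotations:
        pose = R @ pose
        poses.append(pose.copy())
    return poses
```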


Computers in Biology and Medicine | 2014

3D geometric split–merge segmentation of brain MRI datasets

Ioannis Marras; Nikolaos Nikolaidis; Ioannis Pitas

In this paper, a novel method for MRI volume segmentation based on region-adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split-Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume-splitting step, several splitting strategies are examined and the most appropriate one is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume-splitting technique divides the entire volume into a number of large homogeneous 3D regions, while at the same time it delineates small homogeneous regions within the volume more clearly, so that they have a greater probability of surviving the subsequent merging step. Region-merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume-splitting procedure does not require training data, and it demonstrates improved segmentation performance on noisy brain MRI datasets compared to state-of-the-art methods.
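
A toy sketch of the split phase with a variance-based homogeneity test is given below; AGSM's adaptive choice of splitting strategy, maximal-homogeneity axis and merging criteria are not reproduced, and the threshold and box representation are illustrative assumptions. The merging pass is only indicated by a comment.

```python
# Toy split-merge volume segmentation sketch with a variance homogeneity test.
import numpy as np

def split(volume, box, tol, regions):
    """Recursively split box = (z0, z1, y0, y1, x0, x1) until homogeneous."""
    z0, z1, y0, y1, x0, x1 = box
    block = volume[z0:z1, y0:y1, x0:x1]
    if block.size == 0:
        return
    if block.std() <= tol or min(z1 - z0, y1 - y0, x1 - x0) <= 2:
        regions.append(box)                      # homogeneous (or tiny) region
        return
    # Split along the longest axis (AGSM instead selects the axis of
    # maximal homogeneity and among several splitting strategies).
    axis = int(np.argmax([z1 - z0, y1 - y0, x1 - x0]))
    lo, hi = box[2 * axis], box[2 * axis + 1]
    mid = (lo + hi) // 2
    for new_lo, new_hi in ((lo, mid), (mid, hi)):
        child = list(box)
        child[2 * axis], child[2 * axis + 1] = new_lo, new_hi
        split(volume, tuple(child), tol, regions)

def segment(volume, tol=5.0):
    regions = []
    split(volume, (0, volume.shape[0], 0, volume.shape[1], 0, volume.shape[2]),
          tol, regions)
    # A merging pass would fuse adjacent regions with similar mean intensity.
    return regions
```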


International Symposium on Electronics and Telecommunications | 2010

Human-centered video analysis for multimedia postproduction

Costas I. Cotsaces; Ioannis Marras; Nikolaos Tsapanos; Nikos Nikolaidis; Ioannis Pitas

Semantic analysis of video has witnessed a significant increase in research activity in recent years. Human-centered video analysis plays a central role in this research, since humans are the most frequently encountered entities in video. The results of human-centered video analysis can be used in numerous applications, one of them being multimedia postproduction. Three recently devised semantic analysis algorithms are reviewed in this paper.


IEEE International Conference on Automatic Face and Gesture Recognition | 2017

Deep Refinement Convolutional Networks for Human Pose Estimation

Ioannis Marras; Petar Palasek; Ioannis Patras

This work introduces a novel Convolutional Network architecture (ConvNet) for the task of human pose estimation, that is, the localization of body joints in a single static image. The proposed coarse-to-fine architecture addresses shortcomings of the baseline architecture that stem from the fact that large inaccuracies of its coarse ConvNet cannot be corrected by the refinement ConvNet, which refines the estimation within small windows of the coarse prediction. This is achieved by a) changes in architectural parameters that both increase the accuracy of the coarse model and make the refinement model more capable of correcting the errors of the coarse model, b) the introduction of a Markov Random Field (MRF)-based spatial model network between the coarse and the refinement model that introduces geometric constraints, and c) a training scheme that adapts the data augmentation and the learning rate according to the difficulty of the data examples. The proposed architecture is trained in an end-to-end fashion. Experimental results show that the proposed method improves the baseline model and provides state-of-the-art results on the FashionPose [8] and MPII benchmarks [1].
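
The PyTorch snippet below sketches only the coarse-to-fine structure: a coarse heatmap network whose upsampled output is fed, together with the image, to a refinement network. Layer sizes are arbitrary, and the MRF-based spatial model and the adaptive training scheme of the paper are not included.

```python
# Schematic coarse-to-fine pose network; an illustration of the general
# structure only, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class CoarseToFinePose(nn.Module):
    def __init__(self, n_joints=16):
        super().__init__()
        # Coarse model: one low-resolution heatmap per body joint.
        self.coarse = nn.Sequential(conv_block(3, 32), nn.MaxPool2d(2),
                                    conv_block(32, 64), nn.MaxPool2d(2),
                                    nn.Conv2d(64, n_joints, 1))
        # Refinement model: corrects the coarse heatmaps given the image
        # and the (upsampled) coarse predictions, here simply concatenated.
        self.refine = nn.Sequential(conv_block(3 + n_joints, 64),
                                    nn.Conv2d(64, n_joints, 1))

    def forward(self, x):
        coarse = self.coarse(x)                          # H/4 x W/4 heatmaps
        up = F.interpolate(coarse, size=x.shape[-2:],
                           mode='bilinear', align_corners=False)
        refined = self.refine(torch.cat([x, up], dim=1))
        return coarse, refined                           # both supervised end-to-end
```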

Collaboration


Top co-authors of Ioannis Marras and their affiliations:

Ioannis Pitas, Aristotle University of Thessaloniki
Kleoniki Lyroudia, Aristotle University of Thessaloniki
Nikolaos Nikolaidis, Aristotle University of Thessaloniki
Nikos Nikolaidis, Aristotle University of Thessaloniki
Maja Pantic, Imperial College London
Ioannis Patras, Queen Mary University of London
Anna Digka, Aristotle University of Thessaloniki
Georgia Karayannopoulou, Aristotle University of Thessaloniki
Georgios Mikrogeorgis, Aristotle University of Thessaloniki