Publication


Featured research published by Spiros V. Georgakopoulos.


Integrated Computer-Aided Engineering | 2016

Geodesically-corrected Zernike descriptors for pose recognition in omni-directional images

Konstantinos K. Delibasis; Spiros V. Georgakopoulos; Konstantina Kottari; Vassilis P. Plagianakos; Ilias Maglogiannis

A significant number of Computer Vision and Artificial Intelligence applications are based on descriptors that are extracted from segmented objects. One widely used class of such descriptors are the invariant moments, with Zernike moments being reported as some of the most efficient descriptors. The calculation of image moments requires the definition of the distance and angle of any pixel from the centroid pixel of a specific object. While this is straightforward in images acquired by projective cameras, the classic definition of distance and angle may not be applicable to omni-directional images obtained by fish-eye cameras. In this work, we provide an efficient definition of distance and angle between pixels in omni-directional images, based on the calibration model of the acquisition camera. Thus, a more appropriate calculation of moment invariants from omni-directional videos is achieved in the time domain. A large dataset of synthetically generated binary silhouettes, as well as segmented human silhouettes from real indoor videos, are used to assess experimentally the effectiveness of the proposed Zernike descriptors in recognising different poses in omni-directional video. Comparative numerical results between the traditional Zernike moments and the moments based on the proposed corrections of the Zernike polynomials are presented, and results from other state-of-the-art image descriptors are also included. The results show that the proposed correction in the calculation of Zernike moments improves pose classification accuracy significantly. The computational complexity of the proposed implementation is also discussed.
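
As a rough illustration of the correction described above, the following Python/NumPy sketch computes a complex Zernike moment with the per-pixel radius and angle supplied as pluggable functions: the Euclidean pair shown reproduces the classic moments, while calibration-aware replacements (which depend on the fish-eye model and are not reproduced here) would yield the corrected variant. This is a minimal sketch, not the authors' implementation.

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_nm evaluated on an array of radii."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m, rho_fn, theta_fn):
    """Complex Zernike moment A_nm of a binary silhouette.

    rho_fn / theta_fn map pixel coordinates (relative to the object
    centroid) to radius and angle; plugging in Euclidean distance and
    atan2 gives the classic moments, while calibration-aware functions
    give the corrected variant.
    """
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()
    rho = rho_fn(ys, xs, cy, cx)
    theta = theta_fn(ys, xs, cy, cx)
    mask = rho <= 1.0                      # Zernike basis lives on the unit disc
    V = radial_poly(rho[mask], n, m) * np.exp(-1j * m * theta[mask])
    return (n + 1) / np.pi * np.sum(img[ys[mask], xs[mask]] * V)

# Classic (projective-camera) definitions, for comparison:
def euclid_rho(ys, xs, cy, cx):
    r = np.hypot(ys - cy, xs - cx)
    return r / r.max()                     # normalise onto the unit disc

def euclid_theta(ys, xs, cy, cx):
    return np.arctan2(ys - cy, xs - cx)

# zernike_moment(silhouette, 4, 2, euclid_rho, euclid_theta) gives the
# classic A_42; swapping in calibrated rho/theta gives the corrected one.
```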


International Conference on Imaging Systems and Techniques | 2016

Weakly-supervised Convolutional learning for detection of inflammatory gastrointestinal lesions

Spiros V. Georgakopoulos; Dimitris K. Iakovidis; Michael Vasilakakis; Vassilis P. Plagianakos; Anastasios Koulaouzidis

Graphic image annotations provide the necessary ground truth information for supervised machine learning in image-based computer-aided medical diagnosis. Performing such annotations is usually a time-consuming and cost-inefficient process requiring knowledge from domain experts. To cope with this problem, we propose a novel weakly-supervised learning method based on a Convolutional Neural Network (CNN) architecture. The advantage of the proposed method over conventional supervised approaches is that only image-level semantic annotations are used in the training process, instead of pixel-level graphic annotations. This can drastically reduce the required annotation effort. Its advantage over the few state-of-the-art weakly-supervised CNN architectures is its simplicity. The performance of the proposed method is evaluated in the context of computer-aided detection of inflammatory gastrointestinal lesions in wireless capsule endoscopy videos. This is a broad category of lesions, for which early detection and treatment can be of vital importance. The results show that the proposed weakly-supervised learning method can be more effective than conventional supervised learning, achieving an accuracy of 90%.
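
To make the contrast with pixel-level supervision concrete, here is a minimal Keras sketch of a CNN trained from image-level labels alone. The input size, layer widths and depth are illustrative assumptions, not the architecture used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal sketch: the network sees whole endoscopy frames with a single
# binary label per image (lesion present / absent) -- no pixel masks.
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # assumed frame size
    layers.Conv2D(32, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),            # pools spatial evidence into one vector
    layers.Dense(1, activation="sigmoid"),      # image-level decision
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(frames, frame_labels, ...)  # labels are one bit per frame
```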


Hellenic Conference on Artificial Intelligence | 2018

Convolutional Neural Networks for Toxic Comment Classification

Spiros V. Georgakopoulos; Sotiris K. Tasoulis; Aristidis G. Vrahatis; Vassilis P. Plagianakos

A flood of information is produced on a daily basis through global internet usage, arising from online interactive communications among users. While this situation contributes significantly to the quality of human life, it unfortunately involves enormous dangers, since online texts with high toxicity can cause personal attacks, online harassment and bullying behaviors. This has triggered both the industrial and the research community in the last few years, and there have been several attempts to identify an efficient model for online toxic comment prediction. However, these steps are still in their infancy and new approaches and frameworks are required. In parallel, the constant data explosion makes the construction of new machine learning computational tools for managing this information an imperative need. Thankfully, advances in hardware, cloud computing and big data management allow the development of Deep Learning approaches, which have shown very promising performance so far. For text classification in particular, the use of Convolutional Neural Networks (CNNs) has recently been proposed, approaching text analytics in a modern manner that emphasizes the structure of words in a document. In this work, we employ this approach to discover toxic comments in a large pool of documents provided by a recent Kaggle competition regarding Wikipedia's talk page edits. To justify this decision, we compare CNNs against the traditional bag-of-words approach for text analysis, combined with a selection of algorithms proven to be very effective in text classification. The reported results provide enough evidence that CNNs enhance toxic comment classification, reinforcing research interest towards this direction.
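
For illustration, a word-level CNN for binary toxic-comment classification might look like the Keras sketch below. The vocabulary size, sequence length, embedding dimension and filter width are assumed values, not the paper's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN, EMB = 20000, 200, 128           # assumed hyper-parameters

# Word-order-aware model: convolutions slide over embedded word windows,
# which is the structural information a bag-of-words model throws away.
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB, EMB),
    layers.Conv1D(128, 5, activation="relu"),   # 5-word receptive field
    layers.GlobalMaxPooling1D(),                # keep the strongest n-gram response
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # toxic / non-toxic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```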


Neurocomputing | 2017

Pose Recognition Using Convolutional Neural Networks on Omni-directional Images

Spiros V. Georgakopoulos; Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos; Ilias Maglogiannis

Convolutional neural networks (CNNs) are used frequently in several computer vision applications. In this work, we present a methodology for pose classification of binary human silhouettes using CNNs, enhanced with image features based on Zernike moments, which are modified for fisheye images. The training set consists of synthetic images that are generated from three-dimensional (3D) human models, using the calibration model of an omni-directional camera (fisheye). Testing is performed using real images, also acquired by omni-directional cameras. Here, we employ our previously proposed geodesically corrected Zernike moments (GZMI) and confirm their merit as stand-alone descriptors of calibrated fisheye images. Subsequently, we explore the efficiency of transfer learning from the previously trained model with synthetically generated silhouettes to the problem of real pose classification, by continuing the training of the already trained network using a few frames of annotated real silhouettes. Furthermore, we propose an enhanced architecture that combines the calculated GZMI features of each image with the features generated at the CNN's last convolutional layer, both feeding the first hidden layer of the traditional neural network that exists at the end of the CNN. Testing is performed using synthetically generated silhouettes as well as real ones. Results show that the proposed enhancement of the CNN architecture, combined with transfer learning, improves pose classification accuracy for both the synthetic and the real silhouette images.
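
The described feature fusion can be sketched with the Keras functional API as follows. The silhouette size, layer sizes, number of GZMI features and number of pose classes are assumptions for illustration, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_GZMI, N_POSES = 36, 5                          # assumed feature/class counts

img_in = layers.Input(shape=(64, 64, 1))         # binary silhouette
gzmi_in = layers.Input(shape=(N_GZMI,))          # precomputed geodesic Zernike moments

x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)   # "last convolutional layer"
x = layers.Flatten()(x)

# Fuse learned and handcrafted descriptors before the dense classifier.
merged = layers.Concatenate()([x, gzmi_in])
h = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(N_POSES, activation="softmax")(h)

model = Model(inputs=[img_in, gzmi_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```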


Artificial Intelligence Applications and Innovations | 2015

On-Line Fall Detection via Mobile Accelerometer Data

Spiros V. Georgakopoulos; Sotiris K. Tasoulis; Ilias Maglogiannis; Vassilis P. Plagianakos

Mobile devices have entered our daily life in several forms, such as tablets, smartphones, smartwatches and wearable devices in general. The majority of these devices have several built-in motion sensors, such as accelerometers, gyroscopes, and orientation and rotation sensors. Activity recognition and the detection of emergency events, such as falls or abnormal activity, constitute a challenging task, especially for elderly people living independently in their homes. In this work, we present a methodology capable of performing real-time fall detection using data from a mobile accelerometer sensor. To this end, data taken from the 3-axis accelerometer are transformed using the Incremental Principal Component Analysis methodology. Next, we utilize the cumulative sum (CUSUM) algorithm, which is capable of detecting changes on devices with limited CPU power and memory resources. Our experimental results are promising and indicate that, using the proposed methodology, real-time fall detection is feasible.
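
A minimal sketch of this pipeline, using scikit-learn's IncrementalPCA and a simple one-sided CUSUM loop over streaming accelerometer windows; the drift and alarm parameters are placeholders to be tuned, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=1)

def cusum_stream(windows, k=0.5, h=5.0):
    """One-sided CUSUM over the first principal component of each
    accelerometer window; k is the allowed drift, h the alarm threshold
    (both assumed values, to be tuned on real data)."""
    s = 0.0
    for w in windows:                    # w: (n_samples, 3) accelerometer block
        ipca.partial_fit(w)              # cheap incremental update of the basis
        z = ipca.transform(w).ravel()
        for x in z:
            s = max(0.0, s + abs(x) - k) # accumulate deviations beyond the drift
            if s > h:
                yield True               # abrupt change -> possible fall
                s = 0.0                  # reset after the alarm
                break
        else:
            yield False
```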


Information Sciences | 2015

A software tool for the automatic detection and quantification of fibrotic tissues in microscopy images

Ilias Maglogiannis; Spiros V. Georgakopoulos; Sotiris K. Tasoulis; Vassilis P. Plagianakos

The high volume of pathological microscopy images of deformed tissues makes fast quantification and detection of the corrupted regions extremely difficult. To tackle this problem, we present in this paper an automated computer-based tool that allows the easy selection of training regions for the various types of pathologies and adopts dimensionality reduction and classification methods for detecting and quantifying the infected areas. The output of the proposed tool is a classification result superimposed on the original images, along with an overall index indicating the severity of the pathology. The experimental results are promising, since the tool exhibits high classification accuracy and the calculated index is compatible with expert physicians' estimation of the degree of pathology.
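
The overall shape of such a tool can be sketched as follows. PCA and a k-NN classifier are stand-ins, since the abstract does not name the specific dimensionality reduction and classification methods, and the patch size and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def patches(img, size=8):
    """Non-overlapping size x size RGB patches flattened to vectors."""
    h, w = img.shape[0] // size, img.shape[1] // size
    return np.array([img[i*size:(i+1)*size, j*size:(j+1)*size].ravel()
                     for i in range(h) for j in range(w)]), (h, w)

def quantify(img, train_vecs, train_labels, size=8):
    """Classify every patch as healthy (0) / fibrotic (1) and return the
    label map plus a severity index = fraction of fibrotic area."""
    pca = PCA(n_components=10).fit(train_vecs)          # assumed dimensionality
    clf = KNeighborsClassifier(5).fit(pca.transform(train_vecs), train_labels)
    vecs, (h, w) = patches(img, size)
    labels = clf.predict(pca.transform(vecs)).reshape(h, w)
    return labels, labels.mean()                        # mean of {0,1} = area fraction
```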


International Conference on Engineering Applications of Neural Networks | 2017

Detection of Malignant Melanomas in Dermoscopic Images Using Convolutional Neural Network with Transfer Learning

Spiros V. Georgakopoulos; Konstantina Kottari; Kostas Delibasis; Vassilis Plagianakos; Ilias Maglogiannis

In this work, we report the use of convolutional neural networks for the detection of malignant melanomas against nevus skin lesions in a dataset of dermoscopic images of the same magnification. The technique of transfer learning is utilized to compensate for the limited size of the available image dataset. Results show that including transfer learning in training CNN architectures significantly improves the achieved classification results.
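
A typical way to realize this kind of transfer learning is sketched below: a pretrained ImageNet backbone is frozen and only a new binary head is trained. VGG16 and all hyper-parameters here are assumptions standing in for whatever backbone and settings the paper actually used.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

# Pretrained ImageNet features compensate for the small dermoscopic dataset.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # freeze: reuse generic visual features

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # melanoma vs. nevus

model = Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(dermoscopic_images, labels, ...)  # only the new head is trained
```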


Artificial Intelligence Applications and Innovations | 2016

Convolutional Neural Networks for Pose Recognition in Binary Omni-directional Images

Spiros V. Georgakopoulos; Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos; Ilias Maglogiannis

In this work, we present a methodology for pose classification of silhouettes using convolutional neural networks. The training set consists exclusively of synthetic images that are generated from three-dimensional (3D) human models, using the calibration model of an omni-directional (fish-eye) camera. Thus, we are able to generate the large volume of training data that is usually required for Convolutional Neural Networks (CNNs). Testing is performed using synthetically generated silhouettes, as well as real silhouettes. This work is in the same vein as our previous work utilizing Zernike image descriptors designed specifically for a calibrated fish-eye camera. Results show that the proposed method improves pose classification accuracy for synthetic images, but it is outperformed by our previously proposed Zernike descriptors on real silhouettes. The computational complexity of the proposed methodology is also examined and the corresponding results are provided.


Artificial Intelligence Applications and Innovations | 2014

Calculation of Complex Zernike Moments with Geodesic Correction for Pose Recognition in Omni-directional Images

Konstantinos K. Delibasis; Spiros V. Georgakopoulos; Vassilis P. Plagianakos; Ilias Maglogiannis

A number of Computer Vision and Artificial Intelligence applications are based on descriptors that are extracted from imaged objects. One widely used class of such descriptors are the invariant moments, with Zernike moments being reported as some of the most efficient descriptors. The calculation of image moments requires the definition of the distance and angle of any pixel from the centroid pixel. While this is straightforward in images acquired by projective cameras, it is complicated and time-consuming for omni-directional images obtained by fish-eye cameras. In this work, we provide an efficient way of calculating moment invariants in the time domain from omni-directional images, using the calibration of the acquiring camera. The proposed implementation of the descriptors is assessed on indoor video in terms of the classification accuracy of segmented human silhouettes. Numerical results are given for different poses of human silhouettes, and comparisons between the traditional and the proposed implementation of the Zernike moments are presented. The computational complexity of the proposed implementation is also provided.


International Conference on Artificial Neural Networks | 2018

Assessing Image Analysis Filters as Augmented Input to Convolutional Neural Networks for Image Classification

Kostas Delibasis; Ilias Maglogiannis; Spiros V. Georgakopoulos; Konstantina Kottari; Vassilis P. Plagianakos

Convolutional Neural Networks (CNNs) have been proven very effective in image classification and object recognition tasks, often exceeding the performance of traditional image analysis techniques. However, training a CNN requires very extensive datasets and imposes a very high computational burden. In this work, we test the hypothesis that if the input includes the responses of established image analysis filters that detect salient image structures, the CNN should be able to perform better than an identical CNN fed with the plain RGB images only. Thus, we employ a number of families of image analysis filter banks and use their responses to compile a small number of filtered responses for each original RGB image. We perform a large number of CNN training/testing repetitions for a 40-class building recognition problem, on a publicly available image database, using the original images as well as the original images augmented by the compiled filter responses. Results show that the accuracy achieved by the CNN with the augmented input is consistently higher than that of the plain RGB input, both across different repetitions of the experiment and throughout the iterations of each repetition.
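
The input-augmentation idea can be sketched as follows, with Sobel magnitude and Laplacian-of-Gaussian responses standing in for the filter banks evaluated in the paper (which the abstract does not enumerate).

```python
import numpy as np
from scipy import ndimage

def augment_channels(rgb):
    """Append a few generic filter responses to an RGB image so the CNN
    receives (H, W, 3 + k) input instead of plain RGB. Sobel magnitude
    and Laplacian-of-Gaussian are stand-ins for the paper's filter banks."""
    gray = rgb.astype(np.float32).mean(axis=2)
    sobel = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
    log = ndimage.gaussian_laplace(gray, sigma=2.0)
    extra = np.stack([sobel, log], axis=2)
    extra /= np.abs(extra).max() + 1e-8       # keep scales comparable to RGB
    return np.concatenate([rgb.astype(np.float32) / 255.0, extra], axis=2)

# A CNN for this input simply declares input_shape=(H, W, 5); the rest of
# the architecture is unchanged.
```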

Collaboration


Dive into Spiros V. Georgakopoulos's collaborations.

Top co-author: Sotiris K. Tasoulis (University of Central Greece).