Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kostas Delibasis is active.

Publication


Featured research published by Kostas Delibasis.


Multimedia Tools and Applications | 2018

Real time vision-based measurements for quality control of industrial rods on a moving conveyor

Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos

This work proposes a fully automated approach for vision-based quality control of manufactured metal rods. The proposed approach detects the main axis of the rod and compares its curvature against specifications. The algorithm utilizes video acquired in real time by a single monocular USB camera. A signal processing module identifies in real time the video frame that images the rod at the appropriate position on the conveyor. Initialization of the algorithm can take place either manually or by utilizing the calibration of the camera. Concurrently, the image processing module estimates the curvature of the rod using its medial axis, to classify the rod as normal or defective. Initial results show that the proposed algorithm can operate in real time with very high accuracy under controlled illumination conditions and backgrounds. The methodology is capable of processing video at 30 frames per second using a general-purpose laptop.
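The abstract does not spell out the curvature computation, so the following is only a minimal sketch of the medial-axis idea, assuming a binary rod mask, scikit-image's skeletonize, and a low-order polynomial fit; the tolerance value and fitting degree are hypothetical choices, not taken from the paper.

```python
import numpy as np
from skimage.morphology import skeletonize

def rod_curvature_from_mask(mask, poly_degree=2):
    """Estimate peak rod curvature from a binary mask via its medial axis.

    mask: 2D boolean array, True on rod pixels.
    """
    # Medial axis of the rod silhouette
    skeleton = skeletonize(mask)
    ys, xs = np.nonzero(skeleton)

    # Fit y = f(x) with a low-order polynomial as a smooth axis model
    poly = np.poly1d(np.polyfit(xs, ys, poly_degree))
    d1, d2 = poly.deriv(1), poly.deriv(2)

    # Curvature kappa(x) = |y''| / (1 + y'^2)^(3/2)
    x = np.linspace(xs.min(), xs.max(), 200)
    kappa = np.abs(d2(x)) / (1.0 + d1(x) ** 2) ** 1.5
    return kappa.max()

def classify_rod(mask, curvature_tolerance=1e-3):
    """Label the rod 'normal' if its peak curvature stays within a (hypothetical) tolerance."""
    return "normal" if rod_curvature_from_mask(mask) <= curvature_tolerance else "defective"
```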


Neurocomputing | 2017

Pose Recognition Using Convolutional Neural Networks on Omni-directional Images

Spiros V. Georgakopoulos; Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos; Ilias Maglogiannis

Convolutional neural networks (CNNs) are used frequently in several computer vision applications. In this work, we present a methodology for pose classification of binary human silhouettes using CNNs, enhanced with image features based on Zernike moments modified for fisheye images. The training set consists of synthetic images generated from three-dimensional (3D) human models, using the calibration model of an omni-directional (fisheye) camera. Testing is performed using real images, also acquired by omni-directional cameras. Here, we employ our previously proposed geodesically corrected Zernike moments (GZMI) and confirm their merit as stand-alone descriptors of calibrated fisheye images. Subsequently, we explore the efficiency of transfer learning from the model trained on synthetically generated silhouettes to the problem of real pose classification, by continuing the training of the already trained network using a few frames of annotated real silhouettes. Furthermore, we propose an enhanced architecture that combines the calculated GZMI features of each image with the features generated at the CNN's last convolutional layer, both feeding the first hidden layer of the fully connected network at the end of the CNN. Testing is performed using synthetically generated silhouettes as well as real ones. Results show that the proposed enhancement of the CNN architecture, combined with transfer learning, improves pose classification accuracy for both the synthetic and the real silhouette images.
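As an illustration of the described fusion, here is a minimal Keras sketch in which externally computed Zernike-moment descriptors are concatenated with the flattened convolutional features before the first dense layer; the layer sizes, GZMI dimensionality and number of pose classes are assumptions, not the paper's actual architecture.

```python
from tensorflow.keras import layers, Model

def fused_pose_cnn(img_shape=(128, 128, 1), n_gzmi=36, n_poses=4):
    """CNN whose last convolutional features are concatenated with
    externally computed Zernike-moment descriptors before the dense layers."""
    image = layers.Input(shape=img_shape, name="silhouette")
    gzmi = layers.Input(shape=(n_gzmi,), name="gzmi_features")

    x = layers.Conv2D(32, 3, activation="relu")(image)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)

    # Fusion: conv features and GZMI descriptors feed the same hidden layer
    merged = layers.Concatenate()([x, gzmi])
    h = layers.Dense(128, activation="relu")(merged)
    out = layers.Dense(n_poses, activation="softmax")(h)

    model = Model(inputs=[image, gzmi], outputs=out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```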


international conference on imaging systems and techniques | 2016

Real time measurements for quality control of industrial rod manufacturing

Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos

This work proposes a fully automated approach for vision-based quality control of manufactured metal rods. The proposed approach detects the features of the rod (holes) and compares its curvature against specifications. The algorithm utilizes a single monocular image. Initial results show that the proposed algorithm can operate under various camera geometric setups, as well as varying illumination conditions and backgrounds. The methodology is capable of processing an image in less than 0.2 seconds using a general-purpose laptop, thus providing the capability for real-time processing of manufactured parts on a moving conveyor.


artificial intelligence applications and innovations | 2014

Fish-Eye Camera Video Processing and Trajectory Estimation Using 3D Human Models

Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos; Ilias Maglogiannis

Video processing and analysis are core applications of artificial intelligence. Frequently, silhouettes in video frames lack depth information, especially in the case of a single camera. In this work, we utilize a three-dimensional human body model, combined with a calibrated fish-eye camera, to obtain three-dimensional (3D) cues. More specifically, a generic 3D human model in various poses is derived from a novel mathematical formalization of a well-known class of geometric primitives, the generalized cylinders, which exhibits advantages over existing parametric definitions. The use of the fish-eye camera allows the generation of rendered silhouettes using these 3D models. Moreover, we present a very efficient algorithm for matching the 3D model with a real human figure in order to recognize the posture of a monitored person. First, the silhouette is segmented in each frame and the real position of the human is calculated. Subsequently, an optimization process adjusts the parameters of the 3D human model in an attempt to match the pose (position and orientation relative to the camera) of the real human. The experimental results are promising, since the pose, the trajectory and the orientation of the human can be accurately estimated.
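A minimal sketch of the silhouette-matching optimization step, assuming a derivative-free optimizer over a small pose-parameter vector and an IoU-based cost; render_model is a placeholder for the paper's generalized-cylinder model projected through the fish-eye calibration.

```python
import numpy as np
from scipy.optimize import minimize

def silhouette_iou(a, b):
    """Intersection-over-union of two binary silhouette masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def fit_pose(observed_mask, render_model, x0):
    """Adjust pose parameters (e.g. position and orientation) so the rendered
    3D-model silhouette matches the observed one.

    render_model(params) -> binary mask  # placeholder for the paper's
    generalized-cylinder model projected through the fish-eye calibration.
    """
    def cost(params):
        return 1.0 - silhouette_iou(observed_mask, render_model(params))

    # Derivative-free search, since rendering is not differentiable
    result = minimize(cost, x0, method="Nelder-Mead")
    return result.x
```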


international conference on engineering applications of neural networks | 2017

Detection of Malignant Melanomas in Dermoscopic Images Using Convolutional Neural Network with Transfer Learning

Spiros V. Georgakopoulos; Konstantina Kottari; Kostas Delibasis; Vassilis Plagianakos; Ilias Maglogiannis

In this work, we report the use of convolutional neural networks (CNNs) for the detection of malignant melanomas against nevus skin lesions in a dataset of dermoscopic images of the same magnification. The technique of transfer learning is utilized to compensate for the limited size of the available image dataset. Results show that including transfer learning in training CNN architectures significantly improves the achieved classification results.
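A minimal transfer-learning sketch in Keras, assuming an ImageNet-pretrained backbone (VGG16 here is only an example choice; the paper does not specify it) whose convolutional layers are frozen while a new binary melanoma/nevus head is trained.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def melanoma_classifier(input_shape=(224, 224, 3)):
    """Binary melanoma-vs-nevus classifier built on a pretrained backbone."""
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # keep the pretrained convolutional filters fixed

    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # melanoma probability

    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```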


artificial intelligence applications and innovations | 2016

Convolutional Neural Networks for Pose Recognition in Binary Omni-directional Images

Spiros V. Georgakopoulos; Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos; Ilias Maglogiannis

In this work, we present a methodology for pose classification of silhouettes using convolutional neural networks (CNNs). The training set consists exclusively of synthetic images generated from three-dimensional (3D) human models, using the calibration of an omni-directional (fish-eye) camera. Thus, we are able to generate the large training set that CNNs usually require. Testing is performed using synthetically generated silhouettes as well as real silhouettes. This work is in the same vein as our previous work utilizing Zernike image descriptors designed specifically for a calibrated fish-eye camera. Results show that the proposed method improves pose classification accuracy for synthetic images, but it is outperformed by our previously proposed Zernike descriptors on real silhouettes. The computational complexity of the proposed methodology is also examined and the corresponding results are provided.
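To illustrate how synthetic silhouettes can be rendered through a fish-eye calibration, here is a minimal sketch using the common equidistant fish-eye model; the actual calibration model used in the paper may differ.

```python
import numpy as np

def project_fisheye_equidistant(points_3d, f, cx, cy):
    """Project 3D points (camera coordinates, Z forward) with an
    equidistant fish-eye model: r = f * theta.

    points_3d: (N, 3) array; f: focal length in pixels; (cx, cy): principal point.
    Returns (N, 2) pixel coordinates.
    """
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

def rasterize_silhouette(pixels, shape=(480, 480)):
    """Mark projected model points in a binary image to form a silhouette."""
    mask = np.zeros(shape, dtype=bool)
    cols = np.clip(pixels[:, 0].astype(int), 0, shape[1] - 1)
    rows = np.clip(pixels[:, 1].astype(int), 0, shape[0] - 1)
    mask[rows, cols] = True
    return mask
```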


international conference on universal access in human computer interaction | 2014

Activity Recognition in Assistive Environments: The STHENOS Approach

Ilias Maglogiannis; Kostas Delibasis; Dimitrios I. Kosmopoulos; Theodosios Goudas; Charalampos Doukas

The paper presents the research conducted within the framework of the STHENOS project (www.sthenos.gr), which aims at the development of methodologies and systems for assistive environments. The proposed systems and applications are capable of recognizing human activities, assisting disabled or elderly persons in performing everyday activities, and detecting abnormal situations such as a fall or long periods of inactivity. The paper includes the technical details of the proposed activity recognition methodology using fisheye video cameras and wearable sensors. Initial results have proven the feasibility of the adopted approaches and the efficiency of the implemented system.


international conference on artificial neural networks | 2018

Assessing Image Analysis Filters as Augmented Input to Convolutional Neural Networks for Image Classification

Kostas Delibasis; Ilias Maglogiannis; Spiros V. Georgakopoulos; Konstantina Kottari; Vassilis P. Plagianakos

Convolutional Neural Networks (CNNs) have proven very effective in image classification and object recognition tasks, often exceeding the performance of traditional image analysis techniques. However, training a CNN requires very extensive datasets and imposes a very high computational burden. In this work, we test the hypothesis that if the input includes the responses of established image analysis filters that detect salient image structures, the CNN should perform better than an identical CNN fed with the plain RGB images only. Thus, we employ a number of families of image analysis filter banks and use their responses to compile a small number of filtered responses for each original RGB image. We perform a large number of CNN training/testing repetitions for a 40-class building recognition problem on a publicly available image database, using the original images as well as the original images augmented by the compiled filter responses. Results show that the accuracy achieved by the CNN with the augmented input is consistently higher than that of the RGB-only input, both across different repetitions of the experiment and throughout the iterations of each repetition.
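A minimal sketch of the input-augmentation idea: hand-crafted filter responses stacked onto the RGB image as extra channels before it is fed to the CNN. The specific filters shown (Sobel and Gabor) are illustrative choices, not necessarily the filter banks used in the paper.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel, gabor

def augment_with_filter_responses(rgb):
    """Stack hand-crafted filter responses onto an RGB image as extra channels.

    rgb: (H, W, 3) float image in [0, 1].
    Returns an (H, W, 3 + k) array fed to the CNN instead of plain RGB.
    """
    gray = rgb2gray(rgb)
    extra = [
        sobel(gray),                                    # edge strength
        gabor(gray, frequency=0.2)[0],                  # oriented texture (real part)
        gabor(gray, frequency=0.2, theta=np.pi / 2)[0], # second orientation
    ]
    return np.concatenate([rgb] + [c[..., None] for c in extra], axis=-1)
```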


Neural Computing and Applications | 2018

Improving the performance of convolutional neural network for skin image classification using the response of image analysis filters

Spiros V. Georgakopoulos; Konstantina Kottari; Kostas Delibasis; Vassilis P. Plagianakos; I. Maglogiannis

In this work, we focus on the analysis of dermoscopy images using convolutional neural networks (CNNs). More specifically, we investigate the value of augmenting CNN inputs with the responses of mid-level computer vision filters, using plain RGB pixel values as the baseline input. The proposed methodology is applied to two pattern recognition problems with clinical significance: the binary classification of skin lesions in dermoscopy images into "malignant" and "non-malignant" (nevus) cases, and the four-class classification of superpixels into the differential structures that appear in skin lesions. The transfer learning technique is also utilized to compensate for the limited size of the available training image datasets. Results show that filter-based input augmentation using the responses of mid-level computer vision filters significantly improves the classification accuracy achieved by the CNN architectures and simplifies the weights of the receptive fields.


international conference on imaging systems and techniques | 2016

Shape reconstruction using fisheye and projective cameras

Konstantina Kottari; Kostas Delibasis

Reconstructing shape from silhouettes is an interesting topic in computer vision. In the case of projective (pinhole) cameras, this task has been solved with several variations. The increasing use of omnidirectional cameras, due to their apparent advantage of covering a 180-degree field of view, necessitates the application of shape from silhouette to mixed camera types, a subject that has not been well studied. In this work, we describe an experimental setup of three very low-cost projective IP cameras combined with a ceiling-mounted fisheye camera (a special type of omni-directional camera). Results are presented for reconstructing synthetic 3D human models as well as real human subjects, and show that including the fisheye camera enhances the accuracy of the reconstruction.
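A minimal sketch of silhouette-based voxel carving with per-camera projection callables, so pinhole and fisheye cameras can be mixed; the projection functions and grid resolution are placeholders, not the paper's implementation.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, bounds, resolution=64):
    """Reconstruct an occupancy grid from silhouettes of mixed camera types.

    silhouettes: list of binary masks, one per camera.
    projections: list of callables mapping (N, 3) world points to (N, 2) pixels
                 (each encapsulates its own model: pinhole or fisheye).
    bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) of the working volume.
    """
    axes = [np.linspace(lo, hi, resolution) for lo, hi in bounds]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    voxels = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)

    occupied = np.ones(len(voxels), dtype=bool)
    for mask, project in zip(silhouettes, projections):
        px = np.round(project(voxels)).astype(int)
        h, w = mask.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[px[inside, 1], px[inside, 0]]
        # A voxel survives only if every camera sees it inside the silhouette
        occupied &= hit

    return occupied.reshape((resolution,) * 3)
```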

Collaboration


Dive into Kostas Delibasis's collaborations.

Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar

C. Aronis

University of Thessaly


Dimitrios I. Kosmopoulos

University of Texas at Arlington
