Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vassilis Tsagaris is active.

Publication


Featured research published by Vassilis Tsagaris.


Displays | 2005

Fusion of visible and infrared imagery for night color vision

Vassilis Tsagaris; Vassilis Anastassopoulos

A combined approach for fusing night-time infrared with visible imagery is presented in this paper. Fusion is based either on non-negative matrix factorization or on a transformation that takes perceptual attributes into consideration, and a color transfer technique gives the final fused images a natural day-time color appearance, accomplishing night color vision. In this way inappropriate color mappings are avoided and the overall discrimination capabilities are enhanced. Two different data sets are employed, and the experimental results establish the overall method as efficient, compact and perceptually meaningful.
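
The abstract does not spell out the transfer step; a minimal sketch of a Reinhard-style per-channel statistics transfer, a common color-transfer technique and an assumption here rather than necessarily the authors' exact method, illustrates the idea:

```python
import numpy as np

def color_transfer(fused, reference):
    """Give `fused` the per-channel mean and standard deviation of a
    day-time `reference` image (Reinhard-style statistics matching).
    Inputs: float arrays of shape (H, W, 3) with values in [0, 1].
    A sketch only; the paper's exact transfer may differ."""
    out = np.empty_like(fused)
    for c in range(3):
        src, ref = fused[..., c], reference[..., c]
        s_std = src.std()
        s_std = s_std if s_std > 0 else 1.0   # guard against flat channels
        out[..., c] = (src - src.mean()) / s_std * ref.std() + ref.mean()
    return np.clip(out, 0.0, 1.0)
```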


IEEE Transactions on Geoscience and Remote Sensing | 2005

Fusion of hyperspectral data using segmented PCT for color representation and classification

Vassilis Tsagaris; Vassilis Anastassopoulos; George A. Lampropoulos

Fusion of hyperspectral data is proposed by means of partitioning the hyperspectral bands into subgroups, prior to principal components transformation (PCT). The first principal component of each subgroup is employed for image visualization. The proposed approach is general, with the number of bands in each subgroup being application dependent. Nevertheless, the paper focuses on partitions with three subgroups suitable for RGB representation. One of them employs matched-filtering based on the spectral characteristics of various materials and is very promising for classification purposes. The information content of the hyperspectral bands as well as the quality of the obtained RGB images are quantitatively assessed using measures such as the correlation coefficient, the entropy, and the maximum energy-minimum correlation index. The classification performance of the proposed partitioning approaches is tested using the K-means algorithm.
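
As a rough illustration, a segmented-PCT visualization can be sketched in a few lines of NumPy. The three-subgroup partition `groups` is application dependent, as the abstract notes, so it is an input of the sketch rather than something the sketch chooses:

```python
import numpy as np

def segmented_pct_rgb(cube, groups):
    """Map a hyperspectral cube (H, W, B) to RGB: for each of three
    band subgroups, take the first principal component and stretch
    it to [0, 1] as one color channel. A sketch of the idea only."""
    H, W, _ = cube.shape
    rgb = np.zeros((H, W, 3))
    for ch, bands in enumerate(groups):
        X = cube[..., bands].reshape(H * W, len(bands))
        X = X - X.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
        pc1 = X @ vecs[:, -1]            # eigenvector of the largest eigenvalue
        rng = pc1.max() - pc1.min()
        rgb[..., ch] = ((pc1 - pc1.min()) / (rng if rng > 0 else 1.0)).reshape(H, W)
    return rgb

# e.g. rgb = segmented_pct_rgb(cube, [range(0, 40), range(40, 80), range(80, 120)])
```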


International Journal of Remote Sensing | 2005

Multispectral image fusion for improved RGB representation based on perceptual attributes

Vassilis Tsagaris; Vassilis Anastassopoulos

A pixel‐level fusion technique for RGB representation of multispectral images is proposed. The technique results in highly correlated RGB components, a fact which occurs in natural colour images and is strictly related to the colour perception attributes of the human eye. Accordingly, specific properties for the covariance matrix of the final RGB image are demanded. Mutual information is employed as an objective criterion for quality refinement. The method provides dimensionality reduction, while the resulting RGB colour image is perceptually of high quality. Comparisons with existing techniques are carried out using both subjective and objective measures.


Optical Engineering | 2006

Global measure for assessing image fusion methods

Vassilis Tsagaris; Vassilis Anastassopoulos

The objective evaluation of the performance of pixel level fusion methods is addressed. For this purpose a global measure based on information theory is proposed. The measure employs mutual and conditional mutual information to assess and represent the amount of information transferred from the source images to the final fused gray-scale image. Accordingly, the common information contained in the source images is considered only once in the performance evaluation procedure. The experimental results clarify the applicability of the proposed measure in comparing different fusion methods or in optimizing the parameters of a specific algorithm.
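
The paper defines the measure's exact normalization; its two information-theoretic building blocks can be sketched from image histograms as follows (bin counts and variable ordering are assumptions of this sketch):

```python
import numpy as np

def mutual_info(a, f, bins=64):
    """I(A; F) between a source image `a` and the fused image `f`."""
    h, _, _ = np.histogram2d(a.ravel(), f.ravel(), bins=bins)
    p = h / h.sum()
    pa = p.sum(axis=1, keepdims=True)      # p(a)
    pf = p.sum(axis=0, keepdims=True)      # p(f)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pa @ pf)[nz])).sum())

def cond_mutual_info(b, f, a, bins=32):
    """I(B; F | A): information that source B shares with the fused
    image F beyond what source A already provides, so that common
    source information is counted only once."""
    h, _ = np.histogramdd(np.stack([b.ravel(), f.ravel(), a.ravel()], axis=1),
                          bins=bins)
    p = h / h.sum()                        # p(b, f, a)
    pa = p.sum(axis=(0, 1))                # p(a)
    pba = p.sum(axis=1)                    # p(b, a)
    pfa = p.sum(axis=0)                    # p(f, a)
    nz = p > 0
    num = p * pa[None, None, :]
    den = pba[:, None, :] * pfa[None, :, :]
    return float((p[nz] * np.log2(num[nz] / den[nz])).sum())
```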


Optical Engineering | 2009

Objective evaluation of color image fusion methods

Vassilis Tsagaris

A measure for objectively assessing the performance of color image fusion methods is proposed. Two different aspects are considered in establishing the proposed measure, namely the amount of common information between the source images and the final fused image, and the distribution of color information in the final image needed to achieve an optimal color representation. Mutual information and conditional mutual information are employed to assess information transfer between the source images and the final fused image. Simultaneously, the distribution of colors in the final image is explored by means of the hue coordinate in the perceptually uniform CIELAB space. The proposed measure does not depend on the use of a target fused image for the objective performance evaluation. It is employed experimentally for the objective evaluation of fusion methods on medical imaging and night vision data.
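
For the color-distribution term, the hue coordinate in CIELAB is the angle of the (a*, b*) pair. A minimal sketch of extracting a hue distribution, using scikit-image for the color conversion (an implementation choice of this sketch, not something the paper prescribes), is:

```python
import numpy as np
from skimage import color   # scikit-image, used here only for RGB -> CIELAB

def hue_distribution(rgb, bins=36):
    """Normalized histogram of the CIELAB hue angle h_ab = atan2(b*, a*)
    for an RGB image in [0, 1]. A sketch of the color-distribution
    ingredient; the paper defines how it enters the measure."""
    lab = color.rgb2lab(rgb)
    hue = np.arctan2(lab[..., 2], lab[..., 1])          # radians in (-pi, pi]
    hist, _ = np.histogram(hue, bins=bins, range=(-np.pi, np.pi))
    return hist / hist.sum()
```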


IEEE Transactions on Geoscience and Remote Sensing | 2012

An FPGA-Based Hardware Implementation of Configurable Pixel-Level Color Image Fusion

Dimitrios Besiris; Vassilis Tsagaris; Nikolaos Fragoulis; Christos Theoharatos

Image fusion has attracted a lot of interest in recent years. As a result, different fusion methods have been proposed, mainly in the fields of remote sensing and computer vision (e.g., night vision), while hardware implementations have also been presented to tackle real-time processing in different application domains. In this paper, a linear pixel-level fusion method suitable for remotely sensed data is employed and implemented on a field-programmable-gate-array-based hardware system. Our work incorporates a fusion technique (called VTVA) that is a linear transformation based on the Cholesky decomposition of the covariance matrix of the source data. The circuit is composed of different modules, including covariance estimation, Cholesky decomposition, and transformation modules. The resulting compact hardware design can be characterized as a linear configurable implementation, since the color properties of the final fused image can be selected by the user by controlling the resulting correlation between color components.
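
In software terms, a Cholesky-based linear transform of this kind amounts to covariance shaping. The NumPy sketch below mirrors what the named hardware modules compute, under that reading of the abstract; the fixed-point and pipelining details of the FPGA design are of course not captured:

```python
import numpy as np

def vtva_like_fusion(bands, target_cov):
    """Linearly map three source bands (H, W, 3) to color components
    whose covariance equals the user-selected `target_cov` (3 x 3):
    with Ls, Lt the Cholesky factors of the source and target
    covariance, T = Lt @ inv(Ls) gives cov(T X) = Lt @ Lt.T = target_cov.
    A software sketch of the transform, not the FPGA implementation."""
    H, W, C = bands.shape
    X = bands.reshape(H * W, C).T
    X = X - X.mean(axis=1, keepdims=True)
    Ls = np.linalg.cholesky(np.cov(X))     # covariance estimation module
    Lt = np.linalg.cholesky(target_cov)    # Cholesky decomposition module
    T = Lt @ np.linalg.inv(Ls)             # transformation module
    return (T @ X).T.reshape(H, W, C)
```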


Remote Sensing | 2004

Information measure for assessing pixel-level fusion methods

Vassilis Tsagaris; Vassilis Anastassopoulos

An objective measure for evaluating the performance of pixel-level fusion methods is introduced in this work. The proposed measure employs mutual information and conditional mutual information to assess and represent the amount of information transferred from the source images to the final fused greyscale image. Accordingly, the common information contained in the source images is considered only once in the formation of the final image. The measure can be used regardless of the number of source images or of assumptions about the intensity values, and there is no need for an ideal or test image. The experimental results clarify the usefulness of the proposed measure.


Remote Sensing | 2004

Interpolation in multispectral data using neural networks

Vassilis Tsagaris; Antigoni Panagiotopoulou; Vassilis Anastassopoulos

A novel procedure is proposed that increases the spatial resolution of multispectral data and simultaneously creates a high-quality RGB fused representation. For this purpose, neural networks are employed and a successive training procedure is applied in order to incorporate into the network structure knowledge about recovering lost frequencies, thus giving fine-resolution output color images. MERIS multispectral data are employed to demonstrate the performance of the proposed method.
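
The paper's architecture and training schedule are not given here, so the following PyTorch sketch is only an assumed setup: a small network trained to map a 3x3 low-resolution neighbourhood of all bands to the corresponding high-resolution pixel. The layer sizes and the patch geometry are illustrative assumptions; only the 15-band MERIS dimension comes from the instrument itself.

```python
import torch
import torch.nn as nn

BANDS = 15                                  # MERIS provides 15 spectral bands

# Assumed architecture: an MLP from a flattened 3x3 low-resolution
# neighbourhood (all bands) to one interpolated high-resolution pixel.
model = nn.Sequential(
    nn.Linear(9 * BANDS, 64), nn.ReLU(),
    nn.Linear(64, BANDS),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(lr_patches, hr_pixels):
    """One supervised step on (N, 9*BANDS) patches and (N, BANDS) targets."""
    opt.zero_grad()
    loss = loss_fn(model(lr_patches), hr_pixels)
    loss.backward()
    opt.step()
    return loss.item()
```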


Archive | 2011

Performance Evaluation of Image Fusion Methods

Vassilis Tsagaris; Nikos Fragoulis; Christos Theoharatos

The recent advances in sensor technology, microelectronics and multisensor systems have motivated researchers towards processing techniques that combine the information obtained from different sensors. For this purpose a large number of image fusion techniques [Mukhopadhyay & Chanda, 2001; Pohl & van Genderen, 1998; Tsagaris & Anastassopoulos, 2005; Piella, 2003] have been proposed in the fields of remote sensing, medical diagnostics, military applications, surveillance etc. The main goal of these image fusion techniques is to provide a compact representation of the multiple input images in a single grayscale one that contains all the important original features. Such an image provides improved interpretation capabilities but can also be used for further computer processing tasks, like feature extraction or classification.

The performance of image fusion techniques is sometimes assessed subjectively by human visual inspection. The reproduction of subjective tests is often time-consuming and expensive, while the exact same conditions for the test cannot be guaranteed. This has led to a rising demand for objective measures to rapidly compare the results obtained with different algorithms or to obtain optimal settings for a specific fusion algorithm. The objective evaluation of the performance of pixel-level fusion methods is addressed in this book chapter. Image fusion processes can be classified into grayscale or color methods depending on the resulting fused image. For this purpose the general framework of objective evaluation of image fusion is discussed and different fusion measures are reviewed.

Moreover, a global measure for grayscale image fusion schemes, IFPM, based on information theory is presented. The measure employs mutual and conditional mutual information to assess and represent the amount of information transferred from the source images to the final fused grayscale image. Accordingly, the common information contained in the source images is considered only once in the performance evaluation procedure. The experimental results clarify the applicability of the IFPM measure in comparing different fusion methods or in optimizing the parameters of a specific algorithm.

Finally, a measure for objectively assessing the performance of color image fusion methods, CIFM, is presented in this chapter. Two different aspects are considered in establishing the measure, namely the amount of common information between the source images and the final fused image as well as the distribution of color information in the final image.


Archive | 2009

Information Fusion in Ad hoc Wireless Sensor Networks for Aircraft Health Monitoring

Nikos Fragoulis; Vassilis Tsagaris; Vassilis Anastassopoulos

In this paper the use of an ad hoc wireless sensor network for implementing a structural health monitoring system is discussed. The network consists of sensors deployed throughout the aircraft. These sensors, taking the form of microelectronic chips composed of sensing, data-processing and communication components, can easily be embedded in any mechanical aircraft component. The established sensor network, due to its ad hoc nature, is easily scalable, allowing any number of sensors to be added or removed. The positions of the sensor nodes need not be engineered or predetermined, giving the network the ability to be deployed at inaccessible points. Information collected from various sensors of different modalities throughout the aircraft is then fused in order to provide a more comprehensive picture of the aircraft's structural health. Sensor-level fusion along with decision-quality information is used in order to enhance detection performance.

Collaboration


Dive into Vassilis Tsagaris's collaborations.

Top Co-Authors

Apostolos Ifantis

Technological Educational Institute of Patras
