
Publication


Featured research published by Elli Angelopoulou.


IEEE Transactions on Information Forensics and Security | 2012

An Evaluation of Popular Copy-Move Forgery Detection Approaches

Vincent Christlein; Christian Riess; Johannes Jordan; Elli Angelopoulou

A copy-move forgery is created by copying and pasting content within the same image, and potentially postprocessing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed focusing on different types of postprocessed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various postprocessing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms in a common pipeline. In this paper, we examined the 15 most prominent feature sets. We analyzed the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset, and a software framework for systematic image manipulation. Experiments show that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA, and Zernike features perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions.
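The common pipeline described in the abstract (per-block features, matching, shift-vector filtering) can be sketched in miniature. This is an illustrative toy, not the authors' framework: the zero-mean block feature stands in for the DCT/PCA/Zernike features under evaluation, and the function name, thresholds, and block size are all hypothetical choices.

```python
import numpy as np
from collections import Counter

def detect_copy_move(img, block=4, sim_thresh=1e-6, min_count=3):
    """Block-based pipeline in miniature: extract a feature per block, match
    near-identical features, keep shift vectors shared by many block pairs."""
    h, w = img.shape
    feats, coords = [], []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            b = img[y:y + block, x:x + block].astype(float)
            feats.append((b - b.mean()).ravel())   # toy zero-mean feature
            coords.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T)        # identical rows become neighbours
    shifts = []
    for i, j in zip(order[:-1], order[1:]):
        if np.sum((feats[i] - feats[j]) ** 2) < sim_thresh:
            p, q = sorted([coords[i], coords[j]])
            if abs(q[0] - p[0]) + abs(q[1] - p[1]) > block:   # skip overlaps
                shifts.append((q[0] - p[0], q[1] - p[1]))
    # a shift vector supported by many block pairs marks a copied region
    return [s for s, c in Counter(shifts).items() if c >= min_count]
```

Pasting a patch elsewhere in an otherwise random image makes the shared shift vector stand out immediately.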


IET Image Processing | 2013

Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database

Jan Odstrcilik; Radim Kolar; Attila Budai; Joachim Hornegger; Jiri Jan; Jirí Gazárek; Tomas Kubena; Pavel Cernosek; Ondrej Svoboda; Elli Angelopoulou

Automatic assessment of retinal vessels plays an important role in the diagnosis of various eye diseases, as well as systemic diseases. Public screening is highly desirable for prompt and effective treatment, since such diseases need to be diagnosed at an early stage. Automated and accurate segmentation of the retinal blood vessel tree is one of the challenging tasks in the computer-aided analysis of fundus images today. We improve the concept of matched filtering, and propose a novel and accurate method for segmenting retinal vessels. Our goal is to be able to segment blood vessels with varying vessel diameters in high-resolution colour fundus images. All recent authors compare their vessel segmentation results to each other using only low-resolution retinal image databases. Consequently, we provide a new publicly available high-resolution fundus image database of healthy and pathological retinas. Our performance evaluation shows that the proposed blood vessel segmentation approach is at least comparable with recent state-of-the-art methods. It outperforms most of them with an accuracy of 95% evaluated on the new database.
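The classic matched-filter concept this paper improves correlates the image with a zero-mean Gaussian-profile kernel at several orientations (and scales, for varying vessel diameters) and keeps the maximum response. A minimal sketch, assuming bright vessels (a fundus image would be inverted first, since vessels appear dark); the kernel size and angle sampling are illustrative, not the paper's parameters.

```python
import numpy as np

def matched_filter_kernel(sigma, length, angle_deg):
    """Zero-mean Gaussian-profile kernel modelling a vessel cross-section,
    rotated to the given orientation."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    th = np.deg2rad(angle_deg)
    d = -xs * np.sin(th) + ys * np.cos(th)   # distance across the vessel axis
    k = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    return k - k.mean()                      # zero mean: flat regions respond 0

def vessel_response(img, sigmas=(1.0,), angles=range(0, 180, 15)):
    """Maximum correlation response over orientations and scales; high values
    mark pixels lying on line-like (vessel) structures."""
    h, w = img.shape
    out = np.zeros((h, w))
    for s in sigmas:
        for a in angles:
            k = matched_filter_kernel(s, 5, a)
            for y in range(2, h - 2):
                for x in range(2, w - 2):
                    r = np.sum(img[y - 2:y + 3, x - 2:x + 3] * k)
                    out[y, x] = max(out[y, x], r)
    return out
```

On a synthetic image with a single bright vertical line, the response peaks on the line and stays near zero on the flat background.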


IEEE Transactions on Information Forensics and Security | 2013

Exposing Digital Image Forgeries by Illumination Color Classification

T. J. de Carvalho; Christian Riess; Elli Angelopoulou; Helio Pedrini; A. de Rezende Rocha

For decades, photographs have been used to document space-time events and they have often served as evidence in courts. Although photographers are able to create composites of analog pictures, this process is very time consuming and requires expert knowledge. Today, however, powerful digital image editing software makes image modifications straightforward. This undermines our trust in photographs and, in particular, questions pictures as evidence for real-world events. In this paper, we analyze one of the most common forms of photographic manipulation, known as image composition or splicing. We propose a forgery detection method that exploits subtle inconsistencies in the color of the illumination of images. Our approach is machine-learning-based and requires minimal user interaction. The technique is applicable to images containing two or more people and requires no expert interaction for the tampering decision. To achieve this, we incorporate information from physics- and statistical-based illuminant estimators on image regions of similar material. From these illuminant estimates, we extract texture- and edge-based features which are then provided to a machine-learning approach for automatic decision-making. The classification performance using an SVM meta-fusion classifier is promising. It yields detection rates of 86% on a new benchmark dataset consisting of 200 images, and 83% on 50 images that were collected from the Internet.


Medical Image Analysis | 2012

An endoscopic 3D scanner based on structured light

Christoph Schmalz; Frank Forster; Anton Schick; Elli Angelopoulou

We present a new endoscopic 3D scanning system based on Single Shot Structured Light. The proposed design makes it possible to build an extremely small scanner. The sensor head contains a catadioptric camera and a pattern projection unit. The paper describes the working principle and calibration procedure of the sensor. The prototype sensor head has a diameter of only 3.6mm and a length of 14mm. It is mounted on a flexible shaft. The scanner is designed for tubular cavities and has a cylindrical working volume of about 30mm length and 30mm diameter. It acquires 3D video at 30 frames per second and typically generates approximately 5000 3D points per frame. By design, the resolution varies over the working volume, but is generally better than 200μm. A prototype scanner has been built and is evaluated in experiments with phantoms and biological samples. The recorded average error on a known test object was 92μm.
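At the core of any structured-light scanner is ray-plane triangulation: once the single-shot pattern identifies which projected light plane illuminated a pixel, depth follows from intersecting the camera's viewing ray with that plane. A generic sketch of this step, not the calibrated catadioptric model of this particular sensor.

```python
import numpy as np

def triangulate(ray_dir, cam_origin, plane_point, plane_normal):
    """Intersect a camera viewing ray with a projected light plane; the
    intersection is the reconstructed 3D surface point."""
    d = np.asarray(ray_dir, float)
    o = np.asarray(cam_origin, float)
    n = np.asarray(plane_normal, float)
    p = np.asarray(plane_point, float)
    t = np.dot(p - o, n) / np.dot(d, n)   # ray parameter at the plane
    return o + t * d
```

For example, a ray from the origin with direction (0.1, 0.2, 1) meets the plane z = 10 at (1, 2, 10).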


Information Hiding | 2010

Scene illumination as an indicator of image manipulation

Christian Riess; Elli Angelopoulou

The goal of blind image forensics is to distinguish original and manipulated images. We propose illumination color as a new indicator for the assessment of image authenticity. Many images exhibit a combination of multiple illuminants (flash photography, mixture of indoor and outdoor lighting, etc.). In the proposed method, the user selects illuminated areas for further investigation. The illuminant colors are locally estimated, effectively decomposing the scene into a map of differently illuminated regions. Inconsistencies in such a map suggest possible image tampering. Our method is physics-based, which implies that the outcome of the estimation can be further constrained if additional knowledge on the scene is available. Experiments show that these illumination maps provide a useful and very general forensics tool for the analysis of color images.
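A minimal sketch of the illuminant-map idea, with a simple gray-world estimate per image tile standing in for the paper's physics-based local estimators; the tiling, the estimator, and the angular inconsistency score are all illustrative simplifications.

```python
import numpy as np

def illuminant_map(img, grid=2):
    """Local illuminant estimates on a grid of tiles: the gray-world estimate
    (normalized mean RGB) stands in for physics-based local estimators."""
    h, w, _ = img.shape
    gh, gw = h // grid, w // grid
    est = np.zeros((grid, grid, 3))
    for i in range(grid):
        for j in range(grid):
            tile = img[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            m = tile.reshape(-1, 3).mean(axis=0)
            est[i, j] = m / np.linalg.norm(m)
    return est

def inconsistency(est):
    """Largest angular deviation (degrees) of any local estimate from the
    mean estimate; a large value hints at tampering."""
    mean = est.reshape(-1, 3).mean(axis=0)
    mean = mean / np.linalg.norm(mean)
    cos = est.reshape(-1, 3) @ mean
    return float(np.degrees(np.arccos(np.clip(cos.min(), -1.0, 1.0))))
```

A uniformly lit image yields a near-zero score, while a region with a clearly different illuminant color pushes the score up.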


International Workshop on Information Forensics and Security | 2010

On rotation invariance in copy-move forgery detection

Vincent Christlein; Christian Riess; Elli Angelopoulou

The goal of copy-move forgery detection is to find duplicated regions within the same image. Copy-move detection algorithms operate roughly as follows: extract blockwise feature vectors, find similar feature vectors, and select feature pairs that share highly similar shift vectors. This selection plays an important role in the suppression of false matches. However, when the copied region is additionally rotated or scaled, shift vectors are no longer the most appropriate selection technique. In this paper, we present a rotation-invariant selection method, which we call Same Affine Transformation Selection (SATS). It shares the benefits of the shift vectors at only a slightly increased computational cost. As a byproduct, the proposed method explicitly recovers the parameters of the affine transformation applied to the copied region. We evaluate our approach on three recently proposed feature sets. Our experiments on ground truth data show that SATS outperforms shift vectors when the copied region is rotated, independent of the size of the image.
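The affine transformation SATS recovers can be estimated from three or more point correspondences by linear least squares, and candidate matches can then be tested against it. A generic sketch of that sub-step (not the authors' full selection algorithm); the function names and tolerance are illustrative.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    sol, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return sol.T                                   # 2x3: linear part | translation

def same_affine(p, q, M, tol=1e-6):
    """SATS-style check: does the candidate match (p, q) obey transform M?"""
    pred = M[:, :2] @ np.asarray(p, float) + M[:, 2]
    return bool(np.allclose(pred, q, atol=tol))
```

A 90-degree rotation plus a (5, 5) translation is recovered exactly from three correspondences, and further matches are accepted or rejected against it.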


Computer Vision and Pattern Recognition | 2007

Active Visual Object Reconstruction using D-, E-, and T-Optimal Next Best Views

Stefan Wenhardt; Benjamin Deutsch; Elli Angelopoulou; Heinrich Niemann

In visual 3-D reconstruction tasks with mobile cameras, one wishes to move the cameras so that they provide the views that lead to the best reconstruction result. When the camera motion is adapted during the reconstruction, the view of interest is the next best view for the current shape estimate. We present such a next best view planning approach for visual 3-D reconstruction. The reconstruction is based on a probabilistic state estimation with sensor actions. The next best view is determined by a metric of the state estimation's uncertainty. We compare three metrics: D-optimality, which is based on the entropy and corresponds to the (D)eterminant of the covariance matrix of a Gaussian distribution, E-optimality, and T-optimality, which are based on (E)igenvalues or on the (T)race of this matrix, respectively. We show the validity of our approach with a simulation as well as real-world experiments, and compare reconstruction accuracy and computation time for the optimality criteria.
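The three optimality criteria reduce to elementary matrix quantities of the state covariance; a next-best-view planner would pick the camera action minimizing the chosen score. A minimal sketch of just that scoring step.

```python
import numpy as np

def optimality_scores(cov):
    """D-, E-, and T-optimality scores of a covariance matrix; the next best
    view is the sensor action minimizing the chosen score."""
    return {
        "D": float(np.linalg.det(cov)),             # ellipsoid volume / entropy
        "E": float(np.linalg.eigvalsh(cov).max()),  # worst-direction variance
        "T": float(np.trace(cov)),                  # total variance
    }
```

For a diagonal covariance diag(1, 2, 4) the scores are D = 8, E = 4, T = 7.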


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Sign of Gaussian curvature from curve orientation in photometric space

Elli Angelopoulou; Lawrence B. Wolff

We compute the sign of Gaussian curvature using a purely geometric definition. Consider a point p on a smooth surface S and a closed curve γ on S which encloses p. The image of γ on the unit normal Gaussian sphere is a new curve β. The Gaussian curvature at p is defined as the ratio of the area enclosed by γ over the area enclosed by β as γ contracts to p. The sign of Gaussian curvature at p is determined by the relative orientations of the closed curves γ and β. We directly compute the relative orientation of two such curves from intensity data. We employ three unknown illumination conditions to create a photometric scatter plot. This plot is in one-to-one correspondence with the subset of the unit Gaussian sphere containing the mutually illuminated surface normals. This permits direct computation of the sign of Gaussian curvature without the recovery of surface normals. Our method is albedo invariant. We assume diffuse reflectance, but the nature of the diffuse reflectance can be general and unknown. Error analysis on simulated images shows the accuracy of our technique. We also demonstrate the performance of this methodology on empirical data.
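The orientation of a closed planar curve can be read off the sign of its shoelace (signed) area, which is the computational core of comparing γ with its Gaussian-sphere image β. A simplified sketch on polygonal curves; the paper works with photometric scatter plots rather than explicit polygons.

```python
def curve_orientation(points):
    """+1 for counter-clockwise, -1 for clockwise, via the shoelace signed area."""
    area2 = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area2 += x1 * y2 - x2 * y1
    return 1 if area2 > 0 else -1

def sign_of_gaussian_curvature(gamma, beta):
    """K > 0 if the Gaussian-sphere image beta preserves gamma's orientation,
    K < 0 if the orientation flips (saddle-like surface)."""
    return curve_orientation(gamma) * curve_orientation(beta)
```

Traversing a square counter-clockwise gives +1; reversing the image curve's traversal flips the sign, modelling a negatively curved point.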


International Conference on Computer Vision | 2011

Color constancy and non-uniform illumination: Can existing algorithms work?

Michael Bleier; Christian Riess; Shida Beigpour; Eva Eibenberger; Elli Angelopoulou; Tobias Tröger; André Kaup

The color and distribution of illuminants can significantly alter the appearance of a scene. The goal of color constancy (CC) is to remove the color bias introduced by the illuminants. Most existing CC algorithms assume a uniformly illuminated scene. However, more often than not, this assumption is an insufficient approximation of real-world illumination conditions (multiple light sources, shadows, interreflections, etc.). Thus, illumination should be locally determined, taking into account that multiple illuminants may be present. In this paper we investigate the suitability of adapting five state-of-the-art color constancy methods so that they can be used for local illuminant estimation. Given an arbitrary image, we segment it into superpixels of approximately similar color. Each of the methods is applied independently on every superpixel. For improved accuracy, these independent estimates are combined into a single illuminant-color value per superpixel. We evaluated different fusion methodologies. Our experiments indicate that the best performance is obtained by fusion strategies that combine the outputs of the estimators using regression.
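The adaptation strategy of running each estimator locally and fusing the results can be sketched with two classic estimators, gray-world and max-RGB. Plain averaging stands in for the paper's learned regression fusion, and a simple label map stands in for the superpixel segmentation; both are illustrative simplifications.

```python
import numpy as np

def gray_world(px):
    e = px.mean(axis=0)              # illuminant ~ mean RGB of the region
    return e / np.linalg.norm(e)

def max_rgb(px):
    e = px.max(axis=0)               # illuminant ~ per-channel maximum (white patch)
    return e / np.linalg.norm(e)

def local_illuminant(img, labels):
    """Per-region illuminant: run both estimators on each labelled region and
    average the unit-norm estimates (averaging stands in for learned fusion)."""
    out = {}
    for lab in np.unique(labels):
        px = img[labels == lab].reshape(-1, 3)
        e = gray_world(px) + max_rgb(px)
        out[int(lab)] = e / np.linalg.norm(e)
    return out
```

On a toy image whose right half is lit by a reddish illuminant, the fused estimate for that region is red-dominant while the neutral region stays achromatic.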


International Conference on Computer Vision | 1999

Spectral gradient: a material descriptor invariant to geometry and incident illumination

Elli Angelopoulou; Sang Wook Lee; Ruzena Bajcsy

The light reflected from a surface depends on the scene geometry, the incident illumination and the surface material. A novel methodology is presented which extracts reflectivity information of the various materials in the scene independent of incident light and scene geometry. A scene is captured under different narrow-band color filters and the spectral derivatives of the scene are computed. The resulting spectral derivatives form a spectral gradient at each pixel. This spectral gradient is a material descriptor which is invariant to scene geometry and incident illumination for smooth diffuse surfaces. Spectral gradients can discriminate among smooth dielectrics with different reflectance properties independent of viewing conditions.
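The core computation is small: for a Lambertian pixel each narrow-band measurement factors as I_k = g * L_k * R_k (geometry factor g, illuminant L, reflectance R), so finite differences of log I across bands cancel g exactly. A minimal sketch of that computation; full illuminant invariance additionally assumes the same L_k at the pixels being compared.

```python
import numpy as np

def spectral_gradient(bands):
    """Finite differences of log measurements across successive narrow bands.
    For a Lambertian pixel I_k = g * L_k * R_k the geometry factor g cancels,
    so shaded and brightly lit pixels of one material share the same gradient."""
    return np.diff(np.log(np.asarray(bands, float)), axis=0)
```

A pixel of the same material under 30% shading yields an identical gradient, while a different reflectance profile yields a different one.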

Collaboration


Dive into Elli Angelopoulou's collaborations.

Top Co-Authors

Christian Riess, University of Erlangen-Nuremberg
Joachim Hornegger, University of Erlangen-Nuremberg
Johannes Jordan, University of Erlangen-Nuremberg
Philip Mewes, University of Erlangen-Nuremberg
Vincent Christlein, University of Erlangen-Nuremberg
David Bernecker, University of Erlangen-Nuremberg
Eva Eibenberger, University of Erlangen-Nuremberg