

Publication


Featured research published by Johannes Jordan.


IEEE Transactions on Information Forensics and Security | 2012

An Evaluation of Popular Copy-Move Forgery Detection Approaches

Vincent Christlein; Christian Riess; Johannes Jordan; Elli Angelopoulou

A copy-move forgery is created by copying and pasting content within the same image, and potentially postprocessing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed, focusing on different types of postprocessed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various postprocessing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms in a common pipeline. We examine the 15 most prominent feature sets and analyze the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset and a software framework for systematic image manipulation. Experiments show that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA, and Zernike features, perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions.
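The common pipeline the paper evaluates can be illustrated with a minimal block-based sketch. Everything below is illustrative, not the paper's implementation: raw block intensities stand in for the DCT/PCA/Zernike feature sets, and the filtering and affine-estimation steps are omitted.

```python
import numpy as np

def detect_copy_move(img, block=8, tau=1e-6, min_shift=8):
    """Block-based copy-move sketch: extract overlapping blocks, sort their
    feature vectors lexicographically, and flag near-identical neighbours
    whose spatial offset is large enough to rule out flat regions."""
    h, w = img.shape
    feats, coords = [], []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            feats.append(img[y:y + block, x:x + block].ravel())
            coords.append((y, x))
    feats = np.array(feats)
    coords = np.array(coords)
    order = np.lexsort(feats.T[::-1])       # duplicates become adjacent
    matches = []
    for i, j in zip(order[:-1], order[1:]):
        if np.sum((feats[i] - feats[j]) ** 2) < tau:
            dy, dx = coords[i] - coords[j]
            if dy * dy + dx * dx >= min_shift ** 2:
                matches.append((tuple(coords[i]), tuple(coords[j])))
    return matches

# toy example: copy a patch within the same image
rng = np.random.default_rng(0)
img = rng.random((32, 32))
img[20:28, 20:28] = img[2:10, 2:10]         # the forgery
pairs = detect_copy_move(img)
```

Keypoint-based variants (SIFT/SURF) replace the dense block grid with sparse interest points, which is what makes them robust to downsampling.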


International Conference on Image Processing | 2011

Edge detection in multispectral images using the n-dimensional self-organizing map

Johannes Jordan; Elli Angelopoulou

We propose a new method for performing edge detection in multispectral images based on the self-organizing map (SOM) concept. Previously, 1-dimensional or 2-dimensional SOMs were trained to provide a linear mapping of high-dimensional multispectral vectors, and edge detection was then applied on that mapping. However, the 1-dimensional SOM may not converge to a suitable global order for images with rich content. Likewise, the 2-dimensional SOM introduces false edges due to linearization artifacts. Our method feeds the edge detector without linearization; instead, it directly exploits the distances between SOM neurons. This avoids the aforementioned drawbacks and is more general, as a SOM of arbitrary dimensionality can be used. We show that our method achieves significantly better edge detection results than previous work on a high-resolution multispectral image database.
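The core idea, taking edge strength directly from distances between SOM neurons rather than from a linearized mapping, can be sketched as follows. The tiny 1-D SOM, its training schedule, and the two-material toy image are all illustrative (the paper's method works for SOMs of any dimensionality):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy multispectral image: two regions with distinct 5-band spectra
img = np.zeros((16, 16, 5))
img[:, :8] = [1, 0, 0, 0, 0]
img[:, 8:] = [0, 0, 0, 0, 1]
img += 0.01 * rng.standard_normal(img.shape)

# train a minimal 1-D SOM of 8 neurons on the pixel spectra
neurons = rng.random((8, 5))
pixels = img.reshape(-1, 5)
for t in range(2000):
    v = pixels[rng.integers(len(pixels))]
    bmu = np.argmin(((neurons - v) ** 2).sum(axis=1))
    sigma = max(2.0 * (1 - t / 2000), 0.5)          # shrinking neighbourhood
    for k in range(8):
        h = np.exp(-((k - bmu) ** 2) / (2 * sigma ** 2))
        neurons[k] += 0.1 * h * (v - neurons[k])

def bmu_weight(v):
    """Weight vector of the best-matching unit for a spectrum."""
    return neurons[np.argmin(((neurons - v) ** 2).sum(axis=1))]

# edge strength between horizontal neighbours: distance of their BMU
# weight vectors -- no linearized index is involved
edges = np.zeros((16, 15))
for y in range(16):
    for x in range(15):
        edges[y, x] = np.linalg.norm(bmu_weight(img[y, x]) -
                                     bmu_weight(img[y, x + 1]))
```

The edge map responds strongly at the material boundary (column 7/8) and stays near zero inside each region.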


Vision, Modeling, and Visualization | 2010

Gerbil: A Novel Software Framework for Visualization and Analysis in the Multispectral Domain

Johannes Jordan; Elli Angelopoulou

Multispectral imaging has been gaining popularity and has gradually been applied to many fields besides remote sensing. Multispectral data provides unique information for material classification and reflectance analysis in general. However, due to the high dimensionality of the data, both human observers and computers have difficulty interpreting this wealth of information. We present a new software package that facilitates the visualization of the relationship between spectral and topological information in a novel fashion. It puts emphasis on the spectral gradient, which is shown to provide enhanced information for many reflectance analysis tasks. It also includes a rich toolbox for the evaluation of image segmentation and other algorithms in the multispectral domain. We combine the parallel-coordinates visualization technique with hashing for a highly interactive visual connection between spectral distribution, spectral gradient, and topology. The framework is released as open source, has a modern cross-platform design, and is well integrated with established computer vision software (OpenCV).
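The hashing idea behind the interactive parallel-coordinates view can be sketched like this: spectra that quantize to the same per-band bins hash to one bucket, so each distinct polyline is drawn only once with a multiplicity weight. The bin count, cluster prototypes, and noise level are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy image: 10,000 spectra drawn from 5 prototype materials plus noise
protos = rng.random((5, 31))
spectra = protos[rng.integers(5, size=10000)]
spectra += 0.0005 * rng.standard_normal(spectra.shape)

# quantize every band into 16 bins and hash the bin tuple: spectra that
# would draw the same polyline collapse into one bucket with a count
bins = np.clip((spectra * 16).astype(int), 0, 15)
buckets = {}
for key in map(tuple, bins):
    buckets[key] = buckets.get(key, 0) + 1

# a render loop would now draw len(buckets) polylines instead of 10,000,
# using each count as the line's intensity weight
```

This is what keeps the view interactive: drawing cost depends on the number of distinct quantized spectra, not on the number of pixels.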


International Conference on Image Processing | 2013

Mean-shift clustering for interactive multispectral image analysis

Johannes Jordan; Elli Angelopoulou

Mean shift clustering and its recent variants are a viable and popular image segmentation tool. In this paper, we investigate mean shift segmentation on multispectral and hyperspectral images and propose three new algorithms. First, we improve segmentation performance by running mean shift on the spectral gradient. We also adapt a popular superpixel segmentation method to the multispectral domain using modified similarity measures from spectral mapping. Based on these superpixels, we design two mean shift variants that both obtain competitive segmentation results in significantly reduced running time; for one variant, the speedup in our benchmark is over 100 times. This enables mean shift clustering in an interactive setting.
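The first idea, clustering on the spectral gradient instead of the raw spectra, can be sketched with the classic flat-kernel mean shift procedure (not the superpixel-accelerated variants from the paper). The toy data and bandwidth are illustrative; the spectral gradient makes the two materials separable even though each appears under varying illumination intensity:

```python
import numpy as np

def spectral_gradient(spectra, eps=1e-6):
    """Band-wise differences of the log spectrum -- largely invariant to a
    uniform illumination scaling, which is why clustering on it helps."""
    logs = np.log(spectra + eps)
    return logs[:, 1:] - logs[:, :-1]

def mean_shift(points, bandwidth, iters=30):
    """Naive flat-kernel mean shift: move every point to the mean of its
    neighbours, then merge modes closer than the bandwidth into labels."""
    modes = points.copy()
    for _ in range(iters):
        for i in range(len(modes)):
            d = np.linalg.norm(points - modes[i], axis=1)
            modes[i] = points[d < bandwidth].mean(axis=0)
    labels = -np.ones(len(points), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels

# two spectral classes, each observed under random illumination scaling
rng = np.random.default_rng(3)
a = np.array([0.2, 0.4, 0.8, 0.4, 0.2])
b = np.array([0.8, 0.4, 0.2, 0.4, 0.8])
spectra = np.vstack([a * s for s in rng.uniform(0.5, 2.0, 50)] +
                    [b * s for s in rng.uniform(0.5, 2.0, 50)])
labels = mean_shift(spectral_gradient(spectra), bandwidth=0.5)
```

The naive procedure is O(n^2) per iteration; the paper's superpixel variants exist precisely to cut this cost down to interactive rates.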


British Machine Vision Conference | 2015

A Unified Bayesian Approach to Multi-Frame Super-Resolution and Single-Image Upsampling in Multi-Sensor Imaging

Thomas Köhler; Johannes Jordan; Andreas K. Maier; Joachim Hornegger

For a variety of multi-sensor imaging systems, there is a strong need for resolution enhancement. In this paper, we propose a unified method for single-image upsampling and multi-frame super-resolution of multi-channel images. We derive our algorithm from a Bayesian model built on a novel image prior that exploits the sparsity of individual channels as well as a locally linear regression between the complementary channels. The reconstruction of high-resolution multi-channel images from low-resolution ones and the estimation of the associated hyperparameters that define our prior model are formulated as a joint energy minimization. We introduce an alternating minimization scheme to solve this non-convex optimization problem efficiently. Our framework is applicable to various types of multi-sensor setups that are addressed in our experimental evaluation, including color, multispectral, and 3-D range imaging. Comprehensive qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art algorithms.
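The alternating flavour of the optimization can be sketched on a 1-D toy problem with two shifted, downsampled observations. This is a crude stand-in, not the paper's model: a quadratic smoothness prior replaces the sparsity prior, and a simple decaying prior weight replaces the hyperparameter estimation step.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
x_true = np.zeros(n)
x_true[10:22] = 1.0                                  # piecewise-constant signal

def observe(x, shift):
    """Shift, 2x downsample, add noise: one low-resolution frame."""
    return np.roll(x, shift)[::2] + 0.05 * rng.standard_normal(n // 2)

frames = [(observe(x_true, s), s) for s in (0, 1)]   # two shifted LR frames

x = np.zeros(n)
lam = 0.1                                            # prior weight (hyperparameter)
for _ in range(200):
    # x-step: gradient descent on the data terms plus the smoothness prior
    g = np.zeros(n)
    for y, s in frames:
        r = np.roll(x, s)[::2] - y                   # residual of the frame model
        up = np.zeros(n)
        up[::2] = r                                  # adjoint: upsample ...
        g += np.roll(up, -s)                         # ... and shift back
    g += lam * np.convolve(x, [-1, 2, -1], mode="same")
    x -= 0.2 * g
    # hyperparameter step (stand-in): relax the prior as the data fit improves
    lam = max(0.01, lam * 0.99)
```

Because the two frames sample complementary pixel parities, the high-resolution signal is recoverable even though each frame alone is undersampled.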


International Conference on Computer Vision | 2009

A common framework for ambient illumination in the dichromatic reflectance model

Christian Riess; Johannes Jordan; Elli Angelopoulou

The dichromatic reflectance model has been successfully applied to different tasks in color research, such as color constancy and highlight or shadow segmentation. In its original version, it incorporates only one direct illuminant. In this work, we analyze a recently published model, the Bi-Illuminant Dichromatic Reflectance (BIDR) model proposed by Maxwell et al., that extends the dichromatic reflectance model with a very general ambient term. This model can represent optical phenomena like interreflections and inhomogeneous ambient light. We show that this new model is sufficiently general, in the sense that it encompasses established variations of and extensions to the dichromatic reflectance model.
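The effect of the ambient term can be illustrated numerically: with an ambient body component, pixel colors of a single material span a 3-D cone, whereas the classic single-illuminant dichromatic model confines them to a 2-D plane. The three spectra below are made-up RGB stand-ins, not values from the paper:

```python
import numpy as np

# illustrative 3-band "spectra" for one material under one direct light
body_direct = np.array([0.6, 0.3, 0.1])   # S(l) * E_direct(l)
specular    = np.array([1.0, 0.9, 0.8])   # E_direct(l)
body_amb    = np.array([0.2, 0.3, 0.4])   # S(l) * E_ambient(l)

def pixel(md, ms, ma):
    """Bi-illuminant dichromatic pixel: body + specular + ambient body term."""
    return md * body_direct + ms * specular + ma * body_amb

rng = np.random.default_rng(5)
bidr = np.array([pixel(*c) for c in rng.uniform(0, 1, (200, 3))])
classic = np.array([pixel(c[0], c[1], 0.0) for c in rng.uniform(0, 1, (200, 2))])

rank_bidr = np.linalg.matrix_rank(bidr)        # ambient term adds a dimension
rank_classic = np.linalg.matrix_rank(classic)  # classic dichromatic plane
```

Setting the ambient coefficient to zero recovers the original model, which is the sense in which BIDR subsumes it.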


Proceedings of SPIE | 2013

Atlas-based linear volume-of-interest (ABL-VOI) image correction

Andreas K. Maier; Z. Jiang; Johannes Jordan; Christian Riess; Hannes G. Hofmann; Joachim Hornegger

Volume-of-interest imaging offers the ability to image small volumes at a fraction of the dose of a full scan. Reconstruction methods that do not involve prior knowledge are able to recover almost artifact-free images. Although the images appear correct, they often suffer from the problem that low-frequency information that would be included in a full scan is missing. This can often be observed as a scaling error of the reconstructed object densities. As this error depends on the object and the truncation in the respective scan, only algorithms that have the correct information about the extent of the object are able to reconstruct the density values correctly. In this paper, we investigate a method to recover the lost low-frequency information. We assume that the correct scaling can be modeled by a linear transformation of the object densities. In order to determine the correct scaling, we employ an atlas of correctly scaled volumes. From the atlas and the given reconstruction volume, we extract patch-based features that are matched against each other. This yields correspondences between the atlas images and the reconstruction VOI that allow the estimation of the linear transform. We investigated two scenarios for the method: in the closed-condition test, we assumed that a prior scan of the patient was already available; in the open-condition test, we excluded the respective patient’s data from the matching process. The original offset between the full-view and the truncated data was 133 HU on average in the six data sets, and the average noise in the reconstructions was 140 HU. In the closed condition, we were able to estimate this scaling to within 9 HU; in the open condition, we could still estimate the offset to within 23 HU.
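Once correspondences are available, estimating the linear density transform reduces to an ordinary least-squares fit. The sketch below is illustrative: the 133 HU offset echoes the paper's reported average, while the scale factor, noise level, and density range are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic correspondences between atlas densities and the truncated VOI:
# the VOI densities are off by an (unknown) linear transform plus noise
atlas_hu = rng.uniform(-1000, 2000, 300)
true_a, true_b = 0.9, 133.0
voi_hu = (atlas_hu - true_b) / true_a + 60 * rng.standard_normal(300)

# least-squares estimate of  atlas ~ a * voi + b  from the matched features
A = np.vstack([voi_hu, np.ones_like(voi_hu)]).T
(a, b), *_ = np.linalg.lstsq(A, atlas_hu, rcond=None)
corrected_hu = a * voi_hu + b                 # rescaled VOI densities
```

In practice the correspondences come from patch-feature matching against the atlas, and robustness to mismatches matters more than the fit itself.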


International Conference on Image Processing | 2012

Supervised multispectral image segmentation with power watersheds

Johannes Jordan; Elli Angelopoulou

In recent years, graph-based methods have had a significant impact on image segmentation. They are especially noteworthy for supervised segmentation, where the user provides task-specific foreground and background seeds. We adapt the power watershed framework to multispectral and hyperspectral image data and incorporate similarity measures from the field of spectral matching. We also propose a new data-driven graph edge weighting, computed from the topological information of a self-organizing map. We show that graph weights based on a simple Lp norm, as used in other modalities, do not give satisfactory segmentation results for multispectral data, while similarity measures that were specifically designed for this domain perform better. Our new approach is competitive and has an advantage in some of the tested scenarios.
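Why an Lp-norm edge weight falls short can be shown with the spectral angle mapper, one of the standard spectral-matching measures: under an illumination change of the same material, a Euclidean weight reports a strong edge while the angle-based weight correctly reports none. The toy spectra are illustrative:

```python
import numpy as np

def sam(u, v):
    """Spectral angle mapper: angle between spectra, invariant to a
    uniform brightness scaling."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# the same material under half the illumination, and a different material
s = np.array([0.2, 0.5, 0.9, 0.5, 0.2])
bright, dark = s, 0.5 * s
other = np.array([0.9, 0.5, 0.2, 0.5, 0.9])

w_l2_shadow = np.linalg.norm(bright - dark)   # spurious "edge" at a shadow
w_sam_shadow = sam(bright, dark)              # no edge: same material
w_sam_material = sam(bright, other)           # real edge: different material
```

In a graph-based segmenter these values would be turned into edge weights, so the choice of measure directly controls where the watershed cuts.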


Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing | 2013

Hyperspectral image visualization with a 3-D self-organizing map

Johannes Jordan; Elli Angelopoulou

False-color visualization is a powerful component of interactive hyperspectral image analysis. We propose a novel unsupervised technique for false coloring that is based on self-organizing map (SOM) dimensionality reduction. We first train a modified 3-dimensional SOM on the image data. Instead of a single answer, our SOM returns several answers to each data query. Then we employ a novel rank-based linear weighting to create a meaningful RGB representation of the query result. We analyze and compare our visualization results on publicly available remote sensing and laboratory image data. The obtained false coloring is superior to established principal-component-based false coloring while retaining computational efficiency.
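The rank-based weighting step can be sketched as follows. The SOM here is a pretend already-trained 4x4x4 map with random weight spectra, and the linear rank weights are an illustrative choice; the key point is that each 3-D grid coordinate maps directly to an RGB value, and several best matches are blended instead of taking only the winner:

```python
import numpy as np

rng = np.random.default_rng(7)

# a (pretend already-trained) 4x4x4 SOM: each neuron has a weight spectrum
# and a 3-D grid position that doubles as an RGB coordinate
grid = np.array([[x, y, z] for x in range(4)
                           for y in range(4)
                           for z in range(4)])
weights = rng.random((64, 8))                 # 8-band neuron spectra

def false_color(spectrum, k=4):
    """Rank-based linear weighting of the k best-matching neurons: the
    winner contributes most, the k-th match least."""
    d = np.linalg.norm(weights - spectrum, axis=1)
    best = np.argsort(d)[:k]
    w = np.arange(k, 0, -1, dtype=float)      # ranks k .. 1
    w /= w.sum()
    return (w @ grid[best]) / 3.0             # normalize grid coords to [0, 1]

rgb = false_color(rng.random(8))
```

Blending several matches smooths the coloring across neighbouring spectra, which a hard winner-take-all assignment would fragment.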


Journal of Electrical and Computer Engineering | 2016

A Novel Framework for Interactive Visualization and Analysis of Hyperspectral Image Data

Johannes Jordan; Elli Angelopoulou; Andreas K. Maier

Multispectral and hyperspectral images are well established in various fields of application like remote sensing, astronomy, and microscopic spectroscopy. In recent years, the availability of new sensor designs, more powerful processors, and high-capacity storage further opened this imaging modality to a wider array of applications like medical diagnosis, agriculture, and cultural heritage. This necessitates new tools that allow general analysis of the image data and are intuitive to users who are new to hyperspectral imaging. We introduce a novel framework that bundles new interactive visualization techniques with powerful algorithms and is accessible through an efficient and intuitive graphical user interface. We visualize the spectral distribution of an image via parallel coordinates with a strong link to traditional visualization techniques, enabling new paradigms in hyperspectral image analysis that focus on interactive raw data exploration. We combine novel methods for supervised segmentation, global clustering, and nonlinear false-color coding to assist in the visual inspection. Our framework, coined Gerbil, is open source and highly modular; it builds on established methods and is easily extensible for application-specific needs. It satisfies the need for a general, consistent software framework that tightly integrates analysis algorithms with an intuitive, modern interface to the raw image data and algorithmic results. Gerbil is in worldwide use in academia and industry alike, with several thousand downloads originating from 45 countries.

Collaboration


Dive into Johannes Jordan's collaborations.

Top Co-Authors

Elli Angelopoulou (University of Erlangen-Nuremberg)
Andreas K. Maier (University of Erlangen-Nuremberg)
Christian Riess (University of Erlangen-Nuremberg)
Joachim Hornegger (University of Erlangen-Nuremberg)
Antonio Robles-Kelly (Australian National University)
Hannes G. Hofmann (University of Erlangen-Nuremberg)
Rolf Wanka (University of Erlangen-Nuremberg)
Sabine Helwig (University of Erlangen-Nuremberg)
Thomas Köhler (University of Erlangen-Nuremberg)
Vincent Christlein (University of Erlangen-Nuremberg)