Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Megías is active.

Publication


Featured research published by David Megías.


Signal Processing | 2010

Efficient self-synchronised blind audio watermarking system based on time domain and FFT amplitude modification

David Megías; Jordi Serra-Ruiz; Mehdi Fallahpour

Many audio watermarking schemes divide the audio signal into several blocks such that part of the watermark is embedded into each of them. One of the key issues in these block-oriented watermarking schemes is to preserve the synchronisation, i.e. to recover the exact position of each block in the mark recovery process. In this paper, a novel time domain synchronisation technique is presented together with a new blind watermarking scheme which works in the discrete Fourier transform (DFT or FFT) domain. The combined scheme provides excellent imperceptibility results whilst achieving robustness against typical attacks. Furthermore, the execution of the scheme is fast enough to be used in real-time applications. The excellent transparency of the embedding algorithm makes it particularly useful for professional applications, such as the embedding of monitoring information in broadcast signals. The scheme is also compared with some recent results from the literature.


IEICE Electronics Express | 2009

High capacity audio watermarking using FFT amplitude interpolation

Mehdi Fallahpour; David Megías

An audio watermarking technique in the frequency domain which takes advantage of interpolation is proposed. Interpolated FFT samples are used to generate imperceptible marks. The experimental results show that the suggested method has a very high capacity (about 3 kbps) without significant perceptual distortion (ODG about -0.5), and provides robustness against common audio signal processing such as echo, added noise, filtering, resampling and MPEG compression (MP3). Depending on the specific application, the tuning parameters can be selected adaptively to achieve even more capacity and better transparency.
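The interpolation idea can be illustrated with a toy sketch; this is not the published algorithm, and the function names, the odd-index slot choice and the multiplicative delta rule are all illustrative assumptions. An array stands in for FFT magnitudes: each odd-indexed magnitude is replaced by the linear interpolation of its two (untouched) neighbours, nudged up for bit 1 or down for bit 0.

```python
def embed_bits_by_interpolation(mag, bits, delta=0.05):
    """Toy embedder: encode each bit in an odd-indexed 'FFT magnitude' by
    setting it slightly above (bit 1) or below (bit 0) the linear
    interpolation of its two even-indexed neighbours."""
    out = list(mag)
    for i, bit in zip(range(1, len(mag) - 1, 2), bits):
        interp = (out[i - 1] + out[i + 1]) / 2.0
        out[i] = interp * (1 + delta if bit else 1 - delta)
    return out

def extract_bits_by_interpolation(mag, nbits):
    """Recover the bits by comparing each marked magnitude with the
    interpolation of its (unchanged) neighbours."""
    bits = []
    for i in range(1, len(mag) - 1, 2):
        if len(bits) == nbits:
            break
        interp = (mag[i - 1] + mag[i + 1]) / 2.0
        bits.append(1 if mag[i] >= interp else 0)
    return bits

marked = embed_bits_by_interpolation([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0], [1, 0, 1])
print(extract_bits_by_interpolation(marked, 3))  # [1, 0, 1]
```

Because the even-indexed samples are never modified, the extractor can recompute exactly the same interpolated reference values as the embedder, which is what makes the detection blind.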


Multimedia Systems | 2014

Privacy-aware peer-to-peer content distribution using automatically recombined fingerprints

David Megías; Josep Domingo-Ferrer

Multicast distribution of content is not suited to content-based electronic commerce because all buyers obtain exactly the same copy of the content, in such a way that unlawful redistributors cannot be traced. Unicast distribution has the shortcoming of requiring one connection with each buyer, but it allows the merchant to embed a different serial number in the copy obtained by each buyer, which enables redistributor tracing. Peer-to-peer (P2P) distribution is a third option which may combine some of the advantages of multicast and unicast: on the one hand, the merchant only needs unicast connections with a few seed buyers, who take over the task of further spreading the content; on the other hand, if a proper fingerprinting mechanism is used, unlawful redistributors of the P2P-distributed content can still be traced. In this paper, we propose a novel fingerprinting mechanism for P2P content distribution which allows redistributor tracing, while preserving the privacy of most honest buyers and offering collusion resistance and buyer frameproofness.


Multimedia Tools and Applications | 2011

Subjectively adapted high capacity lossless image data hiding based on prediction errors

Mehdi Fallahpour; David Megías; Mohammad Ghanbari

This article reports on a lossless data hiding scheme for digital images where the data hiding capacity is determined either by the minimum acceptable subjective quality or by the demanded capacity. In the proposed method, data are hidden within the image prediction errors, where well-known prediction algorithms such as the median edge detector (MED), gradient adjacent prediction (GAP) and Jiang prediction are tested for this purpose. First the histogram of the prediction errors of the image is computed and then, based on the required capacity or desired image quality, the prediction error values with frequencies larger than this capacity are shifted. The empty space created by such a shift is used for embedding the data. Experimental results show the distinct superiority of the image prediction error histogram over the conventional image histogram itself, due to the much narrower spectrum of the former. We have also devised an adaptive method for hiding data, where subjective quality is traded for data hiding capacity. Here the positive and negative error values are chosen such that the sum of their frequencies on the histogram is just above the given capacity or above a certain quality.
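The MED predictor mentioned above is the standard median edge detector from LOCO-I/JPEG-LS; the helper below computes the prediction-error histogram whose narrow, zero-centered shape the abstract exploits (the helper names and the tiny sample image are illustrative, not from the paper).

```python
from collections import Counter

def med_predict(a, b, c):
    """Median edge detector (LOCO-I): predict a pixel from its left (a),
    upper (b) and upper-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def prediction_errors(img):
    """Prediction errors for the interior pixels of a 2-D grayscale image
    given as a list of rows."""
    return [img[y][x] - med_predict(img[y][x - 1], img[y - 1][x], img[y - 1][x - 1])
            for y in range(1, len(img))
            for x in range(1, len(img[0]))]

img = [[10, 10, 11],
       [10, 12, 12],
       [11, 12, 13]]
print(Counter(prediction_errors(img)))  # errors cluster tightly around 0
```

A histogram-shifting embedder would then shift the error values on one side of the peak by one to open an empty bin next to it, and absorb one payload bit into each peak-valued error, which is invertible and hence lossless.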


Computer Communications | 2013

Distributed multicast of fingerprinted content based on a rational peer-to-peer community

Josep Domingo-Ferrer; David Megías

In conventional multicast transmission, one sender sends the same content to a set of receivers. This precludes fingerprinting the copy obtained by each receiver (in view of redistribution control and other applications). A straightforward alternative is for the sender to separately fingerprint and send in unicast one copy of the content for each receiver. This approach is not scalable and may implode the sender. We present a scalable solution for distributed multicast of fingerprinted content, in which receivers rationally co-operate in fingerprinting and spreading the content. Furthermore, fingerprinting can be anonymous, in order for honest receivers to stay anonymous.


Computers & Security | 2013

LSB matching steganalysis based on patterns of pixel differences and random embedding

Daniel Lerch-Hostalot; David Megías

This paper presents a novel method for the detection of LSB matching steganography in grayscale images. The method is based on the analysis of the differences between neighboring pixels before and after random data embedding. In natural images, there is a strong correlation between adjacent pixels; LSB matching disturbs this correlation and generates new types of correlations. The presented method builds patterns from these correlations and analyzes how they vary when random data are hidden. Experiments performed on two different image databases show that the method yields better classification accuracy than the prior art for both LSB matching and HUGO steganography. In addition, although the method is designed for the spatial domain, some experiments show that it is also applicable to detecting JPEG steganography.
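A rough illustration of the ingredients described above (hypothetical helper names, not the authors' exact pattern construction): the histogram of neighbouring-pixel differences, and a simulation of random-message LSB matching as ±1 changes applied to a fraction of the pixels, so the two histograms can be compared before and after embedding.

```python
import random

def diff_histogram(img):
    """Histogram of differences between horizontally adjacent pixels."""
    hist = {}
    for row in img:
        for a, b in zip(row, row[1:]):
            hist[b - a] = hist.get(b - a, 0) + 1
    return hist

def simulate_lsb_matching(img, rate, rng):
    """Add +/-1 to roughly a fraction `rate` of the pixels (clamped to
    [0, 255]), mimicking random-message LSB matching embedding."""
    return [[min(255, max(0, p + rng.choice((-1, 1)))) if rng.random() < rate else p
             for p in row]
            for row in img]

rng = random.Random(42)
img = [[100, 100, 101, 101], [100, 102, 102, 103]]
print(diff_histogram(img))
print(diff_histogram(simulate_lsb_matching(img, 0.5, rng)))
```

In natural images the difference histogram is sharply peaked at 0; random ±1 embedding flattens and widens it, and it is this measurable disturbance that a steganalyser can turn into a classification feature.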


Multimedia Systems | 2014

Secure logarithmic audio watermarking scheme based on the human auditory system

Mehdi Fallahpour; David Megías

This paper proposes a novel high capacity audio watermarking algorithm in the logarithm domain, based on the absolute threshold of hearing of the human auditory system (HAS). Considering that the human ear requires more precise samples at low amplitudes (soft sounds), the use of the logarithm helps us design a logarithmic quantization algorithm. The key idea is to divide the selected frequency band into short frames and quantize the samples based on the HAS. Using frames and the HAS improves the robustness, since embedding a secret bit into a set of samples is more reliable than embedding it into a single sample. In addition, the quantization level is adjusted according to the HAS. Apart from remarkable capacity, transparency and robustness, this scheme provides three parameters (frequency band, scale factor and frame size) which facilitate the regulation of the watermarking properties. The experimental results show that the method has a high capacity (800–7,000 bits per second), without significant perceptual distortion (ODG > -1), and provides robustness against common audio signal processing such as added noise, filtering and MPEG compression (MP3).
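The exact quantization rule is not given in this summary, so the following is only a generic quantization-index-modulation sketch in the log domain, an assumption rather than the authors' scheme: quantize log|x| to an even or odd multiple of a step depending on the bit. Because the grid is uniform in the log domain, the corresponding absolute changes are smaller at low amplitudes, which matches the HAS motivation above.

```python
import math

def embed_bit_log(sample, bit, step=0.1):
    """Quantize log(|sample|) to an even (bit 0) or odd (bit 1) multiple
    of `step`, preserving the sample's sign. Generic log-domain QIM."""
    q = round(math.log(abs(sample)) / step)
    if q % 2 != bit:
        q += 1
    return math.copysign(math.exp(q * step), sample)

def extract_bit_log(sample, step=0.1):
    """Recover the bit from the parity of the quantized log-magnitude."""
    return round(math.log(abs(sample)) / step) % 2

marked = embed_bit_log(0.25, 1)
print(extract_bit_log(marked))  # 1
```

Note that a quiet sample like 0.25 is moved by only a few hundredths, while a loud sample carrying the same bit may move by a few tenths: equal log-domain steps, unequal (and perceptually better matched) linear-domain steps.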


international conference on image processing | 2009

High capacity, reversible data hiding in medical images

Mehdi Fallahpour; David Megías; Mohammad Ghanbari

In this paper we introduce a highly efficient reversible data hiding technique. It is based on dividing the image into tiles and shifting the histogram of each image tile between its minimum and maximum frequency. Data are then inserted at the pixel level with the largest frequency to maximize the data hiding capacity. The method exploits the special properties of medical images, whose non-overlapping tiles have histograms that mostly peak around a few gray values while the rest of the spectrum is largely empty. The zeros (or minima) and peaks (maxima) of the histograms of the image tiles are then relocated to embed the data, so the grey values of some pixels are modified. High capacity, high fidelity, reversibility and multiple data insertions are the key requirements of data hiding in medical images, and we show how the histograms of medical image tiles can be exploited to achieve them. Compared with data hiding in the whole image, our scheme can achieve a 30%–200% capacity improvement with better image quality, depending on the medical image content.


IEEE Transactions on Audio, Speech, and Language Processing | 2015

Audio watermarking based on Fibonacci numbers

Mehdi Fallahpour; David Megías

This paper presents a novel high-capacity audio watermarking system that embeds data and extracts them in a bit-exact manner by changing some of the magnitudes of the FFT spectrum. The key idea is to divide the FFT spectrum into short frames and change the magnitudes of the selected FFT samples using Fibonacci numbers. Taking advantage of Fibonacci numbers, it is possible to change the frequency samples adaptively. In fact, the suggested technique guarantees, and proves mathematically, that the maximum change is less than 61% of the related FFT sample and the average error for each sample is 25%. Using the closest Fibonacci number to each FFT magnitude results in a robust and transparent technique. On top of remarkable capacity, transparency and robustness, this scheme provides two parameters which facilitate the regulation of these properties. The experimental results show that the method has a high capacity (700 bps to 3 kbps), without significant perceptual distortion (ODG about -1), and provides robustness against common audio signal processing such as echo, added noise, filtering, and MPEG compression (MP3). In addition to the experimental results, the fidelity of the suggested system is proved mathematically.
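The core operation, replacing an FFT magnitude with its closest Fibonacci number, can be sketched as follows (the helper name is illustrative; the scheme's bit-encoding rule on top of this quantization is not reproduced here):

```python
def nearest_fibonacci(x):
    """Return the Fibonacci number closest to a positive value x,
    using the sequence 1, 2, 3, 5, 8, 13, ..."""
    a, b = 1, 2
    while b < x:
        a, b = b, a + b
    # x now lies between consecutive Fibonacci numbers a and b.
    return a if x - a <= b - x else b

print(nearest_fibonacci(10.0))  # 8 (neighbours are 8 and 13)
print(nearest_fibonacci(100))   # 89
```

Since consecutive Fibonacci numbers grow by a ratio approaching the golden ratio (about 1.618), rounding a magnitude to the nearest one bounds its relative change, which is what makes worst-case and average error bounds like those quoted above provable.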


Lecture Notes in Computer Science | 2006

Theoretical framework for a practical evaluation and comparison of audio watermarking schemes in the triangle of robustness, transparency and capacity

Jana Dittmann; David Megías; Andreas Lang; Jordi Herrera-Joancomartí

Digital watermarking is a growing research area concerned with marking digital content (image, audio, video, etc.) by embedding information into the content itself. This technique provides additional and useful features for many application fields (such as DRM, annotation, integrity proof and many more). The role of watermarking algorithm evaluation (in a broader sense, benchmarking) is to provide a fair and automated analysis of whether a specific approach can fulfil certain application requirements, and to compare it with different or similar approaches. Today, most algorithm designers use their own methodology, so the results are hardly comparable. Motivated by the variety of evaluation procedures presented to date, in this paper we first introduce a theoretical framework for robust digital watermarking algorithms, focused on the triangle of robustness, transparency and capacity; the main properties and measuring methods are described. Secondly, a practical environment illustrates these definitions and establishes the practical relevance needed for robust audio watermarking benchmarking. Our goal is to provide a more precise methodology for testing and comparing watermarking algorithms, in the hope that watermarking algorithm designers will use it to test their algorithms and thereby allow easier comparison with existing ones. Our work should be seen as a scalable and improvable attempt at formalizing a benchmarking methodology in the triangle of transparency, capacity and robustness.

Collaboration


Dive into David Megías's collaborations.

Top Co-Authors

Mehdi Fallahpour
Open University of Catalonia

Jordi Herrera-Joancomartí
Autonomous University of Barcelona

Amna Qureshi
Open University of Catalonia

Helena Rifà-Pous
Open University of Catalonia

Jordi Serra-Ruiz
Open University of Catalonia

Joan Serra-Sagristà
Autonomous University of Barcelona