Mylène C. Q. Farias
University of Brasília
Publications
Featured research published by Mylène C. Q. Farias.
International Conference on Image Processing | 2005
Mylène C. Q. Farias; Sanjit K. Mitra
In this paper, we present a no-reference video quality metric based on individual measurements of three artifacts: blockiness, blurriness, and noisiness. The set of artifact metrics (physical strength measurements) was designed to be simple enough to be used in real-time applications. The metrics are tested using a proposed procedure that uses synthetic artifacts and subjective data obtained from previous experiments. The technique has the advantage of allowing us to test each metric on videos that contain only the desired artifact signal or a combination of artifact signals. Models for overall annoyance are developed by combining the artifact metrics with both a Minkowski metric and a linear model. Both models correlate very well with the data and show no statistically significant difference in performance.
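As a rough illustration of how such artifact measurements might be pooled, the sketch below combines hypothetical blockiness, blurriness, and noisiness values with a Minkowski rule and with a linear model; the exponent, weights, and coefficients are placeholders, not the values fitted in the paper.

```python
import numpy as np

def minkowski_annoyance(blockiness, blurriness, noisiness, p=2.0, weights=(1.0, 1.0, 1.0)):
    """Pool artifact strengths with a Minkowski (L-p) rule.

    The exponent p and the weights are illustrative placeholders.
    """
    a = np.array([blockiness, blurriness, noisiness], dtype=float)
    w = np.array(weights, dtype=float)
    return float(np.sum(w * a ** p) ** (1.0 / p))

def linear_annoyance(blockiness, blurriness, noisiness, coeffs=(0.5, 1.0, 0.7), offset=0.0):
    """Linear combination of the three artifact metrics (hypothetical coefficients)."""
    a = np.array([blockiness, blurriness, noisiness], dtype=float)
    c = np.array(coeffs, dtype=float)
    return offset + float(np.dot(c, a))

# Example: artifact strengths measured on some test video (arbitrary scale).
print(minkowski_annoyance(0.3, 0.6, 0.2))
print(linear_annoyance(0.3, 0.6, 0.2))
```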
IEEE Transactions on Circuits and Systems for Video Technology | 2005
Chowdary Adsumilli; Mylène C. Q. Farias; Sanjit K. Mitra; Marco Carli
A robust error concealment scheme using data hiding which aims at achieving high perceptual quality of images and video at the end-user despite channel losses is proposed. The scheme involves embedding a low-resolution version of each image or video frame into itself using spread-spectrum watermarking, extracting the embedded watermark from the received video frame, and using it as a reference for reconstruction of the parent image or frame, thus detecting and concealing the transmission errors. Dithering techniques have been used to obtain a binary watermark from the low-resolution version of the image/video frame. Multiple copies of the dithered watermark are embedded in frequencies in a specific range to make it more robust to channel errors. It is shown experimentally that, based on the frequency selection and scaling factor variation, a high-quality watermark can be extracted from a low-quality lossy received image/video frame. Furthermore, the proposed technique is compared to its two-part variant where the low-resolution version is encoded and transmitted as side information instead of embedding it. Simulation results show that the proposed concealment technique using data hiding outperforms existing approaches in improving the perceptual quality, especially in the case of higher loss probabilities.
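The sketch below illustrates, under simplifying assumptions, the two preparatory steps described in the abstract: dithering a low-resolution frame into a binary watermark and spreading its bits with a pseudo-random chip sequence. The Bayer matrix, chip rate, and PRNG seed are illustrative choices; the actual frequency-band embedding and extraction used in the paper are not reproduced.

```python
import numpy as np

def bayer_dither(gray_lowres, levels=256):
    """Binarize a low-resolution grayscale frame with a 4x4 Bayer ordered-dither
    matrix, producing a binary watermark (an assumed choice of dithering)."""
    bayer4 = (1.0 / 16.0) * np.array([[0, 8, 2, 10],
                                      [12, 4, 14, 6],
                                      [3, 11, 1, 9],
                                      [15, 7, 13, 5]])
    h, w = gray_lowres.shape
    tiled = np.tile(bayer4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray_lowres / float(levels - 1) > tiled).astype(np.uint8)

def spread_spectrum_signal(watermark_bits, chip_rate=8, seed=0):
    """Spread each watermark bit with a pseudo-random +/-1 chip sequence.
    The chip rate and PRNG are illustrative; the paper embeds the spread
    signal into a selected frequency range of the host frame."""
    rng = np.random.default_rng(seed)
    bits = watermark_bits.flatten().astype(int) * 2 - 1   # map {0,1} -> {-1,+1}
    chips = rng.choice([-1, 1], size=(bits.size, chip_rate))
    return (bits[:, None] * chips).flatten()

# Example: a synthetic 64x64 "low-resolution frame".
lowres = np.linspace(0, 255, 64 * 64).reshape(64, 64)
wm = bayer_dither(lowres)
signal = spread_spectrum_signal(wm)
print(wm.shape, signal.shape)
```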
Vision Research | 2007
John M. Foley; Srinivasa Varadharajan; Chin C. Koh; Mylène C. Q. Farias
Contrast thresholds of vertical Gabor patterns were measured as a function of their eccentricity, size, shape, and phase using a 2AFC method. The patterns were 4 c/deg and they were presented for 90 or 240 ms. Log thresholds increase linearly with eccentricity at a mean rate of 0.47 dB/wavelength. For patterns centered on the fovea, thresholds decrease as the area of the pattern increases over the entire standard deviation range of 12 wavelengths. The TvA functions are concave up on log-log coordinates. For small patterns there is an interaction between shape and size that depends on phase. Threshold contrast energy is a U-shaped function of area with a minimum in the vicinity of 0.4 wavelength, indicating detection by small receptive fields. Observers can discriminate among patterns of different sizes when the patterns are at threshold, indicating that more than one mechanism is involved. The results are accounted for by a model in which patterns excite an array of slightly elongated receptive fields that are identical except that their sensitivity decreases exponentially with eccentricity. Excitation is raised to a power and then summed linearly across receptive fields to determine the threshold. The results are equally well described by an internal-noise-limited model. The TvA functions are insufficient to separately estimate the noise and the exponent of the power function. However, an experiment showing that mixing sizes within the trial sequence has no effect on thresholds suggests that the limiting noise does not increase with the number of mechanisms monitored.
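A minimal numerical sketch of the pooling model described above: receptive-field sensitivity decays exponentially with eccentricity, excitation is raised to a power and summed across receptive fields, and the threshold contrast is the contrast that brings the pooled response to a fixed criterion. All parameter values (p, k, s0) are placeholders rather than the fitted values reported in the paper.

```python
import numpy as np

def threshold_contrast(excitations, eccentricities, p=2.4, k=0.05, s0=100.0, criterion=1.0):
    """Predict threshold contrast for the pooled receptive-field model.

    sensitivity falls exponentially with eccentricity; the pooled response
    scales as contrast**p, so the threshold solves criterion = c**p * pooled(1).
    """
    s = s0 * np.exp(-k * np.asarray(eccentricities))            # per-field sensitivity
    pooled_per_unit_contrast = np.sum((s * np.asarray(excitations)) ** p)
    return float((criterion / pooled_per_unit_contrast) ** (1.0 / p))

# Example: a small Gabor exciting 25 receptive fields near the fovea.
ecc = np.linspace(0.0, 2.0, 25)        # eccentricity in wavelengths (illustrative)
exc = np.exp(-ecc ** 2)                # relative excitation profile (illustrative)
print(threshold_contrast(exc, ecc))
```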
International Conference on Image Processing | 2002
Mylène C. Q. Farias; Sanjit K. Mitra; Marco Carli; Alessandro Neri
A comparison between an objective quality measure and the perceived mean annoyance values of watermarked videos is presented. A psychophysical experiment was performed to measure the detection threshold and mean annoyance values of several watermarked videos, using two different marks. The results of this experiment were then compared with an objective quality measure obtained through a tracing watermarking system. An estimate of the detection threshold of the watermarked videos was obtained.
Computer Vision and Pattern Recognition | 2004
Elisa Drelie Gelasca; Touradj Ebrahimi; Mylène C. Q. Farias; Marco Carli; Sanjit K. Mitra
To be reliable, an automatic segmentation evaluation metric has to be validated by subjective tests. In this paper, a formal protocol for subjective tests for segmentation quality assessment is presented. The most common artifacts produced by segmentation algorithms are identified and an extensive analysis of their effects on the perceived quality is performed. A psychophysical experiment was performed to assess the quality of video with segmentation errors. The results show how an objective segmentation evaluation metric can be defined as a function of various error types.
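As a hedged illustration of the final point, the snippet below defines a segmentation annoyance score as a weighted function of per-error-type strengths; the error names and weights are hypothetical placeholders, not the values derived from the subjective data in the paper.

```python
def segmentation_annoyance(error_strengths, weights=None):
    """Combine perceptual strengths of common segmentation artifacts
    (e.g., added regions, missing regions, border errors, flickering)
    into a single annoyance score. Names and weights are illustrative."""
    default_weights = {"added_region": 1.0, "missing_region": 1.2,
                       "border_error": 0.8, "flickering": 1.5}
    weights = weights or default_weights
    return sum(weights.get(name, 1.0) * value for name, value in error_strengths.items())

print(segmentation_annoyance({"added_region": 0.2, "flickering": 0.4}))
```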
Book chapter in Digital Video, InTech | 2010
Mylène C. Q. Farias
Digital video communication has evolved into an important field in the past few years. There have been significant advances in compression and transmission techniques, which have made it possible to deliver high-quality video to the end user. In particular, the advent of new technologies has allowed the creation of many new telecommunication services (e.g., direct broadcast satellite, digital television, high definition TV, video teleconferencing, Internet video). To quantify the performance of a digital video communication system, it is important to have a measure of video quality changes at each of the communication system stages. Since in the majority of these applications the transformed or processed video is destined for human consumption, humans will ultimately decide if the operation was successful or not. Therefore, human perception should be taken into account when trying to establish the degree to which a video can be compressed, deciding if the video transmission was successful, or deciding whether visual enhancements have provided an actual benefit. Measuring the quality of a video implies a direct or indirect comparison of the test video with the original video. The most accurate way to determine the quality of a video is by measuring it using psychophysical experiments with human subjects (ITU-R, 1998). Unfortunately, psychophysical experiments are very expensive, time-consuming, and hard to incorporate into a design process or an automatic quality-of-service control. Therefore, the ability to measure video quality accurately and efficiently, without using human observers, is highly desirable in practical applications. Good video quality metrics can be employed to monitor video quality, compare the performance of video processing systems and algorithms, and optimize the algorithms and parameter settings for a video processing system. With this in mind, fast algorithms that give a physical measure (objective metric) of the video quality are used to obtain an estimate of the quality of a video while it is being transmitted, received, or displayed. Customarily, quality measurements have been largely limited to a few objective measures, such as the mean absolute error (MAE), the mean square error (MSE), and the peak signal-to-noise ratio (PSNR), supplemented by limited subjective evaluation. Although the use of such metrics is fairly standard in the published literature, it suffers from one major weakness: the outputs of these measures do not always correspond well with human judgements of quality. In the past few years, the scientific community has devoted considerable effort to the development of better video quality metrics that correlate well with the human perception of quality (Daly, 1993; Lubin, 1993; Watson et al., 2001; Wolf et al., 1991).
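For reference, the conventional objective measures mentioned above (MAE, MSE, PSNR) can be computed as in the short sketch below, which assumes 8-bit frames (peak value 255).

```python
import numpy as np

def mae(ref, test):
    """Mean absolute error between a reference and a test frame."""
    return float(np.mean(np.abs(ref.astype(float) - test.astype(float))))

def mse(ref, test):
    """Mean squared error between a reference and a test frame."""
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak=255 assumes 8-bit frames."""
    e = mse(ref, test)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

# Example on a random 8-bit frame and a noisy copy of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
test = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(mae(ref, test), mse(ref, test), psnr(ref, test))
```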
International Conference on Multimedia and Expo | 2003
Mylène C. Q. Farias; Sanjit K. Mitra; John M. Foley
In this paper, we create synthetic artifacts that are perceived to be predominantly blocky, blurry or noisy. We present them alone or in various combinations and have subjects rate the perceived strength of each artifact and the overall annoyance of the combined artifacts. We found that a simple linear model with no interactions predicted how the perceived artifacts combine to determine overall annoyance. The estimated coefficients indicate that, when the artifacts are equated in perceived strength, blurriness contributes the most to annoyance followed by noise and then blockiness, although the relative weights of blockiness and noise vary with the video content.
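A minimal sketch of fitting such a linear, no-interaction model by least squares; the perceived-strength values and annoyance ratings below are made up for illustration and are not the experimental data.

```python
import numpy as np

# Hypothetical subjective data: perceived blockiness, blurriness, noisiness
# (columns of X) and mean annoyance ratings (y). Values are illustrative only.
X = np.array([[0.1, 0.7, 0.2],
              [0.5, 0.2, 0.1],
              [0.3, 0.4, 0.6],
              [0.8, 0.1, 0.3],
              [0.2, 0.6, 0.5]])
y = np.array([55.0, 30.0, 52.0, 40.0, 60.0])

# Linear model with no interaction terms:
# annoyance = b0 + b1*blockiness + b2*blurriness + b3*noisiness
A = np.column_stack([np.ones(len(y)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and weights:", coeffs)
```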
Multimedia Tools and Applications | 2016
Mikołaj Leszczuk; Mateusz Hanusiak; Mylène C. Q. Farias; Emmanuel Wyckens; George Heston
In addition to traditional Quality of Service (QoS), Quality of Experience (QoE) poses a real challenge for Internet service providers, audio-visual services, broadcasters, and new Over-The-Top (OTT) services. Therefore, objective audio-visual metrics are increasingly being used to monitor, troubleshoot, investigate, and benchmark content applications, working either in real time or off-line. The concept proposed here, Monitoring of Audio Visual Quality by Key Performance Indicators (MOAVI), is able to isolate and focus the investigation, set up algorithms, extend the monitoring period, and guarantee better prediction of perceptual quality. MOAVI artefact Key Performance Indicators (KPIs) are classified into four categories, based on their origin: capturing, processing, transmission, and display. In the paper, we present experiments carried out over several steps with four experimental set-ups for concept verification. The methodology takes the annoyance visibility threshold into account. The experimental methodology is adapted from International Telecommunication Union – Telecommunication Standardization Sector (ITU-T) Recommendations P.800, P.910, and P.930. We also present the results of KPI verification tests. Finally, we describe the first implementation of MOAVI KPIs in a commercial product: the NET-MOZAIC probe. Net Research, LLC, currently offers the probe as part of the NET-xTVMS Internet Protocol Television (IPTV) and Cable Television (CATV) monitoring system.
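As a rough sketch of how KPI-based monitoring of this kind might look, the snippet below flags measurements that cross an annoyance-visibility threshold and groups them by origin category; the specific KPI names and threshold values are hypothetical, not those defined by MOAVI.

```python
# Illustrative KPI registry: name -> (origin category, visibility threshold).
# The KPIs and thresholds are hypothetical examples.
KPI_REGISTRY = {
    "blockiness":   ("processing",   0.30),
    "blur":         ("capturing",    0.25),
    "freezing":     ("transmission", 0.10),
    "black_screen": ("display",      0.05),
}

def flag_kpis(measurements):
    """Return the KPIs whose measured value crosses the visibility threshold,
    grouped by the category of their origin."""
    flagged = {}
    for name, value in measurements.items():
        category, threshold = KPI_REGISTRY[name]
        if value >= threshold:
            flagged.setdefault(category, []).append(name)
    return flagged

print(flag_kpis({"blockiness": 0.4, "blur": 0.1, "freezing": 0.2, "black_screen": 0.0}))
```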
Quality of Multimedia Experience | 2016
Pedro Garcia Freitas; Welington Y. L. Akamine; Mylène C. Q. Farias
In this paper, we propose a new no-reference image quality assessment (NR-IQA) method that uses a machine learning technique based on Local Ternary Pattern (LTP) descriptors. LTP descriptors are a generalization of Local Binary Pattern (LBP) texture descriptors that provide a significant performance improvement when compared to LBP. More specifically, LTP is less susceptible to noise in uniform regions, but no longer rigidly invariant to gray-level transformation. Due to its insensitivity to noise, LTP descriptors are not able to detect milder image degradation. To tackle this issue, we propose a strategy that uses multiple LTP channels to extract texture information. The prediction algorithm uses the histograms of these LTP channels as features for the training procedure. The proposed method is able to blindly predict image quality, i.e., the method is no-reference (NR). Results show that the proposed method is considerably faster than other state-of-the-art no-reference methods, while maintaining a competitive image quality prediction accuracy.
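A minimal sketch of basic LTP feature extraction (single channel, 8 neighbours, fixed threshold) is given below; the multi-channel extension and the learned regressor used in the paper are not reproduced here, and the threshold value is an arbitrary choice.

```python
import numpy as np

def ltp_histograms(gray, t=5, bins=256):
    """Compute upper/lower Local Ternary Pattern histograms of a grayscale image.

    Each neighbour is coded +1 if it exceeds the center by more than t,
    -1 if it is below the center by more than t, and 0 otherwise; the
    ternary code is split into 'upper' and 'lower' binary patterns.
    """
    g = gray.astype(int)
    center = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        upper |= (neigh >= center + t).astype(int) << bit
        lower |= (neigh <= center - t).astype(int) << bit
    h_up, _ = np.histogram(upper, bins=bins, range=(0, bins))
    h_lo, _ = np.histogram(lower, bins=bins, range=(0, bins))
    return np.concatenate([h_up, h_lo]).astype(float)

# Example: features for a random 8-bit image; in an NR-IQA pipeline these
# histograms would feed a regressor trained on subjective quality scores.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(ltp_histograms(img).shape)
```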
Brazilian Symposium on Computer Graphics and Image Processing | 2011
Pedro Garcia Freitas; Mylène C. Q. Farias; Aletéia Patrícia Favacho de Araújo
In this paper, we present a simple and fast inverse halftoning algorithm, targeted at reconstructing halftoned images generated using dispersed-dot ordered dithering algorithms. The proposed algorithm uses a simple set of linear filters combined with a stochastic model in order to predict the best intensity values for the binary image pixels. The algorithm produces images with better perceptual quality than the available algorithms in the literature, preserving most of the fine details of the original gray-level image. It has high computational performance, which can be further improved with the use of parallelization techniques.
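For comparison, a naive inverse-halftoning baseline is simple low-pass filtering of the binary halftone, as sketched below; the paper's combination of linear filters with a stochastic prediction model is not reproduced, and the Gaussian width is an arbitrary choice.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def lowpass_reconstruct(binary, sigma=1.5):
    """Naive inverse-halftoning baseline: separable Gaussian low-pass filtering
    of the binary halftone, returning an 8-bit-range gray-level estimate."""
    pad = 3 * int(np.ceil(sigma))
    k = gaussian_kernel1d(sigma, radius=pad)
    padded = np.pad(binary.astype(float), pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, padded)
    cols = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
    return np.clip(255.0 * cols[pad:-pad, pad:-pad], 0, 255)

# Example: reconstruct a gradient that was ordered-dithered with a 4x4 Bayer matrix.
bayer4 = (1.0 / 16.0) * np.array([[0, 8, 2, 10], [12, 4, 14, 6],
                                  [3, 11, 1, 9], [15, 7, 13, 5]])
gray = np.tile(np.linspace(0, 1, 64), (64, 1))
halftone = (gray > np.tile(bayer4, (16, 16))).astype(np.uint8)
print(lowpass_reconstruct(halftone).shape)
```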