Publication


Featured research published by Fritz Lebowsky.


Advanced Information Networking and Applications | 2013

Using Artificial Neural Network for Automatic Assessment of Video Sequences

Brice Ekobo Akoa; Emmanuel Simeu; Fritz Lebowsky

This paper presents a methodology for monitoring quality of service in multimedia networks. The proposal consists in the use of a simple, generic artificial neural network (ANN) architecture that predicts video quality. The main purpose is to develop automatic means for generating a numerical score that objectively quantifies human assessment of video streams. The challenge is to create a video quality measurement tool (VQMT) that assesses video quality directly from the available measurements by building a nonlinear correlation map between the measurements and the human-rated mean opinion score (MOS). Promising results are obtained using the ANN for nonlinear modeling combined with fundamental measurable metrics, namely packet loss rate, peak signal-to-noise ratio, spatial index, and temporal index. A statistical analysis compares this solution's performance with data sets obtained through subjective human rating.
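A minimal sketch of the kind of ANN-based mapping the abstract describes, using scikit-learn's MLPRegressor on synthetic data; the feature set (PLR, PSNR, SI, TI), network size, and training data are illustrative assumptions, not the architecture or dataset used in the paper.

```python
# Illustrative sketch only: a small feed-forward ANN that maps measurable
# video metrics (PLR, PSNR, SI, TI) to a predicted MOS. Data, network size,
# and hyper-parameters are made-up assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training set: each row = [packet_loss_rate, psnr_db, spatial_index, temporal_index]
X = np.column_stack([
    rng.uniform(0.0, 0.1, 200),    # packet loss rate
    rng.uniform(20.0, 45.0, 200),  # PSNR in dB
    rng.uniform(10.0, 80.0, 200),  # spatial index
    rng.uniform(5.0, 60.0, 200),   # temporal index
])
# Synthetic "subjective" MOS in [1, 5], loosely driven by PSNR and PLR
mos = np.clip(1 + 4 * (X[:, 1] - 20) / 25 - 20 * X[:, 0] + rng.normal(0, 0.2, 200), 1, 5)

# Small multilayer perceptron acting as the nonlinear correlation map
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, mos)

# Predict an objective score for a new measurement vector
print(model.predict([[0.02, 38.0, 45.0, 30.0]]))
```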


International On-Line Testing Symposium | 2013

Video decoder monitoring using non-linear regression

Brice Ekobo Akoa; Emmanuel Simeu; Fritz Lebowsky

In this research work, a non-linear regression-based prediction method is incorporated into a digital video decoder loop to monitor the visual quality of videos during the decoding process. Considering well-known video quality metrics, a Video Quality Monitoring Tool (VQMT) has been developed for efficient re-use in a variety of video processing tasks. The idea is based on the fact that when human observers rate video quality, they consider reference aspects such as the noise affecting the video or the neatness of images. In addition, transmission errors such as packet loss may also impact video quality. Therefore, defining a regression model between each of these reference aspects and the Mean Opinion Score (MOS) provided by human observers can lead to an automatic way to supervise video decoding quality. Promising results have been achieved using a Non-linear Regression (NLR) method together with fundamental video quality metrics, namely PLR (Packet Loss Rate), PSNR (Peak Signal-to-Noise Ratio), SI (Spatial Index), and TI (Temporal Index).
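As a hedged illustration of the per-metric regression idea, the sketch below fits a logistic-shaped curve between one metric (PSNR) and MOS using SciPy's curve_fit; the functional form and the data are assumptions for demonstration, not the paper's actual NLR model.

```python
# Illustrative sketch: fit a nonlinear (logistic) mapping from one metric
# (PSNR) to MOS, standing in for the per-metric regression models the
# abstract describes. Data and functional form are assumed.
import numpy as np
from scipy.optimize import curve_fit

def logistic_mos(psnr, a, b, c, d):
    """Monotonic S-shaped mapping from PSNR (dB) to a MOS-like score."""
    return a / (1.0 + np.exp(-b * (psnr - c))) + d

rng = np.random.default_rng(1)
psnr = rng.uniform(20, 45, 100)                     # synthetic PSNR samples
mos = 4.0 / (1 + np.exp(-0.3 * (psnr - 32))) + 1.0  # synthetic underlying curve
mos += rng.normal(0, 0.15, psnr.size)               # observer noise

params, _ = curve_fit(logistic_mos, psnr, mos, p0=[4.0, 0.3, 32.0, 1.0])
print("fitted parameters:", params)
print("predicted MOS at 35 dB:", logistic_mos(35.0, *params))
```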


Journal of the Society for Information Display | 2007

Mathematical modeling of the LCD response time

Pierre Adam; Pascal Bertolino; Fritz Lebowsky

Techniques to reduce LCD motion blur are widely used in industry, and they depend on an inherent LCD parameter: response time. However, the normative response time alone is not a sufficient reference for improving LCD performance; all of the gray-to-gray response-time quantities are required to obtain a good improvement. Measuring and gathering all of the gray-to-gray transitions, however, takes an excessive amount of time. Consequently, we propose a novel LCD model to simulate as well as compute gray-to-gray transitions (response time and behavior) from a reduced measurement set, in order to decrease the response-time measurement effort.
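The sketch below is not the authors' model; it only illustrates the general idea of reconstructing a full gray-to-gray response-time table from a reduced measurement grid (here, plain bilinear interpolation) and simulating one transition with a first-order exponential. The grid values and the 10-90% response-time convention are assumptions.

```python
# Illustrative sketch only: interpolate a full gray-to-gray response-time
# table from a coarse measurement grid, then simulate one transition with a
# first-order exponential. This is NOT the paper's model; values are made up.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse measurement grid: response times (ms) measured only for a few
# start/end gray levels instead of all 256 x 256 transitions.
levels = np.array([0, 64, 128, 192, 255], dtype=float)
rt_measured = np.array([             # rows = start level, cols = end level
    [0.0, 14.0, 16.0, 15.0, 12.0],
    [13.0, 0.0, 15.0, 14.0, 11.0],
    [15.0, 14.0, 0.0, 13.0, 10.0],
    [14.0, 13.0, 12.0, 0.0,  9.0],
    [11.0, 10.0,  9.0, 8.0,  0.0],
])
interp = RegularGridInterpolator((levels, levels), rt_measured)

def simulate_transition(g_from, g_to, t_ms):
    """First-order step response; tau derived from a 10-90% response time."""
    rt = float(interp([[g_from, g_to]])[0])
    tau = rt / np.log(9.0)           # a 10%-to-90% rise spans tau * ln(9)
    return g_to + (g_from - g_to) * np.exp(-t_ms / tau)

print(simulate_transition(64, 192, t_ms=np.array([0.0, 5.0, 10.0, 20.0])))
```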


International Conference on Microelectronics | 2015

Image quality assessment using nonlinear learning methods

Rshdee Alhakim; Ghislain Takam Tchendjou; Emmanuel Simeu; Fritz Lebowsky

Objective image quality assessment plays an important role in various image processing applications, where the goal is to automatically evaluate image quality in agreement with human visual perception. In this paper, we propose three different nonlinear learning approaches for designing image quality assessment models that predict perceived image quality: nonlinear regression, an artificial neural network, and a regression tree. The largest publicly available image quality database, TID2013, is used to benchmark and evaluate the prediction models. The image quality metrics provided by TID2013 are not independent and carry redundant information about image quality. This issue can degrade training performance and cause overfitting. To avoid this problem and to simplify the model structure, we select the most significant image quality metrics based on Pearson's correlation measure and principal component analysis. Simulation results confirm that the three nonlinear learning models predict image quality with high efficiency. In addition, the regression tree model has low complexity and is easy to implement compared to the two other prediction models.
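A minimal sketch of the feature-selection-plus-regression-tree pipeline described above, assuming scikit-learn and synthetic data in place of TID2013; the correlation threshold, component count, and tree depth are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch: drop metrics that are highly correlated (Pearson),
# compress the rest with PCA, and train a regression tree to predict MOS.
# Data and thresholds are assumptions, not TID2013 or the paper's settings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n, n_metrics = 300, 10
X = rng.normal(size=(n, n_metrics))
X[:, 5] = X[:, 0] + rng.normal(0, 0.05, n)          # deliberately redundant metric
mos = 3 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, n)

# 1) Pearson-based filter: keep one metric out of each highly correlated pair
corr = np.abs(np.corrcoef(X, rowvar=False))
keep = []
for j in range(n_metrics):
    if all(corr[j, k] < 0.95 for k in keep):
        keep.append(j)
X_sel = X[:, keep]

# 2) PCA to further reduce dimensionality
X_pca = PCA(n_components=5, random_state=0).fit_transform(X_sel)

# 3) Low-complexity regression tree as the quality predictor
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_pca, mos)
print("kept metrics:", keep, " R^2 on training data:", round(tree.score(X_pca, mos), 3))
```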


International On-Line Testing Symposium | 2016

Evaluation of machine learning algorithms for image quality assessment

Ghislain Takam Tchendjou; Rshdee Alhakim; Emmanuel Simeu; Fritz Lebowsky

In this article, we apply different machine learning (ML) techniques to build objective models that automatically assess image quality in agreement with human visual perception. The six ML methods proposed are discriminant analysis, k-nearest neighbors, artificial neural network, non-linear regression, decision tree, and fuzzy logic. Both the stability and the robustness of the designed models are evaluated using a Monte Carlo cross-validation (MCCV) approach. The simulation results demonstrate that the fuzzy logic model provides the best prediction accuracy.
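A hedged sketch of Monte Carlo cross-validation used to compare several predictors, assuming scikit-learn and synthetic data. Only the methods with a direct scikit-learn counterpart (k-NN, decision tree, ANN) are shown; fuzzy logic and discriminant analysis are omitted, and the split count and models are assumptions rather than the paper's setup.

```python
# Illustrative sketch of Monte Carlo cross-validation (repeated random
# train/test splits) used to compare several predictors. Only a subset of the
# six methods is shown; data and split counts are assumed.
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))                        # 6 image-quality metrics
mos = 3 + X[:, 0] - 0.4 * X[:, 2] + rng.normal(0, 0.2, 300)

models = {
    "k-NN":          KNeighborsRegressor(n_neighbors=7),
    "decision tree": DecisionTreeRegressor(max_depth=4, random_state=0),
    "ANN":           MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
}

# Monte Carlo CV: 50 random 80/20 splits; stability is read off the score spread
mccv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, mos, cv=mccv, scoring="r2")
    print(f"{name:13s} R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```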


Proceedings of SPIE | 2014

Preserving color fidelity for display devices using scalable memory compression architecture for text, graphics, and video

Fritz Lebowsky; Marina Nicolas

High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k by 2k and beyond. Consequently, uncompressed pixel amplitude processing becomes costly, not only when transmitting over cable or wireless communication channels but also when processing with array processor architectures. For motion video content, spatial preprocessing from YCbCr 4:4:4 to YCbCr 4:2:0 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 4:4:4 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. We present a block-based memory compression architecture for text, graphics, and video that enables multidimensional error minimization with context-sensitive control of visually noticeable artifacts. As a result of analyzing image context locally, the number of operations per pixel can be significantly reduced, especially when implemented on array processor architectures. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, identifies its current limitations with regard to high-quality color rendering, and illustrates remaining visual artifacts.
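To illustrate the 4:4:4 to 4:2:0 preprocessing mentioned above and why high chroma-contrast text suffers from it, here is a generic sketch assuming full-range BT.601 conversion and 2x2 chroma averaging; it is not the paper's compression architecture.

```python
# Illustrative sketch of YCbCr 4:4:4 -> 4:2:0 preprocessing (BT.601 full-range
# conversion, 2x2 chroma averaging). It shows why chroma-contrast text loses
# fidelity; it is not the paper's block-based compression architecture.
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 conversion; rgb is float in [0, 255], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 + 0.564 * (b - y)
    cr = 128.0 + 0.713 * (r - y)
    return np.stack([y, cb, cr], axis=-1)

def subsample_420(ycbcr):
    """Average each 2x2 chroma block, then replicate it back (4:2:0 round trip)."""
    out = ycbcr.copy()
    for c in (1, 2):                                  # Cb and Cr planes only
        plane = ycbcr[..., c]
        h, w = plane.shape
        blocks = plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        out[..., c] = np.repeat(np.repeat(blocks, 2, axis=0), 2, axis=1)
    return out

# Red "text" stroke on a blue background: high chroma contrast, nearly flat luma
img = np.zeros((8, 8, 3)); img[...] = [0, 0, 255]     # blue background
img[2:6, 3:5] = [255, 0, 0]                           # red stroke
ycc = rgb_to_ycbcr(img.astype(float))
print("max chroma error after 4:2:0 round trip:",
      np.abs(subsample_420(ycc) - ycc).max())
```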


International Conference on Microelectronics | 2013

Using classification for video quality evaluation

Brice Ekobo Akoa; Emmanuel Simeu; Fritz Lebowsky

This paper presents a methodology for monitoring quality of service in multimedia networks. The proposal consists in the use of a simple and generic classification algorithm that classifies the quality of a given video. The main purpose is to objectively classify video quality according to the ITU-T continuous scale, consistently with human judgment of video quality. The challenge is to create a video quality monitoring tool (VQMT) that classifies video quality directly from the available video quality metrics by matching the quality level of a given video to one of five video quality classes (Excellent, Good, Fair, Poor, and Bad). Promising results are obtained using a k-NN classifier trained on a dataset from a subjective experiment along with fundamental measurable metrics, namely packet loss rate, peak signal-to-noise ratio, spatial index, and temporal index. A statistical analysis compares this solution's performance with data sets obtained through subjective human rating.
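A minimal sketch of the k-NN classification step, assuming scikit-learn and synthetic training data; the feature set, labels, and value of k are illustrative assumptions rather than the subjective dataset used in the paper.

```python
# Illustrative sketch: a k-NN classifier that maps [PLR, PSNR, SI, TI] to one
# of the five ITU-T quality classes. Training data and k are made-up
# assumptions, not the paper's subjective dataset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

classes = ["Bad", "Poor", "Fair", "Good", "Excellent"]

rng = np.random.default_rng(4)
X = np.column_stack([
    rng.uniform(0.0, 0.1, 250),    # packet loss rate
    rng.uniform(20.0, 45.0, 250),  # PSNR (dB)
    rng.uniform(10.0, 80.0, 250),  # spatial index
    rng.uniform(5.0, 60.0, 250),   # temporal index
])
# Synthetic labels: bin a PSNR/PLR-driven score into the five classes
score = (X[:, 1] - 20) / 25 - X[:, 0] * 5
y = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
pred = knn.predict([[0.01, 40.0, 50.0, 25.0]])[0]
print("predicted class:", classes[pred])
```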


Proceedings of SPIE | 2013

Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

Fritz Lebowsky

High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k by 2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 4:4:4 to YCbCr 4:2:0 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 4:4:4 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high-quality color rendering, and identifies remaining visual artifacts.
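A deliberately simplified sketch of only the foreground/background color-vector part of the description: a small block is approximated by two color vectors and a binary mask. The contour phase prediction itself is not reproduced, and the block size and min/max-luma choice of the two colors are illustrative assumptions.

```python
# Simplified illustration of a local foreground/background color map: a small
# block is described by two color vectors plus a per-pixel binary mask. The
# contour phase predictive coding from the abstract is NOT reproduced here.
import numpy as np

def two_color_block(block):
    """block: (H, W, 3) float RGB. Returns (bg, fg, mask, reconstruction)."""
    luma = 0.299 * block[..., 0] + 0.587 * block[..., 1] + 0.114 * block[..., 2]
    bg = block[np.unravel_index(luma.argmin(), luma.shape)]   # darkest pixel
    fg = block[np.unravel_index(luma.argmax(), luma.shape)]   # brightest pixel
    # Assign each pixel to the nearer of the two color vectors
    d_bg = np.linalg.norm(block - bg, axis=-1)
    d_fg = np.linalg.norm(block - fg, axis=-1)
    mask = d_fg < d_bg
    recon = np.where(mask[..., None], fg, bg)
    return bg, fg, mask, recon

# Tiny "text on background" block: white glyph stroke on dark blue
block = np.zeros((4, 4, 3)); block[...] = [10, 10, 90]
block[1:3, 1:3] = [250, 250, 250]
bg, fg, mask, recon = two_color_block(block.astype(float))
print("bg:", bg, "fg:", fg,
      "max reconstruction error:", np.abs(recon - block).max())
```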


Color Imaging Conference | 2005

Color discrimination problems in digital TV systems

Fritz Lebowsky; Marina Nicolas

Color image artifacts become strongly noticeable as TV screens such as LCD flat panels or plasma displays grow in size and as consumers reach a higher level of image quality perception. We first introduce visually noticeable color artifacts present in current digital TV systems, then show how well they can be discriminated in different color spaces such as RGB or CIELAB and characterize their components. We also found that a re-synthesized artifact helps in elaborating an algorithm that reduces noticeable color artifacts to a level below the threshold of visual perception with regard to a reduced viewing distance.
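To make the color-space comparison concrete, here is a minimal sketch that measures the same color differences in RGB (plain Euclidean distance) and in CIELAB (delta-E 1976), assuming scikit-image's rgb2lab is available; the sample colors are arbitrary and not taken from the paper.

```python
# Illustrative sketch: the same color pairs compared in RGB (plain Euclidean
# distance) and in CIELAB (delta-E 1976), the two spaces mentioned above.
# The sample colors are arbitrary; scikit-image is assumed to be available.
import numpy as np
from skimage.color import rgb2lab

def delta_rgb(c1, c2):
    """Euclidean distance between two RGB colors given in [0, 1]."""
    return float(np.linalg.norm(np.asarray(c1) - np.asarray(c2)))

def delta_e76(c1, c2):
    """CIE76 delta-E: Euclidean distance after converting sRGB to CIELAB."""
    lab1 = rgb2lab(np.asarray(c1, dtype=float).reshape(1, 1, 3))[0, 0]
    lab2 = rgb2lab(np.asarray(c2, dtype=float).reshape(1, 1, 3))[0, 0]
    return float(np.linalg.norm(lab1 - lab2))

# Two pairs with the same RGB distance but different perceptual separation
pair_a = ([0.20, 0.60, 0.20], [0.20, 0.70, 0.20])   # two greens
pair_b = ([0.20, 0.20, 0.60], [0.20, 0.20, 0.70])   # two blues
for name, (c1, c2) in {"greens": pair_a, "blues": pair_b}.items():
    print(f"{name}: dRGB = {delta_rgb(c1, c2):.3f}, dE76 = {delta_e76(c1, c2):.2f}")
```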


Proceedings of SPIE | 2014

Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

Brice Ekobo Akoa; Emmanuel Simeu; Fritz Lebowsky

This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work on synthetic video artifacts. The results obtained by each method are compared with scores from a database resulting from subjective experiments.
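A short sketch of the comparison step against subjective scores, assuming SciPy and synthetic placeholder score vectors: Pearson and Spearman correlations between each method's predictions and the subjective MOS, a standard way to report such comparisons (the numbers here are not the paper's results).

```python
# Illustrative sketch of the statistical comparison step: correlate each
# method's predicted scores with subjective MOS using Pearson and Spearman
# correlation. The score vectors below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(5)
subjective_mos = rng.uniform(1, 5, 80)                   # "database" scores
pred_ann = subjective_mos + rng.normal(0, 0.25, 80)      # ANN-based method
pred_nlr = subjective_mos + rng.normal(0, 0.40, 80)      # NLR-based method

for name, pred in {"ANN": pred_ann, "NLR": pred_nlr}.items():
    plcc, _ = pearsonr(subjective_mos, pred)
    srocc, _ = spearmanr(subjective_mos, pred)
    print(f"{name}: PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```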

Collaboration


Dive into Fritz Lebowsky's collaborations.

Top Co-Author

Emmanuel Simeu
Centre national de la recherche scientifique