Publication


Featured research published by Jincheol Park.


IEEE Transactions on Image Processing | 2013

Video Quality Pooling Adaptive to Perceptual Distortion Severity

Jincheol Park; Kalpana Seshadrinathan; Sanghoon Lee; Alan C. Bovik

It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes “worst” scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
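
The adaptive pooling strategy itself is specific to the paper, but the core mechanism it builds on — averaging only the worst local scores along each dimension — can be illustrated. A minimal sketch, assuming a (time × space) matrix of local quality scores; the percentile `p` and all numbers are illustrative, not the paper's values.

```python
import numpy as np

def worst_case_pooling(local_scores, p=5):
    """Pool a map of local quality scores by emphasizing the worst ones.

    local_scores : 2-D array of shape (time, space) holding per-block
                   quality scores (higher = better).
    p            : percentile defining the "worst" set; an illustrative
                   value, not the one used in the paper.
    """
    # Spatial pooling: for each frame, average only the lowest p% of scores.
    frame_scores = []
    for frame in local_scores:
        cutoff = np.percentile(frame, p)
        frame_scores.append(frame[frame <= cutoff].mean())
    frame_scores = np.asarray(frame_scores)

    # Temporal pooling: apply the same worst-case emphasis over time.
    cutoff = np.percentile(frame_scores, p)
    return frame_scores[frame_scores <= cutoff].mean()

# Example: 100 frames x 396 blocks of synthetic scores with a transient drop.
rng = np.random.default_rng(0)
scores = rng.normal(0.8, 0.05, size=(100, 396))
scores[40:45] -= 0.4          # severe transient impairment
print(worst_case_pooling(scores))
```

A plain mean would largely wash out the five impaired frames; the worst-case pool is dominated by them, matching the perceptual observation the paper starts from.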


IEEE Journal of Selected Topics in Signal Processing | 2014

3D Visual Discomfort Prediction: Vergence, Foveation, and the Physiological Optics of Accommodation

Jincheol Park; Sanghoon Lee; Alan C. Bovik

To achieve clear binocular vision, neural processes that accomplish accommodation and vergence are performed via two collaborative, cross-coupled processes: accommodation-vergence (AV) and vergence-accommodation (VA). However, when people watch stereo images on stereoscopic displays, normal neural functioning may be disturbed owing to anomalies of the cross-link gains. These anomalies are likely the main cause of visual discomfort experienced when viewing stereo images, and are called Accommodation-Vergence Mismatches (AVM). Moreover, the absence of any useful accommodation depth cues when viewing 3D content on a flat panel (planar) display induces anomalous demands on binocular fusion, resulting in possible additional visual discomfort. Most prior efforts in this direction have focused on predicting anomalies in the AV cross-link using measurements on a computed disparity map. We further these contributions by developing a model that accounts for both accommodation and vergence, resulting in a new visual discomfort prediction algorithm dubbed the 3D-AVM Predictor. The 3D-AVM model and algorithm make use of a new concept we call local 3D bandwidth (BW) which is defined in terms of the physiological optics of binocular vision and foveation. The 3D-AVM Predictor accounts for anomalous motor responses of both accommodation and vergence, yielding predictive power that is statistically superior to prior models that rely on a computed disparity distribution only.
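
The 3D-AVM model's local 3D bandwidth feature involves physiological optics beyond a short example, but the geometric conflict it targets is easy to show: on a planar display, accommodation stays at the screen while vergence follows the on-screen disparity. A hedged sketch of that standard dioptric mismatch measure; the interpupillary and viewing distances are assumed typical values, and this is not the paper's 3D-AVM algorithm.

```python
import numpy as np

IPD_M = 0.065        # assumed interpupillary distance (m)
SCREEN_M = 0.8       # assumed viewing distance to the display (m)

def vergence_distance(disparity_m):
    """Distance (m) at which the eyes converge for a given on-screen
    disparity (m, positive = uncrossed). Simple similar-triangles model."""
    return SCREEN_M * IPD_M / (IPD_M - disparity_m)

def av_mismatch_diopters(disparity_m):
    """Accommodation-vergence mismatch in diopters: accommodation is
    fixed at the screen while vergence follows the disparity."""
    return np.abs(1.0 / SCREEN_M - 1.0 / vergence_distance(disparity_m))

# Example: disparities from -10 mm (crossed) to +10 mm (uncrossed).
disp = np.linspace(-0.01, 0.01, 5)
print(av_mismatch_diopters(disp))
```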


IEEE Transactions on Multimedia | 2009

Optimal Channel Adaptation of Scalable Video Over a Multicarrier-Based Multicell Environment

Jincheol Park; Hyungkeuk Lee; Sanghoon Lee; Alan C. Bovik

To achieve seamless multimedia streaming services over wireless networks, it is important to overcome inter-cell interference (ICI), particularly in cell border regions. In this regard, scalable video coding (SVC) has been actively studied due to its advantage of channel adaptation. We explore an optimal solution for maximizing the expected visual entropy over an orthogonal frequency division multiplexing (OFDM)-based broadband network from the perspective of cross-layer optimization. An optimization problem is parameterized by a set of source and channel parameters that are acquired as a function of user location over a multicell environment. A suboptimal solution is suggested using a greedy algorithm that allocates the radio resources to the scalable bitstreams as a function of their visual importance. The simulation results show that the greedy algorithm effectively resists ICI in the cell border region, while conventional nonscalable coding suffers severely because of ICI.
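
The paper's optimization is parameterized by channel state across a multicell OFDM layout; the sketch below shows only the greedy principle of spending the resource budget on the layers with the highest visual importance per unit cost. All names and numbers are illustrative.

```python
def greedy_allocate(streams, budget):
    """Allocate a radio-resource budget to scalable layers greedily by
    visual importance per unit cost. `streams` is a list of
    (name, importance, cost) tuples; all values are illustrative.
    """
    # Rank by importance density (importance per resource unit), best first.
    ranked = sorted(streams, key=lambda s: s[1] / s[2], reverse=True)
    chosen, spent = [], 0
    for name, importance, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

layers = [("base",  10.0, 2),   # base layer: high importance, cheap
          ("enh-1",  4.0, 3),
          ("enh-2",  1.5, 4)]
print(greedy_allocate(layers, budget=5))   # -> ['base', 'enh-1']
```

As channel quality degrades toward the cell border, the budget shrinks and the greedy order naturally protects the base layer first, which is why scalable coding degrades gracefully where nonscalable coding fails.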


IEEE Transactions on Image Processing | 2015

3D Visual Discomfort Predictor: Analysis of Disparity and Neural Activity Statistics

Jincheol Park; Heeseok Oh; Sanghoon Lee; Alan C. Bovik

Being able to predict the degree of visual discomfort that is felt when viewing stereoscopic 3D (S3D) images is an important goal toward ameliorating causative factors, such as excessive horizontal disparity, misalignments or mismatches between the left and right views of stereo pairs, or conflicts between different depth cues. Ideally, such a model should account for such factors as capture and viewing geometries, the distribution of disparities, and the responses of visual neurons. When viewing modern 3D displays, visual discomfort is caused primarily by changes in binocular vergence while accommodation is held fixed at the viewing distance to a flat 3D screen. This results in unnatural mismatches between ocular fixations and ocular focus that do not occur in normal direct 3D viewing. This accommodation-vergence conflict can cause adverse effects, such as headaches, fatigue, eye strain, and reduced visual ability. Binocular vision is ultimately realized by means of neural mechanisms that subserve the sensorimotor control of eye movements. Realizing that the neuronal responses are directly implicated in both the control and experience of 3D perception, we have developed a model-based neuronal and statistical framework called the 3D visual discomfort predictor (3D-VDP) that automatically predicts the level of visual discomfort that is experienced when viewing S3D images. 3D-VDP extracts two types of features: 1) coarse features derived from the statistics of binocular disparities and 2) fine features derived by estimating the neural activity associated with the processing of horizontal disparities. In particular, we deploy a model of horizontal disparity processing in the extrastriate middle temporal region of the occipital lobe. We compare the performance of 3D-VDP with other recent discomfort prediction algorithms with respect to correlation against recorded subjective visual discomfort scores, and show that 3D-VDP is statistically superior to the other methods.
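
The fine features require a model of neural activity in area MT that is out of scope for a short example, but the coarse features — statistics of the disparity distribution — can be illustrated. A hedged sketch; the specific statistics chosen here are plausible examples rather than the paper's exact feature set.

```python
import numpy as np

def coarse_disparity_features(disparity_map):
    """Simple statistical features of a horizontal-disparity map.
    The feature set is illustrative, not the paper's exact one."""
    d = np.asarray(disparity_map, dtype=float).ravel()
    return {
        "mean":         d.mean(),
        "std":          d.std(),
        "p5":           np.percentile(d, 5),    # extreme crossed disparity
        "p95":          np.percentile(d, 95),   # extreme uncrossed disparity
        "crossed_frac": (d < 0).mean(),         # fraction in front of screen
    }

rng = np.random.default_rng(1)
disp = rng.normal(loc=2.0, scale=5.0, size=(270, 480))  # disparity in pixels
print(coarse_disparity_features(disp))
```

In a full system, features like these would be regressed against recorded subjective discomfort scores to produce the predictor.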


IEEE Transactions on Circuits and Systems for Video Technology | 2010

Perceptually Unequal Packet Loss Protection by Weighting Saliency and Error Propagation

Hojin Ha; Jincheol Park; Sanghoon Lee; Alan C. Bovik

We describe a method for achieving perceptually minimal video distortion over packet-erasure networks using perceptually unequal loss protection (PULP). There are two main ingredients in the algorithm. First, a perceptual weighting scheme is employed wherein the compressed video is weighted as a function of the nonuniform distribution of retinal photoreceptors. Second, packets are assigned temporal importance within each group of pictures (GOP), recognizing that the severity of error propagation increases with elapsed time within a GOP. Using both frame-level perceptual importance and GOP-level hierarchical importance, the PULP algorithm seeks an efficient forward error correction assignment that balances efficiency and fairness by controlling the size of identified salient region(s) relative to the channel state. PULP demonstrates robust performance and significantly improved subjective and objective visual quality in the face of burst packet losses.
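
The actual FEC assignment depends on codec structure and channel state; the sketch below illustrates only the weighting idea, combining per-frame saliency with a temporal-importance term that decays through the GOP because early errors propagate further. The decay constant and parity budget are assumptions.

```python
import numpy as np

def fec_weights(saliency, gop_size, decay=0.85):
    """Per-frame FEC weights combining spatial saliency with temporal
    importance within a GOP. Early frames matter more because errors
    propagate to every frame that predicts from them.

    saliency : per-frame saliency scores in [0, 1] (illustrative)
    decay    : assumed per-frame decay of temporal importance
    """
    temporal = decay ** np.arange(gop_size)   # I-frame first
    w = np.asarray(saliency) * temporal
    return w / w.sum()                        # normalized budget shares

def assign_parity(weights, parity_budget):
    """Split an integer parity-packet budget proportionally to weights."""
    raw = weights * parity_budget
    parity = np.floor(raw).astype(int)
    # Hand out the remainder to the largest fractional parts.
    for i in np.argsort(raw - parity)[::-1][: parity_budget - parity.sum()]:
        parity[i] += 1
    return parity

sal = [0.9, 0.7, 0.8, 0.6, 0.5, 0.5, 0.4, 0.4]   # illustrative saliency
print(assign_parity(fec_weights(sal, gop_size=8), parity_budget=20))
```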


IEEE Transactions on Image Processing | 2014

No-Reference Sharpness Assessment of Camera-Shaken Images by Analysis of Spectral Structure

Taegeun Oh; Jincheol Park; Kalpana Seshadrinathan; Sanghoon Lee; Alan C. Bovik

The tremendous explosion of image-, video-, and audio-enabled mobile devices, such as tablets and smartphones, in recent years has led to an associated dramatic increase in the volume of captured and distributed multimedia content. In particular, the number of digital photographs being captured annually is approaching 100 billion in the U.S. alone. These pictures are increasingly being acquired by inexperienced, casual users under highly diverse conditions, leading to a plethora of distortions, including blur induced by camera shake. In order to be able to automatically detect, correct, or cull images impaired by shake-induced blur, it is necessary to develop distortion models specific to and suitable for assessing the sharpness of camera-shaken images. Toward this goal, we have developed a no-reference framework for automatically predicting the perceptual quality of camera-shaken images based on their spectral statistics. Two kinds of features are defined that capture blur induced by camera shake. One is a directional feature, which measures the variation of the image spectrum across orientations. The second feature captures the shape, area, and orientation of the spectral contours of camera-shaken images. We demonstrate the performance of an algorithm derived from these features on new and existing databases of images distorted by camera shake.
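
The paper's spectral-contour features are not reproduced here, but the first feature type — variation of spectral energy across orientations — can be approximated simply: shake blur is directional, so it removes spectral energy anisotropically, whereas a sharp natural image's spectrum is closer to isotropic. A hedged sketch; the orientation-binning scheme is illustrative.

```python
import numpy as np

def directional_spectral_variation(image, n_orientations=12):
    """Variation of log-spectral energy across orientations; a hedged
    proxy for the paper's directional feature. Higher variation
    suggests anisotropic (shake-like) blur."""
    f = np.fft.fftshift(np.fft.fft2(image))
    mag = np.log1p(np.abs(f))
    h, w = mag.shape
    yy, xx = np.mgrid[:h, :w]
    theta = np.arctan2(yy - h / 2, xx - w / 2) % np.pi  # bin orientation
    bins = np.clip((theta / np.pi * n_orientations).astype(int),
                   0, n_orientations - 1)
    energy = np.array([mag[bins == k].mean() for k in range(n_orientations)])
    return energy.std() / energy.mean()   # coefficient of variation

rng = np.random.default_rng(2)
sharp = rng.standard_normal((128, 128))
# Simulate horizontal shake blur with a 1-D box filter along each row.
kernel = np.ones(9) / 9
shaken = np.apply_along_axis(
    lambda r: np.convolve(r, kernel, mode="same"), 1, sharp)
print(directional_spectral_variation(sharp),
      directional_spectral_variation(shaken))
```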


International Conference on Image Processing | 2011

Spatio-temporal quality pooling accounting for transient severe impairments and egomotion

Jincheol Park; Kalpana Seshadrinathan; Sanghoon Lee; Alan C. Bovik

With the increasing popularity of video applications, the reliable measurement of perceived video quality has increased in importance. We study methods for pooling video quality scores over space and time. The method accounts for localized severe impairments of the signal, which exert a significant influence on the subjective impression of overall signal quality. It also accounts for the effect of camera motion (egomotion) on perceived quality. The resulting method is tested on the LIVE Video Quality Database and shown to perform well.
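
The pooling component mirrors the journal version above; the egomotion term can be illustrated separately. One hedged way to flag egomotion: motion vectors that are both large and globally coherent suggest camera motion rather than object motion. The thresholds below are illustrative, not the paper's values.

```python
import numpy as np

def is_egomotion(flow, mag_thresh=1.0, coherence_thresh=0.8):
    """Flag probable camera motion from a dense optical-flow field.

    flow : array of shape (H, W, 2) with per-pixel (dx, dy) vectors.
    A frame is flagged when motion is both strong and globally coherent.
    """
    v = flow.reshape(-1, 2)
    mags = np.linalg.norm(v, axis=1)
    if mags.mean() < mag_thresh:
        return False
    # Coherence: length of the mean vector relative to the mean length.
    coherence = np.linalg.norm(v.mean(axis=0)) / (mags.mean() + 1e-9)
    return coherence > coherence_thresh

# Example: uniform rightward flow (egomotion-like) vs. random flow.
pan = np.zeros((64, 64, 2)); pan[..., 0] = 2.0
rand = np.random.default_rng(3).normal(0, 2.0, size=(64, 64, 2))
print(is_egomotion(pan), is_egomotion(rand))   # True False
```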


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Perceptually Scalable Extension of H.264

Hojin Ha; Jincheol Park; Sanghoon Lee; Alan C. Bovik

We propose a novel visual scalable video coding (VSVC) framework, named VSVC H.264/AVC. In this approach, the non-uniform sampling characteristic of the human eye is used to modify scalable video coding (SVC) H.264/AVC. We exploit the visibility of video content and the scalability of the video codec to achieve optimal subjective visual quality given limited system resources. To achieve the largest coding gain with controlled perceptual quality degradation, a perceptual weighting scheme is deployed wherein the compressed video is weighted as a function of visual saliency and of the non-uniform distribution of retinal photoreceptors. We develop a resource allocation algorithm emphasizing both efficiency and fairness by controlling the size of the salient region in each quality layer. Efficiency is emphasized on the low quality layer of the SVC. The bits saved by eliminating perceptual redundancy in regions of low interest are allocated to lower block-level distortions in salient regions. Fairness is enforced on the higher quality layers by enlarging the size of the salient regions. The simulation results show that the proposed VSVC framework significantly improves the subjective visual quality of compressed videos.
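
The codec integration is specific to SVC H.264/AVC, but the retinal weighting can be sketched on its own: weights fall off with eccentricity from the fixation point, mimicking the non-uniform photoreceptor density. The falloff model and its half-weight constant below are illustrative assumptions, not the paper's exact function.

```python
import numpy as np

def foveation_weights(h, w, fixation, viewing_dist_px, ecc_half_deg=2.3):
    """Per-pixel weights decaying with eccentricity from a fixation
    point, mimicking the retinal photoreceptor falloff. The half-weight
    eccentricity (deg) is an illustrative constant."""
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - fixation[0], xx - fixation[1])      # px from fixation
    ecc = np.degrees(np.arctan(r / viewing_dist_px))      # eccentricity, deg
    return 1.0 / (1.0 + (ecc / ecc_half_deg) ** 2)

wmap = foveation_weights(270, 480, fixation=(135, 240), viewing_dist_px=1200)
print(wmap[135, 240], wmap[0, 0])   # 1.0 at fixation, smaller at the corner
```

In a weighting scheme of this kind, bits saved in the heavily down-weighted periphery fund lower distortion in the salient, foveated region.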


International Conference on Image Processing | 2010

Temporal pooling of video quality estimates using perceptual motion models

Kwang-Hyun Lee; Jincheol Park; Sanghoon Lee; Alan C. Bovik

Emerging multimedia applications have increased the need for video quality measurement. Motion is critical to this task, but is complicated owing to a variety of object movements and movement of the camera. Here, we categorize the various motion situations and deploy appropriate perceptual models to each category. We use these models to create a new approach to objective video quality assessment. Performance evaluation on the Laboratory for Image and Video Engineering (LIVE) Video Quality Database shows competitive performance compared to the leading contemporary VQA algorithms.
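
A hedged sketch of the categorize-then-weight idea: classify each frame's motion situation, then weight its quality score by category before temporal averaging. The categories, thresholds, and weights below are illustrative placeholders, not the paper's perceptual models.

```python
import numpy as np

def classify_motion(mean_flow_mag, coherence):
    """Crude motion categorizer; thresholds are illustrative."""
    if mean_flow_mag < 0.5:
        return "static"
    return "camera" if coherence > 0.8 else "object"

# Illustrative per-category weights (placeholders for perceptual models).
WEIGHTS = {"static": 1.0, "object": 1.2, "camera": 0.8}

def pooled_quality(frame_scores, frame_motion):
    """Weighted temporal average of per-frame quality scores, where each
    frame's weight depends on its motion category."""
    w = np.array([WEIGHTS[classify_motion(m, c)] for m, c in frame_motion])
    s = np.asarray(frame_scores)
    return (w * s).sum() / w.sum()

scores = [0.9, 0.85, 0.4, 0.88]                        # per-frame quality
motion = [(0.1, 0.0), (2.0, 0.9), (2.0, 0.3), (0.2, 0.1)]  # (mag, coherence)
print(pooled_quality(scores, motion))
```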


Collaboration


Dive into Jincheol Park's collaborations.

Top Co-Authors

Alan C. Bovik

University of Texas at Austin
