Sewoong Ahn
Yonsei University
Publication
Featured research published by Sewoong Ahn.
IEEE Transactions on Image Processing | 2017
Heeseok Oh; Sewoong Ahn; Jongyoo Kim; Sanghoon Lee
Previously, no-reference (NR) stereoscopic 3D (S3D) image quality assessment (IQA) algorithms have been limited to the extraction of reliable hand-crafted features based on an understanding of the insufficiently revealed human visual system or natural scene statistics. Furthermore, compared with full-reference (FR) S3D IQA metrics, it is difficult to achieve competitive quality score predictions using the extracted features, which are not optimized with respect to human opinion. To cope with this limitation of the conventional approach, we introduce a novel deep learning scheme for NR S3D IQA in terms of local-to-global feature aggregation. A deep convolutional neural network (CNN) model is trained in a supervised manner through two-step regression. First, to overcome the lack of training data, local patch-based CNNs are modeled, and the FR S3D IQA metric is used to approximate a reference ground truth for training the CNNs. The automatically extracted local abstractions are aggregated into global features by inserting an aggregation layer in the deep structure. The locally trained model parameters are then updated iteratively using supervised global labeling, i.e., the subjective mean opinion score (MOS). In particular, the proposed deep NR S3D image quality evaluator does not estimate the depth from a pair of S3D images. The S3D image quality scores predicted by the proposed method represent a significant improvement over those of previous NR S3D IQA algorithms. Indeed, the accuracy of the proposed method is competitive with FR S3D IQA metrics, having ~91% correlation in terms of MOS.
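The two-step regression described in this abstract (patch-level training against an FR proxy metric, followed by global fine-tuning against MOS) can be illustrated at a toy scale. This sketch is not the authors' implementation: the patch scores are hypothetical stand-ins for patch-CNN outputs, mean pooling stands in for the aggregation layer, and a single bias term stands in for the globally fine-tuned parameters.

```python
# Toy illustration of local-to-global aggregation for NR IQA.
# Step 1: each image patch gets a local quality score. In the paper these
# come from patch-based CNNs trained against an FR S3D IQA metric used as
# a proxy ground truth; here they are just hypothetical numbers.
patch_scores = [0.72, 0.65, 0.80, 0.58, 0.69, 0.75]

# Step 2: an aggregation layer pools the local abstractions into a global
# feature; mean pooling is the simplest stand-in.
def aggregate(scores):
    return sum(scores) / len(scores)

global_pred = aggregate(patch_scores)

# Step 3: the locally trained parameters are refined against the subjective
# MOS label. Here a single bias term is updated by gradient descent on the
# squared error, mimicking the supervised global-labeling stage.
mos = 0.78          # hypothetical subjective mean opinion score
bias, lr = 0.0, 0.5
for _ in range(20):
    err = (global_pred + bias) - mos
    bias -= lr * 2 * err   # gradient of err**2 w.r.t. bias

final_score = global_pred + bias
```

With this learning rate the scalar update converges immediately, so the fine-tuned prediction matches the MOS label; the real model instead updates the full CNN weights over many labeled images.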
International Conference on Multimedia and Expo | 2016
Sewoong Ahn; Junghwan Kim; Hak-Sub Kim; Sanghoon Lee
By analyzing the statistical behavior of human visual attention, we find that fixation behavior is highly correlated with the visual discomfort viewers experience on stereoscopic images, unlike what conventional subjective assessments capture. To quantify the correlation between visual attention and discomfort, we explore a novel methodology termed transition of visual attention (ToVA) under various disparities, which accounts for the depth attributes of 3D images through eye-tracker experiments. Moreover, the saliency entropy is defined to quantify the distribution of fixations for 3D images, and ToVA is measured as the relative saliency entropy using the Kullback-Leibler divergence. To evaluate the effectiveness of ToVA, a successful example application is provided, in which ToVA is used to obtain subjective measurements of the discomfort experienced when viewing 3D displays, rather than relying on the conventional scoring-based subjective test.
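The relative-entropy measurement described in this abstract can be sketched in a few lines. The fixation histograms below are hypothetical; the idea is that ToVA compares fixation (saliency) distributions obtained under different disparities via the Kullback-Leibler divergence, while the saliency entropy quantifies how spread out the fixations are.

```python
import math

def normalize(hist):
    """Turn a fixation-count histogram into a probability distribution."""
    total = sum(hist)
    return [h / total for h in hist]

def entropy(p, eps=1e-12):
    """Saliency entropy: how dispersed the fixations are."""
    return -sum(pi * math.log(pi + eps) for pi in p)

def kl_divergence(p, q, eps=1e-12):
    """Relative saliency entropy D(p || q) between two fixation distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical fixation counts over image regions for two disparity settings.
fixations_small_disparity = normalize([30, 25, 20, 15, 10])
fixations_large_disparity = normalize([10, 10, 20, 30, 30])

# A larger KL divergence indicates a larger transition of visual attention.
tova = kl_divergence(fixations_small_disparity, fixations_large_disparity)
```

Identical distributions give a divergence of zero, so a nonzero `tova` signals that the change in disparity shifted where viewers fixate.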
Energy Policy | 2007
Ho-Jun Song; Seungmoon Lee; Sanjeev Maken; Sewoong Ahn; Jin-Won Park; Byoungryul Min; Won-Gun Koh
IEEE Transactions on Broadcasting | 2016
Hak-Sub Kim; Sewoong Ahn; Woojae Kim; Sanghoon Lee
Quality of Multimedia Experience | 2018
Jaekyung Kim; Woojae Kim; Sewoong Ahn; Jinwoo Kim; Sanghoon Lee
International Conference on Image Processing | 2018
Jongyoo Kim; Anh-Duc Nguyen; Sewoong Ahn; Chong Luo; Sanghoon Lee
International Conference on Image Processing | 2018
Sewoong Ahn; Woojae Kim; Jinwoo Kim; Jaekyung Kim; Sanghoon Lee
International Conference on Image Processing | 2018
Sewoong Ahn; Sanghoon Lee
European Conference on Computer Vision | 2018
Woojae Kim; Jongyoo Kim; Sewoong Ahn; Jinwoo Kim; Sanghoon Lee
IEEE Transactions on Image Processing | 2018
Heeseok Oh; Sewoong Ahn; Sanghoon Lee; Alan C. Bovik