Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zong Qin is active.

Publication


Featured research published by Zong Qin.


IEEE/OSA Journal of Display Technology | 2016

See-Through Image Blurring of Transparent Organic Light-Emitting Diodes Display: Calculation Method Based on Diffraction and Analysis of Pixel Structures

Zong Qin; Yu-Hsiang Tsai; Yen-Wei Yeh; Yi-Pai Huang; Han-Ping D. Shieh

In this paper, the issue of see-through image blurring in transparent organic light-emitting diode (OLED) displays is demonstrated, and a systematic method based on diffraction theory is proposed to calculate the blurred see-through image. The calculation considers the distances of the background object and the viewer to correct the Fresnel diffraction, so that a chromatic image can finally be synthesized. Calculated and actually captured see-through images match well when verified with different background distances and pixel structures, with error ratios of only 2.3% and 4.3%, respectively, in terms of blurred-image fringe length. Based on the calculation results, the see-through image quality of four pixel structures is evaluated, and two structures are verified to have better image quality. Finally, higher resolutions, which lead to much more significant see-through image blurring, are discussed.
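The diffraction calculation summarized above can be illustrated with a minimal one-dimensional sketch: a far-field (Fraunhofer) model of a periodic transparent-pixel aperture, evaluated with an FFT. All parameters here (pixel pitch, aperture ratio, wavelength) are assumed values for illustration, not the paper's; the paper's full method additionally corrects Fresnel diffraction for finite object and viewer distances and synthesizes chromatic images.

```python
import numpy as np

wavelength = 550e-9   # green light, m (assumed)
pitch = 50e-6         # pixel pitch, m (assumed)
aperture_ratio = 0.4  # transparent-window fraction of the pitch (assumed)
n = 256               # samples per pixel period

# One period of the pixel: a centered transparent window.
x = (np.arange(n) - n / 2) / n * pitch
window = (np.abs(x) < aperture_ratio * pitch / 2).astype(float)

# Tile many periods so the FFT resolves the discrete diffraction orders.
aperture = np.tile(window, 64)
field = np.fft.fftshift(np.fft.fft(aperture))
intensity = np.abs(field) ** 2
intensity /= intensity.max()

# Diffraction angles (small-angle approximation) for each spatial frequency.
freqs = np.fft.fftshift(np.fft.fftfreq(aperture.size, d=pitch / n))
angles = wavelength * freqs  # radians

# Energy scattered outside the zeroth order is what blurs the see-through image.
zero_order = intensity[np.abs(angles) < wavelength / (10 * pitch)].sum()
print(f"fraction of energy in higher orders: {1 - zero_order / intensity.sum():.2f}")
```

The higher-order fraction grows as the aperture shrinks, which is consistent with the abstract's point that pixel structure governs see-through blurring.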


IEEE Photonics Journal | 2017

Evaluation of a Transparent Display's Pixel Structure Regarding Subjective Quality of Diffracted See-Through Images

Zong Qin; Jing Xie; Fang-Cheng Lin; Yi-Pai Huang; Han-Ping D. Shieh

Transparent displays utilizing transparent windows suffer from blurred see-through images caused by diffraction; however, current studies still rely on experiments with actual display panels and human observers to investigate see-through image quality. Moreover, the influence of pixel structure on subjective see-through image quality has not been clearly demonstrated. To improve this inefficient investigation methodology and quantitatively evaluate pixel structure, we first propose a simulation method for diffracted see-through images. Next, by testing six mainstream full-reference image quality assessment algorithms, multiscale structural similarity (MS-SSIM) is revealed to be the most suitable predictor of subjective image quality for our study. Based on public image databases, the influences of aperture ratio, resolution, and the geometry of the transparent window are evaluated by combining the proposed simulation method and MS-SSIM. As a result, an aperture ratio increase of 0.1 leads to a considerable increase in subjective image quality of more than eight percentage points, whereas a resolution increase of 100 ppi leads to a decrease of only about three percentage points. Finally, the geometry of the transparent window is shown to have little influence. Additionally, the physical reasons why these aspects of pixel structure behave in this manner are given.
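As a rough illustration of how MS-SSIM scores a degraded image against its reference, here is a simplified sketch: it uses a single global SSIM statistic per scale instead of the windowed SSIM map of the original algorithm, with the standard five-scale weights from Wang et al. It is a toy model for intuition only, not the paper's evaluation pipeline.

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM - a simplification of the windowed original."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def downsample2(x):
    """2x2 average pooling, cropping odd edges."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4

def ms_ssim(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    """Multiscale SSIM with the standard five-scale weights."""
    score = 1.0
    for w in weights:
        score *= max(ssim_global(x, y), 1e-6) ** w
        x, y = downsample2(x), downsample2(y)
    return score

# Toy demonstration: a reference image vs a blurred ("diffracted") copy.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
blurred = downsample2(downsample2(ref)).repeat(4, axis=0).repeat(4, axis=1)
print(ms_ssim(ref, ref), ms_ssim(ref, blurred))
```

An identical pair scores 1.0 and the blurred copy scores markedly lower, which is the behavior that makes MS-SSIM usable as a proxy for subjective quality.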


Applied Optics | 2017

Contrast-sensitivity-based evaluation method of a surveillance camera's visual resolution: improvement from the conventional slanted-edge spatial frequency response method

Zong Qin; Po-Jung Wong; Wei-Chung Chao; Fang-Cheng Lin; Yi-Pai Huang; Han-Ping D. Shieh

Visual resolution is an important specification of a surveillance camera and is usually quantified in linewidths per picture height (LW/PH). The conventional evaluation method adopts the slanted-edge spatial frequency response (e-SFR) and uses a fixed decision contrast ratio to determine LW/PH. However, this method produces a considerable error with respect to subjectively judged results because the perceptibility of the human visual system (HVS) varies with spatial frequency. Therefore, in this paper, a systematic calculation method that combines the e-SFR with the contrast sensitivity function characterizing the HVS is proposed to determine LW/PH. Eight 720P camera modules in day mode, four 720P modules in night mode, and two 1080P modules in day mode are tested. Corresponding to these three cases, the mean absolute errors between objective and subjective LW/PH are suppressed to as low as 26 (3.6% of 720P), 27 (3.8% of 720P), and 49 (4.5% of 1080P), whereas those of the conventional method are 68 (9.4% of 720P), 95 (13.2% of 720P), and 118 (10.9% of 1080P).
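The core idea can be sketched in a few lines (an assumed toy model, not the paper's exact procedure): the conventional method declares the resolution limit at the frequency where the camera's SFR drops below a fixed contrast threshold, while the CSF-based approach weights the SFR by a human contrast sensitivity function so the decision follows what the eye can actually perceive. The Gaussian SFR shape and the visibility thresholds below are assumptions; the CSF is the well-known Mannos-Sakrison model.

```python
import numpy as np

def csf_mannos(f):
    """Mannos-Sakrison CSF model (f in cycles/degree), peaking near 1 around 8 cpd."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def camera_sfr(f, f50=15.0):
    """Assumed Gaussian-shaped camera SFR with 50% response at f50 cycles/degree."""
    return np.exp(-np.log(2) * (f / f50) ** 2)

f = np.linspace(0.5, 60, 2000)

# Conventional: fixed decision contrast of 10%, regardless of frequency.
fixed_res = f[camera_sfr(f) >= 0.10].max()

# CSF-based: detail counts as resolved while SFR * CSF stays above a small
# visibility floor, so the threshold effectively varies with frequency.
csf_res = f[camera_sfr(f) * csf_mannos(f) >= 0.02].max()
print(fixed_res, csf_res)
```

With these assumed numbers the two criteria disagree by a couple of cycles/degree, which is exactly the kind of gap between objective and subjective judgments the paper sets out to close.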


THREE-DIMENSIONAL IMAGING, VISUALIZATION, AND DISPLAY 2016 | 2016

Compact and high resolution virtual mouse using lens array and light sensor

Zong Qin; Yu-Cheng Chang; Yu-Jie Su; Yi-Pai Huang; Han-Ping D. Shieh

A virtual mouse based on an IR source, a lens array, and a light sensor was designed and implemented. The optical architecture, including the number of lenses, lens pitch, baseline length, sensor length, lens-sensor gap, and focal length, was carefully designed to achieve low detection error and high resolution while keeping the system volume compact. The system volume is 3.1mm (thickness) × 4.5mm (length) × 2, which is much smaller than that of a camera-based device. A relative detection error of 0.41mm and a minimum resolution of 26ppi were verified in experiments, so the device can replace a conventional touchpad/touchscreen. If the system thickness is relaxed to 20mm, a resolution higher than 200ppi can be achieved, allowing it to replace a real mouse.
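The position sensing behind such a device rests on classic two-view triangulation: two lenses separated by a baseline image the IR spot at slightly shifted positions on the sensor, and that shift (disparity) encodes the spot's distance. A minimal sketch with assumed numbers, not the paper's actual design parameters:

```python
focal_length = 1.2   # mm, assumed lens focal length (lens-sensor gap)
baseline = 3.0       # mm, assumed spacing between two lenses in the array

def depth_from_disparity(disparity_mm):
    """Classic triangulation: z = f * B / d."""
    return focal_length * baseline / disparity_mm

def disparity_from_depth(z_mm):
    """Inverse relation: the disparity a spot at depth z produces."""
    return focal_length * baseline / z_mm

# With these assumed numbers, a fingertip 30 mm away yields a 0.12 mm disparity.
d = disparity_from_depth(30.0)
print(round(d, 3), depth_from_disparity(d))
```

Because disparity shrinks as 1/z, a longer baseline or focal length (i.e., a thicker system) raises depth resolution, which matches the abstract's trade-off between a 3.1 mm-thick and a 20 mm-thick design.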


IEEE Photonics Journal | 2017

Maximal Acceptable Ghost Images for Designing a Legible Windshield-Type Vehicle Head-Up Display

Zong Qin; Fang-Cheng Lin; Yi-Pai Huang; Han-Ping D. Shieh

Windshield-type vehicle head-up displays (HUDs) are increasingly popular and are stepping toward augmented reality; however, the windshield causes an annoying problem: ghost images. To date, the maximal extent of ghost images that still yields acceptable legibility has not been determined. In this paper, to find a quantitative criterion for ghost images with respect to subjective perception, we first design an HUD using a rotatable aspheric reflector and a wedge-glass windshield. The optical design of the HUD and experimental results showing high-quality images are discussed. Next, eight different disparity angles between the primary and ghost images are controllably generated by rotating the aspheric reflector. Based on this HUD platform, human factor experiments in a simulated automotive driving environment are conducted, in which the eight disparity angles are presented to 12 subjects under three ambient contrast ratios (ACRs). The experiments demonstrate that the maximal disparity angles for acceptable legibility under ACRs of 3.90, 2.49, and 1.80 are 0.006°, 0.017°, and 0.024°, respectively. These maximal acceptable disparity angles are important references for quantitatively and efficiently evaluating the optomechanical system of an HUD. In addition, they can inform vehicle laws and regulations for HUDs.
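The three reported thresholds suggest a simple engineering use: given a measured ambient contrast ratio, look up the maximal acceptable disparity angle. A minimal sketch using linear interpolation between the paper's three measured points (the behavior between points is an assumption; the paper only reports the three ACRs):

```python
import numpy as np

# Measured (ACR, max acceptable disparity angle in degrees) pairs from the study.
acr = np.array([1.80, 2.49, 3.90])
max_angle = np.array([0.024, 0.017, 0.006])

def acceptable_angle(acr_value):
    """Interpolated legibility threshold; clamps to the end points outside the range."""
    return float(np.interp(acr_value, acr, max_angle))

print(acceptable_angle(3.0))
```

The trend is intuitive: the brighter the HUD relative to the background (higher ACR), the more visible the ghost, so the tolerated disparity angle shrinks.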


SID Symposium Digest of Technical Papers | 2016

31-2: See-through Image Blurring of Transparent OLED Display: Diffraction Analysis and OLED Pixel Optimization

Zong Qin; Yen-Wei Yeh; Yu-Hsiang Tsai; Wei-Yuan Cheng; Yi-Pai Huang; Han-Ping D. Shieh


IEEE Photonics Conference | 2015

High ambient contrast ratio OLED display with microlens array and cruciform black matrices

Zong Qin; Yen-Wei Yeh; Yi-Pai Huang; Han-Ping D. Shieh; Kuo-Ch'ang Lee


Journal of The Society for Information Display | 2018

Image content adaptive color breakup index for field sequential color displays using a dominant visual saliency method

Zong Qin; Ying-Ju Lin; Fang-Cheng Lin; Chia-Wei Kuo; Ching-Huan Lin; Norio Sugiura; Han-Ping D. Shieh; Yi-Pai Huang


SID Symposium Digest of Technical Papers | 2018

85-3: Distinguished Student Paper: Image-Content-Adaptive Color Breakup Index for Field-Sequential-Color Displays Using Dominant Visual Saliency Method

Ying-Ju Lin; Zong Qin; Fang-Cheng Lin; Han-Ping D. Shieh; Yi-Pai Huang


SID Symposium Digest of Technical Papers | 2018

48-3: Distinguished Student Paper: Ambient-Light-Adaptive Image Quality Enhancement for Full-Color E-Paper Displays Using Saturation-Based Tone-Mapping Method

Yi-Wen Chen; Zong Qin; Fang-Cheng Lin; Han-Ping D. Shieh; Yi-Pai Huang

Collaboration


Dive into Zong Qin's collaborations.

Top Co-Authors

Yi-Pai Huang | National Chiao Tung University
Han-Ping D. Shieh | National Chiao Tung University
Fang-Cheng Lin | National Chiao Tung University
Yen-Wei Yeh | National Chiao Tung University
Yu-Cheng Chang | National Chiao Tung University
Yu-Jie Su | National Chiao Tung University
Yi-Wen Chen | National Chiao Tung University
Ying-Ju Lin | National Chiao Tung University
Yu-Hsiang Tsai | Industrial Technology Research Institute
Chi-Tang Hsieh | National Chiao Tung University