Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zaiqing Chen is active.

Publication


Featured research published by Zaiqing Chen.


Optical Engineering | 2012

Stereoscopic depth perception varies with hues

Zaiqing Chen; Junsheng Shi; Yonghang Tai; Lijun Yun

Abstract. The contribution of color information to stereopsis is controversial, and whether stereoscopic depth perception varies with chromaticity is ambiguous. This study examined the changes in depth perception caused by hue variations. Based on the fact that a greater disparity range indicates more efficient stereoscopic perception, the effect of hue variations on depth perception was evaluated through the disparity range with random-dot stereogram stimuli. The disparity range was obtained by the constant-stimulus method for eight chromaticity points sampled from the CIE 1931 chromaticity diagram. The eight sample points comprise four main color hues (red, yellow, green, and blue) at two levels of chroma. The results show that the disparity range for the yellow hue is greater than that for the red hue, which in turn is greater than that for the blue hue, and the disparity range for the green hue is the smallest. We conclude that the perceived depth is not the same for different hues at a given size of disparity, and we suggest that stereoscopic depth perception can vary with chromaticity.
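As a rough illustration of how a disparity range can be extracted with the constant-stimulus method, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical proportion-seen data; the disparities, responses, and the 75% criterion are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: tested disparities (arcmin) and the fraction of
# trials on which observers still reported seeing depth.
disparities = np.array([10, 20, 40, 80, 160, 320], dtype=float)
p_seen = np.array([0.98, 0.95, 0.88, 0.70, 0.45, 0.20])

def psychometric(d, mu, sigma):
    # Probability of perceiving depth falls off as disparity exceeds
    # the observer's limit (cumulative-Gaussian falloff).
    return norm.sf(d, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, disparities, p_seen, p0=(150, 80))

# Disparity range: the largest disparity still "seen" on 75% of trials.
disparity_range = norm.isf(0.75, loc=mu, scale=sigma)
print(f"disparity range = {disparity_range:.0f} arcmin")
```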


IEEE/OSA Journal of Display Technology | 2014

Visual Comfort Modeling for Disparity in 3D Contents Based on Weber–Fechner's Law

Zaiqing Chen; Junsheng Shi; Xiaoqiao Huang; Lijun Yun; Yonghang Tai

Visual discomfort when looking at stereoscopic three-dimensional (3D) displays has been described as the number one health issue for the 3D industry. This paper focuses on the assessment of visual comfort in 3D contents and proposes an objective model to assess the visual comfort induced by possible factors. Based on the fact that visual comfort is a subjective sensation that accompanies changes in 3D characteristics, a general model in the form of a power series of the logarithm was presented to predict the level of visual comfort for candidate factors, based on the Weber-Fechner law. In particular, a psychophysical experiment was conducted to subjectively measure the degree of visual comfort induced by the disparity factor in motion 3D contents. The results of the experiment show that visual comfort decreases as the disparity increases in both crossed and uncrossed directions, and the comfortable disparity is in the range of -120 to 115 minutes of arc. The proposed model was then fitted to the psychophysical data to find the optimal values of the coefficients. The coefficients of determination (R2) were examined to investigate the goodness of fit of the model for different powers, and the Weber-Fechner form of the model was discussed.
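A minimal sketch of the kind of model fitting the abstract describes: a power series of the logarithm of disparity fitted to comfort scores, with R2 used to compare different powers. The scores and the choice K = 2 are illustrative assumptions, not the paper's data or result.

```python
import numpy as np

# Hypothetical mean-opinion-score data: disparity magnitude (arcmin)
# vs. subjective comfort (1 = very uncomfortable, 5 = very comfortable).
disparity = np.array([10, 30, 60, 90, 120, 150], dtype=float)
comfort   = np.array([4.8, 4.5, 3.9, 3.2, 2.6, 2.0])

# Power series of the logarithm of disparity (Weber-Fechner-style):
#   C(d) = a0 + a1*ln(d) + a2*(ln d)^2 + ... + aK*(ln d)^K
K = 2
coeffs = np.polyfit(np.log(disparity), comfort, K)
pred = np.polyval(coeffs, np.log(disparity))

# Coefficient of determination R^2, used to compare different powers K.
ss_res = np.sum((comfort - pred) ** 2)
ss_tot = np.sum((comfort - comfort.mean()) ** 2)
print("R^2 =", 1.0 - ss_res / ss_tot)
```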


Holography, Diffractive Optics, and Applications V | 2012

An experimental study on the relationship between maximum disparity and comfort disparity in stereoscopic video

Zaiqing Chen; Junsheng Shi; Yonghang Tai

It is well known that some viewers experience visual discomfort when looking at stereoscopic displays. Disparity is one of the key factors that affect visual comfort in 3D contents, and each person has a comfortable disparity range. In this paper, we explore the comfort disparity, which correlates with the optimal viewing distance, as a function of the maximum disparity for an individual. First, the individual maximum disparities of 14 subjects were measured. The subjects were then asked to rate comfort scores for a 3D video sequence with different disparities, to evaluate the individual comfort disparity. The results show that as the individual maximum disparity increased, the corresponding comfort disparity increased, with a correlation coefficient of r = 0.946. The average ratio between the comfort disparity and the maximum disparity was approximately 0.72, and this ratio provides a rapid method for determining an individual's optimal 3D viewing distance.
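A minimal sketch of the rapid method the abstract suggests, assuming only the reported average ratio of about 0.72; the example input is hypothetical.

```python
MAX_TO_COMFORT_RATIO = 0.72  # average ratio reported in the paper

def comfort_disparity(max_disparity_arcmin: float) -> float:
    """Rapid estimate of an individual's comfort disparity from the
    measured maximum disparity, using the ~0.72 average ratio."""
    return MAX_TO_COMFORT_RATIO * max_disparity_arcmin

# Hypothetical viewer whose measured maximum disparity is 160 arcmin:
print(comfort_disparity(160.0))  # ~115 arcmin
```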


Optoelectronic Imaging and Multimedia Technology III | 2014

Illuminant spectrum estimation using a digital color camera and a color chart

Junsheng Shi; Hongfei Yu; Xiaoqiao Huang; Zaiqing Chen; Yonghang Tai

Illuminant estimation is the main step in color constancy processing and an important prerequisite for digital color image reproduction and many computer vision applications. In this paper, a method for estimating the illuminant spectrum is investigated using a digital color camera and a color chart whose spectral reflectance is known. The method is based on measuring the CIEXYZ values of the chart using the camera. The first step of the method is to obtain the camera's color correction matrix and gamma values by taking a photo of the chart under a standard illuminant. The second step is to take a photo of the chart under the illuminant to be estimated; the camera's inherent RGB values are converted to standard sRGB values and further to the CIEXYZ values of the chart. Based on the measured CIEXYZ values and the known spectral reflectance of the chart, the spectral power distribution (SPD) of the illuminant is estimated using Wiener estimation and smoothing estimation. To evaluate the performance of the method quantitatively, the goodness-of-fit coefficient (GFC) was used to measure the spectral match, and the CIELAB color difference metric was used to evaluate the color match between color patches under the estimated and actual SPDs. A simulated experiment was carried out to estimate CIE standard illuminants D50 and C using the X-Rite ColorChecker 24-color chart, and an actual experiment was carried out to estimate daylight and illuminant A using two consumer-grade cameras and the chart; the experimental results verified the feasibility of the investigated method.
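A minimal sketch of the Wiener estimation step, assuming a simplified linear imaging model and placeholder data; the smoothness prior and noise covariance below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

n_w = 31                       # wavelengths, e.g. 400-700 nm in 10 nm steps
R = np.random.rand(24, n_w)    # placeholder reflectances of the 24 patches
cmf = np.random.rand(3, n_w)   # placeholder CIE color-matching functions
xyz = np.random.rand(24, 3)    # placeholder CIEXYZ measured via the camera

# Forward model: each patch's XYZ is linear in the illuminant SPD e,
#   xyz_i = (cmf * R_i) @ e, stacked into one linear system y = A @ e.
A = np.vstack([cmf * R[i] for i in range(R.shape[0])])   # (72, n_w)
y = xyz.reshape(-1)                                      # matches A's row order

# Smoothness prior on the SPD: first-order Markov covariance.
idx = np.arange(n_w)
C_e = 0.98 ** np.abs(idx[:, None] - idx[None, :])
C_n = 1e-4 * np.eye(A.shape[0])                          # assumed noise covariance

# Wiener estimate: e_hat = C_e A^T (A C_e A^T + C_n)^(-1) y
e_hat = C_e @ A.T @ np.linalg.solve(A @ C_e @ A.T + C_n, y)
```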


DEStech Transactions on Engineering and Technology Research | 2018

Development of a Patient-Specific BRDF-Based Human Lung Rendering for VATS Simulation

Cheng-Qi Xia; Xiaoqiao Huang; Zaiqing Chen; Qiong Li; Junsheng Shi; Yonghang Tai; Lei Wei; Hailing Zhou; Saeid Nahavandi; Jun Peng; Ran Cao

Video-assisted thoracoscopic surgery (VATS), the commonest minimally invasive resection for localized T1 or T2 lung carcinomas, requires a steep learning curve for novice residents to acquire the highly deliberate skills needed for surgical competence. Based on a bidirectional reflectance distribution function (BRDF) physics-based rendering model, the aim of this study is to propose a virtual reality (VR) surgical educative simulator with realistic visual rendering of the human lung in VATS procedures. Patient-specific medical images and a 360-degree dynamic operating room environment are also integrated into our training scenario. Finally, validation experiments are implemented on the SimVATS virtual surgical simulator framework, which demonstrated high performance and distinguished immersion. This study may explore a new graphic rendering model for VATS surgical education integrated with haptic and VR implementations.

Introduction

Benefiting from the burgeoning improvements in computer graphics (CG) and virtual reality (VR) technologies, virtual surgery is considered a method that can effectively reduce the cost of surgical training. Yet only a few physics-based rendering methods have been applied in the medical field, especially for rendering the soft tissues of the human body. Furthermore, minimally invasive surgeries (MIS) usually carry high risk due to limitations in operating space, viewing angle, and lighting conditions. The visual properties of human soft tissues have a high dynamic range, and there are individual pathological differences [1]. Therefore, virtual surgery places higher requirements on immersive reproduction and realistic visual effects of operating room (OR) scenes. Most current virtual surgical simulators focus on the interactions between trainees and virtual scenes and ignore realistic surgical environments and human tissue rendering. In consequence, trainees are constantly aware that they are in a fake virtual training environment rather than an actual surgical process [2]. In this context, physics-based rendering is considered an effective way to provide more visual information. In addition, by reproducing real training scenarios, virtual reality technology will also improve clinical effectiveness [3].

Related Works

In 1999, Marschner et al. proposed image-based BRDF [4], which, given a specific location of the light source and the camera, uses a photo and an arbitrary geometric shape to sample an isotropic material. In 2004, Chung et al. introduced a method of retroreflective BRDF sampling in video bronchoscopy images [5], which allows the acquisition of BRDFs where the light is always collinear with the camera; their goal was to render a single view angle. In 2011, Cenydd et al. extended image-based BRDF to measure the BRDF of the human brain [6]. Taking the space and time constraints of surgery into account, they could take only five photos of the area of interest, so the sampling was poor. In 2017, Qian et al. proposed a virtual reality framework for laparoscopic simulations [7], including the use of microfacet models for material rendering. VR applications in the medical field have dramatically boosted surgical planning, surgical navigation, and rehabilitation training [8]. Although VR has made great progress in the medical field, with major improvements in techniques and visual effects [9, 10], these applications have neglected the effect of the operating room environment on surgical training.
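For readers unfamiliar with microfacet BRDFs of the kind cited above, here is a minimal Cook-Torrance evaluation sketch with a GGX distribution; it is a generic physically based model under assumed placeholder material values, not the paper's tissue-specific parameters.

```python
import numpy as np

def ggx_brdf(n, v, l, albedo, roughness=0.3, f0=0.04):
    """Cook-Torrance BRDF with a GGX distribution.

    n, v, l: unit surface normal, view, and light directions.
    roughness and f0 are placeholder material values.
    """
    h = (v + l) / np.linalg.norm(v + l)                    # half vector
    nv, nl = max(n @ v, 1e-4), max(n @ l, 1e-4)
    nh, vh = max(n @ h, 0.0), max(v @ h, 0.0)

    a2 = roughness ** 4                                    # alpha^2, Disney remap
    D = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)   # GGX normal distribution
    k = (roughness + 1.0) ** 2 / 8.0                       # Schlick-GGX geometry
    G = (nv / (nv * (1.0 - k) + k)) * (nl / (nl * (1.0 - k) + k))
    F = f0 + (1.0 - f0) * (1.0 - vh) ** 5                  # Schlick Fresnel

    return albedo / np.pi + D * G * F / (4.0 * nv * nl)    # diffuse + specular
```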


international conference on mechatronics | 2017

Real-Time Visuo-Haptic Surgical Simulator for Medical Education – A Review

Yonghang Tai; Junsheng Shi; Lei Wei; Xiaoqiao Huang; Zaiqing Chen; Qiong Li

Virtual surgery simulations are able to provide reliable and repeatable learning experiences in a safe environment, from acquiring basic skills to performing full procedures. Yet a high-fidelity, practical, and immersive surgical simulation platform involves multi-disciplinary topics such as computer graphics, haptics, numerical computation, image processing, and mechanics. Depending on the detailed simulation, various surgical operations such as puncture, cutting, tearing, burning, and suturing may need to be simulated, each of which comes with very specific requirements on medical equipment and micro-skills. In this paper, we review a number of previous haptic-enabled simulators for medical education in different surgical operations, and identify several techniques that may improve the effectiveness of the pipelines in both visual and haptic rendering. We believe that virtual surgery simulation has enormous potential in the surgical training, education, and planning fields of medical advancement, and we endeavor to push the boundaries of this field through this review.


international conference on mechatronics | 2017

Development of NSCLC Precise Puncture Prototype Based on CT Images Using Augmented Reality Navigation

Zhibao Qin; Yonghang Tai; Junsheng Shi; Lei Wei; Zaiqing Chen; Qiong Li; Minghui Xiao; Jie Shen

According to the gray values in a CT image sequence of NSCLC, visualization of the CT images can help surgeons analyze and judge detailed characteristics inside the tumor. Through the color distribution of a heat map, a representative position for tumor puncture biopsy can be identified. The CT image sequence is reconstructed into a three-dimensional model, including the skin, bone, lungs, and tumor, and the reconstructed model is registered to the patient's real body. The four-dimensional heat map of the tumor serves as a reference to determine the spatial location of the puncture. In this paper, we reconstructed a four-dimensional heat map of the region of interest and designed the optimal puncture path. AR navigation technology guides the puncture biopsy, achieving precise puncture and facilitating pathological sampling by the doctor. Validations demonstrated that our precise puncture system, based on AR navigation and four-dimensional heat map reconstruction, greatly improved the accuracy of tumor puncture and the diagnostic rate of tumor biopsy.
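A loose sketch of one way CT gray values could be mapped to a heat-map color scale; the HU window and the blue-to-red ramp are illustrative assumptions, not the paper's reconstruction method.

```python
import numpy as np

def heatmap(ct_slice: np.ndarray, lo: float = -100.0, hi: float = 200.0) -> np.ndarray:
    """Map CT gray values to an RGB heat map (blue = low, red = high).

    lo/hi define a hypothetical soft-tissue HU window.
    """
    t = np.clip((ct_slice - lo) / (hi - lo), 0.0, 1.0)      # normalize window
    rgb = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1) # blue -> red ramp
    return (255 * rgb).astype(np.uint8)
```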


2011 International Conference on Optical Instruments and Technology: Optical Systems and Modern Optoelectronic Instruments | 2011

Design of OLED gamma correction system based on the LUT

Yonghang Tai; Lijun Yun; Junsheng Shi; Zaiqing Chen; Qiong Li

Gamma correction is an important step in faithfully reproducing the image information of a video source. In order to improve the image sharpness of an OLED micro-display, a gamma correction system was established to compensate for the gray-scale distortion of the micro-display caused by the mismatch between its optical and electrical characteristics. Based on North OLEiD Company's 0.5-inch OLED, we propose a gamma correction system that converts the 8-bit input signal into 9 bits for display on the OLED. It uses a Microchip MCU as the master of the I2C serial bus. Measurements on the developed hardware system verified the correction for VGA and CVBS video inputs, and the picture quality was also visibly improved.
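A minimal sketch of LUT-based gamma correction from an 8-bit input code to a 9-bit drive value, as the abstract describes; the gamma value here is an assumed example, not the OLED's measured characteristic.

```python
import numpy as np

GAMMA = 2.2  # assumed example value, not the panel's measured curve

# 256-entry LUT: 8-bit input code -> 9-bit (0..511) drive value.
lut = np.round(511.0 * (np.arange(256) / 255.0) ** GAMMA).astype(np.uint16)

def correct(frame_8bit: np.ndarray) -> np.ndarray:
    """Apply the precomputed LUT to every pixel of a uint8 frame."""
    return lut[frame_8bit]
```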


Proceedings of SPIE, the International Society for Optical Engineering | 2010

A Design of Near-Eye 3D Display Based on Dual-OLED

Zaiqing Chen; Junsheng Shi; Lijun Yun; Yonghang Tai


International Conference on Optical Instruments and Technology 2017: Optical Systems and Modern Optoelectronic Instruments | 2018

A quantitative measurement of binocular color fusion limit for different disparities

Zaiqing Chen; Junsheng Shi; Yonghang Tai; Xiaoqiao Huang; Lijun Yun; Chao Zhang; Liquan Dong; Yongtian Wang; Baohua Jia; Kimio Tatsuno

Collaboration


Dive into Zaiqing Chen's collaborations.

Top Co-Authors

Junsheng Shi (Yunnan Normal University)
Lijun Yun (Yunnan Normal University)
Xiaoqiao Huang (Yunnan Normal University)
Yonghang Tai (Yunnan Normal University)
Qiong Li (Yunnan Normal University)
Chao Zhang (Yunnan Normal University)
Hongfei Yu (Yunnan Normal University)