Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Xiaoqiao Huang is active.

Publication


Featured research published by Xiaoqiao Huang.


Optoelectronic Imaging and Multimedia Technology III | 2014

Illuminant spectrum estimation using a digital color camera and a color chart

Junsheng Shi; Hongfei Yu; Xiaoqiao Huang; Zaiqing Chen; Yonghang Tai

Illumination estimation is the main step in color constancy processing and an important prerequisite for digital color image reproduction and many computer vision applications. In this paper, a method for estimating the illuminant spectrum is investigated using a digital color camera and a color chart, for the situation in which the spectral reflectance of the chart is known. The method is based on measuring the CIEXYZ values of the chart with the camera. The first step is to obtain the camera's color-correction matrix and gamma values by photographing the chart under a standard illuminant. The second step is to photograph the chart under the illuminant to be estimated; the camera's inherent RGB values are converted to standard sRGB values and further to the CIEXYZ values of the chart. From the measured CIEXYZ values and the known spectral reflectance of the chart, the spectral power distribution (SPD) of the illuminant is estimated using Wiener estimation and smoothing estimation. To evaluate the performance of the method quantitatively, the goodness-of-fit coefficient (GFC) was used to measure the spectral match, and the CIELAB color-difference metric was used to evaluate the color match between patches under the estimated and actual SPDs. A simulated experiment was carried out to estimate CIE standard illuminants D50 and C using the X-Rite ColorChecker 24-patch chart, and an actual experiment was carried out to estimate daylight and illuminant A using two consumer-grade cameras and the chart. The experimental results verified the feasibility of the investigated method.
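As an illustration of the estimation step described above, the smoothing (regularized least-squares) estimate and the GFC metric can be sketched on synthetic data. The reflectances, sensor sensitivities, and illuminant below are made-up stand-ins, not the paper's measured values or the actual CIE tables:

```python
import numpy as np

# Wavelength grid (400-700 nm, 31 samples) and synthetic stand-ins for the
# quantities the method assumes are known: chart reflectances and sensor
# sensitivities. Real CIE/camera tables would be substituted in practice.
wl = np.linspace(400, 700, 31)
rng = np.random.default_rng(0)

def smooth_spectra(n):
    # random smooth spectra built from a low-order cosine series
    k = np.arange(4)
    basis = np.cos(np.outer(k, np.linspace(0, np.pi, 31)))
    return np.clip(rng.uniform(-1, 1, (n, 4)) @ basis * 0.2 + 0.5, 0.05, 1.0)

R = smooth_spectra(24)                                 # 24 patches x 31 wavelengths
A = np.stack([np.exp(-0.5 * ((wl - c) / 40.0) ** 2)    # 31 x 3 sensitivities
              for c in (450, 550, 600)], axis=1)

e_true = 0.6 + 0.4 * np.sin(wl / 60.0)                 # "unknown" illuminant SPD

# Forward model: the sensor response of patch i is A.T @ (r_i * e)
M = np.vstack([A.T * R[i] for i in range(24)])         # (72 x 31)
c = M @ e_true                                         # simulated measurements

# Smoothing estimate: e_hat = argmin ||M e - c||^2 + lam * ||D e||^2,
# with D a second-difference operator enforcing a smooth SPD.
D = np.diff(np.eye(31), n=2, axis=0)
lam = 1e-4
e_hat = np.linalg.solve(M.T @ M + lam * D.T @ D, M.T @ c)

# Goodness-of-fit coefficient (GFC): cosine similarity of true vs. estimate
gfc = abs(e_true @ e_hat) / (np.linalg.norm(e_true) * np.linalg.norm(e_hat))
```

On noiseless synthetic data the recovery is nearly exact (GFC close to 1); the Wiener variant replaces the smoothness prior with the illuminant covariance.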


DEStech Transactions on Engineering and Technology Research | 2018

Development of a Patient-Specific BRDF-Based Human Lung Rendering for VATS Simulation

Cheng-Qi Xia; Xiaoqiao Huang; Zaiqing Chen; Qiong Li; Junsheng Shi; Yonghang Tai; Lei Wei; Hailing Zhou; Saeid Nahavandi; Jun Peng; Ran Cao

Video-assisted thoracoscopic surgery (VATS), the most common minimally invasive resection for localized T1 or T2 lung carcinomas, requires a steep learning curve before novice residents acquire the highly deliberate skills needed for surgical competence. Based on a bidirectional reflectance distribution function (BRDF) physics-based rendering model, the aim of this study is to propose a virtual reality (VR) surgical training simulator with realistic visual rendering of the human lung in VATS procedures. Patient-specific medical images and a 360-degree dynamic operating-room environment are also integrated into the training scenario. Finally, validation experiments on the SimVATS virtual surgical simulator framework demonstrated high performance and distinguished immersion. This study may open a new graphics rendering model for VATS surgical education, integrated with haptic and VR implementations.

Introduction. Benefiting from burgeoning improvements in computer graphics (CG) and virtual reality (VR) technologies, virtual surgery is considered an effective way to reduce the cost of surgical training. Yet only a few physics-based rendering methods have been applied in the medical field, especially for rendering soft tissues of the human body. Furthermore, minimally invasive surgeries (MIS) usually carry high risk due to limitations in operating space, viewing angle, and lighting conditions. The visual properties of human soft tissues have a high dynamic range, and there are individual pathological differences [1]. Virtual surgery therefore places high requirements on the immersive reproduction and realistic visual rendering of operating-room (OR) scenes. Most current virtual surgical simulators focus on the interactions between trainees and virtual scenes while ignoring realistic surgical environments and human-tissue rendering. As a consequence, trainees remain constantly aware that they are in an artificial training environment rather than an actual surgical procedure [2]. In this context, physics-based rendering is considered an effective way to provide more visual information. In addition, realistic training scenarios allow virtual reality technology to improve clinical effectiveness [3].

Related Works. In 1999, Marschner et al. proposed image-based BRDF [4], which uses a photo and an arbitrary geometric shape, with known positions of the light source and camera, to sample an isotropic material. In 2004, Chung et al. introduced a method for retroreflective BRDF sampling in video bronchoscopy images [5], which acquires the BRDF where the light is always collinear with the camera; their goal was to render a single view angle. In 2011, Cenydd et al. improved image-based BRDF to measure the BRDF of the human brain [6]; taking the space and time constraints of surgery into account, they could take only five photos of the area of interest, so the sampling was sparse. In 2017, Qian et al. proposed a virtual reality framework for laparoscopic simulations [7], including the use of microfacet models for material rendering. VR applications in the medical field have dramatically boosted surgical planning, surgical navigation, and rehabilitation training [8], with major improvements in techniques and visual effects [9, 10]. However, these applications have neglected the effect of the operating-room environment on surgical training.
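As an illustration of the microfacet material models mentioned above, a minimal isotropic Cook-Torrance specular lobe with a GGX distribution can be sketched. This is a generic sketch of the technique, not the paper's actual lung BRDF or its fitted parameters:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def ggx_specular(n, l, v, alpha=0.3, f0=0.04):
    """Isotropic Cook-Torrance specular lobe: GGX normal distribution,
    Smith masking-shadowing, Schlick Fresnel. alpha (roughness) and f0
    (normal-incidence reflectance) are illustrative values only."""
    h = normalize(l + v)                                   # half vector
    nl = max(float(n @ l), 1e-6)
    nv = max(float(n @ v), 1e-6)
    nh = max(float(n @ h), 0.0)
    hv = max(float(h @ v), 0.0)
    a2 = alpha * alpha
    D = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)   # GGX distribution
    g1 = lambda x: 2.0 * x / (x + np.sqrt(a2 + (1.0 - a2) * x * x))
    G = g1(nl) * g1(nv)                                    # Smith geometry term
    F = f0 + (1.0 - f0) * (1.0 - hv) ** 5                  # Schlick Fresnel
    return D * F * G / (4.0 * nl * nv)

# The lobe peaks in the mirror direction and falls off away from it:
n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.3, 0.0, 1.0]))
peak = ggx_specular(n, l, normalize(np.array([-0.3, 0.0, 1.0])))  # mirror view
off = ggx_specular(n, l, normalize(np.array([0.8, 0.0, 0.4])))    # off-mirror
```

A tissue renderer would evaluate such a lobe per light and per pixel, with roughness and reflectance maps fitted to patient-specific appearance data.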


Concurrency and Computation: Practice and Experience | 2018

Machine learning-based haptic-enabled surgical navigation with security awareness

Yonghang Tai; Lei Wei; Hailing Zhou; Qiong Li; Xiaoqiao Huang; Junsheng Shi; Saeid Nahavandi

A novel security-aware surgical navigation system is proposed for accurate minimally invasive surgery, combining machine learning algorithms, haptic-enabled devices, and customized surgical tools to guide surgery with real-time force and visual navigation. To provide a direct, simplified user interface during the operation, we combined traditional surgical guidance images with an AR-based view and implemented a 3D-reconstructed, patient-specific surgical environment that includes all requisite surgical details. In particular, we trained on surgically collected biomechanical haptic data, employing an LSTM-based RNN and a residual network for intraoperative force-manipulation prediction and classification, respectively. Evaluation experiments on a percutaneous therapy procedure demonstrated higher performance and distinguished accuracy for the combined visual and haptic system than for a traditional navigation system. These preliminary findings suggest a new framework for minimally invasive surgical navigation and hint at the possibility of integrating haptics, AR, and machine learning algorithms in medical simulation. In addition, we take security into account when implementing this new framework.


international conference on mechatronics | 2017

Real-Time Visuo-Haptic Surgical Simulator for Medical Education – A Review

Yonghang Tai; Junsheng Shi; Lei Wei; Xiaoqiao Huang; Zaiqing Chen; Qiong Li

Virtual surgery simulations can provide reliable and repeatable learning experiences in a safe environment, from acquiring basic skills to performing full procedures. Yet a high-fidelity, practical, and immersive surgical simulation platform involves multi-disciplinary topics such as computer graphics, haptics, numerical computation, image processing, and mechanics. Depending on the simulation, various surgical operations such as puncture, cutting, tearing, burning, and suturing may need to be simulated, each with very specific requirements on medical equipment and micro-skills. In this paper, we review a number of previous haptic-enabled simulators for medical education across different surgical operations, and identify several techniques that may improve the effectiveness of both the visual and the haptic rendering pipelines. We believe that virtual surgery simulation has enormous potential in surgical training, education, and planning, and we endeavor to push the boundaries of this field through this review.


international conference on mechatronics | 2017

Dynamic Force Modeling for Robot-Assisted Percutaneous Operation Using Intraoperative Data

Feiyan Li; Yonghang Tai; Junsheng Shi; Lei Wei; Xiaoqiao Huang; Qiong Li; Minghui Xiao; Min Zou

Percutaneous therapy is an essential approach in minimally invasive surgery, especially the percutaneous-access procedure, which provides neither visual nor tactile feedback during the actual operation. In this paper, we constructed a dynamic percutaneous biomechanics experimental architecture, together with a corresponding validation framework using clinical trials in the operating room, designed to enable accurate modeling of the puncture force. This is the first work to propose dynamic force modeling based on intraoperative data and to introduce the idea of continuous modeling of the percutaneous force. The results demonstrate that the dynamic puncture force model built on our experimental architecture not only fits the biological tissue data better than previous algorithms, but also agrees closely with the intraoperative clinical data. This further shows that the dynamic puncture modeling algorithm closely matches the actual percutaneous procedure, enabling more precise and reliable applications in robot-assisted surgery.
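The kind of puncture-force behavior being modeled can be sketched as a piecewise function of insertion depth, following the phase structure common in the needle-insertion literature (pre-puncture stiffness, then cutting plus friction after rupture). All parameter values below are illustrative placeholders, not the paper's fitted intraoperative model:

```python
import numpy as np

def puncture_force(depth, x_p=8.0, k1=0.1, k2=0.01, f_c=1.2, mu=0.15):
    """Illustrative piecewise puncture force (N) versus insertion depth (mm).
    Pre-puncture (depth < x_p): nonlinear stiffness of the deforming surface.
    Post-puncture: constant cutting force plus depth-proportional friction.
    The discontinuity at x_p models tissue rupture. All parameters are
    made-up placeholders, not fitted values."""
    depth = np.asarray(depth, dtype=float)
    pre = k1 * depth + k2 * depth ** 2       # membrane deformation phase
    post = f_c + mu * (depth - x_p)          # cutting + friction phase
    return np.where(depth < x_p, pre, post)
```

Fitting such a model to intraoperative force traces amounts to estimating the puncture depth and the phase coefficients from the recorded force-depth data.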


Automation, Control and Intelligent Systems | 2016

Barcode Recognizable System Implementing Based on AM5728

Xicai Li; Junsheng Shi; Xiaoqiao Huang; Yonghang Tai; Chongde Zi; Huan Yang; Xingyu Yang; Zhiwei Deng; Feiyan Li

To meet industrial-camera requirements for barcode identification, and to address the challenges of fast barcode image acquisition and processing and of low recognition accuracy, we propose a barcode recognition framework based on the AM5728 embedded system. It employs an industrial CCD to scan the barcode image and integrates the AM5728 vision development platform to process the collected images. Decoding information is then obtained from a series of algorithms comprising convolution filtering, barcode positioning, and recognition, facilitated by the AM5728 vision development platform. Experimental results validate that the recognition accuracy of our system reaches 100% under the threshold condition, at a rate of 20 barcode frames per second.
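The scanline stage of such a pipeline, binarizing one camera line and run-length encoding it into alternating bar/space widths (the raw input to symbology decoding), can be sketched generically. This is an illustrative sketch, not the AM5728 implementation:

```python
import numpy as np

def scanline_runs(scanline, thresh=None):
    """Binarize one grayscale scanline and run-length encode it into
    alternating bar/space widths. Returns the colour of the first run
    (1 = dark bar, 0 = light space) and the run widths in pixels."""
    x = np.asarray(scanline, dtype=float)
    if thresh is None:
        thresh = 0.5 * (x.min() + x.max())    # simple mid-range threshold
    bits = (x < thresh).astype(int)           # dark pixels -> 1
    edges = np.flatnonzero(np.diff(bits)) + 1 # indices where the run changes
    bounds = np.concatenate(([0], edges, [len(bits)]))
    widths = np.diff(bounds)                  # length of each run
    return bits[0], widths
```

A decoder would then match normalized width patterns against the symbology's code tables (e.g. EAN-13 module patterns); the convolution filtering mentioned above would denoise the line before thresholding.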


computer science and software engineering | 2008

A Study on the Stability and Uniformity of LCD

Xiaoqiao Huang; Junsheng Shi

Liquid crystal displays (LCDs) play an increasingly critical role in color transformation and reproduction across media, but in practice many users neglect the influences on the stability and spatial uniformity of an LCD. In this paper, 24 color test samples were measured and computed at 9 different screen positions to study the stability and spatial uniformity of an LCD. The results show that the stability of the LCD is satisfactory; the best time for colorimetric characterization and other special colorimetric measurements is about 4 hours after turn-on. The spatial uniformity of the LCD is illustrated by the distributions of the color differences between the surrounding positions and the center, which range from 0.67 to 5.08 ΔE units of CAM02-SCD.
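The position-versus-center comparison above can be sketched as follows, using the simpler Euclidean CIELAB ΔE*ab in place of CAM02-SCD (which requires the full CIECAM02 model) and hypothetical Lab measurements at 9 screen positions:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """Euclidean CIELAB Delta-E*ab, a simpler stand-in for the CAM02-SCD
    differences reported in the paper."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float),
                          axis=-1)

# Hypothetical Lab measurements at 9 screen positions (centre is index 4)
lab = np.array([[52.1, 1.2, -0.8], [51.7, 0.9, -1.1], [52.4, 1.5, -0.5],
                [51.9, 1.1, -0.9], [52.0, 1.0, -1.0], [52.3, 0.8, -1.2],
                [51.5, 1.4, -0.6], [52.2, 1.3, -0.7], [51.8, 0.7, -1.3]])
de = delta_e_ab(lab, lab[4])       # uniformity: each position vs. centre
```

The paper's range of 0.67 to 5.08 ΔE units would correspond to the minimum and maximum of such per-position differences, averaged over the 24 test colors.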


computer science and software engineering | 2008

Investigation on Color Shifts for Different Gamma of Display System in CIECAM02-Based Uniform Color Space

Ping He; Junsheng Shi; Xiaoqiao Huang; Qiong Li


DEStech Transactions on Engineering and Technology Research | 2018

A Constitutive Model of Soft Tissue Deformation for Virtual Surgical Simulation: A Literature Review

Feiyan Li; Xiaoqiao Huang; Zhai-Qing Chen; Qiong Li; Junsheng Shi; Yonghang Tai; Lei Wei; Hailing Zhou; Saeid Nahavandi


Holography, Diffractive Optics, and Applications VII | 2016

Visual discomfort caused by color asymmetry in 3D displays

Zaiqing Chen; Xiaoqiao Huang; Yonghan Tai; Junsheng Shi; Lijun Yun

Collaboration


Dive into Xiaoqiao Huang's collaborations.

Top Co-Authors

Junsheng Shi (Yunnan Normal University)
Qiong Li (Yunnan Normal University)
Zaiqing Chen (Yunnan Normal University)
Yonghang Tai (Yunnan Normal University)
Feiyan Li (Yunnan Normal University)
Hongfei Yu (Yunnan Normal University)