Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junsheng Shi is active.

Publication


Featured research published by Junsheng Shi.


Holography, Diffractive Optics, and Applications V | 2012

An experimental study on the relationship between maximum disparity and comfort disparity in stereoscopic video

Zaiqing Chen; Junsheng Shi; Yonghang Tai

It is well known that some viewers experience visual discomfort when looking at stereoscopic displays. Disparity is one of the key factors affecting visual comfort in 3D content, and each person has a comfortable disparity range. In this paper, we explore the comfort disparity, which correlates with the optimal viewing distance, as a function of an individual's maximum disparity. First, the individual maximum disparities of 14 subjects were measured. Then the subjects rated comfort scores for a 3D video sequence with different disparities, to evaluate their individual comfort disparity. The results show that comfort disparity increased with individual maximum disparity, with a correlation coefficient of r = 0.946. The average ratio between comfort disparity and maximum disparity was approximately 0.72, and this ratio offers a rapid method for determining an individual's optimal 3D viewing distance.
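A minimal sketch of how the reported 0.72 average ratio could be applied in practice to predict a viewer's comfort disparity from a measured maximum disparity; the function name, units, and sample value below are illustrative assumptions, not from the paper.

```python
# Estimate an individual's comfort disparity from their measured maximum
# disparity, using the ~0.72 average ratio reported in the study.
COMFORT_RATIO = 0.72  # average comfort/maximum disparity ratio from the abstract

def comfort_disparity(max_disparity_arcmin: float) -> float:
    """Predicted comfortable disparity, in the same units as the input."""
    return COMFORT_RATIO * max_disparity_arcmin

# Example: a viewer whose maximum fusible disparity was measured as 60 arcmin
print(comfort_disparity(60.0))  # predicted comfort disparity for that viewer
```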


Systems, Man and Cybernetics | 2016

Tissue and force modelling on multi-layered needle puncture for percutaneous surgery training

Yonghang Tai; Lei Wei; Hailing Zhou; Saeid Nahavandi; Junsheng Shi

Percutaneous surgery is a typical minimally invasive surgery. Featuring minimal trauma and infection rates as well as rapid recovery times for patients, percutaneous therapy has replaced various traditional open surgery approaches over the past decades and has become essential for a range of clinical operations. However, practice and training for such a vocational manual skill are both difficult and expensive, which has hindered further advances. In this paper, we developed an immersive needle insertion simulator for percutaneous surgery through visuo-haptic rendering. A multi-layered deformable tissue model with human anatomical textures is simulated and rendered. A mass-spring-based force model and algorithm are employed for realistic trocar needle insertion. Finally, a highly immersive virtual training scenario, integrated with a desktop haptic device, is implemented to provide perceptive, hands-on experience. Medical professionals and trainees have been invited to practice on the training scenario and provide subjective opinions for refining our implementation.
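The mass-spring force model mentioned above can be sketched, in its simplest spring-damper form, as follows; the layer names and parameter values are illustrative assumptions, not the paper's calibrated data.

```python
# Minimal mass-spring-damper sketch of needle-tissue interaction force.
# Parameter values are illustrative, not taken from the paper.
def puncture_force(stiffness: float, damping: float,
                   displacement: float, velocity: float) -> float:
    """Reaction force F = k*x + c*v felt at the needle tip before rupture."""
    return stiffness * displacement + damping * velocity

# Multi-layered tissue: each layer contributes its own stiffness/damping
layers = [
    ("skin",   800.0, 2.0),
    ("fat",    300.0, 1.0),
    ("muscle", 600.0, 1.5),
]
for name, k, c in layers:
    print(name, puncture_force(k, c, displacement=0.002, velocity=0.01))
```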


Optoelectronic Imaging and Multimedia Technology III | 2014

Illuminant spectrum estimation using a digital color camera and a color chart

Junsheng Shi; Hongfei Yu; Xiaoqiao Huang; Zaiqing Chen; Yonghang Tai

Illuminant estimation is the main step in color constancy processing and an important prerequisite for digital color image reproduction and many computer vision applications. In this paper, a method for estimating an illuminant spectrum is investigated using a digital color camera and a color chart whose spectral reflectances are known. The method is based on measuring the CIEXYZ values of the chart with the camera. The first step is to obtain the camera's color correction matrix and gamma values by photographing the chart under a standard illuminant. The second step is to photograph the chart under the illuminant to be estimated; the camera's inherent RGB values are converted to standard sRGB values and further to the CIEXYZ values of the chart. Based on the measured CIEXYZ values and the known spectral reflectances of the chart, the spectral power distribution (SPD) of the illuminant is estimated using Wiener estimation and smoothing estimation. To evaluate the performance of the method quantitatively, the goodness-of-fit coefficient (GFC) was used to measure the spectral match, and the CIELAB color difference metric was used to evaluate the color match between patches under the estimated and actual SPDs. A simulated experiment was carried out to estimate CIE standard illuminants D50 and C using the X-rite ColorChecker 24-color chart, and an actual experiment was carried out to estimate daylight and illuminant A using two consumer-grade cameras and the chart. The experimental results verified the feasibility of the investigated method.
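A generic Wiener-estimation step of the kind referenced above can be sketched as follows. The sensing matrix, smoothness prior, and dimensions are illustrative assumptions; in the paper the forward matrix would be built from the chart's known reflectances and the CIE color-matching functions.

```python
import numpy as np

# Generic Wiener estimation sketch: recover a spectrum s from tristimulus
# values v = M @ s, using a first-order Markov covariance as a smoothness prior.
def wiener_estimate(v, M, rho=0.99):
    n = M.shape[1]
    idx = np.arange(n)
    K = rho ** np.abs(idx[:, None] - idx[None, :])  # smoothness prior on s
    W = K @ M.T @ np.linalg.inv(M @ K @ M.T)        # Wiener filter (noise-free)
    return W @ v

# Toy check: 3 sensor channels, 31 spectral bands (400-700 nm, 10 nm step)
rng = np.random.default_rng(0)
M = rng.random((3, 31))
s_true = np.linspace(1.0, 2.0, 31)
s_hat = wiener_estimate(M @ s_true, M)
print(np.allclose(M @ s_hat, M @ s_true))  # True: estimate reproduces the observation
```

By construction the noise-free Wiener estimate reproduces the observed tristimulus values exactly; the smoothness prior decides how the remaining spectral degrees of freedom are filled in.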


DEStech Transactions on Engineering and Technology Research | 2018

Development of a Patient-Specific BRDF-Based Human Lung Rendering for VATS Simulation

Cheng-Qi Xia; Xiaoqiao Huang; Zaiqing Chen; Qiong Li; Junsheng Shi; Yonghang Tai; Lei Wei; Hailing Zhou; Saeid Nahavandi; Jun Peng; Ran Cao

Video-assisted thoracoscopic surgery (VATS), the commonest minimally invasive excision for localized T1 or T2 lung carcinomas, requires a steep learning curve for novice residents to acquire the highly deliberate skills needed for surgical competence. Based on the bidirectional reflectance distribution function (BRDF) physics-based rendering model, this study proposes a virtual reality-based (VR) surgical education simulator with realistic visual rendering of the human lung in VATS procedures. Patient-specific medical images and a 360-degree dynamic operating room environment are also integrated into our training scenario. Finally, validation experiments on the SimVATS virtual surgical simulator framework demonstrated high performance and distinguished immersion. This study may open up a new graphic rendering model for VATS surgical education integrated with haptic and VR implementations.

Introduction. Benefiting from the burgeoning improvements in computer graphics (CG) and virtual reality (VR) technologies, virtual surgery is considered an effective way to reduce the cost of surgical training. Yet only a few physics-based rendering methods have been applied in the medical field, especially for rendering soft tissues in the human body. Furthermore, minimally invasive surgeries (MIS) usually carry a high risk due to limitations in operating space, viewing angle, and lighting conditions. The visual properties of human soft tissues have a high dynamic range, and there are individual pathological differences [1]. Therefore, virtual surgery imposes high requirements on the immersive reproduction and realistic visual rendering of operating room (OR) scenes. Most current virtual surgical simulators focus on the interactions between trainees and virtual scenes and ignore realistic surgical environments and human tissue rendering. In consequence, trainees are constantly aware that they are in a fake virtual training environment rather than an actual surgical process [2]. In this context, physics-based rendering is considered an effective way to provide more visual information. In addition, by approximating real training scenarios, virtual reality technology may also improve clinical effectiveness [3].

Related Works. In 1999, Marschner et al. proposed image-based BRDF [4], which, given specific positions of the light source and camera, uses a photo and an arbitrary geometric shape to sample an isotropic material. In 2004, Chung et al. introduced a method of retroreflective BRDF sampling from video bronchoscopy images [5], which acquires a BRDF in which the light is always collinear with the camera; their goal was to render a single view angle. In 2011, Cenydd et al. improved image-based BRDF to measure the BRDF of the human brain [6]; owing to the space and time constraints of surgery, they could take only five photos of the area of interest, so the sampling was sparse. In 2017, Qian et al. proposed a virtual reality framework for laparoscopic simulation [7], including the use of microfacet models for material rendering. VR applications in the medical field have grown dramatically in surgical planning, surgical navigation, and rehabilitation training [8], with major improvements in techniques and visual effects [9, 10]. However, these applications have neglected the effect of the operating room environment on surgical training.
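One common ingredient of the microfacet material models cited above is a normal-distribution term; a GGX/Trowbridge-Reitz term can be sketched as follows. GGX and the squared-roughness parameterization are assumptions for illustration, since the abstract does not specify the actual lung material model.

```python
import math

# GGX/Trowbridge-Reitz microfacet normal-distribution term D(h),
# a standard building block of physics-based BRDFs.
def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    a2 = roughness ** 4  # alpha = roughness^2 (an assumed parameterization)
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# Peak of the specular lobe: half-vector aligned with the surface normal
print(ggx_ndf(1.0, 0.5))
```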


Concurrency and Computation: Practice and Experience | 2018

Machine learning-based haptic-enabled surgical navigation with security awareness

Yonghang Tai; Lei Wei; Hailing Zhou; Qiong Li; Xiaoqiao Huang; Junsheng Shi; Saeid Nahavandi

A novel security-aware surgical navigation system is proposed for accurate minimally invasive surgery, combining machine learning algorithms, haptic-enabled devices, and customized surgical tools to guide the surgery with real-time force and visual navigation. To provide a direct and simplified user interface during the operation, we combined traditional surgical guide images with an AR-based view and implemented a 3D-reconstructed patient-specific surgical environment including all requisite surgical details. In particular, we trained on surgically collected biomechanical haptic data, employing an LSTM-based RNN for intraoperative force manipulation prediction and a residual network for classification. Evaluation experiments on percutaneous therapy surgery demonstrated higher performance and accuracy with the combined visual and haptic approach than with a traditional navigation system. These preliminary findings suggest a new framework for minimally invasive surgical navigation applications and hint at the possible integration of haptics, AR, and machine learning in medical simulation. In addition, we take security into account when implementing this new framework.
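The LSTM-based processing of a force time series can be sketched as a single recurrent cell step in plain NumPy; the weights, sizes, and synthetic force readings below are placeholders, not the trained model from the paper.

```python
import numpy as np

# One LSTM cell step, sketching how a 1-channel force sequence is folded
# into a hidden state. W: (4H, D), U: (4H, H), b: (4H,).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run a short synthetic puncture-force sequence through the cell
rng = np.random.default_rng(1)
D, H = 1, 4  # one force channel, hidden size 4
W, U, b = rng.standard_normal((4*H, D)), rng.standard_normal((4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for force in [0.1, 0.4, 0.9, 0.5]:
    h, c = lstm_step(np.array([force]), h, c, W, U, b)
print(h.shape)  # final hidden state, e.g. fed to a prediction head
```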


International Conference on Mechatronics | 2017

Real-Time Visuo-Haptic Surgical Simulator for Medical Education – A Review

Yonghang Tai; Junsheng Shi; Lei Wei; Xiaoqiao Huang; Zaiqing Chen; Qiong Li

Virtual surgery simulations are able to provide reliable and repeatable learning experiences in a safe environment, from acquiring basic skills to performing full procedures. Yet a high-fidelity, practical, and immersive surgical simulation platform involves multi-disciplinary topics such as computer graphics, haptics, numerical computation, image processing, and mechanics. Depending on the detailed simulation, various surgical operations such as puncture, cutting, tearing, burning, and suturing may need to be simulated, each with very specific requirements on medical equipment and micro-skills. In this paper, we review a number of previous haptic-enabled medical education simulators for different surgical operations, and identify several techniques that may improve the effectiveness of both the visual and haptic rendering pipelines. We believe that virtual surgery simulation has enormous potential in the surgical training, education, and planning fields of medical advancement, and we endeavor to push the boundaries of this field through this review.


International Conference on Mechatronics | 2017

Improve Communication Efficiency Between Hearing-Impaired and Hearing People - A Review

Lei Wei; Hailing Zhou; Junsheng Shi; Saeid Nahavandi

Sign languages are among the most essential communication skills for hearing-impaired people, yet they are not easy for hearing people to understand, and this situation has created communication barriers across many aspects of our society. While recruiting a sign language interpreter for every hearing-impaired person is clearly not feasible, improving communication effectiveness through up-to-date research in haptics, motion capture, and face recognition can be promising and practical. In this paper, we review a number of previous sign language recognition methods using different approaches, and identify a few techniques that may improve the effectiveness of the communication pipeline between hearing-impaired and hearing people. These techniques can fit into a comprehensive communication pipeline and serve as a foundation for further research on communication between hearing-impaired and hearing people.


International Conference on Mechatronics | 2017

Dynamic Force Modeling for Robot-Assisted Percutaneous Operation Using Intraoperative Data

Feiyan Li; Yonghang Tai; Junsheng Shi; Lei Wei; Xiaoqiao Huang; Qiong Li; Minghui Xiao; Min Zou

Percutaneous therapy is an essential approach in minimally invasive surgery, especially the percutaneous access procedure, which provides neither visual nor tactile feedback during the actual operation. In this paper, we constructed a dynamic percutaneous biomechanics experiment architecture, as well as a corresponding validation framework in the operating room with clinical trials, designed to facilitate accurate modeling of the puncture force. This is the first work to propose intraoperative-data-based dynamic force modeling and to introduce the idea of continuous modeling of the percutaneous force. The results demonstrate that the dynamic puncture force model built on our experimental architecture not only fits the biological tissue data better than previous algorithms, but also coincides closely with the intraoperative clinical data. This further shows that the dynamic puncture modeling algorithm matches actual percutaneous surgery more closely, which will enable more precise and reliable applications in robot-assisted surgery.


International Conference on Mechatronics | 2017

Development of NSCLC Precise Puncture Prototype Based on CT Images Using Augmented Reality Navigation

Zhibao Qin; Yonghang Tai; Junsheng Shi; Lei Wei; Zaiqing Chen; Qiong Li; Minghui Xiao; Jie Shen

According to the gray values in the CT image sequence of NSCLC, visualization of the CT images can help surgeons analyze and judge detailed characteristics inside the tumor. Through the color distribution of the heat map, a representative position for tumor puncture biopsy can be identified. The CT image sequence is reconstructed into a three-dimensional model, including the skin, bone, lungs, and tumor, and the reconstructed model is registered to the patient's real body. The four-dimensional heat map of the tumor then serves as a reference to determine the spatial location of the puncture. In this paper, we reconstructed a four-dimensional heat map of the region of interest, designed the optimal puncture path, and used AR navigation technology to guide the puncture biopsy, achieving precise puncture and facilitating pathological analysis of the sampled tissue. Validations demonstrated that our precise puncture system, based on AR navigation and four-dimensional heat map reconstruction, greatly improved the accuracy of tumor puncture and the diagnostic rate of tumor biopsy.
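The selection of a representative puncture position from the heat map can be sketched as a maximum search over a voxel grid; the toy volume below is synthetic, since the abstract does not detail how the heat values are derived from CT gray values.

```python
import numpy as np

# Toy heat map over a 4x4x4 tumor volume; the real map would be built from
# CT gray values and registered to the patient.
rng = np.random.default_rng(2)
heat = rng.random((4, 4, 4))

# Candidate biopsy target: the voxel with the highest heat value
target = np.unravel_index(np.argmax(heat), heat.shape)
print(target)  # (i, j, k) index of the most representative puncture position
```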


2011 International Conference on Optical Instruments and Technology: Optical Systems and Modern Optoelectronic Instruments | 2011

Design of OLED gamma correction system based on the LUT

Yonghang Tai; Lijun Yun; Junsheng Shi; Zaiqing Chen; Qiong Li

Gamma correction is an important processing step in faithfully reproducing image information from a video source. To improve the image sharpness of an OLED micro-display, a Gamma correction system was established to compensate for the gray-scale distortion of the micro-display caused by the mismatch between its optical and electrical characteristics. Based on North OLEiD Company's 0.5-inch OLED, we proposed a Gamma correction system that converts the 8-bit input signal into 9 bits displayed on the OLED. It uses a Microchip MCU as the master of the I2C serial bus. Measurements on the developed hardware system verified the correction of VGA and CVBS video inputs, and the picture quality was also visibly improved.
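The 8-bit-to-9-bit conversion can be sketched as a lookup table (LUT) built from a gamma power curve; the gamma value of 2.2 is an illustrative assumption, as the abstract does not give the display's measured transfer characteristic.

```python
# Sketch of an 8-bit-to-9-bit gamma-correction LUT, as would be burned into
# the correction hardware. The gamma value is illustrative.
GAMMA = 2.2

def build_gamma_lut(gamma=GAMMA, in_bits=8, out_bits=9):
    in_max = (1 << in_bits) - 1     # 255 for 8-bit input
    out_max = (1 << out_bits) - 1   # 511 for 9-bit output
    return [round(out_max * (v / in_max) ** gamma) for v in range(in_max + 1)]

lut = build_gamma_lut()
print(len(lut), lut[0], lut[255])  # 256 0 511
```

The extra output bit reduces quantization banding in the dark end of the curve, where the gamma power function compresses many input codes into few output codes.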

Collaboration


Dive into Junsheng Shi's collaborations. Top co-authors:

Qiong Li (Yunnan Normal University)
Yonghang Tai (Yunnan Normal University)
Xiaoqiao Huang (Yunnan Normal University)
Zaiqing Chen (Yunnan Normal University)
Lijun Yun (Yunnan Normal University)
Feiyan Li (Yunnan Normal University)