Shigang Wang
Jilin University
Publication
Featured research published by Shigang Wang.
Applied Optics | 2015
Min Guo; Yujuan Si; Yuanzhi Lyu; Shigang Wang; Fushou Jin
We present a method for generating the elemental image array (EIA) in integral imaging. A discrete viewpoint image array is captured on a discrete-viewpoint pickup platform and processed with a window-interception algorithm to obtain the subimage array (SIA). The EIA is then obtained from the SIA through the transformation relationship between the two. Displaying the EIA in an integral imaging system shows that the proposed method faithfully reproduces the structure of the objects.
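The EIA–SIA transformation relationship mentioned in the abstract is, in common formulations, a pixel reindexing: pixel (u, v) of subimage (i, j) corresponds to pixel (i, j) of elemental image (u, v). A minimal NumPy sketch of that index swap (the exact mapping in the paper may differ in detail):

```python
import numpy as np

def sia_to_eia(sia, m, n):
    """Map a subimage array (SIA) to an elemental image array (EIA).

    sia: 2D array of shape (m*p, n*q) holding an m x n grid of p x q subimages.
    Pixel (u, v) of subimage (i, j) becomes pixel (i, j) of elemental
    image (u, v), so the EIA is a p x q grid of m x n elemental images.
    """
    H, W = sia.shape
    p, q = H // m, W // n
    # split into (grid row, pixel row, grid col, pixel col)
    grid = sia.reshape(m, p, n, q)
    # swap grid indices with pixel indices, then flatten back to a 2D image
    return grid.transpose(1, 0, 3, 2).reshape(p * m, q * n)
```

Because the mapping only exchanges the two index pairs, applying it again with the grid dimensions swapped recovers the original SIA.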
Applied Optics | 2011
Jian Wei; Shigang Wang; Yan Zhao; Fushou Jin
This paper addresses the coding of subimage-transformed elemental images to solve the data-transmission and storage problems of three-dimensional (3D) integral imaging. First, the elemental image array (EIA) is preprocessed with the subimage transform. Because the correlation distribution of the subimage array (SIA) resembles that of multiview video, we present a hierarchical prediction structure for SIA coding based on the hierarchical B picture (HBP) structure of multiview video coding. We also design a multithreaded parallel implementation of the proposed structure that follows the inter-row prediction dependencies. Experiments on both EIAs and SIAs show that, under the same coding strategy, the proposed parallel HBP scheme achieves higher image quality, a better 3D effect, and lower coding delay at low bit rates than the previously reported Hilbert-curve-based scheme.
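In a standard hierarchical B picture structure, the key picture of a group of pictures (GOP) is coded first and the B pictures are then coded by recursive bisection, so every B picture predicts from two already-coded anchors. A short sketch of that coding order (the paper's row-wise SIA hierarchy is analogous but not identical):

```python
def hbp_coding_order(gop_size):
    """Coding order of one GOP under a hierarchical B (HBP) structure.

    Picture 0 is the previous key picture (already coded); picture
    gop_size is the new key picture, coded first. B pictures follow by
    recursively bisecting each interval between two coded anchors.
    gop_size is assumed to be a power of two.
    """
    order = [gop_size]            # new key picture first
    def bisect(lo, hi):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        order.append(mid)         # B picture between two coded anchors
        bisect(lo, mid)
        bisect(mid, hi)
    bisect(0, gop_size)
    return order
```

For a GOP of 8 this yields the order 8, 4, 2, 1, 3, 6, 5, 7, which is what allows higher hierarchy levels to be quantized more coarsely at low bit rates.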
Signal, Image and Video Processing | 2018
Chuxi Yang; Yan Zhao; Shigang Wang
The rapid growth of image resources on the Internet makes it possible to find some highly correlated images on Web sites when transmitting an image over the Internet. This study proposes a low bit-rate cloud-based image coding scheme that utilizes cloud resources to implement image coding. A multiple discrete wavelet transform is adopted to decompose the input image into a low-frequency sub-band and several high-frequency sub-bands. The low-frequency sub-band image is used to retrieve highly correlated images (HCOIs) in the cloud, and the highly correlated regions in the HCOIs are used to reconstruct the high-frequency sub-bands at the decoder to save bits. The final reconstructed image is generated by the multiple inverse wavelet transform from the decompressed low-frequency sub-band and the reconstructed high-frequency sub-bands. Experimental results show that the coding scheme performs well, especially at low bit rates. The peak signal-to-noise ratio of the reconstructed image gains up to 7 dB and 1.69 dB over JPEG and JPEG2000, respectively, at the same compression ratio. By utilizing cloud resources, the scheme shows an obvious advantage in visual quality: details in the image are well reconstructed compared with JPEG, JPEG2000, and the intra coding of HEVC.
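The sub-band split described above can be illustrated with one level of a 2D Haar transform, the simplest wavelet, used here as a stand-in for whichever wavelet the paper actually employs. The LL band is the low-frequency sub-band sent to the cloud for retrieval; LH, HL, and HH are the high-frequency sub-bands reconstructed at the decoder:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet decomposition (illustrative stand-in
    for the paper's multiple DWT). Returns the low-frequency sub-band LL
    and the high-frequency sub-bands LH, HL, HH. Sides must be even."""
    x = img.astype(float)
    # horizontal pass: average / difference of adjacent column pairs
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # vertical pass: same split along the rows
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: perfect reconstruction from the sub-bands."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x
```

In the scheme above, only LL would be compressed and transmitted, while the decoder substitutes retrieved cloud content for the high-frequency bands before the inverse transform.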
Journal of Electronic Imaging | 2018
Yingyu Ji; Shigang Wang; Yang Lu; Jian Wei; Yan Zhao
Eye and mouth state analysis is an important step in fatigue detection. An algorithm is proposed that analyzes the state of the eye and mouth by extracting contour features. First, the face area is detected in the acquired image database. The eyes are then located by an EyeMap algorithm, and a clustering method extracts the sclera-fitted eye contour, from which the contour aspect ratio is calculated. An effective algorithm is also proposed to handle contour fitting when the eye is affected by strabismus. Meanwhile, a chromatism value is defined in RGB space, and the mouth is accurately located through lip segmentation. Based on the color differences among the lip, the skin, and the inside of the mouth, the inner mouth contour is fitted to analyze how far the mouth is open; a separate yawning-judgment mechanism then determines whether the driver is tired. The performance of the proposed algorithm is evaluated on three different databases; it requires no training and is computationally efficient.
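The contour aspect ratio used for the eye-state decision can be sketched very simply: take the fitted contour points, measure height over width, and threshold. The threshold value below is illustrative, not the paper's:

```python
import numpy as np

def contour_aspect_ratio(points):
    """Aspect ratio (height / width) of a fitted eye or mouth contour.

    points: (N, 2) array-like of (x, y) contour coordinates. A small
    ratio suggests a closed eye or mouth."""
    pts = np.asarray(points, dtype=float)
    width = pts[:, 0].max() - pts[:, 0].min()
    height = pts[:, 1].max() - pts[:, 1].min()
    return height / width

def eye_is_closed(points, threshold=0.2):
    """Illustrative open/closed decision; the threshold is an assumption."""
    return contour_aspect_ratio(points) < threshold
```

The mouth-opening analysis works the same way on the fitted inner-mouth contour, with the yawning mechanism layered on top of consecutive-frame decisions.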
international conference on image and graphics | 2017
Bowen Jia; Shigang Wang; Wei Wu; Tianshu Li; Lizhong Zhang
Obtaining an elemental image (EI) array with 3DS MAX in a virtual integral imaging system requires a large-scale camera array, which is difficult to apply in practice. To solve this problem, we establish a sparse-acquisition integral imaging system. To improve the accuracy of disparity calculation, we propose a method that uses color segmentation and integral projection to calculate the average disparity of each color object between two adjacent images. First, the virtual scene and the microlens array model are built in 3DS MAX. According to the mapping relationship between EIs and sub images (SIs), the SIs are obtained first and the EIs are then computed from them. The average disparity of the different color objects between adjacent images is obtained with the color-segmentation and integral-projection methods, and a fixed-size rectangular window is translated according to the average disparities to crop the rendered output images into SIs. Finally, the SIs are stitched and mapped into EIs, which are fed to the display device to show the three-dimensional (3D) scene. Experimental results show that 12 × 12 cameras suffice in place of 59 × 41 cameras, and the 3D display effect is evident. The error rate of the disparity calculation is 0.433% in both the horizontal and vertical directions, clearly better than other methods with error rates of 2.597% and 4.762%. The sparse-acquisition integral imaging system is more accurate and more convenient, and can be used to acquire EI content for large-screen 3D display.
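Integral projection reduces a segmented object mask to a 1D profile (column sums), so the horizontal disparity between two adjacent views can be found by sliding one profile over the other. A minimal sketch of that step, assuming binary object masks from the color segmentation:

```python
import numpy as np

def vertical_projection(mask):
    """Integral projection onto the x-axis: column sums of a binary mask."""
    return mask.sum(axis=0).astype(float)

def disparity_from_projections(mask_left, mask_right, max_shift=20):
    """Horizontal disparity of one segmented object between adjacent views.

    Slides the right view's projection over the left view's and returns
    the shift with the smallest sum of absolute differences. max_shift
    bounds the search range and is an illustrative parameter."""
    pl = vertical_projection(mask_left)
    pr = vertical_projection(mask_right)
    best_shift, best_cost = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        cost = np.abs(pl - np.roll(pr, d)).sum()
        if cost < best_cost:
            best_shift, best_cost = d, cost
    return best_shift
```

Averaging this shift over all color objects gives the per-object average disparity used to translate the cropping window.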
international conference on image and graphics | 2017
Henan Li; Shigang Wang; Yan Zhao; Chuxi Yang; Aobo Wang
Face images are of great significance in machine vision, especially for face recognition and tracking. Considering the similarity among face images, this paper proposes a face image coding scheme based on the Scale-Invariant Feature Transform (SIFT) descriptor. The SIFT descriptor, a local feature descriptor, characterizes an image region invariantly to scale and rotation. Facial features are combined with SIFT descriptors to exploit external image content and improve coding efficiency. We segment an image into regions according to the facial features and extract SIFT features in those regions. The SIFT features are then used to find corresponding patches in a large-scale face image database. Experimental results show that the proposed method provides better visual quality of the reconstructed images than HEVC intra-frame coding at a similar compression ratio.
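Retrieving corresponding patches by SIFT descriptors is typically done with nearest-neighbour matching plus a ratio test. A sketch with plain NumPy, where the "database" is just an array of descriptor rows rather than the paper's large-scale face image collection:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.75):
    """Nearest-neighbour descriptor matching with a ratio test.

    query, database: arrays whose rows are feature vectors (128-D for
    SIFT). A query descriptor is matched only when its nearest database
    descriptor is clearly closer than the second nearest; the ratio
    value is a common but illustrative choice."""
    matches = []
    for qi, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((qi, int(best)))
    return matches
```

The accepted matches identify which database patches can stand in for the coded regions at the decoder.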
international conference on audio language and image processing | 2016
Min Guo; Yujuan Si; Shigang Wang; Yuanzhi Lyu; Bowen Jia; Wei Wu
To solve the problem of rapid perception of collected content and non-contact measurement in integral imaging, a computer-based virtual reconstruction algorithm for a three-dimensional scene is proposed. First, the combined disparity map of an elemental image array is calculated with a region-based iterative matching algorithm that exploits the distribution characteristics of homologous pixels in the array; the spatial coordinates of the reconstructed object points are then computed by the triangulation principle; finally, error points are deleted and data redundancy is reduced with a scanline-based data-simplification algorithm, yielding the reconstructed three-dimensional scene. Experimental results indicate that the method not only reconstructs a clear and complete three-dimensional scene and restores the relative positions of the objects, but also measures the sizes of objects in the scene.
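The triangulation step can be illustrated by the classic two-view relation z = f · b / d, where f is the focal length (or lens-to-sensor gap), b the baseline between adjacent elemental lenses, and d the disparity of a pair of homologous pixels. All names below are illustrative; the paper's full method also removes error points and simplifies the resulting point cloud:

```python
def point_depth(disparity, focal_length, baseline):
    """Depth of a reconstructed object point by the triangulation principle.

    disparity must be in the same length unit as focal_length on the
    sensor plane; baseline is the spacing between the two viewpoints."""
    if disparity == 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return focal_length * baseline / disparity
```

Applied to every homologous-pixel pair in the combined disparity map, this gives the spatial coordinates from which object sizes can be measured without contact.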
Optoelectronic Imaging and Multimedia Technology IV | 2016
Wenting Zhao; Shigang Wang; Chao Liang; Wei Wu; Yang Lu
This paper aims at robust behavior recognition of video objects against complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video, and multi-dimensional eigenvectors are constructed to process the high-dimensional data. Stable object tracking in complex scenes is achieved through multi-feature behavior analysis, yielding the motion trail; effective behavior recognition of the video object is then obtained according to the decision criteria. Both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for behavior analysis of video objects in real scenes put forward by this project have broad application prospects and practical significance in security, counterterrorism, military, and many other fields.
Acta pharmaceutica Sinica | 2009
Shigang Wang; Ji Ys; Li H; Yang Sj
Optics Communications | 2018
Wei Wu; Shigang Wang; Mei-Lan Piao; Yan Zhao; Jian Wei