Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kailun Yang is active.

Publication


Featured research published by Kailun Yang.


Sensors | 2016

Expanding the Detection of Traversable Area with RealSense for the Visually Impaired

Kailun Yang; Kaiwei Wang; Weijian Hu; Jian Bai

The introduction of RGB-Depth (RGB-D) sensors into the visually impaired people (VIP)-assisting area has stirred great interest among many researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and a depth map that grows sparse with distance, which hampers broader and longer-range traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with large-scale matching of the IR images and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded region growing algorithm, combining the depth image and RGB image, then greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for superior path planning during navigation. The proposed approach has been tested in a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the presented approach has proved useful and reliable in a field test with eight visually impaired volunteers.
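
The core geometric step described above, a RANSAC ground-plane fit whose inliers seed the subsequent region growing, can be sketched in a few lines of Python. This is a minimal illustration rather than the paper's implementation: the camera intrinsics, distance threshold and iteration count are assumed values.

    import numpy as np

    def backproject(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
        """Back-project a depth map (meters) into an Nx3 point cloud."""
        v, u = np.nonzero(depth > 0)
        z = depth[v, u]
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=1)

    def ransac_ground_plane(points, iters=200, thresh=0.03):
        """Fit a plane and return the inlier mask with the most support."""
        rng = np.random.default_rng(0)
        best = None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-9:
                continue                      # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            inliers = np.abs((points - p0) @ n) < thresh
            if best is None or inliers.sum() > best.sum():
                best = inliers
        return best

The inlier pixels form the preliminary traversable mask; the seeded region growing stage then extends it jointly over depth and RGB similarity, which is what lets the detected area reach beyond the sensor's reliable depth range.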


Sensors | 2017

Detecting Traversable Area and Water Hazards for the Visually Impaired with a pRGB-D Sensor

Kailun Yang; Kaiwei Wang; Ruiqi Cheng; Weijian Hu; Xiao Huang; Jian Bai

The use of RGB-Depth (RGB-D) sensors for assisting visually impaired people (VIP) has been widely reported, as they offer portability, functional diversity and cost-effectiveness. However, most traversability-awareness approaches take no precautions against water areas, and the polarization cues that could flag them remain weakly exploited. In this paper, a polarized RGB-Depth (pRGB-D) framework is proposed to detect traversable area and water hazards simultaneously, using polarization-color-depth-attitude information to enhance safety during navigation. The approach has been tested on a pRGB-D dataset built for tuning parameters and evaluating performance. Moreover, the approach has been integrated into a wearable prototype which generates stereo sound feedback to guide VIP along the prioritized direction, away from obstacles and water hazards. Furthermore, a preliminary study with ten blindfolded participants suggests its effectiveness and reliability.
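
For readers unfamiliar with polarization cues, the sketch below shows how a degree-of-linear-polarization (DoLP) map can be computed from three images taken through linear polarizers at 0, 45 and 90 degrees and thresholded into water-hazard candidates. The capture setup and threshold are assumptions; the paper fuses this cue with color, depth and attitude information rather than using it alone.

    import numpy as np

    def degree_of_linear_polarization(i0, i45, i90):
        # Stokes parameters from three polarizer orientations.
        s0 = i0 + i90              # total intensity
        s1 = i0 - i90              # horizontal vs. vertical component
        s2 = 2.0 * i45 - i0 - i90  # diagonal component
        return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-6)

    def water_candidates(i0, i45, i90, dolp_thresh=0.3):
        # Specular reflection off water is strongly polarized at grazing
        # angles, so high-DoLP pixels are flagged as candidate water areas.
        return degree_of_linear_polarization(i0, i45, i90) > dolp_thresh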


Sensors | 2018

Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

Kailun Yang; Kaiwei Wang; Luis Miguel Bergasa; Eduardo Romera; Weijian Hu; Dongming Sun; Junwei Sun; Ruiqi Cheng; Tianxue Chen; Elena López

Navigational assistance aims to help visually-impaired people move through their environment safely and independently. This topic is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of visually impaired people to a large extent. However, running all detectors jointly increases latency and burdens the computational resources. In this paper, we propose leveraging pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments proves accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
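
To make the unification idea concrete, here is a minimal inference sketch in which a single pixel-wise segmentation pass yields several assistive masks at once. The torchvision model and the label grouping below are stand-ins, not the paper's efficient architecture or its label set.

    import torch
    from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

    model = deeplabv3_mobilenet_v3_large(weights="DEFAULT").eval()

    # Hypothetical grouping of label ids into assistive categories.
    GROUPS = {"traversable": torch.tensor([0]),
              "hazard": torch.tensor([1, 2]),
              "obstacle": torch.tensor([3, 4, 5])}

    @torch.no_grad()
    def assistive_masks(frame):                  # frame: 1x3xHxW, normalized
        labels = model(frame)["out"].argmax(1)   # one forward pass, all needs
        return {name: torch.isin(labels, ids) for name, ids in GROUPS.items()}

One forward pass replaces a bank of separate detectors, which is exactly why the unified approach keeps latency low.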


Multimedia Tools and Applications | 2018

Real-time pedestrian crossing lights detection algorithm for the visually impaired

Ruiqi Cheng; Kaiwei Wang; Kailun Yang; Ningbo Long; Jian Bai; Dong Liu

In the absence of intelligent assistive approaches, the visually impaired find it hard to cross roads in urban environments. Aiming to tackle this problem, a real-time Pedestrian Crossing Lights (PCL) detection algorithm for the visually impaired is proposed in this paper. Unlike previous works, which utilize analytic image processing to detect PCL in ideal scenarios, the proposed algorithm detects PCL using a machine learning scheme in challenging scenarios, where PCL have arbitrary sizes and locations in the acquired image and suffer from camera shake and movement. In order to achieve robustness and efficiency in those scenarios, the detection algorithm is designed around three procedures: candidate extraction, candidate recognition and temporal-spatial analysis. A public PCL dataset, which includes manually labeled ground truth data, is established for tuning parameters, training the classifier and evaluating performance. The algorithm is implemented on a portable PC with a color camera. Experiments carried out in various practical scenarios show that the precision and recall of detection are both close to 100%, while the frame rate reaches 21 frames per second (FPS).
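
The three procedures can be sketched as follows; the HSV ranges, blob-size limits, voting window and the classify callback are illustrative stand-ins for the trained components described in the paper.

    import cv2
    import numpy as np
    from collections import deque

    history = deque(maxlen=10)   # sliding window for temporal-spatial analysis

    def detect_pcl(frame_bgr, classify):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # 1) Candidate extraction: saturated red/green blobs of plausible size.
        masks = {"red": cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)),
                 "green": cv2.inRange(hsv, (45, 120, 120), (90, 255, 255))}
        candidates = []
        for color, mask in masks.items():
            n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
            for x, y, w, h, area in stats[1:]:
                if 20 < area < 2000:           # reject noise and huge regions
                    candidates.append((color, frame_bgr[y:y + h, x:x + w]))
        # 2) Candidate recognition: a trained classifier prunes false positives.
        hits = [color for color, patch in candidates if classify(patch) == color]
        # 3) Temporal-spatial analysis: voting over recent frames damps camera
        #    shake and momentary misdetections.
        history.append(hits[0] if hits else None)
        votes = [h for h in history if h is not None]
        return max(set(votes), key=votes.count) if votes else None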


Journal of Ambient Intelligence and Smart Environments | 2017

IR stereo RealSense: Decreasing minimum range of navigational assistance for visually impaired individuals

Kailun Yang; Kaiwei Wang; Xiangdong Zhao; Ruiqi Cheng; Jian Bai; Yongying Yang; Dong Liu

The introduction of RGB-D sensors is a revolutionary force that offers a portable, versatile and cost-effective solution for navigational assistance for the visually impaired. RGB-D sensors on the market, such as the Microsoft Kinect, Asus Xtion and Intel RealSense, are mature products, but all have a minimum detecting distance of about 800mm. This results in the loss of depth information and the omission of short-range obstacles, posing a significant risk to navigation. This paper puts forward a simple and effective approach to reduce the minimum range, enhancing the reliability and safety of navigational assistance. Over-dense regions of IR speckles in the two IR images are exploited as a stereo pair to generate short-range depth, together with fusion of the original depth image and the RGB image to eliminate misjudgments. In addition, a seeded region growing algorithm for obstacle detection with the extended depth information is presented. Finally, the minimum range of the Intel RealSense R200 is decreased by approximately 75%, from 650mm to 165mm. Experimental results show the capacity to detect obstacles from 165mm to more than 5000mm and improved navigational assistance with the expanded detection range. The presented approach proves accurate and fast enough for guiding the visually impaired.
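
The central step, matching the two IR speckle images as a stereo pair to recover near-field depth, might be sketched as below. The focal length and baseline are placeholder values; the real ones come from device calibration, and the paper additionally fuses the result with the original depth and RGB images to eliminate mismatches.

    import cv2
    import numpy as np

    FX_PIX = 580.0      # IR focal length in pixels (assumed)
    BASELINE_M = 0.07   # distance between the two IR cameras in meters (assumed)

    def short_range_depth(ir_left, ir_right):
        """ir_left/ir_right: rectified 8-bit IR images with projected speckles."""
        # Large disparities correspond to exactly the near range the native
        # pipeline discards, so the disparity search range is set generously.
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=256,
                                        blockSize=9)
        disp = matcher.compute(ir_left, ir_right).astype(np.float32) / 16.0
        depth = np.zeros_like(disp)
        valid = disp > 0
        depth[valid] = FX_PIX * BASELINE_M / disp[valid]    # z = f * B / d
        return depth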


International Conference on Computers Helping People with Special Needs | 2018

KrNet: A Kinetic Real-Time Convolutional Neural Network for Navigational Assistance

Shufei Lin; Kaiwei Wang; Kailun Yang; Ruiqi Cheng

Over the past years, convolutional neural networks (CNN) have not only demonstrated impressive capabilities in computer vision but also created new possibilities for providing navigational assistance to people with visual impairment. In addition to obstacle avoidance and mobile localization, it is helpful for visually impaired people to perceive kinetic information about their surroundings. Road barriers, as specific obstacles as well as signs of an entrance or exit, are a latent hazard ubiquitous in daily environments. To address road barrier recognition, this paper proposes a novel convolutional neural network named KrNet, which is able to execute scene classification on mobile devices in real time. The architecture of KrNet not only features depthwise separable convolution and a channel shuffle operation to reduce computational cost and latency, but also takes advantage of Inception modules to maintain accuracy. Experimental results demonstrate qualified performance for meaningful and useful navigational assistance applications within residential and working areas.
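
The two efficiency ingredients named above can be illustrated with a small PyTorch block. This is a generic sketch of depthwise separable convolution combined with channel shuffle, not KrNet's published architecture.

    import torch
    import torch.nn as nn

    def channel_shuffle(x, groups):
        # Interleave channels so information mixes across grouped convolutions.
        n, c, h, w = x.size()
        return (x.view(n, groups, c // groups, h, w)
                 .transpose(1, 2).contiguous().view(n, c, h, w))

    class DepthwiseSeparableBlock(nn.Module):
        def __init__(self, channels, groups=4):
            super().__init__()
            self.groups = groups
            # Depthwise: one 3x3 filter per channel (groups=channels).
            self.depthwise = nn.Conv2d(channels, channels, 3, padding=1,
                                       groups=channels, bias=False)
            # Pointwise: a 1x1 convolution mixes channels cheaply.
            self.pointwise = nn.Conv2d(channels, channels, 1, bias=False)
            self.bn = nn.BatchNorm2d(channels)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.depthwise(x)
            x = channel_shuffle(x, self.groups)
            x = self.pointwise(x)
            return self.act(self.bn(x))

Replacing a standard 3x3 convolution with this depthwise-pointwise pair cuts the multiply-accumulate count roughly ninefold when channel counts are large, which is what makes real-time inference on mobile devices feasible.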


Target and Background Signatures IV | 2018

Glass detection and recognition based on the fusion of ultrasonic sensor and RGB-D sensor for the visually impaired

Zhiming Huang; Kaiwei Wang; Kailun Yang; Ruiqi Cheng; Jian Bai

With the increasing demands of visually impaired people, developing assistive technology to help them travel effectively and safely has become a research hotspot. Red, Green, Blue and Depth (RGB-D) sensors have been widely used to help visually impaired people, but the detection and recognition of glass objects remains a challenge, since the depth information of glass cannot be obtained correctly. To overcome this limitation, this paper puts forward a method to detect glass objects in natural indoor scenes, based on the fusion of an ultrasonic sensor and an RGB-D sensor on a wearable prototype. Meanwhile, the erroneous depth map of glass objects computed by the RGB-D sensor can also be densely recovered. In addition, under some special circumstances, such as facing a mirror or an obstacle within the minimum detectable range of the RGB-D sensor, a similar processing method regains depth information in the invalid area of the original depth map. The experimental results show that the detection range and precision of the RGB-D sensor are significantly improved with the aid of the ultrasonic sensor. The proposed method proves able to detect and recognize common glass obstacles for visually impaired people in real time, making it suitable for real-world indoor navigation assistance.
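
The fusion logic might look like the following sketch: where the ultrasonic sensor reports a nearby return but the depth map is largely invalid in the corresponding region, the region is flagged as glass-like and the hole is filled with the ultrasonic distance. The region of interest and thresholds are assumptions for illustration.

    import numpy as np

    def fuse_ultrasonic_depth(depth_m, ultrasonic_m, roi, invalid_ratio=0.6):
        """depth_m: HxW depth map in meters (0 = invalid);
        roi: (y0, y1, x0, x1) patch roughly covered by the ultrasonic beam."""
        y0, y1, x0, x1 = roi
        patch = depth_m[y0:y1, x0:x1]
        invalid = patch == 0
        glass_like = invalid.mean() > invalid_ratio and ultrasonic_m < 2.0
        if glass_like:
            # Densely recover the invalid area with the ultrasonic range.
            depth_m = depth_m.copy()
            depth_m[y0:y1, x0:x1][invalid] = ultrasonic_m
        return depth_m, glass_like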


Target and Background Signatures IV | 2018

Scene text detection and recognition system for visually impaired people in real world

Lei Fei; Hao Chen; Kaiwei Wang; Shufei Lin; Kailun Yang; Ruiqi Cheng

Visually Impaired (VI) people around the world have difficulties in socializing and traveling due to the limitations of traditional assistive tools. In recent years, practical assistance systems for scene text detection and recognition have allowed VI people to obtain text information from surrounding scenes. However, real-world scene text features complex backgrounds, low resolution, variable fonts and irregular arrangement, which make robust scene text detection and recognition difficult. In this paper, a scene text recognition system to help VI people is proposed. Firstly, we propose a high-performance neural network to detect and track objects, which is applied to specific scenes to obtain Regions of Interest (ROI). In order to achieve real-time detection, a light-weight deep neural network has been built using depthwise separable convolutions, enabling the system to be integrated into mobile devices with limited computational resources. Secondly, we train the neural network on textural features to improve the precision of text detection. Our algorithm suppresses the effects of spatial transformation (including translation, scaling, rotation and other geometric transformations) based on spatial transformer networks. An open-source optical character recognition (OCR) engine is trained on individual scene texts to improve the accuracy of text recognition. The interactive system finally conveys the number and distance information of inbound buses to visually impaired people. A comprehensive set of experiments on several benchmark datasets demonstrates that our algorithm achieves a favorable trade-off between precision and resource usage.
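
The spatial-transformer component mentioned above can be illustrated with a compact PyTorch module that learns an affine correction before recognition. The localization network below is a generic stand-in, not the paper's design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialTransformer(nn.Module):
        def __init__(self):
            super().__init__()
            # A tiny localization net regresses a 2x3 affine matrix.
            self.loc = nn.Sequential(
                nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(10, 6))
            # Start at the identity transform so training begins stable.
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(
                torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)

Placed in front of the recognition branch, such a module learns to undo translation, scaling and rotation of the text region end-to-end, with no extra supervision.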


Sensors | 2018

Visual Localizer: Outdoor Localization Based on ConvNet Descriptor and Global Optimization for Visually Impaired Pedestrians

Shufei Lin; Ruiqi Cheng; Kaiwei Wang; Kailun Yang

Localization systems play an important role in assisted navigation. Precise localization makes visually impaired people aware of ambient environments and prevents them from coming across potential hazards. The majority of visual localization algorithms, which are designed for autonomous vehicles, are not completely adaptable to the scenarios of assisted navigation. Those vehicle-based approaches are vulnerable to viewpoint, appearance and route changes (between database and query images) caused by the wearable cameras of assistive devices. Facing these practical challenges, we propose Visual Localizer, which is composed of a ConvNet descriptor and global optimization, to achieve robust visual localization for assisted navigation. The performance of five prevailing ConvNets is comprehensively compared, and GoogLeNet is found to feature the best environmental invariance. By concatenating two compressed convolutional layers of GoogLeNet, we use only thousands of bytes to represent each image efficiently. To further improve the robustness of image matching, we utilize a network flow model as a global optimization of image matching. Extensive experiments using images captured by visually impaired volunteers illustrate that the system performs well in the context of assisted navigation.
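
A descriptor of the kind described, two compressed convolutional layers of GoogLeNet concatenated into a few thousand bytes, might be sketched as follows. The choice of tap layers and the pooling used for compression are assumptions, not the paper's exact configuration.

    import torch
    import torch.nn.functional as F
    from torchvision.models import googlenet

    model = googlenet(weights="DEFAULT").eval()
    taps, LAYERS = {}, ("inception4a", "inception4e")   # assumed tap points
    for name in LAYERS:
        getattr(model, name).register_forward_hook(
            lambda mod, inp, out, n=name: taps.__setitem__(n, out))

    @torch.no_grad()
    def describe(image):                                # image: 1x3x224x224
        model(image)
        # Compress each feature map by global average pooling, concatenate,
        # and L2-normalize so matching reduces to a dot product.
        parts = [F.adaptive_avg_pool2d(taps[n], 1).flatten(1) for n in LAYERS]
        return F.normalize(torch.cat(parts, dim=1), dim=1)

On top of such per-image descriptors, the network flow model selects a globally consistent sequence of database matches instead of trusting each frame independently.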


Proceedings of the 2018 International Conference on Information Science and System - ICISS '18 | 2018

Improving RealSense by Fusing Color Stereo Vision and Infrared Stereo Vision for the Visually Impaired

Hao Chen; Kaiwei Wang; Kailun Yang

The introduction of RGB-D sensors has attracted attention from researchers in computer vision. With the real-time depth measurement provided by an RGB-D sensor, better navigational assistance than traditional aiding tools can be offered to visually impaired people. However, today's RGB-D sensors usually have a limited detection range and fail to measure depth on objects with special surfaces, such as absorbing, specular and transparent surfaces. In this paper, a novel algorithm using two RealSense R200 sensors simultaneously to build a short-baseline color stereo vision system is developed. The algorithm enhances depth estimation by fusing the color stereo depth map with the original RealSense depth map, which is obtained by infrared stereo vision. Moreover, the minimum range is decreased by up to 84.6%, from 650mm to 100mm. We anticipate our algorithm will provide better assistance for visually impaired individuals.
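
The fusion step might be sketched as below; the validity rule and the agreement threshold are assumptions rather than the paper's exact procedure.

    import numpy as np

    def fuse_depth(ir_depth, color_depth, max_gap=0.15):
        """Both maps in meters with 0 marking invalid pixels; shapes match."""
        # Prefer the native IR-stereo depth; fall back to color-stereo depth
        # where the IR pipeline returned nothing (e.g. too close, specular).
        fused = np.where(ir_depth > 0, ir_depth, color_depth)
        both = (ir_depth > 0) & (color_depth > 0)
        agree = both & (np.abs(ir_depth - color_depth) < max_gap)
        # Where the two sources agree, average them to reduce noise.
        fused[agree] = 0.5 * (ir_depth[agree] + color_depth[agree])
        return fused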

Collaboration


Dive into Kailun Yang's collaborations.
