Publication


Featured research published by Yulong Xi.


Multimedia Tools and Applications | 2017

Motion-based skin region of interest detection with a real-time connected component labeling algorithm

Wei Song; Dong Wu; Yulong Xi; Yong Woon Park; Kyungeun Cho

This paper presents a motion-based skin Region of Interest (ROI) detection method that uses a real-time connected component labeling algorithm to provide real-time, adaptive skin ROI detection in video images. Skin pixel segmentation in video images is a pre-processing step for face and hand gesture recognition, and motion is a cue for detecting foreground objects. We define skin ROIs as pixels of skin-like color where motion takes place. In the skin color estimation phase, RGB color histograms are utilized to define the skin color distribution and specify the threshold for segmenting skin-like regions. A parallel connected component labeling algorithm is also proposed to group the segmentation results into clusters. If a cluster covers any motion pixel, it is identified as a skin ROI. The method's results for real images are shown, and its speed is evaluated for various parameters. This technology is applicable to monitoring systems, scene understanding, and natural user interfaces.
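
The pipeline described in this abstract can be condensed into a short sketch: threshold skin-like pixels, label connected components, and keep the clusters that overlap motion. The Python/OpenCV sketch below substitutes a fixed YCrCb threshold for the paper's RGB histogram model and OpenCV's sequential `connectedComponents` for the paper's parallel labeling algorithm; the function and parameter names are illustrative assumptions.

```python
import cv2
import numpy as np

def skin_rois(frame_bgr, prev_gray, motion_thresh=25):
    # Skin-like segmentation in YCrCb space (an assumed, common heuristic;
    # the paper builds its own RGB histogram model instead).
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Motion cue: frame differencing against the previous grayscale frame.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    motion_mask = cv2.absdiff(gray, prev_gray) > motion_thresh

    # Connected component labeling groups skin pixels into clusters.
    num_labels, labels = cv2.connectedComponents(skin_mask)

    rois = []
    for label in range(1, num_labels):        # label 0 is the background
        cluster = labels == label
        if np.any(cluster & motion_mask):     # cluster covers a motion pixel
            ys, xs = np.nonzero(cluster)
            rois.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return rois, gray                          # feed gray back as prev_gray
```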


Multimedia Tools and Applications | 2017

Real-time single camera natural user interface engine development

Wei Song; Xingquan Cai; Yulong Xi; Seoungjae Cho; Kyungeun Cho

Natural user interfaces (NUIs) provide human-computer interaction (HCI) with natural and intuitive operation interfaces, such as human gestures and voice. We have developed a real-time NUI engine architecture using a web camera as a means of implementing NUI applications. The system captures video via the web camera and implements real-time image processing using graphics processing unit (GPU) programming. This paper describes the architecture of the engine and its real-virtual environment interaction methods, such as foreground segmentation and hand gesture recognition. These methods are implemented using GPU programming in order to realize real-time image processing for HCI. To verify the efficacy of our proposed NUI engine, we used it, together with the DirectX SDK, to develop and implement several mixed reality games and touch-less operation applications. Our results confirm that the methods implemented by the engine operate in real time and the interactive operations are intuitive.
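
As a rough illustration of the capture-and-segment loop such an engine performs, the sketch below uses plain OpenCV on the CPU; the paper's GPU implementation and gesture recognizer are replaced by an assumed MOG2 background subtractor and a largest-contour heuristic.

```python
import cv2

cap = cv2.VideoCapture(0)                     # web camera input
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)          # foreground segmentation
    # A crude "hand present" cue: the largest foreground contour.
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("NUI engine sketch", frame)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```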


IEEE International Conference on Dependable, Autonomic and Secure Computing | 2013

Gesture-Based NUI Application for Real-Time Path Modification

Hongzhe Liu; Yulong Xi; Wei Song; Kyhyun Um; Kyungeun Cho

Since the birth of the Natural User Interface (NUI) concept, NUIs have become widely used. NUI-based applications have grown rapidly, particularly those using gestures, which have come to occupy a pivotal place in technology; the ever-popular smartphone is one of the best examples. Recently, video conferencing has also begun adopting gesture-based NUIs with augmented reality (AR) technology. NUIs and AR have greatly enriched and facilitated the human experience. In addition, path planning has been a popular research topic. Traditional path planning solves problems through automatic navigation, so it cannot practically interact with people; its algorithms are computationally complex, and in certain extenuating circumstances, automatic real-time processing is much less efficient than human path modification. Considering such circumstances, we present a solution that employs NUI technology for real-time 3D path modification. In our proposed solution, users can manually operate and edit their own paths. The core method changes paths based on 3D point detection. We conducted a simulation experiment on city path modification, which showed that the computer can accurately identify a valid gesture and use it to effectively change the path. Among other applications, this solution can be used in virtual military maps and car navigation.
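
The core step, changing a path from a detected 3D point, can be pictured as dragging the nearest waypoint to the gesture position. The sketch below assumes this nearest-waypoint policy, and its names (`modify_path`, `gesture_point`) are illustrative; the paper's exact editing rule may differ.

```python
import numpy as np

def modify_path(waypoints, gesture_point):
    """waypoints: (N, 3) array; gesture_point: (3,) point from hand tracking."""
    waypoints = np.asarray(waypoints, dtype=float)
    distances = np.linalg.norm(waypoints - gesture_point, axis=1)
    nearest = int(np.argmin(distances))
    waypoints[nearest] = gesture_point         # drag the waypoint to the hand
    return waypoints

# Example: the middle waypoint moves to where the gesture was detected.
path = np.array([[0, 0, 0], [5, 0, 0], [10, 0, 0]], dtype=float)
print(modify_path(path, np.array([5.2, 1.5, 0.0])))
```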


Computer Science and its Applications | 2016

A Wireless Kinect Sensor Network System for Virtual Reality Applications

Mengxuan Li; Wei Song; Liang Song; Kaisi Huang; Yulong Xi; Kyungeun Cho

Microsoft Kinect, a motion-sensing input device, has recently seen rapid development in human gesture recognition research. Integrating the Kinect into games and Virtual Reality (VR) improves immersion and the natural user experience. However, the Kinect can accurately measure a user only within five meters, and the user must face the sensor. To solve this problem, this paper develops a wireless Kinect sensor network system that detects users from several viewports. The system utilizes multiple Kinect clients to sense users' gesture information, which is transmitted to a VR managing server that integrates the distributed sensing datasets. Unlike a VR application with a single Kinect, our proposed system supports the user walking around regardless of whether he or she is facing the sensors. We also developed a virtual boxing VR game using two Kinects, a Samsung Gear VR, and the Unity3D environment, which verified the effective performance of the proposed system.
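
One way to picture the client/server split: each Kinect client streams its user reading to the managing server, which keeps whichever sensor reports the highest tracking confidence. The UDP transport, JSON schema, and confidence-based fusion rule below are assumptions for illustration, not the paper's protocol.

```python
import json
import socket

SERVER_ADDR = ("0.0.0.0", 9000)

def run_server():
    """VR managing server: fuse readings from distributed Kinect clients."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(SERVER_ADDR)
    best = {}                                  # user_id -> best reading so far
    while True:
        data, _ = sock.recvfrom(4096)
        msg = json.loads(data)                 # {"sensor", "user", "conf", "joints"}
        user = msg["user"]
        # Fusion rule (assumed): trust the viewport with the best confidence.
        if user not in best or msg["conf"] > best[user]["conf"]:
            best[user] = msg

def send_reading(sensor_id, user_id, conf, joints):
    """Kinect client: push one sensed pose to the server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    msg = {"sensor": sensor_id, "user": user_id, "conf": conf, "joints": joints}
    sock.sendto(json.dumps(msg).encode(), ("127.0.0.1", 9000))
```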


Archive | 2014

Design and Implementation of a Web Camera-Based Natural User Interface Engine

Wei Song; Yulong Xi; Warda Ikram; Seoungjae Cho; Kyungeun Cho; Kyhyun Um

Natural User Interfaces (NUIs) are a novel way to provide Human Computer Interaction (HCI) with natural and intuitive operation interfaces, such as human gestures and voice. This paper proposes a real-time NUI engine architecture that uses a web camera as an inexpensive means of implementing NUI applications. The engine integrates the OpenCV library, the CUDA toolkit, and the DirectX SDK. We utilize the OpenCV library to capture video via the web camera, implement real-time image processing using Graphics Processing Unit (GPU) programming, and present the NUI applications using the DirectX SDK; for example, to embed a 3D object in a captured scene and play sounds. To verify the efficacy of our proposed NUI engine, we utilized it in the development and implementation of several mixed reality games and touch-less operation applications. Our results confirm that the engine's methods operate in real time and the interactive operations are intuitive.
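
For the GPU-programming stage specifically, OpenCV's CUDA module gives a feel for the upload-process-download pattern. The sketch assumes an OpenCV build with CUDA support (the `cv2.cuda` namespace exists only in such builds) and uses a Gaussian blur as a stand-in for the engine's actual kernels.

```python
import cv2

cap = cv2.VideoCapture(0)
gpu_frame = cv2.cuda_GpuMat()
# An assumed placeholder for the engine's GPU image-processing stage.
blur = cv2.cuda.createGaussianFilter(cv2.CV_8UC1, cv2.CV_8UC1, (5, 5), 1.5)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gpu_frame.upload(frame)                              # host -> device
    gpu_gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)
    result = blur.apply(gpu_gray).download()             # device -> host
    cv2.imshow("GPU pipeline sketch", result)
    if cv2.waitKey(1) == 27:                             # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```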


The Journal of Supercomputing | 2016

A collaborative client participant fusion system for realistic remote conferences

Wei Song; Mingyun Wen; Yulong Xi; Phuong Minh Chu; Hoang Vu; Shokh-Jakhon Kayumiy; Kyungeun Cho

Remote conferencing systems provide a shared environment where people in different locations can communicate and collaborate in real time. Currently, remote video conferencing systems present separate video images of the individual participants. To achieve a more realistic conference experience, we enhance video conferencing by integrating the remote images into a shared virtual environment. This paper proposes a collaborative client participant fusion system using a real-time foreground segmentation method. In each client system, the foreground pixels are extracted from the participant images using a feedback background modeling method. Because the segmentation results often contain noise and holes caused by adverse environmental lighting conditions and substandard camera resolution, a Markov Random Field model is applied in the morphological operations of dilation and erosion. This foreground segmentation refining process is implemented using graphics processing unit programming, to facilitate real-time image processing. Subsequently, segmented foreground pixels are transmitted to a server, which fuses the remote images of the participants into a shared virtual environment. The fused conference scene is represented by a realistic holographic projection.
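
The refinement stage alone is easy to sketch: a noisy foreground mask is cleaned by morphological opening (removing speckle) and closing (filling holes). The CPU `morphologyEx` calls below are assumed stand-ins; the paper couples these operations with a Markov Random Field model and runs them on the GPU.

```python
import cv2
import numpy as np

def refine_foreground(fg_mask):
    """fg_mask: uint8 binary mask from the feedback background model."""
    kernel = np.ones((5, 5), np.uint8)
    # Opening (erode then dilate) removes isolated noise pixels.
    opened = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    # Closing (dilate then erode) fills small holes inside the foreground.
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```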


Archive | 2016

An Interactive Virtual Reality System with a Wireless Head-Mounted Display

Shujia Hao; Wei Song; Kaisi Huang; Yulong Xi; Kyungeun Cho; Kyhyun Um

Virtual reality (VR) with a head-mounted display (HMD) provides an immersive experience for novel multimedia applications. This paper develops an interactive virtual reality system with a wireless HMD to enable a natural VR operation interface. On a server, the system utilizes the Kinect as a motion detection device to estimate the VR user's location and gesture information in real time. Through a WiFi network, the user's information is transferred to the HMD client, where the user controls an avatar that follows his or her motion. The controlled avatar interacts with the virtual environment in real time. The proposed system is implemented using a Samsung Gear VR, a Kinect 2.0, and the Unity3D environment. The system is applicable to serious games, virtual and physical collaboration, natural user interfaces, and other multimedia applications.
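
The server-to-HMD link can be pictured as a per-frame pose push over the LAN. The TCP transport and newline-delimited JSON message format below are assumptions for illustration; the paper's system pairs a Kinect 2.0 server with a Gear VR/Unity3D client over WiFi.

```python
import json
import socket

def stream_pose(client_addr, pose_source):
    """pose_source yields dicts like {"head": [x, y, z], "gesture": "punch"}."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(client_addr)                  # HMD client on the WiFi LAN
    for pose in pose_source:
        # Newline-delimited JSON keeps message framing trivial for the client.
        sock.sendall((json.dumps(pose) + "\n").encode())
```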


Archive | 2014

Surface Touch Interaction Method Using Inverted Leap Motion Device

Yulong Xi; Seoungjae Cho; Young-Sik Jeong; Kyungeun Cho; Kyhyun Um

Research and development is actively underway in the field of Natural User Interface (NUI)-based applications, because existing input devices, including remote controllers, do not satisfy the interface interactivity demanded by users; interfaces that are more intuitive and natural are still desired. This paper proposes an approach in which a touch interface is implemented by projecting a screen onto a surface using a projector. The technology required to implement the proposed approach is available at a moderate price and is easy to install. In this paper, a Leap Motion device, a ready-made tool, is incorporated to implement the proposed approach. Further, we explain how the proposed approach overcomes the disadvantages of the Leap Motion device.
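
A minimal version of the surface-touch test: a touch fires when a tracked fingertip comes within a small distance of the projection surface's plane, and the touch point is mapped to screen pixels via a calibration homography. The plane parameters, threshold, and homography below are assumed placeholders for the system's calibration step.

```python
import numpy as np

SURFACE_POINT = np.array([0.0, 0.0, 0.0])     # a point on the surface plane
SURFACE_NORMAL = np.array([0.0, 1.0, 0.0])    # unit normal of the plane
TOUCH_DISTANCE = 0.01                          # within 1 cm counts as a touch

def detect_touch(fingertip, homography):
    """fingertip: 3D position from hand tracking; homography: 3x3 matrix."""
    distance = abs(np.dot(fingertip - SURFACE_POINT, SURFACE_NORMAL))
    if distance > TOUCH_DISTANCE:
        return None                            # finger hovering, not touching
    # Map the surface (x, z) coordinates to screen pixels.
    p = np.array([fingertip[0], fingertip[2], 1.0])
    sx, sy, w = homography @ p
    return (sx / w, sy / w)
```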


Multimedia Tools and Applications | 2017

Design and implementation of a same-user identification system in invoked reality space

Yunji Jung; Yulong Xi; Seoungjae Cho; Wei Song; Simon Fong; Kyungeun Cho

The objective of this study is to solve the problem of user data not being received precisely from sensors in invoked reality (IR) space, owing to sensing region limitations, distortion of colors or patterns by lighting, and blocking or overlapping of a user by other users. The sensing range is therefore expanded using multiple sensors in the IR space. Specifically, multiple sensors are employed when not all of a user's data can be sensed because the user overlaps with other users. In the proposed approach, all clients share the user feature data from the multiple sensors, and each client recognizes that the user is the same individual on the basis of the shared data. Identification accuracy is further improved by identifying user features based on colors and patterns that are less affected by lighting, enabling accurate identification even under lighting changes. The proposed system was implemented based on system performance analysis standards, and its practicality and performance in identifying the same person were verified through an experiment.
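
The matching step can be sketched as each client computing a lighting-robust color feature for the user it sees and comparing it against features shared by other clients. Using a hue/saturation histogram and the correlation metric below are assumptions; the paper identifies users by whichever colors and patterns lighting affects least.

```python
import cv2

def user_feature(user_patch_bgr):
    """Compute a shareable feature from a cropped image of one user."""
    hsv = cv2.cvtColor(user_patch_bgr, cv2.COLOR_BGR2HSV)
    # Hue/saturation histogram: less sensitive to brightness changes.
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def same_user(feature_a, feature_b, threshold=0.8):
    # Correlation close to 1.0 means the two observations look alike.
    score = cv2.compareHist(feature_a, feature_b, cv2.HISTCMP_CORREL)
    return score >= threshold
```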


Wireless Personal Communications | 2016

Gesture Recognition Method Using Sensing Blocks

Yulong Xi; Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho

Recently, the recognition of posture and gesture has been widely used in fields such as medical treatment and human-computer interaction. Previous research into posture and gesture recognition has mainly used human skeletons and an RGB-D camera, with the resulting methods utilizing human skeleton models with different numbers of joints. Processing the large amounts of feature data needed to recognize a gesture delays recognition. To overcome this issue, we designed and developed a system for learning and recognizing postures and gestures. This paper proposes a gesture recognition method with enhanced generality and processing speed. The proposed method consists of a feature collection part, a feature optimization part, and a posture and gesture recognition part. We verified the proposed solution through the learning and subsequent recognition of 29 postures and 8 gestures.
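
The abstract does not spell out what a sensing block is, so the sketch below assumes one plausible reading: the space around the user is divided into a coarse 3D grid, a posture is the tuple of grid cells occupied by the tracked joints, and a gesture is a sequence of postures. The block size and matching rule are illustrative; they show how block indices shrink the feature data compared with raw joint coordinates.

```python
import numpy as np

BLOCK_SIZE = 0.2                               # 20 cm sensing blocks (assumed)

def posture_key(joints):
    """joints: (J, 3) joint positions -> hashable tuple of block indices."""
    blocks = np.floor(np.asarray(joints) / BLOCK_SIZE).astype(int)
    return tuple(map(tuple, blocks))

def recognize_gesture(posture_sequence, gesture_library):
    """gesture_library: {name: [posture_key, ...]} learned templates."""
    for name, template in gesture_library.items():
        # A gesture matches when its template ends the observed sequence.
        if list(posture_sequence)[-len(template):] == template:
            return name
    return None
```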

Collaboration


Dive into Yulong Xi's collaborations.

Top Co-Authors

Wei Song
North China University of Technology

Kaisi Huang
North China University of Technology

Yong Woon Park
Agency for Defense Development

Dong Wu
North China University of Technology

Mengxuan Li
North China University of Technology

Mingyun Wen
North China University of Technology