Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ruomei Wang is active.

Publication


Featured research published by Ruomei Wang.


The Visual Computer | 2016

A 3D model perceptual feature metric based on global height field

Yihui Guo; Shujin Lin; Zhuo Su; Xiaonan Luo; Ruomei Wang; Yang Kang

The human visual attention system tends to be attracted to perceptual feature points on 3D model surfaces. However, purely geometry-based feature metrics may be insufficient to extract perceptual features, because they tend to detect local structure details. Intuitively, the perceptual importance of a vertex is associated with the height of its position on the original model above a datum plane. We therefore propose a novel and straightforward method to extract perceptually important points based on a global height field. First, we construct the spectral domain using the Laplace–Beltrami operator and perform spectral synthesis with the low-frequency coefficients to reconstruct a rough approximation of the original model, which serves as the 3D datum plane. Then, to build the global height field, we calculate the Euclidean distance between each vertex position on the original surface and its counterpart on the 3D datum plane. Finally, we set a threshold to extract perceptual feature vertices. We implement our technique on several 3D mesh models and compare our algorithm against six state-of-the-art interest point detection approaches. Experimental results demonstrate that our algorithm can accurately capture perceptually important points on 3D models of arbitrary topology.
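
The pipeline the abstract describes (spectral reconstruction of a smooth datum surface, then per-vertex distances and a threshold) can be sketched compactly. Below is a minimal Python sketch, assuming a uniform graph Laplacian in place of the paper's Laplace–Beltrami operator and a heuristic cutoff; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def perceptual_feature_points(verts, edges, k=30, thresh=None):
    """Extract candidate feature vertices via a global height field.

    verts: (n, 3) vertex positions; edges: (m, 2) vertex index pairs.
    A uniform graph Laplacian stands in for the paper's
    Laplace-Beltrami operator (an assumption of this sketch).
    """
    n = len(verts)
    i, j = edges[:, 0], edges[:, 1]
    ones = np.ones(len(edges))
    adj = sparse.coo_matrix((np.r_[ones, ones], (np.r_[i, j], np.r_[j, i])),
                            shape=(n, n)).tocsr()
    lap = sparse.diags(np.asarray(adj.sum(axis=1)).ravel()) - adj

    # Low-frequency spectral basis: eigenvectors of the k smallest eigenvalues.
    _, basis = eigsh(lap, k=k, which='SM')

    # Spectral synthesis: reconstruct a rough approximation of the model
    # from the low-frequency coefficients; this is the 3D datum surface.
    datum = basis @ (basis.T @ verts)

    # Global height field: per-vertex Euclidean distance to the datum surface.
    height = np.linalg.norm(verts - datum, axis=1)

    if thresh is None:
        thresh = height.mean() + 2.0 * height.std()  # heuristic cutoff
    return np.where(height > thresh)[0]
```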


Neural Computing and Applications | 2018

Multipoint infrared laser-based detection and tracking for people counting

Hefeng Wu; Chengying Gao; Yirui Cui; Ruomei Wang

Laser devices have received increasing attention in numerous computer-aided applications such as automatic control, 3D modeling and virtual reality. In this paper, aiming at people counting, we propose a novel people detection and tracking method based on a multipoint infrared laser, which can further facilitate intelligent scene modeling and analysis. In our method, a camera with an infrared lens filter is utilized to capture the monitored scene, where an array of infrared spots is produced by the multipoint infrared laser. We build a spatial background model based on the locations of the spots. Pedestrians are detected by clustering foreground spots. Then, our method tracks and counts the detected pedestrians by inferring forward–backward motion consistency. Both quantitative and qualitative evaluations and comparisons are conducted, and the experimental results demonstrate that the proposed method achieves excellent performance in challenging scenarios.
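
As a rough illustration of the detection stage (a spatial background model over spot locations, then clustering of foreground spots into pedestrians), here is a minimal Python sketch. The displacement-based foreground test, the use of DBSCAN, and all thresholds are assumptions of this sketch, not details from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_people(spot_xy, background_xy, displace_thresh=5.0,
                  cluster_eps=25.0, min_spots=3):
    """Detect pedestrian candidates from infrared laser spot positions.

    spot_xy, background_xy: (n, 2) matched spot centers in pixels,
    for the current frame and the empty-scene background model.
    """
    # Spatial background model: a spot is foreground if it is displaced
    # relative to its empty-scene location (bodies shift the laser spots).
    displacement = np.linalg.norm(spot_xy - background_xy, axis=1)
    fg = spot_xy[displacement > displace_thresh]
    if len(fg) == 0:
        return []

    # Cluster foreground spots; each dense cluster is one pedestrian candidate.
    labels = DBSCAN(eps=cluster_eps, min_samples=min_spots).fit_predict(fg)
    return [fg[labels == k].mean(axis=0) for k in set(labels) if k != -1]
```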


ACM Multimedia | 2017

A Novel System for Visual Navigation of Educational Videos Using Multimodal Cues

Baoquan Zhao; Shujin Lin; Xiaonan Luo; Songhua Xu; Ruomei Wang

With recent developments and advances in distance learning and MOOCs, the number of open educational videos on the Internet has grown dramatically in the past decade. However, most of these videos are lengthy and lack high-quality indexing and annotations, which creates an urgent demand for efficient and effective tools that facilitate video content navigation and exploration. In this paper, we propose a novel visual navigation system for exploring open educational videos. The system tightly integrates multimodal cues obtained from the visual, audio and textual channels of the video and presents them with a series of interactive visualization components. With the help of this system, users can explore video content at multiple levels of detail to identify content of interest with ease. Extensive experiments and comparisons against previous studies demonstrate the effectiveness of the proposed system.


IEEE Transactions on Image Processing | 2017

Distortion-Aware Correlation Tracking

Hanhui Li; Hefeng Wu; Huifang Zhang; Shujin Lin; Xiaonan Luo; Ruomei Wang

Recently, correlation filter (CF)-based tracking methods have attracted considerable attention because of their high-speed performance. However, distortion, which refers to the phenomenon that the correlation outputs of CF-based trackers are distorted, remains a major obstacle for these methods. In this paper, we propose a distortion-aware correlation filter framework, which can detect distortions and recover from tracking failures. Our framework employs a simple yet effective feature termed the normed correlation response to detect distortions. Meanwhile, we introduce a competition mechanism to handle distortions, in which we build a specialized graph to formulate and handle tracking under distortion as a maximum multi-clique problem. Furthermore, a global-local context model is exploited to alleviate underlying distortions during the tracking process. Extensive experiments on the Online Tracking Benchmark show that our tracker can find the optimal target trajectory during the distortion period and retrieve the possibly missing target, consequently outperforming state-of-the-art methods and improving the performance of CF-based trackers.
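
To make the distortion-detection idea concrete, the sketch below scores a correlation response map and flags frames whose response looks unreliable. The paper's normed correlation response feature is not specified here; the classic peak-to-sidelobe ratio is used as a stand-in, and the window size and threshold are illustrative.

```python
import numpy as np

def distortion_score(response):
    """Score a CF response map: low values suggest a distorted response.

    Uses the peak-to-sidelobe ratio as a stand-in for the paper's
    normed correlation response (an assumption of this sketch).
    """
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    # Mask out an 11x11 window around the peak; the rest is 'sidelobe'.
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

# Usage: flag distortion when the score drops below a tuned threshold,
# then hand control to the recovery mechanism.
# if distortion_score(resp) < 6.0:  # illustrative threshold
#     trigger_recovery()
```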


Multimedia Tools and Applications | 2017

A data-driven editing framework for automatic 3D garment modeling

Li Liu; Zhuo Su; Xiaodong Fu; Lijun Liu; Ruomei Wang; Xiaonan Luo

Exploring shape variations of virtual garments is important but challenging for 3D garment modeling. In this paper, we propose a data-driven editing framework for automatic 3D garment modeling, which includes semantic garment segmentation, probabilistic reasoning for component suggestion, and garment component merging. The key idea in this work is to develop a simple but effective garment synthesis that utilizes a continuous style description, which can be characterized by the ratio of area to boundary length on garment components. First, a semi-supervised learning algorithm is proposed to simultaneously segment and label the components of 3D garments. Second, a set of matching probability measurements is applied to recommend components that can be combined into a new 3D garment. Third, a variation synthesis is developed to satisfy the garment style criteria while ensuring the realistic-looking plausibility of the results. As demonstrated by the experiments, our method is able to generate various reasonable garments with material effects to enrich existing 3D garments.
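
The continuous style description mentioned above (the ratio of a component's area to its boundary length) is straightforward to compute from a triangle mesh. Here is a minimal Python sketch for one mesh component; the helper name and boundary-extraction approach are ours, not from the paper.

```python
import numpy as np

def component_style(verts, faces):
    """Area-to-boundary-length style ratio of one garment component.

    verts: (n, 3) vertex positions; faces: (m, 3) triangle indices.
    """
    # Surface area: sum of triangle areas via the cross product.
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

    # Boundary edges: edges that belong to exactly one triangle.
    edges = np.sort(faces[:, [0, 1, 1, 2, 2, 0]].reshape(-1, 2), axis=1)
    uniq, counts = np.unique(edges, axis=0, return_counts=True)
    boundary = uniq[counts == 1]
    blen = np.linalg.norm(verts[boundary[:, 0]] - verts[boundary[:, 1]],
                          axis=1).sum()

    return area / blen  # the continuous style descriptor
```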


Chinese Conference on Image and Graphics Technologies | 2014

Probabilistic Model for Virtual Garment Modeling

Shan Zeng; Fan Zhou; Ruomei Wang; Xiaonan Luo

Designing 3D garments is difficult, especially when the user lacks professional knowledge of garment design. Inspired by assembly-based modeling, we facilitate 3D garment modeling by combining parts extracted from a database containing a large collection of garment components. A key challenge in assembly-based garment modeling is identifying the relevant components that need to be presented to the user. In this paper, we propose a virtual garment modeling method based on a probabilistic model. We learn a probabilistic graphical model that encodes the semantic relationships among garment components from garment images. During the garment design process, the Bayesian graphical model is used to suggest garment components that are semantically compatible with the existing model. We also propose a new part stitching method for garment components. Our experiments indicate that the learned Bayesian graphical model increases the relevance of the presented components and that the part stitching method generates good results.
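
As a toy illustration of suggesting semantically compatible components, the sketch below learns pairwise co-occurrence statistics of component labels from example garments and ranks candidates by their conditional frequency given the parts already in the design. A pairwise co-occurrence table stands in for the paper's full Bayesian graphical model; all names are illustrative.

```python
from collections import Counter
from itertools import combinations

class ComponentSuggester:
    """Rank candidate components by compatibility with an existing design."""

    def __init__(self, garments):
        # garments: list of label sets, e.g. [{'collar_v', 'sleeve_long'}, ...]
        self.pair = Counter()
        self.single = Counter()
        for g in garments:
            self.single.update(g)
            self.pair.update(frozenset(p) for p in combinations(sorted(g), 2))

    def score(self, candidate, existing):
        # Approximate P(candidate | existing) by averaging the pairwise
        # co-occurrence frequency with each component already chosen.
        probs = [self.pair[frozenset((candidate, e))] / max(self.single[e], 1)
                 for e in existing]
        return sum(probs) / max(len(probs), 1)
```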


Neural Computing and Applications | 2018

A novel approach to automatic detection of presentation slides in educational videos

Baoquan Zhao; Shujin Lin; Xin Qi; Ruomei Wang; Xiaonan Luo

Recent advances in learning and teaching methodology have experimented with virtual reality (VR)-based presentation forms to create immersive learning and training environments. The quality of such educational VR applications relies not only on the virtual model but also on 2D presentation materials such as text, diagrams and figures. However, manually designing or sourcing these educational resources is both labor-intensive and time-consuming. In this paper, we introduce a new automatic algorithm to detect and extract presentation slides in educational videos, which provides abundant resources for creating slide-based immersive presentation environments. The proposed approach mainly involves five core components: shot boundary detection, training instance collection, shot classification, slide region detection and slide transition detection. We conducted comparison experiments to evaluate the performance of the proposed method. The results indicate that, in comparison with a peer method, the proposed method improves the precision of slide detection from 81.6% to 92.6% and recall from 74.7% to 86.3% on average. With the detected slides, a content analyzer can be employed to further extract reusable elements, which can be used for developing VR-based educational applications.
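
For a feel of the first stage of this five-component pipeline, here is a minimal Python sketch of shot boundary detection using color-histogram differences between consecutive frames. The paper does not specify its detector; the histogram comparison and threshold below are illustrative stand-ins.

```python
import cv2

def shot_boundaries(video_path, thresh=0.5):
    """Return frame indices where a shot cut is likely.

    A cut is declared when the HSV color histograms of consecutive
    frames correlate poorly (threshold is an assumption of this sketch).
    """
    cap = cv2.VideoCapture(video_path)
    prev_hist, boundaries, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between consecutive histograms => cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < thresh:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```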


Image and Vision Computing | 2018

Learning deep similarity models with focus ranking for fabric image retrieval

Daiguo Deng; Ruomei Wang; Hefeng Wu; Huayong He; Qi Li; Xiaonan Luo

Fabric image retrieval is beneficial to many applications including clothing search, online shopping and cloth modeling. Learning pairwise image similarity is of great importance to an image retrieval task. With the resurgence of Convolutional Neural Networks (CNNs), recent works have achieved significant progress via deep representation learning with metric embedding, which drives similar examples close to each other in a feature space and pushes dissimilar ones apart. In this paper, we propose a novel embedding method termed focus ranking that can be easily unified into a CNN for jointly learning image representations and metrics in the context of fine-grained fabric image retrieval. Focus ranking aims to rank similar examples higher than all dissimilar ones by penalizing ranking disorders via the minimization of the overall cost attributed to similar samples being ranked below dissimilar ones. At the training stage, training samples are organized into focus ranking units for efficient optimization. We build a large-scale fabric image retrieval dataset (FIRD) with about 25,000 images of 4,300 fabrics, and test the proposed model on the FIRD dataset. Experimental results show the superiority of the proposed model over existing metric embedding models.
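
The ranking objective described here (penalize every case where a dissimilar example outranks a similar one) can be written as a simple hinge loss over all positive-negative pairs. The sketch below is a minimal Python version; the hinge form, margin, and cosine similarity are assumptions, not the paper's exact cost.

```python
import numpy as np

def focus_ranking_loss(anchor, positives, negatives, margin=0.2):
    """Sum of hinge penalties over all (positive, negative) ranking disorders.

    anchor: (d,) embedding; positives: (p, d); negatives: (n, d).
    """
    def sim(a, b):  # cosine similarity between the anchor and a batch
        return (b @ a) / (np.linalg.norm(a) * np.linalg.norm(b, axis=1))

    s_pos = sim(anchor, positives)   # similarities to similar samples
    s_neg = sim(anchor, negatives)   # similarities to dissimilar samples
    # Penalize each negative ranked above, or within a margin of, a positive.
    return np.maximum(0.0, margin + s_neg[None, :] - s_pos[:, None]).sum()
```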


International Conference on Multimedia and Expo | 2017

Multi-view pairwise relationship learning for sketch based 3D shape retrieval

Hanhui Li; Hefeng Wu; Xiangjian He; Shujin Lin; Ruomei Wang; Xiaonan Luo

Recent progress in sketch-based 3D shape retrieval creates a novel and user-friendly way to explore massive collections of 3D shapes on the Internet. However, current methods on this topic rely on designing invariant features for both sketches and 3D shapes, or on complex matching strategies, so they suffer from problems such as arbitrary drawings and inconsistent viewpoints. To tackle these problems, we propose a probabilistic framework based on Multi-View Pairwise Relationship (MVPR) learning. Our framework introduces multiple views of 3D shapes as an intermediate layer between sketches and 3D shapes, and transforms the original retrieval problem into inferring pairwise relationships between sketches and views. We accomplish pairwise relationship inference with a novel MVPR net, which can automatically predict and merge the pairwise relationships between a sketch and multiple views, thus freeing us from exhaustively selecting the best view of 3D shapes. We also propose to learn robust features for sketches and views by fine-tuning pre-trained networks. Extensive experiments on a large dataset demonstrate that the proposed method outperforms state-of-the-art methods significantly.
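
The core retrieval idea (score a sketch against every rendered view of each shape and merge the per-view scores, so no single best view has to be chosen) can be sketched in a few lines. Mean-pooling of cosine similarities below stands in for the MVPR net's learned prediction and merging; this is an assumption of the sketch.

```python
import numpy as np

def rank_shapes(sketch_feat, view_feats_per_shape):
    """Rank 3D shapes for one sketch by merged sketch-view scores.

    sketch_feat: (d,) sketch embedding;
    view_feats_per_shape: list of (v, d) arrays, one per 3D shape.
    """
    scores = []
    for views in view_feats_per_shape:
        # Pairwise relationship between the sketch and each view.
        sims = views @ sketch_feat / (
            np.linalg.norm(views, axis=1) * np.linalg.norm(sketch_feat))
        scores.append(sims.mean())   # merge across views (mean-pooling)
    return np.argsort(scores)[::-1]  # shape indices, best match first
```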


International Conference on Computer Graphics and Interactive Techniques | 2017

Automatic generation of visual-textual web video thumbnail

Baoquan Zhao; Shujin Lin; Xin Qi; Zhiquan Zhang; Xiaonan Luo; Ruomei Wang

Thumbnails provide an efficient way to perceive video content and give online viewers the instant gratification of making relevance judgements. In this paper, we propose an automatic approach to generate magazine-cover-like thumbnails using salient visual and textual metadata extracted from the video. Compared with traditional snapshots, the synthesized thumbnail is more informative and attractive, which is helpful for online video selection.

Collaboration


Dive into Ruomei Wang's collaborations.

Top Co-Authors

Xiaonan Luo (Sun Yat-sen University)
Shujin Lin (Sun Yat-sen University)
Hefeng Wu (Guangdong University of Foreign Studies)
Baoquan Zhao (Guilin University of Electronic Technology)
Fan Zhou (Sun Yat-sen University)
Hanhui Li (Sun Yat-sen University)
Zhuo Su (Sun Yat-sen University)
Fei Wang (Sun Yat-sen University)
Li Liu (Sun Yat-sen University)
Xin Qi (Sun Yat-sen University)