Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Xiaoqian Liu is active.

Publication


Featured research published by Xiaoqian Liu.


IEEE Transactions on Multimedia | 2012

Robustly Extracting Captions in Videos Based on Stroke-Like Edges and Spatio-Temporal Analysis

Xiaoqian Liu; Weiqiang Wang

This paper presents an effective and efficient approach to extracting captions from videos. The robustness of our system comes from two contributions. First, we propose a novel stroke-like edge detection method based on contours, which can effectively remove the interference of non-stroke edges in complex backgrounds, making the detection and localization of captions much more accurate. Second, our approach highlights the importance of temporal (i.e., inter-frame) features in the task of caption extraction (detection, localization, segmentation). Instead of regarding each video frame as an independent image, we fully utilize the temporal features of video together with spatial analysis in caption localization, segmentation and post-processing, and demonstrate that the use of inter-frame information can effectively improve the accuracy of both caption localization and caption segmentation. In comprehensive evaluation experiments, the results on two representative datasets demonstrate the robustness and efficiency of our approach.
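The inter-frame idea can be illustrated with a minimal sketch. Averaging co-located grayscale pixels across frames known to contain the same caption keeps static caption pixels stable while blurring moving background, which simplifies later segmentation. This is only an illustration of the general principle, not the paper's actual algorithm, and the frame representation (nested lists of grayscale values) is an assumption made here for brevity.

```python
def temporal_average(frames):
    """Average co-located pixel values across frames that share the same
    caption. Static caption pixels keep their value, while moving
    background pixels blur toward the mean, easing later segmentation."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(w)]
            for r in range(h)]
```

For example, a pixel that holds the value 100 in every frame stays at 100 after averaging, while a background pixel that changes between frames is pulled toward the mean.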


PeerJ | 2016

How smartphone usage correlates with social anxiety and loneliness

Yusong Gao; Ang Li; Tingshao Zhu; Xiaoqian Liu; Xingyun Liu

Introduction: Early detection of social anxiety and loneliness might be useful to prevent substantial impairment in personal relationships. Understanding the way people use smartphones can be beneficial for implementing such early detection. This paper examines different types of smartphone usage and their relationships with individual levels of social anxiety or loneliness. Methods: A total of 127 Android smartphone volunteers participated in this study, all of whom agreed to install an application (MobileSens) on their smartphones, which records users' smartphone usage behaviors and uploads the data to a server. They were instructed to complete an online survey, including the Interaction Anxiousness Scale (IAS) and the University of California Los Angeles Loneliness Scale (UCLA-LS). We then separated participants into three groups (high, middle and low) based on their scores on the IAS and UCLA-LS, respectively. Finally, we acquired digital records of smartphone usage from MobileSens and examined the differences in 105 types of smartphone usage behaviors between the high-score and low-score groups of the IAS/UCLA-LS. Results: Individuals with different scores on social anxiety or loneliness might use smartphones in different ways. For social anxiety, compared with users in the low-score group, users in the high-score group made and received fewer phone calls (Mann-Whitney U = 282.50∼409.00, p < 0.05), sent and received fewer text messages in the afternoon (Mann-Whitney U = 391.50∼411.50, p < 0.05), used health & fitness apps more frequently (Mann-Whitney U = 493.00, p < 0.05) and used camera apps less frequently (Mann-Whitney U = 472.00, p < 0.05).
For loneliness, compared with users in the low-score group, users in the high-score group made and received fewer phone calls (Mann-Whitney U = 305.00∼407.50, p < 0.05) and used the following apps more frequently: health & fitness (Mann-Whitney U = 510.00, p < 0.05), system (Mann-Whitney U = 314.00, p < 0.01), phone beautify (Mann-Whitney U = 385.00, p < 0.05), web browser (Mann-Whitney U = 416.00, p < 0.05) and social media (RenRen) (Mann-Whitney U = 388.50, p < 0.01). Discussion: The results show that individuals with social anxiety or loneliness receive fewer incoming calls and use health apps more frequently, but do not differ in outgoing-call-related features. Individuals with higher levels of social anxiety also receive fewer SMS messages and use camera apps less frequently, while lonely individuals tend to use system, beautify, browser and social media (RenRen) apps more frequently. Conclusion: This paper finds a correlation between smartphone usage and both social anxiety and loneliness. The result may be useful for improving social interaction for those who lack it in daily life, and may offer insight into recognizing individual levels of social anxiety and loneliness from smartphone usage behaviors.
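The group comparisons above rely on the Mann-Whitney U test. As a rough illustration of how that statistic is computed (the study itself presumably used standard statistical software), here is a minimal pure-Python sketch for two independent samples:

```python
from itertools import chain

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.
    Ranks the pooled data (ties get midranks), sums the ranks of x,
    and subtracts the minimum possible rank sum."""
    combined = sorted(chain(x, y))
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        # midrank: average of the 1-based ranks i+1 .. j
        ranks[combined[i]] = (i + 1 + j) / 2
        i = j
    r_x = sum(ranks[v] for v in x)       # rank sum of sample x
    n_x = len(x)
    return r_x - n_x * (n_x + 1) / 2     # U statistic for x
```

When every value of x lies below every value of y, U is 0; when every value of x lies above, U equals len(x) * len(y).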


ACM Multimedia | 2010

Extracting captions from videos using temporal feature

Xiaoqian Liu; Weiqiang Wang

Captions in videos provide much useful semantic information for indexing and retrieving video content. In this paper, we present an effective approach to extracting captions from videos. Its novelty comes from exploiting temporal information in both the localization and segmentation of captions. Since only simple features such as edges, corners and color are utilized, our approach is efficient. It involves four steps. First, we exploit the distribution of corners to spatially detect and locate the caption in a frame. Then the temporal localization of different captions in a video is performed by identifying changes in stroke directions. After that, we segment the caption pixels in a clip containing the same caption based on the consistency and dominant distribution of caption color. Finally, the segmentation results are further refined. The experimental results on two representative movies preliminarily verify the validity of our approach.


PeerJ | 2016

Deep learning for constructing microblog behavior representation to identify social media user’s personality

Xiaoqian Liu; Tingshao Zhu

Due to the rapid development of information technology, the Internet has gradually become part of everyday life. People like to communicate with friends and share their opinions on social networks. This diverse social network behavior is an ideal reflection of users' personality traits. Existing behavior analysis methods for personality prediction mostly extract behavior attributes using heuristics. Although they work fairly well, they are hard to extend and maintain. In this paper, for personality prediction, we utilize a deep learning algorithm to build a feature learning model, which can extract a Linguistic Representation Feature Vector (LRFV) from text published on Sina Micro-blog in an unsupervised manner. Compared with other feature extraction methods, the LRFV, as an abstract representation of Micro-blog content, can describe users' semantic information more objectively and comprehensively. In the experiments, the personality prediction model is built using a linear regression algorithm, and the attributes obtained through different feature extraction methods are taken as input to the prediction model respectively. The results show that the LRFV describes micro-blog behavior better and improves the performance of the personality prediction model.
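The prediction stage pairs learned features with linear regression. A minimal single-feature least-squares fit is sketched below; this is illustrative only, since the paper's model regresses personality scores on the full LRFV rather than on one scalar feature:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for a single feature.
    Returns (slope, intercept) minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx
```

Fitting the points (0, 1), (1, 3), (2, 5) recovers slope 2 and intercept 1, as expected for the line y = 2x + 1.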


Web Intelligence | 2016

Emotion Detection Using Kinect 3D Facial Points

Zhan Zhang; Liqing Cui; Xiaoqian Liu; Tingshao Zhu

With the development of pattern recognition and artificial intelligence, emotion recognition based on facial expression has attracted a great deal of research interest. Facial emotion recognition is mainly based on facial images. The commonly used datasets are created artificially, with an obvious facial expression in each image. In reality, emotion is a complicated and dynamic process: a happy person will probably not keep an obviously happy facial expression all the time. Practically, it is important to recognize emotion correctly even when the facial expression is not clear. In this paper, we propose a new method of emotion recognition that identifies three kinds of emotion: sad, happy and neutral. We acquire 1347 3D facial points using Kinect V2.0. Key facial points are selected and feature extraction is conducted. Principal Component Analysis (PCA) is employed for feature dimensionality reduction. Several classical classifiers are used to construct emotion recognition models. The best classification performance on all, male and female data is 70%, 77% and 80%, respectively.
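A hedged sketch of the PCA-plus-classifier pipeline follows. Power iteration finds the leading principal direction, and a nearest-centroid rule stands in for the paper's unspecified classical classifiers; the toy 2-D inputs here are assumptions for illustration, not Kinect facial points.

```python
def top_principal_component(X, iters=200):
    """Power iteration on the covariance matrix of centered data X
    (a list of feature vectors) to project each row onto the
    leading principal direction."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    cov = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / n
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(Xc[i][j] * v[j] for j in range(d)) for i in range(n)]

def nearest_centroid(train_scores, labels, test_scores):
    """Assign each test score the label of the closest class mean."""
    classes = sorted(set(labels))
    centroid = {c: sum(s for s, l in zip(train_scores, labels) if l == c)
                   / labels.count(c) for c in classes}
    return [min(classes, key=lambda c: abs(s - centroid[c]))
            for s in test_scores]
```

Two well-separated clusters project to clearly separated scores on the leading component, so the nearest-centroid rule recovers their labels.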


PeerJ | 2016

Emotion recognition based on customized smart bracelet with built-in accelerometer

Zhan Zhang; Yufei Y Song; Liqing Cui; Xiaoqian Liu; Tingshao Zhu

Background: Recently, emotion recognition has become a hot topic in human-computer interaction. If computers could understand human emotions, they could interact better with their users. This paper proposes a novel method to recognize human emotions (neutral, happy, and angry) using a smart bracelet with a built-in accelerometer. Methods: In this study, a total of 123 participants were instructed to wear a customized smart bracelet with a built-in accelerometer that can track and record their movements. First, participants walked normally for two minutes, which served as walking behavior in a neutral emotion condition. Participants then watched emotional film clips to elicit emotions (happy and angry). The time interval between watching the two clips was more than four hours. After watching each film clip, they walked for one minute, which served as walking behavior in a happy or angry emotion condition. We collected raw data from the bracelet and extracted a set of features from the raw data. Based on these features, we built classification models for the three types of emotions (neutral, happy, and angry). Results and Discussion: For two-category classification, the accuracy reached 91.3% (neutral vs. angry), 88.5% (neutral vs. happy), and 88.5% (happy vs. angry), respectively; for the three-category classification (neutral, happy, and angry), the accuracy reached 81.2%. Conclusions: Using wearable devices, we found it is possible to recognize human emotions (neutral, happy, and angry) with fair accuracy. The results of this study may be useful for improving the performance of human-computer interaction.
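The paper does not specify which features were extracted from the raw accelerometer stream, but a common sketch for this kind of pipeline computes per-axis statistics over a time window; the mean/standard-deviation/energy trio below is an assumption chosen for illustration:

```python
import math

def window_features(samples):
    """Per-axis summary features over one window of (x, y, z)
    accelerometer samples: mean, standard deviation, and mean
    signal energy. Returns a flat feature vector."""
    feats = []
    for axis in zip(*samples):               # iterate x, y, z columns
        n = len(axis)
        mean = sum(axis) / n
        var = sum((v - mean) ** 2 for v in axis) / n
        energy = sum(v * v for v in axis) / n
        feats.extend([mean, math.sqrt(var), energy])
    return feats
```

Each walking window then maps to a fixed-length vector that any standard classifier can consume.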


International Conference on Pattern Recognition | 2010

Extracting Captions in Complex Background from Videos

Xiaoqian Liu; Weiqiang Wang; Tingshao Zhu

Captions in videos play a significant role in automatically understanding and indexing video content, since much semantic information is associated with them. This paper presents an effective approach to extracting captions from videos, in which multiple categories of features (edge, color, stroke, etc.) are utilized and the spatio-temporal characteristics of captions are considered. First, our method exploits the distribution of gradient directions to temporally decompose a video into a sequence of clips, so that each clip contains at most one caption, which makes the subsequent extraction computation more efficient and accurate. For each clip, edge and corner information is then utilized to locate text regions. Further, text pixels are extracted based on the assumption that text pixels in text regions always have homogeneous color, and that their quantity dominates the region relative to non-text pixels of different colors. Finally, the segmentation results are further refined. The encouraging experimental results on 2565 characters preliminarily validate our approach.


Multimedia Tools and Applications | 2015

An effective graph-cut scene text localization with embedded text segmentation

Xiaoqian Liu; Weiqiang Wang

This paper presents an effective and efficient approach to extracting scene text from images. The approach first extracts edge information using the local maximum difference filter (LMDF), while the given image is simultaneously decomposed into a group of image layers by color clustering. Then, by combining the geometric structure and spatial distribution characteristics of scene text with the edge map, the candidate text image layers are identified. Further, at the character level, the candidate text connected components are identified using a set of heuristic rules. Finally, graph-cut computation is utilized to identify and localize text lines with arbitrary directions. In the proposed approach, the segmentation of text pixels is efficiently embedded as part of the text localization computation. Comprehensive evaluation experiments are performed on four challenging datasets (ICDAR 2003, ICDAR 2011, MSRA-TD500 and Street View Text (SVT)) to verify the validity of our approach. In comparison experiments with many state-of-the-art methods, the results demonstrate that our approach can effectively handle scene text with diverse fonts, sizes, colors and languages, as well as arbitrary orientations, and that it is robust to illumination changes.


International Conference on Human Centered Computing | 2014

Personality Prediction for Microblog Users with Active Learning Method

Xiaoqian Liu; Dong Nie; Shuotian Bai; Bibo Hao; Tingshao Zhu

Personality research on social media has recently become a hot topic due to the rapid development of social media as well as the central importance of personality in psychology, but it is hard to acquire adequate, appropriately labeled samples. Our research aims to choose the right users to label in order to improve prediction accuracy. Given a set of Microblog users' public information (e.g., number of followers) and a few labeled users, the task is to predict the personality of the remaining unlabeled users. An active learning regression algorithm is employed to establish the prediction model in this paper, and the experimental results demonstrate that our method can predict the personality of Microblog users fairly well.
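The core of active learning is the query step: deciding which unlabeled user is most worth the labeling cost. The sketch below uses a simple diversity criterion (pick the pool point farthest from every labeled point) as a stand-in for the paper's unspecified query strategy; the function name and the criterion itself are assumptions made for illustration:

```python
def select_next_to_label(labeled, pool):
    """Pool-based active-learning query sketch: return the unlabeled
    feature vector whose distance to its nearest labeled neighbor is
    largest, i.e. the point the current model knows least about."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(pool, key=lambda p: min(dist(p, q) for q in labeled))
```

In a full loop, the selected user would be labeled (e.g., via a personality questionnaire), moved into the training set, the regression model refit, and the query repeated until the labeling budget runs out.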


Journal of Medical Internet Research | 2017

Designing Microblog Direct Messages to Engage Social Media Users With Suicide Ideation: Interview and Survey Study on Weibo

Ziying Tan; Xingyun Liu; Xiaoqian Liu; Qijin Cheng; Tingshao Zhu

Background: While Web-based interventions can be efficacious, engaging a target population's attention remains challenging. We argue that strategies to draw such a population's attention should be tailored to meet its needs. Increasing user engagement in online suicide intervention development requires feedback from this group, since people who have suicide ideation often refrain from seeking treatment. Objective: The goal of this study was to solicit feedback on the acceptability of the content of messaging from social media users with suicide ideation. To overcome the common concern of lack of engagement in online interventions and to ensure effective learning from the message, this research employs a customized design of both the content and length of the message. Methods: In study 1, 17 participants suffering from suicide ideation were recruited. The first group (n=8) conversed with a professional suicide intervention doctor about its attitudes toward and suggestions for a direct message intervention. To ensure the reliability and consistency of the result, an identical interview was conducted with the second group (n=9). Based on the collected data, questionnaires about this intervention were formed. Study 2 recruited 4222 microblog users with suicide ideation via the Internet. Results: The group interviews in study 1 yielded only small differences, which may relate to the 2 groups' varied perceptions of direct message design. However, most participants reported that they would be most drawn to an intervention where they knew that the account was reliable. Of the 4222 microblog users, 725 responded with completed questionnaires; 78.62% (570/725) of participants were not opposed to online suicide intervention, and they valued the link to extra suicide intervention information as long as the account appeared trustworthy.
Their attitudes toward the intervention and the account were similar to those from study 1, and 3 important elements were found pertaining to the direct message: reliability of the account name, brevity of the message, and details of the phone numbers of psychological intervention centers and psychological assessment. Conclusions: This paper proposes strategies for engaging target populations in online suicide interventions.

Collaboration


Dive into Xiaoqian Liu's collaboration.

Top Co-Authors

Tingshao Zhu, Chinese Academy of Sciences
Weiqiang Wang, Chinese Academy of Sciences
Ke Lu, Chinese Academy of Sciences
Liqing Cui, Chinese Academy of Sciences
Xingyun Liu, Chinese Academy of Sciences
Zhan Zhang, Chinese Academy of Sciences
Ang Li, Chinese Academy of Sciences
Bibo Hao, Chinese Academy of Sciences
Chi Zhang, Chinese Academy of Sciences