Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yang Feng is active.

Publication


Featured research published by Yang Feng.


International Symposium on Multimedia | 2016

Voting with Feet: Who are Leaving Hillary Clinton and Donald Trump

Yu Wang; Yang Feng; Jiebo Luo; Xiyang Zhang

From a crowded field of 17 candidates, Hillary Clinton and Donald Trump emerged as the two presidential nominees in the 2016 U.S. presidential election. The two candidates each boast more than 7 million followers on Twitter, and at the same time both have seen hundreds of thousands of people leave their camps. In this paper, we characterize the individuals who left Hillary Clinton and Donald Trump between September 2015 and March 2016. Our study focuses on four dimensions of social demographics: social capital, gender, age, and race. Within each camp, we compare the characteristics of current followers with those of former followers, i.e., individuals who have left since September 2015. We use the number of followers to measure social capital, and use profile images to infer gender, age, and race. For classifying gender and race, we train a convolutional neural network (CNN); for age, we use the Face++ API. Our study shows that for both candidates, followers with more social capital are more likely to leave (or switch camps). For both candidates, females make up a larger share of unfollowers than of current followers; somewhat surprisingly, the effect is particularly pronounced for Clinton. Middle-aged individuals are more likely to leave Trump, and the young are more likely to leave Hillary Clinton. Lastly, for both candidates, African Americans make up a smaller share of unfollowers than of followers, and the effect is particularly strong for Hillary Clinton.
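
The follower-vs-unfollower comparison above boils down to comparing group shares between two populations. A toy sketch in pure Python, with hypothetical inferred labels standing in for the CNN/Face++ output:

```python
from collections import Counter

def group_shares(labels):
    """Fraction of each inferred demographic label in a population."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical inferred labels (e.g. output of the gender classifier).
followers   = ["female", "male", "male", "female", "male"]
unfollowers = ["female", "female", "male", "female"]

f_shares = group_shares(followers)
u_shares = group_shares(unfollowers)

# A group is over-represented among unfollowers when its share there
# exceeds its share among current followers.
for group in f_shares:
    diff = u_shares.get(group, 0.0) - f_shares[group]
    print(group, round(diff, 2))
```

The sign of `diff` is the quantity the study reports: positive means the group leaves disproportionately often.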


International Conference on Pattern Recognition | 2016

Learning effective Gait features using LSTM

Yang Feng; Yuncheng Li; Jiebo Luo

Human gait is an important biometric feature for person identification in surveillance videos because it can be collected at a distance without subject cooperation. Most existing gait recognition methods are based on the Gait Energy Image (GEI). Although GEI represents the spatial information in a gait sequence well, the temporal information is lost. To solve this problem, we propose a new feature learning method for gait recognition. Not only can the learned feature preserve temporal information in a gait sequence, but it can also be applied to cross-view gait recognition. Heatmaps extracted by a convolutional neural network (CNN) based pose estimation method are used to describe the gait information in each frame. To model a gait sequence, an LSTM recurrent neural network is naturally adopted. Our LSTM model can be trained with unlabeled data, where the identity of the subject in a gait sequence is unknown. When labeled data are available, our LSTM works as a frame-to-frame view transformation model (VTM). Experiments on a gait benchmark demonstrate the efficacy of our method.
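
A minimal NumPy sketch of the sequence-modeling step: a single LSTM cell rolled over per-frame descriptors, with the final hidden state serving as the sequence-level gait feature. The dimensions, random inputs, and initialization below are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, Wx, Wh, b):
    """Run a single-layer LSTM over a sequence of frame features.

    xs: (T, D) per-frame descriptors (e.g. flattened pose heatmaps).
    Wx: (D, 4H), Wh: (H, 4H), b: (4H,) gate parameters.
    Returns the final hidden state as a sequence-level feature.
    """
    H = Wh.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in xs:
        z = x @ Wx + h @ Wh + b
        i, f, o, g = np.split(z, 4)          # input, forget, output, candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)           # update cell state
        h = o * np.tanh(c)                   # emit hidden state
    return h

rng = np.random.default_rng(0)
T, D, H = 8, 16, 32                          # frames, feature dim, hidden size
xs = rng.standard_normal((T, D))
h = lstm_forward(xs,
                 rng.standard_normal((D, 4 * H)) * 0.1,
                 rng.standard_normal((H, 4 * H)) * 0.1,
                 np.zeros(4 * H))
print(h.shape)  # (32,)
```

The final `h` preserves temporal ordering information that a frame-averaged representation such as GEI discards.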


International Conference on Pattern Recognition | 2014

Multi-group Adaptation for Event Recognition from Videos

Yang Feng; Xinxiao Wu; Han Wang; Jing Liu

Recognizing events in consumer videos is becoming increasingly important because of the enormous growth of consumer videos in recent years. Current research mainly focuses on learning from numerous labeled videos, but labeling consumer videos is time-consuming and labor-intensive. To alleviate the labeling process, we utilize a large number of loosely labeled Web videos (e.g., from YouTube) for visual event recognition in consumer videos. Web videos are noisy and diverse, so brute-force transfer of Web videos to consumer videos may hurt performance. To address this negative transfer problem, we propose a novel Multi-Group Adaptation (MGA) framework that divides the training Web videos into several semantic groups and seeks the optimal weight of each group. Each weight represents how relevant the corresponding group is to the consumer domain. The final classifier for event recognition is learned as a weighted combination of classifiers learned from Web videos and is enforced to be smooth on the consumer domain. Comprehensive experiments on three real-world consumer video datasets demonstrate the effectiveness of MGA for event recognition in consumer videos.
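
The weighted combination at the heart of MGA can be sketched as follows. The linear classifiers and group weights here are hypothetical stand-ins; the paper learns both jointly from data:

```python
import numpy as np

def combined_score(x, group_classifiers, weights):
    """Weighted combination of per-group linear classifier scores.

    group_classifiers: list of (w, b) classifiers, one per Web-video group.
    weights: relevance of each group to the consumer domain (sums to 1).
    """
    scores = np.array([x @ w + b for w, b in group_classifiers])
    return float(weights @ scores)

# Two hypothetical Web-video groups; the second is judged more relevant
# to the consumer domain and so gets a larger weight.
groups = [(np.array([1.0, 0.0]), 0.0),
          (np.array([0.0, 1.0]), 0.5)]
weights = np.array([0.2, 0.8])

x = np.array([1.0, 2.0])
print(combined_score(x, groups, weights))  # 0.2*1.0 + 0.8*2.5 = 2.2
```

Down-weighting an irrelevant group is exactly how the framework suppresses negative transfer from noisy Web videos.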


Neurocomputing | 2016

Heterogeneous Discriminant Analysis for Cross-View Action Recognition

Wanchen Sui; Xinxiao Wu; Yang Feng; Yunde Jia

We propose an approach to cross-view action recognition in which samples from different views are represented by features with different dimensions. Inspired by linear discriminant analysis (LDA), we introduce a discriminative common feature space to bridge the source and target views. Two projection matrices are learned to map the action data from the two views into the common space by simultaneously maximizing the similarity of intra-class samples, minimizing the similarity of inter-class samples, and reducing the mismatch between the data distributions of the two views. In addition, locality information is incorporated into the discriminant analysis as a constraint to make the discriminant function smooth on the data manifold. Our method is restricted neither to corresponding action instances in the two views nor to a specific type of feature. We evaluate our approach on the IXMAS multi-view action dataset and the N-UCLA dataset. The experimental results demonstrate the effectiveness of our method.
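
A schematic of the two-projection idea, assuming the projection matrices have already been learned (random placeholders below; the paper obtains them by optimizing the discriminant objective described above): samples of different dimensionality from the two views land in one common space where they become directly comparable.

```python
import numpy as np

rng = np.random.default_rng(1)
d_src, d_tgt, d_common = 20, 12, 5   # differing view dims, shared space dim

# Hypothetical learned projection matrices (random stand-ins here).
P_src = rng.standard_normal((d_common, d_src))
P_tgt = rng.standard_normal((d_common, d_tgt))

x_src = rng.standard_normal(d_src)   # action sample from the source view
x_tgt = rng.standard_normal(d_tgt)   # action sample from the target view

# Both samples are mapped into the same 5-D common space, so a single
# classifier or distance measure can be applied across views.
z_src, z_tgt = P_src @ x_src, P_tgt @ x_tgt
print(z_src.shape == z_tgt.shape)  # True
```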


International Conference on Social Computing | 2017

Gender Politics in the 2016 U.S. Presidential Election: A Computer Vision Approach

Yu Wang; Yang Feng; Jiebo Luo

Gender plays an important role in the 2016 U.S. presidential election, especially with Hillary Clinton becoming the first female presidential nominee and Donald Trump being frequently accused of sexism. In this paper, we introduce computer vision to the study of gender politics and present an image-driven method that can measure the effects of gender in an accurate and timely manner. We first collect all the profile images of the candidates’ Twitter followers. Then we train a convolutional neural network using images that contain gender labels. Lastly, we classify all the follower and unfollower images. Through a case study of the ‘woman card’ controversy, we demonstrate how gender is informing the 2016 presidential election. Our framework of analysis can be readily generalized to other case studies and elections.


International Conference on Social Computing | 2017

Inferring Follower Preferences in the 2016 U.S. Presidential Primaries with Sparse Learning

Yu Wang; Yang Feng; Xiyang Zhang; Jiebo Luo

In this paper, we propose a framework to infer Twitter follower preferences in the 2016 U.S. presidential primaries. Using Twitter data collected from Sept. 2015 to Mar. 2016, we first uncover the tweeting tactics of the candidates and then exploit the variations in the number of ‘likes’ to infer followers’ preferences. With sparse learning, we are able to reveal neutral topics as well as positive and negative ones.
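
The sparse-learning step can be sketched as an L1-regularized regression of ‘likes’ on topic features: nonzero coefficients mark topics that shift the like count (positive or negative), while zeros mark neutral topics. The ISTA solver and synthetic data below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, lr=0.05, iters=1000):
    """Sparse linear regression via iterative soft-thresholding (ISTA).

    Minimizes (1/2n)||Xw - y||^2 + lam*||w||_1 by alternating a gradient
    step with the soft-threshold proximal step for the L1 penalty.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))    # hypothetical per-tweet topic features
y = 2.0 * X[:, 0]                    # only topic 0 actually moves the likes
w = lasso_ista(X, y)
print(np.round(w, 2))
```

The L1 penalty drives the coefficients of irrelevant topics to (near) zero, which is what makes the recovered preference pattern interpretable.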


International Conference on Big Data | 2016

When do luxury cars hit the road? Findings by a big data approach

Yang Feng; Jiebo Luo

In this paper, we study the timing of different kinds of cars on the road. This information enables us to infer the lifestyles of car owners, and the results can further be used to guide marketing toward car owners and to set auto insurance policies. Conventionally, this kind of study is carried out by sending out questionnaires, which is limited in scale and diversity. To overcome this limitation, we propose a fully automatic method to conduct the study at scale. Our study is based on publicly available surveillance camera data: images from public traffic cameras are downloaded every minute. After obtaining the images, we apply Faster R-CNN (region-based convolutional neural network) to detect the cars in the downloaded images, and a fine-tuned VGG16 model is used to recognize the car makes. Based on the recognition results, we present a data-driven analysis of the relationship between car makes and their appearance times, with implications for lifestyles.
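
Once cars are detected and their makes recognized, the timing analysis reduces to simple aggregation. A minimal sketch with hypothetical detection records:

```python
from collections import defaultdict

# Hypothetical per-detection records: (hour of capture, recognized car make),
# as would be produced by the detector + make classifier described above.
detections = [(7, "Toyota"), (7, "BMW"), (8, "Toyota"),
              (18, "BMW"), (18, "BMW")]

# Aggregate: how often each make appears at each hour of the day.
by_hour = defaultdict(lambda: defaultdict(int))
for hour, make in detections:
    by_hour[hour][make] += 1

print(dict(by_hour[18]))  # {'BMW': 2}
```

The resulting hour-by-make histogram is the raw material for the lifestyle analysis, e.g. whether a given make peaks during commute hours or late evenings.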


IET Computer Vision | 2016

Multi-group–multi-class domain adaptation for event recognition

Yang Feng; Xinxiao Wu; Yunde Jia

In this study, the authors propose a multi-group–multi-class domain adaptation framework to recognise events in consumer videos by leveraging a large number of web videos. The authors’ framework extends the multi-class support vector machine with a novel data-dependent regulariser, which forces the event classifier to be consistent on consumer videos. To obtain web videos, they search using several event-related keywords and refer to the videos returned by one keyword search as a group. They also leverage a video representation that averages the convolutional neural network features of the video frames for better performance. Comprehensive experiments on two real-world consumer video datasets demonstrate the effectiveness of their method for event recognition in consumer videos.


International Conference on Pattern Recognition | 2014

Modeling the Relationship of Action, Object, and Scene

Jing Liu; Xinxiao Wu; Yang Feng

In action recognition, objects and scenes provide a rich source of contextual information for analyzing human actions, as human actions often occur in particular scene settings with certain related objects. Therefore, we utilize the contextual object and scene to improve the performance of action recognition. Specifically, a latent structural SVM is introduced to model the co-occurrence relationships among action, object, and scene, in which the object class label and scene class label are treated as latent variables. Using this framework, we can simultaneously predict action class labels, object class labels, and scene class labels. Moreover, we use a mid-level discriminative feature to separately describe the action, object, and scene. The feature is a set of decision values from pre-learned classifiers of each class, measuring the likelihood that the input video belongs to the corresponding class. In this paper, we use SVMs as the pre-learned action and scene classifiers, and a deformable part-based object detector as the pre-learned object classifier, so that object locations can be obtained as a by-product. Experimental results on the UCF Sports, YouTube, and UCF50 datasets demonstrate the effectiveness of the proposed approach.
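
The mid-level feature construction can be sketched directly: stack the decision values of pre-learned per-class classifiers into one descriptor. The linear classifiers below are hypothetical placeholders for the pre-learned SVMs:

```python
import numpy as np

def midlevel_feature(x, classifiers):
    """Describe a sample by the decision values of pre-learned classifiers.

    Entry k of the output measures how strongly the input is judged to
    belong to class k, giving a compact discriminative descriptor.
    """
    return np.array([x @ w + b for w, b in classifiers])

# Hypothetical pre-learned linear classifiers for three action classes.
action_clfs = [(np.array([1.0, -1.0]), 0.0),
               (np.array([0.5, 0.5]), -1.0),
               (np.array([-1.0, 1.0]), 0.2)]

x = np.array([2.0, 1.0])
f = midlevel_feature(x, action_clfs)
print(f)  # decision values: [1.0, 0.5, -0.8]
```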


International Conference on Big Data | 2016

Pricing the Woman Card: Gender Politics between Hillary Clinton and Donald Trump

Yu Wang; Yang Feng; Jiebo Luo; Xiyang Zhang

Collaboration


Dive into Yang Feng's collaborations.

Top Co-Authors

Jiebo Luo, University of Rochester
Yu Wang, University of Rochester
Xiyang Zhang, Beijing Normal University
Xinxiao Wu, Beijing Institute of Technology
Jing Liu, Beijing Institute of Technology
Yunde Jia, Beijing Institute of Technology
Haofu Liao, University of Rochester
Ryan Berger, University of Rochester
Yuncheng Li, University of Rochester