Publication


Featured research published by Yue Wu.


ACM Multimedia | 2016

Families in the Wild (FIW): Large-Scale Kinship Image Database and Benchmarks

Joseph P. Robinson; Ming Shao; Yue Wu; Yun Fu

We present the largest kinship recognition dataset to date, Families in the Wild (FIW). Motivated by the lack of a single, unified dataset for kinship recognition, we aim to provide a dataset that captivates the interest of the research community. With only a small team, we were able to collect, organize, and label over 10,000 family photos of 1,000 families with our annotation tool, designed to mark complex hierarchical relationships and local label information in a quick and efficient manner. We include several benchmarks for two image-based tasks, kinship verification and family recognition. For this, we incorporate several visual features and metric learning methods as baselines. Also, we demonstrate that a pre-trained Convolutional Neural Network (CNN) used as an off-the-shelf feature extractor outperforms the other feature types. Results were then further boosted by fine-tuning two deep CNNs on FIW data: (1) for kinship verification, a triplet loss was learned on top of the pre-trained network; (2) for family recognition, a family-specific softmax classifier was added to the network.
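As a rough illustration of the fine-tuning step described for kinship verification, the sketch below learns a triplet embedding on top of a pre-trained CNN. The ResNet-18 backbone, 128-d embedding, and 0.2 margin are assumptions for the example, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Minimal sketch: triplet embedding learned on top of a pre-trained CNN (assumed backbone).
class KinEmbedding(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # stand-in for the pre-trained network
        backbone.fc = nn.Identity()                # keep pooled 512-d conv features
        self.backbone = backbone
        self.proj = nn.Linear(512, dim)            # embedding head fine-tuned on FIW

    def forward(self, x):
        z = self.proj(self.backbone(x))
        return nn.functional.normalize(z, dim=1)   # L2-normalized embeddings

model = KinEmbedding()
triplet = nn.TripletMarginLoss(margin=0.2)         # margin is an assumed value

# anchor/positive share a kin relation, negative does not (dummy batch for illustration)
a, p, n = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = triplet(model(a), model(p), model(n))
loss.backward()
```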


Computer Standards & Interfaces | 2015

Adaptive Cascade Deep Convolutional Neural Networks for face alignment

Yuan Dong; Yue Wu

Deep convolutional network cascades have been successfully applied to face alignment. The configuration of each network, including the strategy for selecting local patches for training and the input range of those patches, is crucial for achieving the desired performance. In this paper, we propose an adaptive cascade framework, termed Adaptive Cascade Deep Convolutional Neural Networks (ACDCNN), which adjusts the cascade structure adaptively. A Gaussian distribution is utilized to bridge the successive networks. Extensive experiments demonstrate that the proposed ACDCNN achieves state-of-the-art accuracy with reduced model complexity and increased robustness.
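One possible reading of "a Gaussian distribution is utilized to bridge the successive networks" is that a Gaussian model of the previous stage's landmark errors decides how large the local patches fed to the next stage should be. The sketch below only illustrates that idea; the patch-sizing rule and constants are assumptions, not the paper's formulation.

```python
import numpy as np

def next_stage_patch_size(prev_preds, ground_truth, k=3.0, min_size=16):
    """Pick a per-landmark crop size for the next cascade stage from a
    Gaussian model of the previous stage's residual errors (assumption)."""
    residuals = prev_preds - ground_truth            # (N, L, 2) landmark errors in pixels
    sigma = residuals.std(axis=(0, 2))               # per-landmark error spread
    # cover roughly +/- k sigma of the error distribution around each predicted point
    return np.maximum(min_size, np.ceil(2 * k * sigma)).astype(int)

# toy example: 100 faces, 5 landmarks, stage-1 errors with different spreads per landmark
rng = np.random.default_rng(0)
gt = rng.uniform(0, 96, size=(100, 5, 2))
pred = gt + rng.normal(0, [[2.0], [4.0], [3.0], [6.0], [2.5]], size=(100, 5, 2))
print(next_stage_patch_size(pred, gt))               # larger spread -> larger patch
```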


ACM Multimedia | 2016

Deep Convolutional Neural Network with Independent Softmax for Large Scale Face Recognition

Yue Wu; Jun Li; Yu Kong; Yun Fu

In this paper, we present our solution to the MS-Celeb-1M Challenge, which aims to recognize 100k celebrities at the same time. The huge number of celebrities is the bottleneck for training a deep convolutional neural network whose output dimension equals the number of celebrities. To solve this problem, an independent softmax model is proposed that splits the single classifier into several small classifiers. Meanwhile, the training data are split into several partitions. This decomposes the large-scale training procedure into several medium-sized training procedures which can be solved separately. In addition, a large model is also trained, and a simple strategy is introduced to merge the two models. Extensive experiments on the MS-Celeb-1M dataset demonstrate the superiority of the proposed method. Our solution ranks first and second in the two tracks of the final evaluation.
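The splitting idea can be pictured roughly as follows: the identity set and its training data are partitioned, one softmax classifier is trained per partition, and at test time the per-partition scores are combined. This is a hedged sketch with assumed toy sizes and a naive score concatenation as the merging step, not the authors' released code.

```python
import torch
import torch.nn as nn

# Sketch: split one huge softmax over all identities into several independent ones.
feat_dim, num_parts, ids_per_part = 256, 4, 250      # assumed toy sizes (4 x 250 identities)

classifiers = nn.ModuleList(nn.Linear(feat_dim, ids_per_part) for _ in range(num_parts))

def train_step(features, labels, part_idx, optimizer):
    """Each partition is trained separately on its own identities and data."""
    logits = classifiers[part_idx](features)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(features):
    """At test time, concatenate per-partition probabilities into one global ranking
    (a naive merging strategy used here for illustration only)."""
    probs = [clf(features).softmax(dim=1) for clf in classifiers]
    return torch.cat(probs, dim=1)                    # (batch, num_parts * ids_per_part)

# toy usage
feats = torch.randn(8, feat_dim)
opt = torch.optim.SGD(classifiers[0].parameters(), lr=0.1)
train_step(feats, torch.randint(0, ids_per_part, (8,)), 0, opt)
print(predict(feats).shape)
```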


ACM Multimedia | 2016

Deep Bi-directional Cross-triplet Embedding for Cross-Domain Clothing Retrieval

Shuhui Jiang; Yue Wu; Yun Fu

In this paper, we address two practical problems encountered when shopping online: 1) What will I look like when wearing this clothing on the street? 2) How can I find the same or similar clothing that other people are wearing on the street or in a movie? We jointly solve these two problems with one bi-directional (shop-to-street and street-to-shop) clothing retrieval framework. The cross-domain clothing retrieval task poses three main challenges. First, the discrepancy (e.g., background, pose, illumination) between street-domain and shop-domain clothing images must be learned. Second, both intra-domain and cross-domain similarity need to be considered during feature embedding. Third, there is a large imbalance between the number of matched and non-matched street-shop pairs. To address these challenges, we propose a deep bi-directional cross-triplet embedding algorithm by extending state-of-the-art triplet embedding to the cross-domain retrieval scenario. Extensive experiments demonstrate the effectiveness of the proposed algorithm.
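A rough sketch of how bi-directional cross-triplet terms might be formed: triplets are built in both directions, with a street anchor against shop positives/negatives and a shop anchor against street positives/negatives. The margin and this exact combination are assumptions, not the paper's loss.

```python
import torch
import torch.nn as nn

def bidirectional_cross_triplet(street_emb, shop_emb, shop_neg, street_neg, margin=0.3):
    """street_emb[i] and shop_emb[i] depict the same clothing item; *_neg are non-matches.
    Intra-domain triplets could be added the same way when an item has several photos
    within one domain. The margin value is an assumption."""
    triplet = nn.TripletMarginLoss(margin=margin)
    street_to_shop = triplet(street_emb, shop_emb, shop_neg)    # street anchor, shop pos/neg
    shop_to_street = triplet(shop_emb, street_emb, street_neg)  # shop anchor, street pos/neg
    return street_to_shop + shop_to_street

# toy usage with random, L2-normalized 64-d embeddings for 16 matched pairs
s, p, sn, stn = (nn.functional.normalize(torch.randn(16, 64), dim=1) for _ in range(4))
print(bidirectional_cross_triplet(s, p, sn, stn).item())
```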


ACM Multimedia | 2017

Deep Face Recognition with Center Invariant Loss

Yue Wu; Hongfu Liu; Jun Li; Yun Fu

Convolutional Neural Networks (CNNs) have been widely used for face recognition and achieve extraordinary performance when a large number of face images of different people is available. However, it is hard to obtain uniformly distributed data for all people. In most face datasets, a large proportion of people have few face images, and only a small number of people appear frequently with more images. These frequently appearing people have a higher impact on feature learning than the others. The imbalanced distribution makes it difficult to train a CNN whose feature representation is general for every person, rather than mainly for the people with many face images. To address this challenge, we propose a center invariant loss which aligns the center of each person's features so that the learned features form a general representation for all people. The center invariant loss penalizes the differences between the class centers. With the center invariant loss, we can train a robust CNN that treats each class equally regardless of the number of class samples. Extensive experiments demonstrate the effectiveness of the proposed approach. We achieve state-of-the-art results on the LFW and YTF datasets.
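A minimal sketch of the center invariant idea described above: keep a learnable center per class and penalize how far each class center drifts from the mean of all centers, so rare and frequent identities are treated alike. The squared-difference form and the loss weight below are assumptions; the paper's exact penalty may differ.

```python
import torch
import torch.nn as nn

class CenterInvariantPenalty(nn.Module):
    """Penalize differences between class centers (sketch of the described idea)."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self):
        mean_center = self.centers.mean(dim=0, keepdim=True)
        # squared distance of each class center to the mean of all centers (assumed form)
        return ((self.centers - mean_center) ** 2).sum(dim=1).mean()

# used together with the usual classification loss during training, e.g.:
penalty = CenterInvariantPenalty(num_classes=100, feat_dim=128)
total_loss = 0.01 * penalty()      # weight 0.01 is an assumed hyper-parameter
total_loss.backward()
```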


ACM Multimedia | 2016

Super Resolution of the Partial Pixelated Images With Deep Convolutional Neural Network

Haiyi Mao; Yue Wu; Jun Li; Yun Fu

This paper considers the problem of super resolution for partially pixelated images. Partially pixelated images are increasingly common nowadays, for instance for public-safety reasons. In some special cases, such as criminal investigations, images are pixelated intentionally, and the partial pixelation makes it hard to reconstruct them, even as higher-resolution images. Hence, we propose a method based on a deep convolutional neural network, termed the depixelate super resolution CNN (DSRCNN), to handle this problem. Given a mathematical expression of the pixelation, we propose a model that reconstructs the image from the pixelation and maps it to a higher resolution by combining an adversarial autoencoder with two depixelate layers. The model is evaluated on standard public datasets in which images are pixelated randomly and, compared to state-of-the-art methods, shows very promising performance.
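The evaluation described above relies on images that are pixelated randomly. One simple way to generate such partially pixelated training pairs, shown here as an assumed preprocessing step rather than the paper's actual pipeline, is to average a randomly placed patch over small blocks:

```python
import numpy as np

def pixelate_region(img, block=8, rng=None):
    """Pixelate a randomly placed square region of an H x W x C uint8 image (sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    size = int(rng.integers(h // 4, h // 2))          # assumed range for the region size
    y = int(rng.integers(0, h - size))
    x = int(rng.integers(0, w - size))
    out = img.copy()
    side = size - size % block                        # trim so the region splits into blocks
    region = out[y:y + side, x:x + side].astype(float)
    # average over block x block cells, then repeat back to full size (mosaic effect)
    cells = region.reshape(side // block, block, side // block, block, -1).mean(axis=(1, 3))
    mosaic = np.repeat(np.repeat(cells, block, axis=0), block, axis=1)
    out[y:y + side, x:x + side] = mosaic.astype(img.dtype)
    return out

# toy usage: the (pixelated, original) pair can serve as network input and target
img = np.random.randint(0, 256, (96, 96, 3), dtype=np.uint8)
lowres_input, target = pixelate_region(img), img
```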


ACM Multimedia | 2017

RFIW: Large-Scale Kinship Recognition Challenge

Joseph P. Robinson; Ming Shao; Handong Zhao; Yue Wu; Timothy Gillis; Yun Fu

Recognizing Families In the Wild (RFIW) is organized as a Data Challenge Workshop in conjunction with ACM MM 2017, scheduled for the afternoon of October 27th. RFIW is the first large-scale kinship recognition challenge and comprises two tracks, kinship verification and family classification. In total, 12 final submissions were made. This big data challenge was made possible by our FIW dataset, which is by far the largest image collection of its kind. Potential next steps for FIW are abundant.


Proceedings of the 2017 Workshop on Recognizing Families In the Wild | 2017

Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017

Joseph P. Robinson; Ming Shao; Handong Zhao; Yue Wu; Timothy Gillis; Yun Fu

Recognizing Families In the Wild (RFIW) is a large-scale, multi-track automatic kinship recognition evaluation, supporting both kinship verification and family classification on scales much larger than ever before. It was organized as a Data Challenge Workshop hosted in conjunction with ACM Multimedia 2017 and was made possible by the largest image collection that supports kin-based vision tasks. In this manuscript, we summarize the evaluation protocols, the progress made, the technical background and performance ratings of the participating algorithms, and promising directions for both researchers and engineers to pursue next in this line of work.


Image and Vision Computing | 2018

Improving face representation learning with center invariant loss

Yue Wu; Hongfu Liu; Jun Li; Yun Fu

In this paper, we address deep face representation learning with imbalanced data. With a large number of face images of different people available for training, Convolutional Neural Networks can learn a deep face representation by classifying these people. However, uniformly distributed data across all people are hard to obtain: some people have many images while others have only a few. When learning the deep face representation, this imbalance introduces a bias towards the people with more images. Existing methods focus on intra-class and inter-class variations but do not adequately address the imbalanced data problem. To generate a robust and discriminative face representation for all people, we propose a center invariant loss which penalizes the differences between the class centers. The center invariant loss aligns the center of each person to the mean of all centers, forcing the deeply learned face features to represent all people well and to generalize better. Extensive experiments demonstrate the effectiveness of the proposed approach. Many existing methods for learning deep face representations are further improved after adding the proposed center invariant loss.


ACM Transactions on Multimedia Computing, Communications, and Applications | 2018

Deep Bidirectional Cross-Triplet Embedding for Online Clothing Shopping

Shuhui Jiang; Yue Wu; Yun Fu

In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street- and shop-domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representations across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not sufficiently address the challenges of the cross-domain clothing retrieval scenario. First, the intra-domain and cross-domain data relationships need to be considered simultaneously. Second, the numbers of matched and non-matched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. Extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms. Furthermore, we investigate two novel online shopping applications, clothing try-on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.
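The cross-triplet sampling strategy that deals with the matched/non-matched imbalance can be pictured roughly as follows: for each matched street-shop pair, only a fixed small number of non-matching shop items is drawn to form triplets, instead of enumerating every non-matched pair. The ratio below is an assumed value, not the paper's setting.

```python
import random

def sample_cross_triplets(matched_pairs, shop_items, negatives_per_pair=5, seed=0):
    """matched_pairs: list of (street_id, shop_id); shop_items: list of shop ids.
    Returns (anchor_street, positive_shop, negative_shop) triplets (sketch)."""
    rng = random.Random(seed)
    triplets = []
    for street_id, shop_id in matched_pairs:
        candidates = [s for s in shop_items if s != shop_id]
        for neg in rng.sample(candidates, k=min(negatives_per_pair, len(candidates))):
            triplets.append((street_id, shop_id, neg))
    return triplets

# toy usage: 3 matched pairs, 10 shop items, 5 negatives each -> 15 triplets
pairs = [("st_0", "sh_0"), ("st_1", "sh_1"), ("st_2", "sh_2")]
shops = [f"sh_{i}" for i in range(10)]
print(len(sample_cross_triplets(pairs, shops)))
```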

Collaboration


Dive into Yue Wu's collaborations.

Top Co-Authors

Yun Fu (Northeastern University)
Hongfu Liu (Northeastern University)
Jun Li (Northeastern University)
Ming Shao (University of Massachusetts Dartmouth)
Handong Zhao (Northeastern University)
Shuhui Jiang (Northeastern University)
Haiyi Mao (Northeastern University)