Joey Tianyi Zhou
Nanyang Technological University
Publications
Featured research published by Joey Tianyi Zhou.
European Conference on Computer Vision | 2014
Ping Liu; Joey Tianyi Zhou; Ivor W. Tsang; Zibo Meng; Shizhong Han; Yan Tong
Studies in psychology show that not all facial regions are equally important for recognizing facial expressions, and that different regions contribute differently to different expressions. Motivated by this, a novel framework, named Feature Disentangling Machine (FDM), is proposed to effectively select active features characterizing facial expressions. More importantly, the FDM aims to disentangle these selected features into non-overlapping groups, in particular, common features that are shared across different expressions and expression-specific features that are discriminative only for a target expression. Specifically, the FDM integrates sparse support vector machine and multi-task learning in a unified framework, where a novel loss function and a set of constraints are formulated to precisely control the sparsity and naturally disentangle active features. Extensive experiments on two well-known facial expression databases demonstrate that the FDM outperforms the state-of-the-art methods for facial expression analysis. Moreover, the FDM achieves an impressive performance in a cross-database validation, which demonstrates the generalization capability of the selected features.
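The abstract describes the FDM as a combination of a sparse support vector machine and multi-task learning that splits selected features into a common group and expression-specific groups. The following is a minimal Python sketch of that idea, using a hinge loss with one shared weight vector plus per-expression weight vectors and simple L1 subgradient penalties; the function name, dimensions, and optimization scheme are illustrative assumptions, not the paper's exact loss function or constraints.

```python
import numpy as np

def fdm_sketch(X, Y, lam_common=0.1, lam_specific=0.1, lr=0.01, epochs=200):
    """Toy multi-task sparse linear model in the spirit of the FDM abstract:
    per-expression weights are split into a shared ('common') part and an
    expression-specific part, each pushed towards sparsity with an L1 penalty.
    This is a simplified sketch, not the paper's formulation.

    X : (n, d) features; Y : (n, T) labels in {-1, +1}, one column per expression.
    """
    n, d = X.shape
    T = Y.shape[1]
    w_common = np.zeros(d)       # features shared across expressions
    W_spec = np.zeros((T, d))    # expression-specific features

    for _ in range(epochs):
        for t in range(T):
            w = w_common + W_spec[t]
            margins = Y[:, t] * (X @ w)
            mask = margins < 1                                     # hinge-loss violators
            grad = -(Y[mask, t][:, None] * X[mask]).sum(axis=0) / n
            # L1 subgradient steps encourage sparse (and largely disjoint) feature groups
            w_common -= lr * (grad + lam_common * np.sign(w_common))
            W_spec[t] -= lr * (grad + lam_specific * np.sign(W_spec[t]))
    return w_common, W_spec

# usage on random data standing in for facial-expression features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
Y = np.sign(rng.normal(size=(200, 3)))
w_c, W_s = fdm_sketch(X, Y)
print(w_c.shape, W_s.shape)
```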
Asian Conference on Computer Vision | 2014
Sharath Chandra Guntuku; Joey Tianyi Zhou; Sujoy Roy; Lin Weisi; Ivor W. Tsang
Automatically understanding and modeling a user's liking for an image is a challenging problem, because the relationship between an image's features (even semantic ones extracted by existing tools, viz. faces, objects, etc.) and a user's 'likes' is non-linear and influenced by several subtle factors. This work presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows semantic knowledge to be transferred between the two modalities. Feature selection is also performed before learning the deep representation, to identify the features that are important for a user to like an image. The proposed representation is then shown to be effective in learning a model of a user's image 'likes' from a collection of images that user has 'liked'. On a collection of images 'liked' by users (from Flickr), the proposed deep representation outperforms state-of-the-art low-level features used for modeling user 'likes' by around 15–20%.
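The abstract describes a bi-modal representation that encodes visual content and tags separately, with a mapping step that transfers semantic knowledge between the two modalities before predicting 'likes'. The PyTorch sketch below illustrates that structure with a toy two-branch encoder, a text-to-visual mapping layer, and a binary 'like' head; all layer sizes, names, and the alignment term are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class BiModalLikeModel(nn.Module):
    """Toy two-branch network loosely following the abstract: visual features
    and tag (text) features are each encoded, a mapping layer ties the textual
    representation to the visual one so semantic knowledge can transfer, and a
    small head predicts whether the user 'likes' the image. Dimensions and the
    losses are assumptions, not the paper's architecture."""
    def __init__(self, vis_dim=512, txt_dim=300, shared_dim=128):
        super().__init__()
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, shared_dim), nn.ReLU())
        self.txt_enc = nn.Sequential(nn.Linear(txt_dim, shared_dim), nn.ReLU())
        self.map_txt_to_vis = nn.Linear(shared_dim, shared_dim)  # cross-modal mapping step
        self.like_head = nn.Linear(2 * shared_dim, 1)

    def forward(self, vis, txt):
        v = self.vis_enc(vis)
        t = self.map_txt_to_vis(self.txt_enc(txt))
        logit = self.like_head(torch.cat([v, t], dim=-1))
        # alignment term: encourage the mapped text code to match the visual code
        align = ((v - t) ** 2).mean()
        return logit.squeeze(-1), align

# usage on random tensors standing in for image features and tag embeddings
model = BiModalLikeModel()
vis = torch.randn(4, 512)
txt = torch.randn(4, 300)
logit, align = model(vis, txt)
loss = nn.functional.binary_cross_entropy_with_logits(logit, torch.ones(4)) + 0.1 * align
loss.backward()
```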
National Conference on Artificial Intelligence | 2014
Joey Tianyi Zhou; Sinno Jialin Pan; Ivor W. Tsang; Yan Yan
International Conference on Artificial Intelligence and Statistics | 2014
Joey Tianyi Zhou; Ivor W. Tsang; Sinno Jialin Pan; Mingkui Tan
National Conference on Artificial Intelligence | 2016
Joey Tianyi Zhou; Sinno Jialin Pan; Ivor W. Tsang; Shen-Shyang Ho
Asian Conference on Machine Learning | 2012
Joey Tianyi Zhou; Sinno Jialin Pan; Qi Mao; Ivor W. Tsang
International Joint Conference on Artificial Intelligence | 2016
Joey Tianyi Zhou; Xinxing Xu; Sinno Jialin Pan; Ivor W. Tsang; Zheng Qin; Rick Siow Mong Goh
IEEE Transactions on Image Processing | 2016
Sharath Chandra Guntuku; Joey Tianyi Zhou; Sujoy Roy; Weisi Lin; Ivor W. Tsang
IEEE Transactions on Neural Networks | 2018
Joey Tianyi Zhou; Heng Zhao; Xi Peng; Meng Fang; Zheng Qin; Rick Siow Mong Goh
arXiv: Learning | 2016
Xinxing Xu; Joey Tianyi Zhou; Ivor W. Tsang; Zheng Qin; Rick Siow Mong Goh; Yong Liu