Publication


Featured research published by Bernardino Romera-Paredes.


International Conference on Computer Vision | 2015

Conditional Random Fields as Recurrent Neural Networks

Shuai Zheng; Sadeep Jayasumana; Bernardino Romera-Paredes; Vibhav Vineet; Zhizhong Su; Dalong Du; Chang Huang; Philip H. S. Torr

Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
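
The core computational idea is that one mean-field update of a dense CRF is a differentiable operation, so a fixed number of updates can be unrolled as recurrent steps. Below is a minimal sketch of that update, assuming a plain Gaussian spatial filter in place of the paper's bilateral filtering and a toy Potts compatibility matrix; it is illustrative, not the authors' implementation.

```python
# Minimal sketch of one mean-field iteration of a dense CRF, the update
# that CRF-RNN unrolls as a recurrent network. The kernel width, the label
# compatibility matrix, and the toy unaries are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_field_step(q, unary, compat, sigma=3.0):
    # Message passing: Gaussian-filter each label's marginal map
    # (a stand-in for the spatial/bilateral filtering in the paper).
    msg = np.stack([gaussian_filter(q[l], sigma) for l in range(q.shape[0])])
    # Compatibility transform: mix messages across labels.
    pairwise = np.tensordot(compat, msg, axes=1)
    # Local update: add unaries and renormalize with a softmax.
    logits = -unary - pairwise
    logits -= logits.max(axis=0, keepdims=True)
    q_new = np.exp(logits)
    return q_new / q_new.sum(axis=0, keepdims=True)

L, H, W = 3, 32, 32                      # labels, height, width
unary = np.random.rand(L, H, W)          # toy unary potentials from a "CNN"
compat = 1.0 - np.eye(L)                 # Potts-style label compatibility
q = np.exp(-unary); q /= q.sum(0, keepdims=True)
for _ in range(5):                       # unrolled iterations = RNN steps
    q = mean_field_step(q, unary, compat)
```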


International Conference on Machine Learning | 2015

An embarrassingly simple approach to zero-shot learning

Bernardino Romera-Paredes; Philip H. S. Torr

Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet is able to outperform state-of-the-art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a network of two linear layers, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approach by casting it as a domain adaptation method. In experiments carried out on three standard real datasets, we found that our approach performs significantly better than the state of the art on all of them, achieving an improvement of up to 17%.
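
The "one line of code" refers to the framework's closed-form solution for the mapping from features to attributes. Below is a hedged numpy sketch of that closed form; the dimensions, regularizer values, and random toy data are illustrative assumptions, not the paper's exact notation or tuning.

```python
# Sketch of the closed-form solution for the feature-to-attribute map,
# with toy data standing in for real features and attribute signatures.
import numpy as np

d, m, a, z = 50, 200, 10, 5          # feat dim, samples, attributes, classes
X = np.random.randn(d, m)            # training features (d x m)
Y = -np.ones((m, z)); Y[np.arange(m), np.random.randint(z, size=m)] = 1
S = np.random.randn(a, z)            # per-class attribute signatures
gamma, lam = 1.0, 1.0

# V maps features to attribute space; learned in closed form (the "one line").
V = np.linalg.solve(X @ X.T + gamma * np.eye(d),
                    X @ Y @ S.T) @ np.linalg.inv(S @ S.T + lam * np.eye(a))

# Zero-shot prediction: score a new sample against unseen-class signatures.
S_unseen = np.random.randn(a, 3)     # signatures for 3 unseen classes
x_new = np.random.randn(d)
pred = np.argmax(x_new @ V @ S_unseen)
```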


European Conference on Computer Vision | 2016

Recurrent Instance Segmentation

Bernardino Romera-Paredes; Philip H. S. Torr

Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting of an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This network is provided with a spatial memory that keeps track of which pixels have been explained, allowing it to handle occlusion. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple-person segmentation, and all state-of-the-art approaches on the Plant Phenotyping dataset for leaf counting.
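
A minimal PyTorch sketch of the sequential decoding loop described above: at each step the network emits one mask and a halting score, while a spatial memory of already-explained pixels is fed back in. The single-convolution recurrent step and toy feature map are placeholder assumptions, not the paper's architecture.

```python
# Sketch of sequential instance decoding with a spatial memory.
import torch
import torch.nn as nn

class RecurrentSegmenter(nn.Module):
    def __init__(self, feat_ch=16):
        super().__init__()
        self.step = nn.Conv2d(feat_ch + 1, feat_ch, 3, padding=1)  # fuse features + memory
        self.mask_head = nn.Conv2d(feat_ch, 1, 1)                  # per-pixel mask logits
        self.stop_head = nn.Linear(feat_ch, 1)                     # confidence to continue

    def forward(self, feats, max_instances=5):
        b, _, h, w = feats.shape
        memory = torch.zeros(b, 1, h, w)               # pixels explained so far
        masks = []
        for _ in range(max_instances):
            hstate = torch.relu(self.step(torch.cat([feats, memory], dim=1)))
            mask = torch.sigmoid(self.mask_head(hstate))
            score = torch.sigmoid(self.stop_head(hstate.mean(dim=(2, 3))))
            masks.append(mask)
            memory = torch.clamp(memory + mask, 0, 1)  # update spatial memory
            if (score < 0.5).all():                    # halt when confidence drops
                break
        return masks

feats = torch.randn(1, 16, 64, 64)                     # toy CNN feature map
masks = RecurrentSegmenter()(feats)
```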


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Transfer learning to account for idiosyncrasy in face and body expressions

Bernardino Romera-Paredes; Min S. H. Aung; Massimiliano Pontil; Nadia Bianchi-Berthouze; Amanda C. de C. Williams; Paul J. Watson

In this paper we investigate the use of the Transfer Learning (TL) framework to extract the commonalities across a set of subjects and also to learn the way each individual instantiates these commonalities, in order to model idiosyncrasy. To implement this we apply three variants of Multi Task Learning, namely Regularized Multi Task Learning (RMTL), Multi Task Feature Learning (MTFL), and Composite Multi Task Feature Learning (CMTFL). Two datasets are used: the first is a set of point-based facial expressions with annotated discrete levels of pain; the second consists of full-body motion capture data taken from subjects diagnosed with chronic lower back pain. A synchronized electromyographic signal from the lumbar paraspinal muscles is taken as a pain-related behavioural indicator. We compare our approaches with Ridge Regression, which is a comparable model without the Transfer Learning property, as well as with a subtractive method for removing idiosyncrasy. The TL-based methods show statistically significant improvements in correlation coefficients between predicted model outcomes and the target values compared to baseline models. In particular, RMTL consistently outperforms all other methods; a paired t-test between RMTL and the best performing baseline method returned a maximum p-value of 2.3 × 10⁻⁴.
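
As a concrete example of the shared-plus-idiosyncratic decomposition behind RMTL, the sketch below alternately solves for a weight vector common to all subjects and per-subject deviations under ridge penalties. The alternating solver, penalty values, and toy data are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch of RMTL-style multitask regression: w_t = w0 (shared) + v_t (per subject).
import numpy as np

def rmtl(Xs, ys, lam_shared=1.0, lam_task=1.0, iters=50):
    d = Xs[0].shape[1]
    w0 = np.zeros(d)                       # commonality across subjects
    vs = [np.zeros(d) for _ in Xs]         # per-subject idiosyncrasy
    for _ in range(iters):
        # Update each subject's deviation with the shared part fixed.
        for t, (X, y) in enumerate(zip(Xs, ys)):
            A = X.T @ X + lam_task * np.eye(d)
            vs[t] = np.linalg.solve(A, X.T @ (y - X @ w0))
        # Update the shared component given all deviations.
        A = sum(X.T @ X for X in Xs) + lam_shared * np.eye(d)
        b = sum(X.T @ (y - X @ v) for X, y, v in zip(Xs, ys, vs))
        w0 = np.linalg.solve(A, b)
    return w0, vs

# Toy data: 3 "subjects", 40 samples each, 8 features.
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((40, 8)) for _ in range(3)]
ys = [X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(40) for X in Xs]
w0, vs = rmtl(Xs, ys)
```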


IEEE Transactions on Affective Computing | 2015

Perception and Automatic Recognition of Laughter from Whole-Body Motion: Continuous and Categorical Perspectives

Harry J. Griffin; Min S. H. Aung; Bernardino Romera-Paredes; Ciaran McLoughlin; William Curran; Nadia Bianchi-Berthouze

Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users' laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low-dimensional space can describe perception of laughter “types”. Second, to investigate observers' perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate whether automatic recognition of laughter can reach the level of certainty of observers' perceptions. Results show that recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature importance analyses indicate an improvement in recognition of social laughter when localized features and nonlinear models are used.
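
The recognition setup described above can be approximated with an off-the-shelf Random Forest over per-clip movement features, as in the hedged sketch below; the feature dimensionality, clip counts, and random data are illustrative assumptions.

```python
# Minimal sketch of the classification setup: a Random Forest over
# body-movement features with per-category laughter labels (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clips, n_feats = 200, 30                   # motion-capture clips, movement features
X = rng.standard_normal((n_clips, n_feats))  # e.g. torso/limb movement statistics
labels = ["hilarious", "social", "awkward", "fake", "non-laughter"]
y = rng.choice(labels, size=n_clips)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random data

# Feature importances indicate which movement features drive recognition.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
```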


Workshop on Image Analysis for Multimedia Interactive Services | 2013

Getting RID of pain-related behaviour to improve social and self perception: A technology-based perspective

Min S. H. Aung; Bernardino Romera-Paredes; Aneesha Singh; Soo Ling Lim; Natalie Kanakam; A. C. de C. Williams; Nadia Bianchi-Berthouze

People with chronic musculoskeletal pain can experience pain-related fear of physical activity and low confidence in their own motor capabilities. These pain-related emotions and thoughts are often communicated through communicative and protective non-verbal behaviours. Studies in clinical psychology have shown that protective behaviours affect well-being not only physically and psychologically, but also socially. These behaviours appear to be used by others not just to appraise a person's physical state but also to make inferences about their personality traits, with protective pain-related behaviour evaluated more negatively than communicative behaviour. Unfortunately, people with chronic pain may have difficulty controlling the triggers of protective behaviour and often are not even aware that they exhibit such behaviour. New sensing technology capable of detecting such behaviour or its triggers could be used to support rehabilitation in this regard. In this paper we briefly discuss the above issues and present our approach to developing a rehabilitation system.


British Machine Vision Conference | 2015

Prototypical Priors: From Improving Classification to Zero-Shot Learning

Saumya Jetley; Bernardino Romera-Paredes; Sadeep Jayasumana; Philip H. S. Torr

Recent works on zero-shot learning make use of side information such as visual attributes or natural language semantics to define the relations between output visual classes and then use these relationships to draw inference on new unseen classes at test time. In a novel extension to this idea, we propose the use of visual prototypical concepts as side information. For most real-world visual object categories, it may be difficult to establish a unique prototype. However, in cases such as traffic signs, brand logos, flags, and even natural language characters, these prototypical templates are available and can be leveraged for an improved recognition performance. The present work proposes a way to incorporate this prototypical information in a deep learning framework. Using prototypes as prior information, the deepnet pipeline learns the input image projections into the prototypical embedding space subject to minimization of the final classification loss. Based on our experiments with two different datasets of traffic signs and brand logos, prototypical embeddings incorporated in a conventional convolutional neural network improve the recognition performance. Recognition accuracy on the Belga logo dataset is especially noteworthy and establishes a new state-of-the-art. In zero-shot learning scenarios, the same system can be directly deployed to draw inference on unseen classes by simply adding the prototypical information for these new classes at test time. Thus, unlike earlier approaches, testing on seen and unseen classes is handled using the same pipeline, and the system can be tuned for a trade-off of seen and unseen class performance as per task requirement. Comparison with one of the latest works in the zero-shot learning domain yields top results on the two datasets mentioned above.
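
A minimal sketch of the prototypical-embedding idea: project images into an embedding space and score them by similarity to per-class prototype embeddings, so that appending prototype rows for unseen classes at test time yields zero-shot predictions with the same pipeline. The tiny encoder and random data below are placeholder assumptions, not the paper's architecture.

```python
# Sketch of classification by similarity to class prototype embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                       # maps images to an embedding
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 32))

protos = F.normalize(torch.randn(10, 32), dim=1)   # 10 class prototype embeddings
imgs = torch.randn(4, 3, 64, 64)                   # toy image batch
emb = F.normalize(encoder(imgs), dim=1)
logits = emb @ protos.T                            # similarity to each prototype
loss = F.cross_entropy(logits, torch.randint(10, (4,)))

# Zero-shot: adding prototype rows for new classes at test time scores
# unseen classes without retraining the encoder.
```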


International Conference on Multimedia and Expo | 2014

Facial expression tracking from head-mounted, partially observing cameras

Bernardino Romera-Paredes; Cha Zhang; Zhengyou Zhang

Head-mounted displays (HMDs) have gained more and more interest recently. They can enable people to communicate with each other from anywhere, at any time. However, since most HMDs today are only equipped with cameras pointing outwards, the remote party is not able to see the user wearing the HMD. In this paper, we present a system for facial expression tracking based on head-mounted, inward-looking cameras, such that the user can be represented with animated avatars at the remote party. The main challenge is that the cameras can only observe partial faces, since they are very close to the face. We experiment with multiple machine learning algorithms to estimate facial expression parameters based on training data collected with the assistance of a Kinect depth sensor. Our results show that we can reliably track people's facial expressions even from the very limited view angles of the cameras.
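
The learning problem reduces to regressing per-frame expression parameters from features of partial face views, supervised by Kinect-derived targets. The sketch below uses ridge regression as a stand-in for the several learners the paper compares; the dimensions and data are toy assumptions.

```python
# Sketch of per-frame expression-parameter regression from partial views.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d, p = 500, 100, 48                  # frames, image features, expression params
X = rng.standard_normal((n, d))         # features from inward-looking cameras
Y = rng.standard_normal((n, p))         # Kinect-supervised expression parameters

model = Ridge(alpha=1.0).fit(X[:400], Y[:400])
pred = model.predict(X[400:])           # per-frame expression estimates
```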


International Conference on Machine Learning | 2013

Multilinear Multitask Learning

Bernardino Romera-Paredes; Hane Aung; Nadia Bianchi-Berthouze; Massimiliano Pontil


International Conference on Machine Learning | 2013

Sparse coding for multitask and transfer learning

Andreas Maurer; Massimiliano Pontil; Bernardino Romera-Paredes

Collaboration


Dive into Bernardino Romera-Paredes's collaborations.

Top Co-Authors

Andreas Maurer (University College London)

Aneesha Singh (University College London)

Hongying Meng (Brunel University London)

Min S. H. Aung (University College London)