Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Darren Cosker is active.

Publication


Featured research published by Darren Cosker.


Expert Systems With Applications | 2001

An expert system for multi-criteria decision making using Dempster Shafer theory

Malcolm James Beynon; Darren Cosker; A. David Marshall

This paper outlines a new software system we have developed that utilises a recently developed method (DS/AHP), which combines aspects of the Analytic Hierarchy Process (AHP) with Dempster–Shafer Theory for the purpose of multi-criteria decision making (MCDM). The method allows a decision maker a considerably greater level of control (compared with conventional AHP methods) over the judgements made in identifying levels of favouritism towards groups of decision alternatives. More specifically, DS/AHP allows for additional analysis, for example of the levels of uncertainty and conflict in the decisions made. In this paper an expert system is introduced which enables the application of DS/AHP to MCDM. The expert system further illustrates the usability of DS/AHP, including new aspects of analysis and representation offered through using this method. The principal application used to illustrate this expert system is that of identifying those residential properties to visit (view), from those advertised for sale through a real estate brokerage firm.
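The Dempster–Shafer evidence combination at the heart of DS/AHP can be illustrated with a minimal sketch of Dempster's rule of combination. The mass functions and property names below are hypothetical, not taken from the paper:

```python
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass that would fall on the empty set
    # Normalise by the non-conflicting mass (Dempster's rule)
    return {a: m / (1.0 - conflict) for a, m in combined.items()}, conflict

# Two hypothetical sources of evidence over properties {p1, p2, p3}
theta = frozenset({"p1", "p2", "p3"})          # the frame of discernment
m1 = {frozenset({"p1"}): 0.6, theta: 0.4}      # evidence favouring p1
m2 = {frozenset({"p1", "p2"}): 0.7, theta: 0.3}  # evidence favouring {p1, p2}
m12, k = combine(m1, m2)
```

The returned conflict mass `k` is the kind of quantity DS/AHP exposes for the "levels of uncertainty and conflict" analysis the abstract mentions.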


International Conference on Acoustics, Speech, and Signal Processing | 2005

Video assisted speech source separation

Wenwu Wang; Darren Cosker; Yulia Hicks; S. Saneit; Jonathon A. Chambers

We investigate the problem of integrating the complementary audio and visual modalities for speech separation. Rather than using independence criteria suggested in most blind source separation (BSS) systems, we use visual features from a video signal as additional information to optimize the unmixing matrix. We achieve this by using a statistical model characterizing the nonlinear coherence between audio and visual features as a separation criterion for both instantaneous and convolutive mixtures. We acquire the model by applying the Bayesian framework to the fused feature observations based on a training corpus. We point out several key existing challenges to the success of the system. Experimental results verify the proposed approach, which outperforms the audio-only separation system in a noisy environment, and also provides a solution to the permutation problem.
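The instantaneous mixing model that the unmixing matrix acts on can be sketched in a few lines. The signals and mixing matrix below are illustrative; in the paper W is optimised from audio-visual coherence rather than computed by inverting a known A:

```python
import numpy as np

# Two toy source signals (stand-ins for speech and interference)
t = np.linspace(0, 1, 1000)
S = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])

A = np.array([[1.0, 0.6],
              [0.5, 1.0]])   # hypothetical instantaneous mixing matrix
X = A @ S                    # the observed microphone mixtures

# With a perfect unmixing matrix W = A^{-1}, Y = W X recovers the sources;
# the paper instead searches for W using the audio-visual coherence model.
W = np.linalg.inv(A)
Y = W @ X
```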


International Conference on Computer Vision | 2011

A FACS valid 3D dynamic action unit database with applications to 3D dynamic morphable facial modeling

Darren Cosker; Eva Krumhuber; Adrian Hilton

This paper presents the first dynamic 3D FACS data set for facial expression research, containing 10 subjects performing between 19 and 97 different AUs both individually and in combination. In total the corpus contains 519 AU sequences. The peak expression frame of each sequence has been manually FACS coded by certified FACS experts. This provides a ground truth for 3D FACS based AU recognition systems. In order to use this data, we describe the first framework for building dynamic 3D morphable models. This includes a novel Active Appearance Model (AAM) based 3D facial registration and mesh correspondence scheme. The approach overcomes limitations in existing methods that require facial markers or are prone to optical flow drift. We provide the first quantitative assessment of such 3D facial mesh registration techniques and show how our proposed method provides more reliable correspondence.


IEEE Transactions on Visualization and Computer Graphics | 2013

Water Surface Modeling from a Single Viewpoint Video

Chuan Li; David Pickup; Thomas Saunders; Darren Cosker; A. David Marshall; Peter M. Hall; Philip J. Willis

We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: it automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
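The shallow water idea can be sketched as a simple height-field update. This is a generic explicit scheme, not the authors' formulation, and the grid size and constants are arbitrary:

```python
import numpy as np

def step(h, v, c=0.25, damping=0.99):
    """One explicit update of a height-field water surface.
    h: heights, v: vertical velocities; c controls wave speed."""
    # Discrete Laplacian: sum of the four neighbours minus 4x the cell
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0)
         + np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)
    v = (v + c * lap) * damping
    return h + v, v

h = np.zeros((64, 64))
v = np.zeros((64, 64))
h[32, 32] = 1.0                 # a single drop disturbs the surface
for _ in range(100):
    h, v = step(h, v)           # ripples propagate outward and decay
```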


British Machine Vision Conference | 2008

Facial Dynamics in Biometric Identification

Lanthao Truong-Benedikt; Vedran Kajić; Darren Cosker; Paul L. Rosin; Andrew David Marshall

This paper investigates the use of facial gestures for identity recognition. This is the first time that such a quantitative evaluation has been conducted, comparing analyses of 2D versus 3D dynamic data of verbal and nonverbal facial actions. Suitable data processing and feature extraction methods are examined; then a number of pattern matching techniques, including the Fréchet distance, Correlation Coefficients, Hidden Markov Models, Dynamic Time Warping and its derived forms, are compared, in light of which an improved algorithm is proposed. Finally, a face recognition prototype using facial dynamics is built, achieving an Equal Error Rate (EER) of 1.6%.
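Dynamic Time Warping, one of the matching techniques compared, can be sketched for 1-D feature trajectories. The toy sequences below are illustrative, not facial data:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    via the standard quadratic dynamic-programming recurrence."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# The same gesture performed at two speeds aligns with zero cost
slow = [0, 0, 1, 2, 2, 1, 0, 0]
fast = [0, 1, 2, 1, 0]
```

The tolerance to speed differences is what makes DTW-style matching attractive for facial actions, whose timing varies between performances.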


Computer Vision and Pattern Recognition | 2013

Optical Flow Estimation Using Laplacian Mesh Energy

Wenbin Li; Darren Cosker; Matthew Brown; Rui Tang

In this paper we present a novel non-rigid optical flow algorithm for dense image correspondence and non-rigid registration. The algorithm uses a unique Laplacian Mesh Energy term to encourage local smoothness whilst simultaneously preserving non-rigid deformation. Laplacian deformation approaches have become popular in graphics research as they enable mesh deformations to preserve local surface shape. In this work we propose a novel Laplacian Mesh Energy formula to ensure such sensible local deformations between image pairs. We express this wholly within the optical flow optimization, and show its application in a novel coarse-to-fine pyramidal approach. Our algorithm achieves state-of-the-art performance in all trials on the Garg et al. dataset, and top-tier performance on the Middlebury evaluation.
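A simplified version of a Laplacian smoothness term can be sketched with umbrella-operator Laplacian coordinates, which encode each vertex relative to its neighbours. The 2-D square "mesh" is a toy stand-in, and the paper's actual energy formulation is richer than this:

```python
import numpy as np

def laplacian_coords(verts, neighbours):
    """Umbrella-operator Laplacian coordinates: each vertex minus the
    mean of its 1-ring neighbours (a local-shape descriptor)."""
    return np.array([v - verts[nbrs].mean(axis=0)
                     for v, nbrs in zip(verts, neighbours)])

def laplacian_energy(rest, deformed, neighbours):
    """Penalise deformations that change local shape: a simplified
    stand-in for the paper's Laplacian Mesh Energy term."""
    d = laplacian_coords(deformed, neighbours) - laplacian_coords(rest, neighbours)
    return float((d ** 2).sum())

# A 4-vertex 2-D "mesh": a unit square with ring connectivity
rest = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
neighbours = [[1, 3], [0, 2], [1, 3], [0, 2]]

translated = rest + np.array([2.0, -1.0])   # rigid translation
stretched = rest * np.array([3.0, 1.0])     # non-uniform stretch
```

A rigid translation leaves the Laplacian coordinates unchanged (zero energy), while a stretch distorts local shape and is penalised, which is the behaviour the smoothness term relies on.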


Applied Perception in Graphics and Visualization | 2010

Perception of linear and nonlinear motion properties using a FACS validated 3D facial model

Darren Cosker; Eva Krumhuber; Adrian Hilton

In this paper we present the first Facial Action Coding System (FACS) valid model to be based on dynamic 3D scans of human faces for use in graphics and psychological research. The model consists of FACS Action Unit (AU) based parameters and has been independently validated by FACS experts. Using this model, we explore the perceptual differences between linear facial motions -- represented by a linear blend shape approach -- and real facial motions that have been synthesized through the 3D facial model. Through numerical measures and visualizations, we show that this latter type of motion is geometrically nonlinear in terms of its vertices. In experiments, we explore the perceptual benefits of nonlinear motion for different AUs. Our results are insightful for designers of animation systems both in the entertainment industry and in scientific research. They reveal a significant overall benefit to using captured nonlinear geometric vertex motion over linear blend shape motion. However, our findings suggest that not all motions need to be animated nonlinearly. The advantage may depend on the type of facial action being produced and the phase of the movement.
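The linear blend shape baseline in the study amounts to moving every vertex along a straight line toward a target shape. A minimal sketch, with made-up vertex data:

```python
import numpy as np

def blend(neutral, targets, weights):
    """Linear blend-shape model: the face is the neutral mesh plus a
    weighted sum of per-target displacement vectors. Every vertex
    travels a straight line, which is the linear motion the perceptual
    study compares against captured (nonlinear) motion."""
    out = neutral.copy()
    for w, tgt in zip(weights, targets):
        out += w * (tgt - neutral)
    return out

# Toy 3-vertex face with one hypothetical AU target shape
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
smile = np.array([[0.0, 0.2, 0.0], [1.1, 0.1, 0.0], [0.0, 1.0, 0.0]])

half = blend(neutral, [smile], [0.5])   # halfway along the linear path
```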


Workshop on Applications of Computer Vision | 2014

Robust optical flow estimation for continuous blurred scenes using RGB-motion imaging and directional filtering

Wenbin Li; Yang Chen; JeeHang Lee; Gang Ren; Darren Cosker

Optical flow estimation is a difficult task given real-world video footage with camera and object blur. In this paper, we combine a 3D pose and position tracker with an RGB sensor, allowing us to capture video footage together with 3D camera motion. We show that the additional camera motion information can be embedded into a hybrid optical flow framework by interleaving an iterative blind deconvolution and warping based minimization scheme. Such a hybrid framework significantly improves the accuracy of optical flow estimation in scenes with strong blur. Our approach yields improved overall performance against three state-of-the-art baseline methods applied to our proposed ground truth sequences, as well as in several other real-world sequences captured by our novel imaging system.
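The deconvolution half of the interleaved scheme can be illustrated with a plain (non-blind) Richardson-Lucy update on a 1-D signal. The paper interleaves blind deconvolution with warping-based flow estimation, which this sketch does not attempt; the signal and kernel are synthetic:

```python
import numpy as np

def richardson_lucy(blurred, kernel, iters=50, eps=1e-12):
    """Non-blind Richardson-Lucy deconvolution of a 1-D signal.
    The paper's framework uses *blind* deconvolution interleaved with
    warping; this shows only the deconvolution ingredient."""
    estimate = np.full_like(blurred, blurred.mean())
    flipped = kernel[::-1]
    for _ in range(iters):
        reblurred = np.convolve(estimate, kernel, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate = estimate * np.convolve(ratio, flipped, mode="same")
    return estimate

x = np.zeros(32)
x[16] = 1.0                             # sharp spike (the "true" signal)
k = np.ones(3) / 3.0                    # box blur kernel
blurred = np.convolve(x, k, mode="same")
restored = richardson_lucy(blurred, k)  # sharper than the blurred input
```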


British Machine Vision Conference | 2014

Interactive Shadow Removal and Ground Truth for Variable Scene Categories

Han Gong; Darren Cosker

We present an interactive, robust and high quality method for fast shadow removal. To perform detection we use an on-the-fly learning approach guided by two rough user inputs for the pixels of the shadow and the lit area. From this we derive a fusion image that magnifies shadow boundary intensity change due to illumination variation. After detection, we perform shadow removal by registering the penumbra to a normalised frame which allows us to efficiently estimate non-uniform shadow illumination changes, resulting in accurate and robust removal. We also present the first reliable, validated and multi-scene category ground truth for shadow removal algorithms which overcomes limitations in existing data sets -- such as inconsistencies between shadow and shadow-free images and limited variations of shadows. Using our data, we perform the most thorough comparison of state of the art shadow removal methods to date. Our algorithm outperforms the state of the art, and we supply our P-code and evaluation data and scripts to encourage future open comparisons.
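The relighting step can be caricatured by a constant per-channel scaling of the umbra. This is a deliberately crude stand-in (the paper estimates non-uniform illumination change across the penumbra), run on synthetic data:

```python
import numpy as np

def remove_shadow(img, shadow_mask):
    """Crude umbra relighting: scale shadowed pixels so their per-channel
    mean matches the lit region's mean. The paper instead registers the
    penumbra to a normalised frame and estimates spatially varying
    illumination change; this is only the simplest possible version."""
    out = img.astype(float).copy()
    lit_mean = out[~shadow_mask].mean(axis=0)    # (3,) per channel
    dark_mean = out[shadow_mask].mean(axis=0)
    out[shadow_mask] *= lit_mean / dark_mean
    return out

# Synthetic scene: uniform 0.8 surface with a darker shadow patch
img = np.full((8, 8, 3), 0.8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
img[mask] *= 0.5                                 # cast the shadow
result = remove_shadow(img, mask)
```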


ACM Symposium on Applied Perception | 2016

User, metric, and computational evaluation of foveated rendering methods

Jose A. Iglesias-Guitian; Charalampos Koniaris; Bochang Moon; Darren Cosker; Kenny Mitchell

Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze, at a lower computational cost, while still maintaining the user's perception of a full quality render. We consider three foveated rendering methods and propose practical rules of thumb for each method to achieve significant performance gains in real-time rendering frameworks. Additionally, we contribute a new metric for perceptual foveated rendering quality building on HDR-VDP2 that, unlike traditional metrics, considers the loss of fidelity in peripheral vision by lowering the contrast sensitivity of the model with visual eccentricity based on the Cortical Magnification Factor (CMF). The new metric is parameterized on user-test data generated in this study. Finally, we run our metric on a novel foveated rendering method for real-time immersive 360° content with motion parallax.
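The eccentricity-dependent attenuation can be sketched with one standard parameterisation of the Cortical Magnification Factor, M(E) = M0 / (1 + E / E2). The constants below are illustrative textbook-style values, not the ones fitted in the paper:

```python
def cortical_magnification(ecc_deg, m0=7.99, e2=3.67):
    """One common parameterisation of the Cortical Magnification Factor,
    M(E) = M0 / (1 + E / E2), in mm of cortex per degree of visual angle.
    m0 and e2 here are illustrative, not the paper's fitted values."""
    return m0 / (1.0 + ecc_deg / e2)

def sensitivity_scale(ecc_deg):
    """Relative contrast sensitivity at eccentricity E, normalised to
    the fovea: the kind of attenuation the proposed metric applies
    before comparing foveated renders against references."""
    return cortical_magnification(ecc_deg) / cortical_magnification(0.0)
```

Sensitivity is 1 at the fovea and falls off smoothly with eccentricity, so artefacts in the periphery are weighted far less than foveal ones.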

Collaboration


Dive into Darren Cosker's collaboration.

Top Co-Authors

Wenbin Li (Engineering and Physical Sciences Research Council)

Eva Krumhuber (Jacobs University Bremen)

Han Gong (University of East Anglia)