
Publication


Featured research published by Tilo Burghardt.


Trends in Ecology & Evolution | 2013

Animal biometrics: quantifying and detecting phenotypic appearance

Hjalmar S. Kühl; Tilo Burghardt

Animal biometrics is an emerging field that develops quantified approaches for representing and detecting the phenotypic appearance of species, individuals, behaviors, and morphological traits. It operates at the intersection between pattern recognition, ecology, and information sciences, producing computerized systems for phenotypic measurement and interpretation. Animal biometrics can benefit a wide range of disciplines, including biogeography, population ecology, and behavioral research. Currently, real-world applications are gaining momentum, augmenting the quantity and quality of ecological data collection and processing. However, advancing animal biometrics will require the integration of methodologies across the scientific disciplines involved. Such efforts will be worthwhile because the great potential of this approach rests with the formal abstraction of phenomics, to create tractable interfaces between different organizational levels of life.


International Conference on Communications | 2015

A multi-modal sensor infrastructure for healthcare in a residential environment

Przemyslaw Woznowski; Xenofon Fafoutis; Terence Song; Sion Hannuna; Massimo Camplani; Lili Tao; Adeline Paiement; Evangelos Mellios; Mo Haghighi; Ni Zhu; Geoffrey S Hilton; Dima Damen; Tilo Burghardt; Majid Mirmehdi; Robert J. Piechocki; Dritan Kaleshi; Ian J Craddock

Ambient Assisted Living (AAL) systems based on sensor technologies are seen as key enablers to an ageing society. However, most approaches in this space do not provide a truly generic ambient space - one that is not only capable of assisting people with diverse medical conditions, but can also recognise the habits of healthy inhabitants, as well as those with developing medical conditions. The recognition of Activities of Daily Living (ADL) is key to the understanding and provisioning of appropriate and efficient care. However, ADL recognition is particularly difficult to achieve in multi-resident spaces, especially with single-mode (albeit carefully crafted) solutions, which only have limited capabilities. To address these limitations we propose a multi-modal system architecture for AAL remote healthcare monitoring in the home, gathering information from multiple, diverse (sensor) data sources. In this paper we report on developments made to date in various technical areas with respect to critical issues such as cost, power consumption, scalability, interoperability and privacy.
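To make the fusion problem concrete, the sketch below shows one minimal way heterogeneous sensors (wearable, ambient, video-derived) could emit records into a common time-ordered stream; all names and fields are illustrative assumptions, not the actual SPHERE message format.

```python
# Minimal sketch of a unified event record for multi-modal AAL fusion.
# All names (SensorEvent, the modality strings) are illustrative
# assumptions, not the actual SPHERE architecture.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class SensorEvent:
    timestamp: float          # seconds since epoch, on a shared clock
    modality: str             # e.g. "wearable_accel", "ambient_pir", "video_silhouette"
    room: str                 # coarse location, useful for multi-resident disambiguation
    payload: Dict[str, Any] = field(default_factory=dict)

def merge_streams(*streams):
    """Time-order events from several sensor streams for joint ADL inference."""
    return sorted((e for s in streams for e in s), key=lambda e: e.timestamp)

# Example: fuse a wearable reading with a video-derived observation.
events = merge_streams(
    [SensorEvent(100.0, "wearable_accel", "kitchen", {"magnitude": 1.2})],
    [SensorEvent(100.5, "video_silhouette", "kitchen", {"pose": "standing"})],
)
```

A shared timestamped record like this is one simple way to let downstream ADL classifiers consume evidence from any combination of modalities without per-sensor special cases.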


Springer US | 2017

SPHERE: A Sensor Platform for Healthcare in a Residential Environment

Pete R Woznowski; Alison Burrows; Tom Diethe; Xenofon Fafoutis; Jake Hall; Sion Hannuna; Massimo Camplani; Niall Twomey; Michal Kozlowski; Bo Tan; Ni Zhu; Atis Elsts; Antonis Vafeas; Adeline Paiement; Lili Tao; Majid Mirmehdi; Tilo Burghardt; Dima Damen; Peter A. Flach; Robert J. Piechocki; Ian J Craddock; George C. Oikonomou

It can be tempting to think about smart homes like one thinks about smart cities. On the surface, smart homes and smart cities comprise coherent systems enabled by similar sensing and interactive technologies. It can also be argued that both are broadly underpinned by shared goals of sustainable development, inclusive user engagement and improved service delivery. However, the home possesses unique characteristics that must be considered in order to develop effective smart home systems that are adopted in the real world [37].


IET Computer Vision | 2017

Multiple Human Tracking in RGB-D Data: A Survey

Massimo Camplani; Adeline Paiement; Majid Mirmehdi; Dima Damen; Sion Hannuna; Tilo Burghardt; Lili Tao

Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-depth devices has led to many new approaches to MHT, and many of these integrate colour and depth cues to improve each and every stage of the process. In this survey, the authors present the common processing pipeline of these methods and review their methodology based (a) on how they implement this pipeline and (b) on what role depth plays within each stage of it. They identify and introduce existing, publicly available, benchmark datasets and software resources that fuse colour and depth data for MHT. Finally, they present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.
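The common pipeline the survey organises its review around can be summarised as detection, data association, and model update, with depth available as a cue at every stage. The skeleton below is a hedged illustration of that structure; every function is a hypothetical placeholder.

```python
# Skeleton of the typical RGB-D multiple-human-tracking pipeline discussed
# in the survey. Each stage is a placeholder showing where depth cues can
# be injected alongside colour; none of this is a specific published method.

def detect_people(rgb, depth):
    # Depth can gate region proposals (e.g. plausible human heights)
    # before an appearance-based detector scores them.
    return []  # list of candidate bounding boxes

def associate(tracks, detections, rgb, depth):
    # Depth adds a cue for data association: targets at clearly different
    # depths are unlikely to be the same person despite similar colours.
    return tracks

def update_models(tracks, rgb, depth):
    # Depth-based segmentation can exclude occluders when updating each
    # track's appearance model.
    return tracks

def track_frame(tracks, rgb, depth):
    detections = detect_people(rgb, depth)
    tracks = associate(tracks, detections, rgb, depth)
    return update_models(tracks, rgb, depth)

tracks = track_frame([], None, None)  # trivially runnable skeleton
```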


International Conference on E-health Networking, Application and Services (HealthCom) | 2015

A comparative home activity monitoring study using visual and inertial sensors

Lili Tao; Tilo Burghardt; Sion Hannuna; Massimo Camplani; Adeline Paiement; Dima Damen; Majid Mirmehdi; Ian J Craddock

Monitoring actions at home can provide essential information for rehabilitation management. This paper presents a comparative study and a dataset for the fully automated, sample-accurate recognition of common home actions in the living room environment using commercial-grade, inexpensive inertial and visual sensors. We investigate the practical home-use of body-worn mobile phone inertial sensors together with an Asus Xtion RGB-Depth camera to achieve monitoring of daily living scenarios. To test this setup against realistic data, we introduce the challenging SPHERE-H130 action dataset containing 130 sequences of 13 household actions recorded in a home environment. We report automatic recognition results at maximal temporal resolution, which indicate that a vision-based approach outperforms accelerometer-based recognition from two phone-based inertial sensors by an average of 14.85% accuracy for home actions. Further, we report improved accuracy of the vision-based approach over accelerometry on particularly challenging actions as well as when generalising across subjects.
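As a hedged illustration of the "sample-accurate" evaluation style, the snippet below computes frame-level accuracy for two predicted label streams against ground truth; the labels and values are invented, not drawn from SPHERE-H130.

```python
# Hedged sketch: frame-level accuracy comparison between a vision-based
# and an accelerometer-based action recogniser. The labels are invented;
# SPHERE-H130 uses its own set of 13 household action classes.
import numpy as np

def frame_accuracy(predicted, ground_truth):
    predicted, ground_truth = np.asarray(predicted), np.asarray(ground_truth)
    return float(np.mean(predicted == ground_truth))

truth  = ["sit", "sit", "stand", "walk", "walk", "drink"]
vision = ["sit", "sit", "stand", "walk", "stand", "drink"]
accel  = ["sit", "stand", "stand", "walk", "stand", "sit"]

print(f"vision:   {frame_accuracy(vision, truth):.2%}")  # 83.33%
print(f"inertial: {frame_accuracy(accel, truth):.2%}")   # 50.00%
```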


Journal of Real-Time Image Processing | 2016

DS-KCF: a real-time tracker for RGB-D data

Sion Hannuna; Massimo Camplani; Jake Hall; Majid Mirmehdi; Dima Damen; Tilo Burghardt; Adeline Paiement; Lili Tao

We propose an RGB-D single-object tracker, built upon the extremely fast RGB-only KCF tracker, that is able to exploit depth information to handle scale changes, occlusions, and shape changes. Despite the computational demands of the extra functionalities, we still achieve real-time performance rates of 35–43 fps in MATLAB and 187 fps in our C++ implementation. Our proposed method includes fast depth-based target object segmentation that enables: (1) efficient scale change handling within the KCF core functionality in the Fourier domain, (2) the detection of occlusions by temporal analysis of the target's depth distribution, and (3) the estimation of a target's change of shape through the temporal evolution of its segmented silhouette. Finally, we provide an in-depth analysis of the factors affecting the throughput and precision of our proposed tracker and perform extensive comparative analysis. Both the MATLAB and C++ versions of our software are available in the public domain.
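The occlusion test in step (2) can be illustrated with a hedged sketch: if a markedly nearer depth mode suddenly dominates the tracked region, an occluder has likely entered the box. The margin and occupancy threshold below are illustrative assumptions, not the published DS-KCF parameters.

```python
# Hedged sketch of depth-based occlusion detection in the spirit of DS-KCF:
# flag an occlusion when a substantial fraction of the tracked region sits
# markedly nearer than the target's established depth. The 0.3 m margin and
# 40% occupancy threshold are illustrative assumptions only.
import numpy as np

def is_occluded(depth_roi, target_depth, margin=0.3, occluder_fraction=0.4):
    """depth_roi: 2-D array of depths (metres) inside the tracked box.
    target_depth: depth of the target's dominant mode from earlier frames."""
    valid = depth_roi[np.isfinite(depth_roi) & (depth_roi > 0)]
    if valid.size == 0:
        return True  # no depth support at all: treat as occluded/lost
    nearer = np.mean(valid < (target_depth - margin))
    return nearer > occluder_fraction

roi = np.full((40, 30), 2.5)   # target plane at 2.5 m
roi[:, :15] = 1.6              # an occluder covers half the box at 1.6 m
print(is_occluded(roi, target_depth=2.5))  # True
```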


American Journal of Primatology | 2017

Automated face detection for occurrence and occupancy estimation in chimpanzees

Anne Sophie Crunchant; Monika Egerer; Alexander Loos; Tilo Burghardt; Klaus Zuberbühler; Katherine Corogenes; Vera Leinert; Lars Kulik; Hjalmar S. Kühl

Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances that have impacted the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2–4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, at a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step for transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to a lack of suitable face views can be easily overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions. This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage.
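The reported detection figures map onto standard confusion-matrix quantities. The sketch below shows the computation with invented per-video counts chosen to reproduce the quoted 77% recall and 2.8% false alarm rate.

```python
# Hedged sketch: recall and false alarm rate for video-level chimpanzee
# detection. The counts are invented for illustration; only the resulting
# percentages match the figures quoted in the abstract.
def recall(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

def false_alarm_rate(false_positives, true_negatives):
    return false_positives / (false_positives + true_negatives)

tp, fn = 77, 23      # chimp present: detected vs missed
fp, tn = 28, 972     # chimp absent: falsely flagged vs correctly rejected

print(f"recall: {recall(tp, fn):.1%}")                   # 77.0%
print(f"false alarms: {false_alarm_rate(fp, tn):.1%}")   # 2.8%
```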


International Conference on Image Processing | 2015

Towards automating visual in-field monitoring of crop health

David P. Gibson; Tilo Burghardt; Neill W. Campbell; Nishan Canagarajah

We present an application that demonstrates a proof-of-concept system for automated in-the-field monitoring of disease in wheat crops. Such in-situ applications are required to be robust in the presence of clutter, to provide rapid and accurate analysis, and to operate at scale. We propose a processing pipeline that detects key wheat diseases in cluttered field imagery. First, we describe and evaluate a high-dimensional texture descriptor combined with a randomised forest approach for automated primary leaf recognition. Second, we show that a combined nearest-neighbour classifier and voting system applied to segmented leaf regions can robustly determine the presence and type of disease. The system has been tested on a real-world database of images of wheat leaves captured in the field using a standard smart phone.
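A hedged sketch of this two-stage design, using scikit-learn stand-ins: a random forest separates leaf from clutter on texture features, then a nearest-neighbour classifier votes over segmented leaf regions to name the disease. The feature vectors here are random placeholders, not the paper's high-dimensional texture descriptor.

```python
# Hedged sketch of the two-stage pipeline described above. Random features
# stand in for the paper's texture descriptor; class labels are invented.
import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stage 1: leaf vs background clutter from per-patch texture features.
X_patch = rng.normal(size=(200, 64))
y_patch = rng.integers(0, 2, size=200)          # 1 = leaf, 0 = clutter
leaf_rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_patch, y_patch)

# Stage 2: disease type per leaf region, then a majority vote per image.
X_region = rng.normal(size=(120, 64))
y_region = rng.integers(0, 3, size=120)         # three hypothetical disease classes
disease_knn = KNeighborsClassifier(n_neighbors=5).fit(X_region, y_region)

regions_in_image = rng.normal(size=(7, 64))     # 7 segmented regions of one image
votes = disease_knn.predict(regions_in_image)
print("image-level disease:", Counter(votes).most_common(1)[0][0])
```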


International Journal of Computer Vision | 2017

Automated Visual Fin Identification of Individual Great White Sharks

Benjamin J Hughes; Tilo Burghardt

This paper discusses the automated visual identification of individual great white sharks from dorsal fin imagery. We propose a computer vision photo ID system and report recognition results over a database of thousands of unconstrained fin images. To the best of our knowledge this line of work establishes the first fully automated contour-based visual ID system in the field of animal biometrics. The approach put forward appreciates shark fins as textureless, flexible and partially occluded objects with an individually characteristic shape. In order to recover animal identities from an image we first introduce an open contour stroke model, which extends multi-scale region segmentation to achieve robust fin detection. Secondly, we show that combinatorial, scale-space selective fingerprinting can successfully encode fin individuality. We then measure the species-specific distribution of visual individuality along the fin contour via an embedding into a global ‘fin space’. Exploiting this domain, we finally propose a non-linear model for individual animal recognition and combine all approaches into a fine-grained multi-instance framework. We provide a system evaluation, compare results to prior work, and report performance and properties in detail.
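A much-simplified, hedged stand-in for the fingerprinting idea: describe a fin by curvature sampled along its resampled contour at several smoothing scales, then identify by nearest neighbour over those descriptors. This illustrates the flavour of scale-space contour encoding only; it is not the paper's combinatorial fingerprinting or 'fin space' embedding.

```python
# Hedged simplification of contour-based fin identification: curvature
# sampled along a resampled open contour at several Gaussian smoothing
# scales, matched by nearest neighbour. All parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_fingerprint(contour, scales=(2, 4, 8), samples=64):
    """contour: (N, 2) array of x,y points along an open fin edge."""
    t = np.linspace(0, 1, len(contour))          # resample for comparability
    ts = np.linspace(0, 1, samples)
    x = np.interp(ts, t, contour[:, 0])
    y = np.interp(ts, t, contour[:, 1])
    feats = []
    for s in scales:
        xs, ys = gaussian_filter1d(x, s), gaussian_filter1d(y, s)
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        k = (dx * ddy - dy * ddx) / np.clip((dx**2 + dy**2) ** 1.5, 1e-9, None)
        feats.append(k)
    return np.concatenate(feats)

def identify(query, gallery):
    """gallery: dict of individual_id -> fingerprint."""
    return min(gallery, key=lambda i: np.linalg.norm(query - gallery[i]))

# Toy usage with a synthetic wavy arc standing in for a fin contour.
theta = np.linspace(0, np.pi, 200)
fin = np.stack([np.cos(theta), np.sin(theta) + 0.1 * np.sin(5 * theta)], axis=1)
gallery = {"shark_A": curvature_fingerprint(fin)}
print(identify(curvature_fingerprint(fin), gallery))  # shark_A
```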


International Conference on Image Processing | 2016

Automatic individual Holstein Friesian cattle identification via selective local coat pattern matching in RGB-D imagery

William Andrew; Sion Hannuna; Neill W. Campbell; Tilo Burghardt

The objective of this paper is the fully automated visual identification of individual Holstein Friesian cattle from dorsal RGB-D imagery taken in real-world farm environments. Autonomous and non-intrusive cattle identification could provide an essential tool for economically viable machinised farming analytics, social monitoring, cattle traceability, food production management and more. We contribute a dataset and propose a system that can reliably derive animal identities from top-down stills by first depth-segmenting animals in RGB-D frames, and then extracting a subset of local ASIFT coat descriptors predicted as sufficiently individually distinctive across the species. Predictions are generated by a support vector machine (SVM) with radial basis function (RBF) kernels, operating on the ASIFT descriptor structure. We show that learning such a species-specific ID-model is effective, and we demonstrate robustness to poor or complex input image conditions, such as more than one cow present or bad depth segmentation. The proposed system yields 97% identification accuracy over testing on approximately 86,000 image pair comparisons covering a herd of 40 individuals from the FriesianCattle2015 dataset.
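The descriptor-selection step can be sketched as follows: an RBF-kernel SVM, trained offline on descriptors labelled as distinctive or not, filters local coat descriptors before matching. The 128-dimensional features and the 0.5 threshold below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of the descriptor-selection step: an RBF-kernel SVM scores
# local coat descriptors by predicted distinctiveness, and only confident
# ones are kept for matching. Feature size and threshold are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Offline training: descriptors labelled by whether they proved
# individually distinctive across the herd (labels invented here).
X_train = rng.normal(size=(500, 128))
y_train = rng.integers(0, 2, size=500)
selector = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

def keep_distinctive(descriptors, threshold=0.5):
    """Return only the descriptors the SVM predicts as distinctive."""
    p = selector.predict_proba(descriptors)[:, 1]
    return descriptors[p > threshold]

coat_descriptors = rng.normal(size=(40, 128))  # e.g. ASIFT-like features of one cow
print(keep_distinctive(coat_descriptors).shape)
```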

Collaboration


Dive into Tilo Burghardt's collaboration.

Top Co-Authors

Lili Tao

University of Bristol
