Publication


Featured research published by Joris IJsselmuiden.


Ambient Intelligence | 2012

Automatic Behavior Understanding in Crisis Response Control Rooms

Joris IJsselmuiden; Ann-Kristin Grosselfinger; David Münch; Michael Arens; Rainer Stiefelhagen

This paper addresses the problem of automatic behavior understanding in smart environments. Automatic behavior understanding is defined as the generation of semantic event descriptions from machine perception. Outputs from available perception modalities can be fused into a world model with a single spatiotemporal reference frame. The fused world model can then be used as input by a reasoning engine that generates semantic event descriptions. Instead of actual machine perception, we use a newly developed annotation tool to generate hypothetical perception outputs. The applied reasoning engine is based on fuzzy metric temporal logic (FMTL) and situation graph trees (SGTs), promising and universally applicable tools for automatic behavior understanding. The presented case study is automatic behavior report generation for staff training purposes in crisis response control rooms. Various group formations and interaction patterns are deduced from person tracks, object information, and information about gestures, body pose, and speech activity.
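
To illustrate the pipeline described above (fused world model in, graded semantic events out), here is a minimal, hypothetical sketch of a fuzzy predicate with simple temporal aggregation. The paper's FMTL/SGT engine is rule-based and far more general; the predicate names, thresholds, and toy world model below are assumptions for illustration only.

```python
import math

def fuzzy_close(pos_a, pos_b, near=1.0, far=3.0):
    """Degree in [0, 1] to which two tracked people are 'close' (illustrative thresholds)."""
    d = math.dist(pos_a, pos_b)
    if d <= near:
        return 1.0
    if d >= far:
        return 0.0
    return (far - d) / (far - near)

def degree_conversing(track_a, track_b, speech_a, speech_b):
    """Temporal aggregation: the pair is 'conversing' to the degree that they are
    close over the window AND at least one of them shows speech activity."""
    per_frame = [
        min(fuzzy_close(a, b), max(sa, sb))
        for a, b, sa, sb in zip(track_a, track_b, speech_a, speech_b)
    ]
    return sum(per_frame) / len(per_frame)   # mean as a simple temporal operator

# Toy world model: two person tracks (x, y in metres) and speech-activity values per frame.
track_a = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1)]
track_b = [(0.8, 0.0), (0.7, 0.1), (0.6, 0.1)]
speech_a = [1.0, 1.0, 0.0]
speech_b = [0.0, 0.0, 1.0]

if degree_conversing(track_a, track_b, speech_a, speech_b) > 0.5:
    print("event: pair_conversation(A, B)")   # one semantic event description
```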


Computers and Electronics in Agriculture | 2016

Probabilistic localisation in repetitive environments

Bastiaan A. Vroegindeweij; Joris IJsselmuiden; Eldert J. van Henten

Highlights: Probabilistic localisation of a mobile robot was tested inside an aviary poultry house. The robot can be used for collecting floor eggs and measuring climate conditions. Algorithm settings were systematically evaluated and compared on performance. Using the best settings, an accuracy of 0.37 m could be reached for 95% of the time. Despite the challenging environment, the developed system proved robust.

One of the problems in loose housing systems for laying hens is the laying of eggs on the floor, which need to be collected manually. In previous work, PoultryBot was presented to assist in this and other tasks. Here, probabilistic localisation with a particle filter is evaluated for use inside poultry houses. A poultry house is a challenging environment, because it is dense, with narrow static objects and many moving animals. Several methods and options were implemented and tested on data obtained with PoultryBot in a commercial poultry house. Although no animals were present, the localisation problem is still challenging here because of the repetitive nature of the poultry house interior, with its many narrow obstacles. Different parameter configurations were systematically evaluated, based on accuracy and applicability of the results. Estimated paths were evaluated quantitatively based on the Euclidean distance to a ground truth determined with the help of a total station. The presented system reached an accuracy of 0.37 m for 95% of the time, with a mean error of 0.2 m, making it suitable for localising PoultryBot in its future application.
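
As a pointer to the kind of probabilistic localisation evaluated here, the sketch below shows one predict-update-resample cycle of a generic particle filter. The map, motion model, and range likelihood are illustrative stand-ins, not the PoultryBot system, which works from real sensor data and a detailed model of the poultry-house interior.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                                 # number of particles
particles = rng.uniform([0, 0], [20, 8], size=(N, 2))   # (x, y) poses in metres
weights = np.full(N, 1.0 / N)
landmark = np.array([10.0, 4.0])                        # hypothetical map feature

def predict(particles, motion, noise=0.05):
    """Move every particle by the commanded motion plus Gaussian noise."""
    return particles + motion + rng.normal(0.0, noise, particles.shape)

def update(particles, weights, measured_range, sigma=0.3):
    """Re-weight particles by how well they explain the measured range to the landmark."""
    expected = np.linalg.norm(particles - landmark, axis=1)
    likelihood = np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    weights = weights * likelihood + 1e-300
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling: draw particles proportionally to their weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One cycle with a fake odometry step and a fake range reading.
particles = predict(particles, motion=np.array([0.2, 0.0]))
weights = update(particles, weights, measured_range=6.5)
particles, weights = resample(particles, weights)
print("estimated pose:", particles.mean(axis=0))
```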


Computers and Electronics in Agriculture | 2018

Data synthesis methods for semantic segmentation in agriculture: A Capsicum annuum dataset

R. Barth; Joris IJsselmuiden; J. Hemming; E.J. van Henten

Abstract This paper provides synthesis methods for large-scale semantic image segmentation datasets of agricultural scenes with the objective to bridge the gap between state-of-the art computer vision performance and that of computer vision in the agricultural robotics domain. We propose a novel methodology to generate renders of random meshes of plants based on empirical measurements, including the automated generation per-pixel class and depth labels for multiple plant parts. A running example is given of Capsicum annuum (sweet or bell pepper) in a high-tech greenhouse. A synthetic dataset of 10,500 images was rendered through Blender, using scenes with 42 procedurally generated plant models with randomised plant parameters. These parameters were based on 21 empirically measured plant properties at 115 positions on 15 plant stems. Fruit models were obtained by 3D scanning and plant part textures were gathered photographically. As reference dataset for modelling and evaluate segmentation performance, 750 empirical images of 50 plants were collected in a greenhouse from multiple angles and distances using image acquisition hardware of a sweet pepper harvest robot prototype. We hypothesised high similarity between synthetic images and empirical images, which we showed by analysing and comparing both sets qualitatively and quantitatively. The sets and models are publicly released with the intention to allow performance comparisons between agricultural computer vision methods, to obtain feedback for modelling improvements and to gain further validations on usability of synthetic bootstrapping and empirical fine-tuning. Finally, we provide a brief perspective on our hypothesis that related synthetic dataset bootstrapping and empirical fine-tuning can be used for improved learning.
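
The procedural-randomisation step behind such a synthetic dataset can be sketched as sampling plant parameters from empirical distributions and emitting scene specifications for a renderer (Blender in the paper). The parameter names and values below are illustrative assumptions, not the measured Capsicum annuum statistics from the paper.

```python
import json
import random

EMPIRICAL_STATS = {                 # (mean, std) per plant property; hypothetical values
    "internode_length_m": (0.055, 0.012),
    "leaf_length_m":      (0.16, 0.03),
    "stem_diameter_m":    (0.012, 0.002),
    "fruit_count":        (6.0, 2.0),
}

def sample_plant(stats, rng):
    """Draw one randomised plant model from the empirical distributions."""
    return {name: max(0.0, rng.gauss(mu, sd)) for name, (mu, sd) in stats.items()}

def build_scene(scene_id, n_plants, rng):
    """Assemble a scene description that a separate render script could consume."""
    return {
        "scene_id": scene_id,
        "plants": [sample_plant(EMPIRICAL_STATS, rng) for _ in range(n_plants)],
        "camera_distance_m": rng.uniform(0.3, 0.9),
        "outputs": ["rgb", "class_labels", "depth"],   # per-pixel labels, as in the paper
    }

rng = random.Random(42)
scenes = [build_scene(i, n_plants=3, rng=rng) for i in range(5)]
print(json.dumps(scenes[0], indent=2))
```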


Ambient Intelligence | 2014

Automatic understanding of group behavior using fuzzy temporal logic

Joris IJsselmuiden; David Münch; Ann-Kristin Grosselfinger; Michael Arens; Rainer Stiefelhagen

Automatic behavior understanding refers to the generation of situation descriptions from machine perception. World models created through machine perception can be used by a reasoning engine to deduce knowledge about the observed scene. For this study, the required machine perception is annotated, allowing us to focus on the reasoning problem. The applied reasoning engine is based on fuzzy metric temporal logic and situation graph trees. It is evaluated in a case study on automatic behavior report generation for staff training purposes in crisis response control rooms. The idea is to use automatically generated reports for multimedia retrieval to increase the effectiveness of learning from recorded staff exercises. To achieve automatic report generation, various group situations are deduced from annotated person tracks, object information, and annotated information about gestures, body pose, and speech activity. The contributions of this paper are improvements to the existing knowledge base that models the group situations, and a quantitative evaluation using a substantial set of self-developed data and ground truth. We also describe recent improvements to the self-developed software tools for annotating and visualizing data, ground truth, and results.
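
One building block of deducing group situations from person tracks is proximity-based grouping. The sketch below clusters people into formations by single-linkage over pairwise distance; the paper's SGT knowledge base encodes much richer situations (gesture, pose, speech), and the threshold and grouping rule here are simplifying assumptions.

```python
import math

def groups_at_frame(positions, radius=1.5):
    """Cluster people into groups: anyone within `radius` metres of a member
    of an existing group joins (and merges) that group."""
    groups = []
    for pid, pos in positions.items():
        merged = [g for g in groups
                  if any(math.dist(pos, positions[q]) <= radius for q in g)]
        new_group = {pid}.union(*merged) if merged else {pid}
        groups = [g for g in groups if g not in merged] + [new_group]
    return groups

# Toy annotated frame: person id -> (x, y) position in metres.
frame = {"A": (0.0, 0.0), "B": (1.0, 0.2), "C": (5.0, 5.0), "D": (1.8, 0.4)}
print(groups_at_frame(frame))   # -> [{'C'}, {'A', 'B', 'D'}] (element order may vary)
```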


Advanced Video and Signal Based Surveillance | 2017

Monitoring and mapping with robot swarms for agricultural applications

Dario Albani; Joris IJsselmuiden; Ramon Haken; Vito Trianni

Robotics is expected to play a major role in the agricultural domain, and multi-robot systems and collaborative approaches are often mentioned as potential solutions to improve efficiency and system robustness. Among the multi-robot approaches, swarm robotics stresses aspects like flexibility, scalability, and robustness in solving complex tasks, and is considered very relevant for precision farming and large-scale agricultural applications. However, swarm robotics research is still confined to the lab, and no application in the field is currently available. In this paper, we describe a roadmap to bring swarm robotics to the field within the domain of weed control problems. This roadmap is implemented within the experiment SAGA, funded within the context of the ECHORD++ EU project. Together with the experiment concept, we introduce baseline results for the target scenario of monitoring and mapping weeds in a field by means of a swarm of UAVs.
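
A baseline-style sketch of the monitoring-and-mapping scenario: a small swarm of simulated UAVs random-walks over a gridded field and fills a shared weed map from local observations. Field size, swarm size, the noiseless sensing, and the random-walk policy are illustrative assumptions, not the SAGA setup.

```python
import random

W, H = 20, 20
field = [[random.random() < 0.1 for _ in range(W)] for _ in range(H)]   # true weed cells
weed_map = [[None] * W for _ in range(H)]                               # shared belief: None = unknown
uavs = [(random.randrange(W), random.randrange(H)) for _ in range(5)]   # 5 simulated UAVs

def step(pos):
    """Move one cell in a random cardinal direction, staying inside the field."""
    x, y = pos
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return min(max(x + dx, 0), W - 1), min(max(y + dy, 0), H - 1)

for _ in range(300):                       # simulation ticks
    uavs = [step(p) for p in uavs]
    for x, y in uavs:                      # each UAV observes the cell directly below it
        weed_map[y][x] = field[y][x]

covered = sum(cell is not None for row in weed_map for cell in row)
print(f"coverage: {covered / (W * H):.0%} of the field mapped")
```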


Advanced Video and Signal Based Surveillance | 2015

Interaction analysis through fuzzy temporal logic: Extensions for clustering and parameter learning

Joris IJsselmuiden; Johannes Dornheim

Interaction analysis is defined as the generation of semantic descriptions from machine perception. This can be achieved through a combination of fuzzy metric temporal logic (FMTL) and situation graph trees (SGTs). We extended the FMTL/SGT framework with modules for clustering and parameter learning and we showed their advantages. The contributions of this paper are 1) the combination of FMTL/SGT reasoning with a customized clustering algorithm, 2) a method for learning FMTL rule parameters, 3) a new FMTL/SGT model that implements some powerful fuzzy spatiotemporal concepts, and 4) evaluation of this system in a crisis response control room setting.
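
The parameter-learning idea can be sketched as fitting the free parameter of a rule against annotated ground truth. The example below grid-searches a distance threshold for a hypothetical 'is_interacting' predicate to maximise F1; the paper's method learns FMTL rule parameters inside the SGT model, and the data and rule here are invented for illustration.

```python
samples = [            # (pairwise distance in metres, ground-truth interaction label)
    (0.4, 1), (0.8, 1), (1.1, 1), (1.6, 0), (2.4, 0), (3.0, 0), (1.3, 1), (2.0, 0),
]

def f1_at_threshold(threshold):
    """F1 score of the crisp rule 'interacting if distance <= threshold'."""
    tp = sum(1 for d, y in samples if d <= threshold and y == 1)
    fp = sum(1 for d, y in samples if d <= threshold and y == 0)
    fn = sum(1 for d, y in samples if d > threshold and y == 1)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

candidates = [t / 10 for t in range(5, 31)]          # thresholds from 0.5 m to 3.0 m
best = max(candidates, key=f1_at_threshold)
print(f"learned threshold: {best:.1f} m, F1 = {f1_at_threshold(best):.2f}")
```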


KI'10: Proceedings of the 33rd Annual German Conference on Advances in Artificial Intelligence | 2010

Towards high-level human activity recognition through computer vision and temporal logic

Joris IJsselmuiden; Rainer Stiefelhagen


Conference of the International Speech Communication Association | 2011

A Real-Time Speech Command Detector for a Smart Control Room

Daniel Reich; Felix Putze; Dominic Heger; Joris IJsselmuiden; Rainer Stiefelhagen; Tanja Schultz


Biosystems Engineering | 2018

Sugar beet and volunteer potato classification using Bag-of-Visual-Words model, Scale-Invariant Feature Transform, or Speeded Up Robust Feature descriptors and crop row information

Hyun K. Suh; J.W. Hofstee; Joris IJsselmuiden; Eldert J. van Henten
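
For readers unfamiliar with the pipeline named in this title, here is a generic Bag-of-Visual-Words sketch: SIFT descriptors quantised into a visual vocabulary, with histograms fed to a classifier. Image paths, vocabulary size, and labels are placeholders, and this is not the paper's tuned pipeline (which also incorporates crop row information).

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(path):
    """Extract SIFT descriptors from one greyscale crop image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bovw_histogram(desc, kmeans, k):
    """Quantise descriptors against the vocabulary and build a normalised histogram."""
    if len(desc) == 0:
        return np.zeros(k)
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# Hypothetical training data: image paths with labels (0 = sugar beet, 1 = volunteer potato).
train_paths = ["beet_01.png", "beet_02.png", "potato_01.png", "potato_02.png"]
train_labels = [0, 0, 1, 1]

descs = [sift_descriptors(p) for p in train_paths]
k = 50                                                   # vocabulary size (assumption)
kmeans = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descs))
X = np.array([bovw_histogram(d, kmeans, k) for d in descs])
clf = SVC(kernel="linear").fit(X, train_labels)

print("predicted class:", clf.predict([bovw_histogram(sift_descriptors("unknown.png"), kmeans, k)])[0])
```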


Biosystems Engineering | 2018

Object discrimination in poultry housing using spectral reflectivity

Bastiaan A. Vroegindeweij; Steven van Hell; Joris IJsselmuiden; Eldert J. van Henten

Collaboration


Dive into Joris IJsselmuiden's collaborations.

Top Co-Authors

Eldert J. van Henten
Wageningen University and Research Centre

Bastiaan A. Vroegindeweij
Wageningen University and Research Centre

Rainer Stiefelhagen
Karlsruhe Institute of Technology

J. Hemming
Wageningen University and Research Centre

E.J. van Henten
Wageningen University and Research Centre

Hyun K. Suh
Wageningen University and Research Centre

J.W. Hofstee
Wageningen University and Research Centre

R. Barth
Wageningen University and Research Centre

Steven van Hell
Wageningen University and Research Centre

Johannes Dornheim
Karlsruhe University of Applied Sciences