Publications


Featured research published by Camille Besse.


International Conference on Robotics and Automation | 2010

Solving the continuous time multiagent patrol problem

Jean-Samuel Marier; Camille Besse; Brahim Chaib-draa

This paper compares two algorithms for solving a multiagent patrol problem with uncertain travel durations. The first algorithm is reactive and yields adaptive, robust behavior, while the second uses planning to maximize long-term information retrieval. Experiments suggest that, on the instances considered, the reactive and locally coordinated algorithm performs almost as well as long-term planning while using much less computation time.
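
A rough sketch of the reactive, idleness-driven flavor of strategy being compared is given below; it is an illustrative greedy policy, not the paper's algorithm, and the graph, travel times, and `greedy_patrol_step` helper are hypothetical.

```python
# Illustrative greedy patrol step: always head to the neighbouring vertex whose
# expected idleness on arrival is largest. Graph edges carry expected travel
# times; idleness[v] is the time elapsed since vertex v was last visited.
# This is a hypothetical sketch, not the algorithm from the paper.

def greedy_patrol_step(graph, idleness, position):
    """Move to the neighbour with the largest expected idleness on arrival."""
    target = max(graph[position],
                 key=lambda v: idleness[v] + graph[position][v])
    travel = graph[position][target]
    for v in idleness:            # idleness keeps growing while the agent travels
        idleness[v] += travel
    idleness[target] = 0.0        # visiting a vertex resets its idleness
    return target, travel


# Toy triangle graph with expected travel durations.
graph = {"a": {"b": 2.0, "c": 5.0},
         "b": {"a": 2.0, "c": 1.0},
         "c": {"a": 5.0, "b": 1.0}}
idleness = {"a": 0.0, "b": 4.0, "c": 7.0}
position = "a"
for _ in range(4):
    position, _ = greedy_patrol_step(graph, idleness, position)
    print(position, idleness)
```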


Intelligent Robots and Systems | 2009

Bayesian reinforcement learning in continuous POMDPs with Gaussian processes

Patrick Dallaire; Camille Besse; Stéphane Ross; Brahim Chaib-draa

Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model for real-world sequential decision problems, but most solution approaches require the model to be known. Moreover, mainstream POMDP research focuses on the discrete case, which complicates its application to the many realistic problems that are naturally modeled with continuous state spaces. In this paper, we consider the problem of optimal control in continuous, partially observable environments when the parameters of the model are unknown. We advocate the use of Gaussian Process Dynamical Models (GPDMs), which allow the model to be learned through experience with the environment. Our results on the blimp problem show that the approach can learn good models of the sensors and actuators and use them to maximize long-term rewards.
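
As a much-simplified illustration of learning a dynamics model from experience, the sketch below fits plain Gaussian process regression to synthetic (state, action) to next-state transitions; the paper's GPDM additionally learns a latent state space, and the data and hyperparameters here are made up.

```python
import numpy as np

# Simplified sketch: plain GP regression on observed transitions, standing in
# for the richer GPDM used in the paper. Synthetic 1-D state, 1-D action.

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise_var=0.01):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Fake transitions (state, action) -> next state, standing in for experience
# gathered while interacting with the environment.
rng = np.random.default_rng(0)
sa = rng.uniform(-1, 1, size=(50, 2))                     # columns: state, action
next_state = 0.9 * sa[:, 0] + 0.3 * sa[:, 1] + 0.05 * rng.standard_normal(50)
mean, var = gp_predict(sa, next_state, np.array([[0.2, -0.5]]))
print("predicted next state:", mean[0], "+/-", var[0] ** 0.5)
```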


Neurocomputing | 2011

An approximate inference with Gaussian process to latent functions from uncertain data

Patrick Dallaire; Camille Besse; Brahim Chaib-draa

Most formulations of supervised learning assume that only the output data are uncertain. However, this assumption may be too strong for some learning tasks. This paper investigates the use of Gaussian processes to infer latent functions from a set of uncertain input-output examples. By assuming Gaussian distributions with known variances over the inputs and outputs, and by using the expectation of the covariance function, the expected covariance matrix of the data can be computed analytically to obtain a posterior distribution over functions. The method is evaluated on a synthetic problem and on a more realistic one, which consists of learning the dynamics of a cart-pole balancing task. The results indicate an improvement in the mean squared error and in the likelihood of the posterior Gaussian process when the data uncertainty is significant.
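
For the common case of an RBF covariance with independent, isotropic Gaussian noise on the inputs, the expectation of the kernel has a closed form. The sketch below computes that expected Gram matrix; the kernel choice and noise model are assumptions for illustration and need not match the paper's exact setting.

```python
import numpy as np

# Expected RBF covariance between uncertain inputs x_i ~ N(mu_i, s_i^2 I) and
# x_j ~ N(mu_j, s_j^2 I), independent for i != j:
#   E[k(x_i, x_j)] = sf2 * (l2 / (l2 + s_i^2 + s_j^2))^(d/2)
#                        * exp(-||mu_i - mu_j||^2 / (2 (l2 + s_i^2 + s_j^2)))
# One closed-form instance of "using the expectation of the covariance
# function"; illustrative, not necessarily the paper's exact derivation.

def expected_rbf_gram(mu, noise_var, lengthscale=1.0, signal_var=1.0):
    """Expected Gram matrix of an RBF kernel under Gaussian input uncertainty."""
    n, d = mu.shape
    l2 = lengthscale ** 2
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                K[i, j] = signal_var          # k(x, x) = sf2 for any x
                continue
            s = l2 + noise_var[i] + noise_var[j]
            sq_dist = np.sum((mu[i] - mu[j]) ** 2)
            K[i, j] = signal_var * (l2 / s) ** (d / 2) * np.exp(-0.5 * sq_dist / s)
    return K

# The expected Gram matrix can then replace the usual one in GP regression.
mu = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -0.3]])      # input means
noise_var = np.array([0.01, 0.10, 0.25])                   # per-input variances
print(expected_rbf_gram(mu, noise_var))
```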


International Conference on Neural Information Processing | 2009

Quasi-Deterministic Partially Observable Markov Decision Processes

Camille Besse; Brahim Chaib-draa

We study a subclass of POMDPs, called quasi-deterministic POMDPs (qDet-POMDPs), characterized by deterministic actions and stochastic observations. While this framework does not model the same general problems as POMDPs, it still captures a number of interesting and challenging problems and, in some cases, has useful properties. By studying the observability available in this subclass, we show that qDet-POMDPs may fall several levels down the polynomial hierarchy of complexity classes.


International Conference on Neural Information Processing | 2009

A Markov Model for Multiagent Patrolling in Continuous Time

Jean-Samuel Marier; Camille Besse; Brahim Chaib-draa

We present a model for the multiagent patrolling problem in continuous time. An anytime, online algorithm is then described and extended to asynchronous multiagent decision processes. An online algorithm is also proposed for coordinating the agents. Finally, we compare our approach empirically to existing methods.


Canadian Conference on Artificial Intelligence | 2007

R-FRTDP: A Real-Time DP Algorithm with Tight Bounds for a Stochastic Resource Allocation Problem

Camille Besse; Pierrick Plamondon; Brahim Chaib-draa

Resource allocation is a widely studied class of problems in Operations Research and Artificial Intelligence, in particular constrained stochastic resource allocation problems, where assigning a constrained resource does not automatically imply that the task is accomplished. Such problems are generally addressed with Markov Decision Processes (MDPs). In this paper, we present efficient lower and upper bounds, in the context of a constrained stochastic resource allocation problem, for a heuristic search algorithm called Focused Real-Time Dynamic Programming (FRTDP). Experiments show that this algorithm is well suited to this kind of problem and that the proposed tight bounds reduce the number of backups to perform compared to previously existing bounds.
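
A generic interval-based RTDP loop conveys the role such bounds play: trials are guided by the optimistic (upper) bound and stop once the two bounds nearly meet. The sketch below uses a made-up three-state MDP and naive bound initializations; it is not FRTDP's focusing machinery or the paper's resource-allocation model.

```python
import random

# Generic bounded real-time dynamic programming on a toy MDP: keep lower and
# upper bounds on the optimal value, run optimistic trials, and stop once the
# bounds nearly meet at the start state. Illustrative only.

GAMMA = 0.95
R_MAX = 10.0
# transitions[s][a] = list of (probability, next_state, reward); terminal
# states have no actions.
transitions = {
    "s0":   {"risky": [(0.5, "goal", 10.0), (0.5, "s1", 0.0)],
             "safe":  [(1.0, "s1", 2.0)]},
    "s1":   {"go":    [(1.0, "goal", 4.0)]},
    "goal": {},
}
upper = {s: (R_MAX / (1 - GAMMA) if acts else 0.0) for s, acts in transitions.items()}
lower = {s: 0.0 for s in transitions}

def q_value(bounds, s, a):
    return sum(p * (r + GAMMA * bounds[s2]) for p, s2, r in transitions[s][a])

def backup(s):
    """Bellman backup of both bounds at a non-terminal state."""
    if transitions[s]:
        upper[s] = max(q_value(upper, s, a) for a in transitions[s])
        lower[s] = max(q_value(lower, s, a) for a in transitions[s])

def trial(start, epsilon=1e-3, max_depth=50):
    s, visited = start, []
    while transitions[s] and upper[s] - lower[s] > epsilon and len(visited) < max_depth:
        visited.append(s)
        a = max(transitions[s], key=lambda act: q_value(upper, s, act))  # optimistic
        outcomes = transitions[s][a]
        s = random.choices([s2 for _, s2, _ in outcomes],
                           weights=[p for p, _, _ in outcomes])[0]
    for v in reversed(visited):       # tighten the bounds along the visited path
        backup(v)

random.seed(0)
while upper["s0"] - lower["s0"] > 1e-3:
    trial("s0")
print("value bracket at s0:", lower["s0"], upper["s0"])
```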


International Conference on Neural Information Processing | 2009

Learning Gaussian Process Models from Uncertain Data

Patrick Dallaire; Camille Besse; Brahim Chaib-draa


The Florida AI Research Society | 2008

Parallel Rollout for Online Solution of Dec-POMDPs.

Camille Besse; Brahim Chaib-draa


Adaptive Agents and Multi-Agent Systems | 2010

Quasi deterministic POMDPs and DecPOMDPs

Camille Besse; Brahim Chaib-draa


The Florida AI Research Society | 2012

Forecasting Conflicts Using N-Grams Models.

Camille Besse; Alireza Bakhtiari; Luc Lamontagne

Collaboration


Dive into Camille Besse's collaborations.

Top Co-Authors

Stéphane Ross

Carnegie Mellon University
