
Publication


Featured research published by Raphael Canals.


IEEE Transactions on Industrial Electronics | 2002

A biprocessor-oriented vision-based target tracking system

Raphael Canals; Anthony Roussel; Jean-Luc Famechon; Sylvie Treuillet

The design and realization of a vision-based target tracking system are presented. The objective is to derive the orientation of a pan-tilt camera mounted on a drone in order to track a target and keep it centered in the image. Image data and drone attitude are the only information available for correct camera control. This embedded system requires low-cost hardware for surveillance or attack drone applications: a digital signal processor for the image processing and a microcontroller for the camera control. To ensure real-time video operation, an algorithmic solution integrating a successive-step, multi-block search method is implemented, allowing complex target displacements. The microcontroller uses this information to manage the camera orientation. Experiments have been conducted in real conditions, and acceptable target tracking results have been obtained on the prototype hardware.
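As an illustration of the search step described above, block matching can be sketched as a sum-of-absolute-differences (SAD) scan over a search window. This is a generic sketch of the technique, not the paper's successive-step, multi-block implementation; the frame representation (lists of grayscale rows) and the window size are assumptions.

```python
# Minimal sum-of-absolute-differences (SAD) block matching: find the
# displacement of a small template block inside a search window.
# Illustrative of generic block-matching tracking, not the paper's
# successive-step, multi-block algorithm.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(frame, template, top, left, radius):
    """Scan a (2*radius+1)^2 neighborhood around (top, left) and return
    the displacement (dy, dx) minimizing SAD against the template."""
    h, w = len(template), len(template[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > len(frame) or x + w > len(frame[0]):
                continue
            candidate = [row[x:x + w] for row in frame[y:y + h]]
            cost = sad(candidate, template)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```

The returned displacement is what a controller would translate into pan-tilt corrections to recenter the target.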


International Conference on Indoor Positioning and Indoor Navigation | 2013

Indoor navigation assistance with a Smartphone camera based on vanishing points

Wael Elloumi; Kamel Guissous; Aladine Chetouani; Raphael Canals; Rémy Leconge; Bruno Emile; Sylvie Treuillet

Indoor navigation assistance is a highly challenging task that is increasingly needed in various types of applications, such as guidance for the visually impaired, emergency intervention, and tourism. Many alternatives to GPS have been explored to address this challenge, such as pre-installed sensor networks (WiFi, Ultra Wide Band, Bluetooth, Radio Frequency IDentification, etc.), inertial sensors, or cameras. This paper presents an indoor navigation system for smartphones designed with low cost, portability, and algorithmic lightness in mind, in terms of both computational power and storage space. The proposed solution relies on embedded vision. A robust and fast camera orientation (3 DoF) is estimated by tracking three orthogonal vanishing points in a video stream acquired with the camera of a hand-held smartphone. The developed algorithm enables indoor pedestrian localization in two steps. An off-line learning step defines a reference path by selecting key frames along the way using a saliency extraction method and computing the camera orientation in these frames. Then, in the localization step, an approximate but realistic position of the walker is estimated in real time by comparing the camera orientation in the current image with that of the reference, so as to assist the pedestrian with navigation guidance. Unlike SLAM, this approach does not require building a 3D map of the environment. The online walking direction is given by the smartphone camera, which advantageously replaces the compass sensor, since the latter performs very poorly indoors due to electromagnetic noise. Experiments executed online on a smartphone show the feasibility and evaluate the accuracy of the proposed positioning approach for different indoor paths.
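The heading-from-vanishing-point idea can be sketched in its simplest form: under a pinhole model with known focal length f (in pixels) and principal point column cx, the image column u of the corridor's dominant vanishing point yields a yaw angle. This is a 1-DoF simplification of the paper's 3-DoF estimation, given here only as an illustration.

```python
import math

# Heading from a vanishing point under a pinhole camera model: a
# corridor whose vanishing point projects at image column u subtends a
# yaw of atan2(u - cx, f) relative to the optical axis. f (focal length
# in pixels) and cx (principal point column) are assumed known.

def yaw_from_vanishing_point(u, cx, f):
    """Horizontal angle (radians) between the optical axis and the
    corridor direction whose vanishing point projects at column u."""
    return math.atan2(u - cx, f)
```

Comparing this yaw against the one stored for the nearest reference key frame is the kind of orientation comparison the localization step relies on.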


International Conference on Image Processing | 2012

Comparative study between color texture and shape descriptors for multi-camera pedestrians identification

Ahmed Derbel; Yousra Ben Jemaa; Raphael Canals; Bruno Emile; Sylvie Treuillet; Abdelmajid Ben Hamadou

In this paper, we propose a comparative study of different descriptors based on color, texture, and shape information. In particular, our study focuses on measuring the robustness of these descriptors for identifying a person in a camera network. Through an experimental study based on the VIPeR pedestrian image dataset and the Cumulative Matching Characteristic (CMC) measure, we show that color descriptors are the most appropriate in a multi-camera context: they are less sensitive to the highly articulated human body, changes in lighting conditions, and large pose variations.
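The Cumulative Matching Characteristic measurement used above can be sketched directly: for each probe, gallery identities are ranked by descriptor distance, and CMC(k) is the fraction of probes whose true identity appears within the top k ranks. The ranked lists below are illustrative placeholders for real descriptor comparisons.

```python
# Cumulative Matching Characteristic: for each probe, gallery
# identities are ranked by descriptor distance; CMC(k) is the fraction
# of probes whose true identity appears within the top k ranks.

def cmc(ranked_ids, true_ids):
    """ranked_ids[i]: gallery identities for probe i, most similar first;
    returns the CMC curve for ranks 1..N."""
    n_probes = len(true_ids)
    n_ranks = len(ranked_ids[0])
    # Rank position (0-based) at which each probe's true identity appears.
    hits = [ranked_ids[i].index(true_ids[i]) for i in range(n_probes)]
    return [sum(1 for h in hits if h < k) / n_probes
            for k in range(1, n_ranks + 1)]
```

A descriptor whose curve rises to 1.0 at a lower rank is the more robust one, which is how the color, texture, and shape descriptors are compared.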


IEEE Sensors Journal | 2016

Indoor Pedestrian Localization With a Smartphone: A Comparison of Inertial and Vision-Based Methods

Wael Elloumi; Abdelhakim Latoui; Raphael Canals; Aladine Chetouani; Sylvie Treuillet

Indoor pedestrian navigation systems are increasingly needed in various types of applications. However, such systems still face many challenges. In addition to being accurate, a pedestrian positioning system must be mobile, cheap, and lightweight. Many approaches have been explored. In this paper, we take advantage of the sensors integrated in a smartphone and their capabilities to develop and compare two low-cost, hands-free, and handheld indoor navigation systems. The first relies on embedded vision (the smartphone camera), while the second is based on low-cost smartphone inertial sensors (magnetometer, accelerometer, and gyroscope) to provide a relative position of the pedestrian. The two associated algorithms are computationally lightweight, since their implementations take into account the restricted resources of the smartphone. In the experiments conducted, we evaluate and compare the accuracy and repeatability of the two positioning methods for different indoor paths. The results obtained demonstrate that the vision-based localization system outperforms the inertial sensor-based positioning system.
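The inertial option in this comparison amounts to pedestrian dead reckoning: accumulating a fixed step length along the heading measured at each detected step. A minimal position-update sketch, with step detection and heading estimation assumed to happen upstream; the default step length is an illustrative value.

```python
import math

# Pedestrian dead reckoning position update: advance a fixed step
# length along the heading (radians, 0 pointing along +y here)
# observed at each detected step. Step detection and heading estimation
# from the inertial sensors are assumed to be done elsewhere.

def dead_reckon(start, headings, step_length=0.7):
    """Integrate one (x, y) displacement per detected step."""
    x, y = start
    for h in headings:
        x += step_length * math.sin(h)
        y += step_length * math.cos(h)
    return x, y
```

Because each step's error accumulates, this relative positioning drifts over long paths, which is one reason the vision-based alternative can outperform it.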


Remote Sensing | 2018

Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images

M. Dian Bah; Adel Hafiane; Raphael Canals

In recent years, weeds have been responsible for most agricultural yield losses. To deal with this threat, farmers resort to spraying their fields uniformly with herbicides. This method not only requires huge quantities of herbicides but also impacts the environment and human health. One way to reduce the cost and environmental impact is to allocate the right doses of herbicide to the right place at the right time (precision agriculture). Nowadays, unmanned aerial vehicles (UAVs) are becoming an interesting acquisition system for weed localization and management due to their ability to image an entire agricultural field with very high spatial resolution and at low cost. However, despite significant advances in UAV acquisition systems, the automatic detection of weeds remains a challenging problem because of their strong similarity to the crops. Recently, deep learning approaches have shown impressive results on various complex classification problems. However, these approaches need a certain amount of training data, and creating large agricultural datasets with pixel-level annotations by an expert is an extremely time-consuming task. In this paper, we propose a novel, fully automatic learning method using convolutional neural networks (CNNs) with unsupervised training dataset collection for weed detection in UAV images. The proposed method comprises three main phases. First, we automatically detect the crop rows and use them to identify the inter-row weeds. In the second phase, the inter-row weeds are used to constitute the training dataset. Finally, we train CNNs on this dataset to build a model able to detect the crops and the weeds in the images. The results obtained are comparable to those of traditional supervised training data labeling, with differences in accuracy of 1.5% in the spinach field and 6% in the bean field.
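The unsupervised labeling idea in the first two phases can be sketched in toy form: vegetation lying far from any detected crop-row line is auto-labeled as weed and becomes training data without manual annotation. The vertical-row geometry and the distance threshold below are illustrative assumptions, not the paper's actual row model.

```python
# Toy version of the unsupervised labeling phases: vegetation pixels
# far from every detected crop-row line are auto-labeled as weeds.
# Rows are modeled as vertical lines at the x positions in row_xs; the
# margin threshold is an illustrative assumption.

def label_vegetation(points, row_xs, margin):
    """points: (x, y) vegetation pixels; returns a crop/weed label per point."""
    labels = {}
    for (x, y) in points:
        dist = min(abs(x - rx) for rx in row_xs)
        labels[(x, y)] = "weed" if dist > margin else "crop"
    return labels
```

The resulting labels stand in for expert annotations when training the CNN in the third phase.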


Computers and Electronics in Agriculture | 2018

Deep learning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images

Mohamed Kerkech; Adel Hafiane; Raphael Canals

Detection of symptoms on grape leaves is a very important factor in preventing serious disease. An epidemic spreading through vineyards has huge economic consequences and is therefore considered a major challenge for viticulture. Automatic detection of vine diseases can play an important role in addressing the issue of disease management. This study deals with the problem of identifying infected areas of grapevines using Unmanned Aerial Vehicle (UAV) images in the visible domain. In this paper, we propose a method based on convolutional neural networks (CNNs) and color information to detect symptoms in vineyards. We studied and compared the performance of CNNs using different color spaces, vegetation indices, and combinations of both. The results obtained showed that CNNs with the YUV color space combined with the ExGR vegetation index, and CNNs with a combination of the ExG, ExR, and ExGR vegetation indices, yield the best results, with an accuracy of more than 95.8%.
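The vegetation indices named above are standard formulas: with chromaticities r, g, b obtained by dividing each channel by R + G + B, ExG = 2g - r - b, ExR = 1.4r - g, and ExGR = ExG - ExR. A minimal per-pixel computation:

```python
# Per-pixel vegetation indices: normalized chromaticities r, g, b are
# each channel divided by R + G + B, then ExG = 2g - r - b,
# ExR = 1.4r - g, and ExGR = ExG - ExR. These are the standard
# definitions of the indices the abstract names.

def vegetation_indices(R, G, B):
    """Return (ExG, ExR, ExGR) for one RGB pixel."""
    total = R + G + B
    if total == 0:
        return 0.0, 0.0, 0.0
    r, g, b = R / total, G / total, B / total
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return exg, exr, exg - exr
```

These index maps, alone or stacked with color-space channels, are the kinds of inputs the study compares as CNN inputs.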


Biennial Baltic Electronics Conference | 2014

Remote online testing of embedded systems using Optical BILBO

Abdelhakim Latoui; Raphael Canals

In this paper, an online testing approach is proposed that continuously checks, from a remote system, the correctness of embedded systems in full operation. Our new method exploits optical beams produced by an Optical Built-In Logic-Block Observation (OBILBO) register, which encodes the data captured at the outputs of all register stages in the BILBO. The captured data can thus be sent in real time to a remote system equipped with optical sensors. The final captured response data can also be used for comparison with the stored response of a golden circuit. Preliminary simulation results showed that faults are detected concurrently, without affecting normal system operation and without recourse to any external or internal Automatic Test Pattern Generation (ATPG).
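A BILBO register in signature mode behaves as a multiple-input signature register (MISR), compacting response words into a signature that is compared with a golden value. The sketch below uses an illustrative 4-bit width and feedback taps (polynomial x^4 + x + 1), not the OBILBO design from the paper.

```python
# A BILBO register configured as a multiple-input signature register
# (MISR) compacts response words into a signature. Width 4 and the
# feedback taps for x^4 + x + 1 are illustrative choices.

def misr_step(state, inputs, taps=(3, 0), width=4):
    """One clock: shift left, feed back the XOR of tap bits, XOR inputs."""
    feedback = 0
    for t in taps:
        feedback ^= (state >> t) & 1
    state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state ^ inputs

def signature(responses, seed=0):
    """Compact a stream of response words into a final signature."""
    state = seed
    for word in responses:
        state = misr_step(state, word)
    return state
```

Comparing the final signature against the golden circuit's stored signature flags a fault when they differ; a corrupted response word changes the signature with high probability.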


European Signal Processing Conference | 2009

Occlusion-handling for improved particle filtering-based tracking

Raphael Canals; Ali Ganoun; Rémy Leconge
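Since only the title is listed here, the following is just a generic sketch of the bootstrap particle filtering the paper builds on: predict particles with a motion model, weight them by the measurement likelihood, and resample. The 1-D Gaussian models and noise levels are illustrative assumptions, not the authors' tracker.

```python
import math
import random

# Generic bootstrap particle filter step in 1-D: predict with a
# Gaussian motion model, weight by a Gaussian measurement likelihood,
# then resample. Models and noise levels are illustrative.

def step(particles, measurement, motion_std=1.0, meas_std=2.0):
    # Predict: diffuse each particle according to the motion model.
    moved = [p + random.gauss(0, motion_std) for p in particles]
    # Weight: likelihood of the measurement given each particle state.
    weights = [math.exp(-((measurement - p) ** 2) / (2 * meas_std ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeating the step on a stable measurement concentrates the cloud around the target; occlusion handling, the paper's topic, would modify the weighting stage when the measurement becomes unreliable.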


European Signal Processing Conference | 2006

Tracking system using CamShift and feature points

Ali Ganoun; Nouar Ould-Dris; Raphael Canals
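CamShift, named in the title, iterates mean shift on a back-projection probability map: the search window is repeatedly recentered on the centroid of the probabilities inside it until it stops moving. A minimal 1-D version of that inner loop, for illustration only; CamShift additionally adapts the window size, which is omitted here.

```python
# One-dimensional mean shift over a probability array: recenter the
# window on its local centroid until convergence. CamShift additionally
# adapts the window size; that part is omitted in this sketch.

def mean_shift_1d(prob, center, half_width, iters=10):
    """Slide a window of +/- half_width over prob to its local centroid."""
    for _ in range(iters):
        lo = max(0, center - half_width)
        hi = min(len(prob), center + half_width + 1)
        mass = sum(prob[lo:hi])
        if mass == 0:
            break
        centroid = sum(i * prob[i] for i in range(lo, hi)) / mass
        new_center = round(centroid)
        if new_center == center:
            break
        center = new_center
    return center
```

In the 2-D tracker the same update runs over an image-sized probability map derived from the target's color histogram.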


International Conference on Image Processing | 2017

Weeds detection in UAV imagery using SLIC and the Hough transform

M. Dian Bah; Adel Hafiane; Raphael Canals
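The Hough transform named in the title can be sketched as line voting: each vegetation or edge point votes for every (theta, rho) line through it, and the strongest accumulator cell is the detected line (a crop row in this setting). The angle set and rho resolution are illustrative choices, and the SLIC superpixel step is omitted.

```python
import math

# Hough line voting: each point votes for the (theta, rho) of every
# candidate line through it; the accumulator peak is the detected line.
# With this parameterization, theta = 0 is a vertical line x = rho.

def hough_peak(points, thetas, rho_res=1.0):
    """Return the (theta, rho) accumulator cell with the most votes."""
    votes = {}
    for x, y in points:
        for theta in thetas:
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            votes[(theta, rho)] = votes.get((theta, rho), 0) + 1
    return max(votes, key=votes.get)
```

Running this over vegetation pixels yields the dominant row direction; points far from the detected rows are then candidate weeds.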

Collaboration


Dive into Raphael Canals's collaborations.

Top Co-Authors

Bruno Emile
University of Orléans

Ali Ganoun
University of Orléans

M. Dian Bah
University of Orléans