Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alessandro Giusti is active.

Publication


Featured research published by Alessandro Giusti.


Medical Image Computing and Computer-Assisted Intervention | 2013

Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks

Dan Claudiu Ciresan; Alessandro Giusti; Luca Maria Gambardella; Jürgen Schmidhuber

We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.
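
A minimal sketch of the per-pixel classification loop described above (not the authors' implementation): every pixel is scored from a patch centered on it, and a simple postprocessing step turns the probability map into detections. The patch size, padding mode, and the `model.predict_proba` interface are illustrative assumptions.

```python
import numpy as np

def classify_pixels(image, model, patch=101):
    """Score every pixel of an RGB histology image with a patch-based classifier.

    Assumption: `model.predict_proba` returns one mitosis probability per patch
    in the batch; the patch size and padding mode are illustrative, not the
    paper's exact settings.
    """
    half = patch // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    prob = np.zeros(image.shape[:2], dtype=np.float32)
    for y in range(image.shape[0]):
        row_patches = [padded[y:y + patch, x:x + patch] for x in range(image.shape[1])]
        prob[y] = model.predict_proba(np.stack(row_patches))
    return prob

def detect_mitoses(prob, threshold=0.5):
    # Stand-in for the simple postprocessing: threshold the probability map;
    # in practice local maxima of a smoothed map would be reported as detections.
    return prob > threshold
```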


International Conference on Signal and Image Processing Applications | 2011

Max-pooling convolutional neural networks for vision-based hand gesture recognition

Jawad Nagi; Frederick Ducatelle; Gianni A. Di Caro; Dan C. Ciresan; Ueli Meier; Alessandro Giusti; Farrukh Nagi; Jürgen Schmidhuber; Luca Maria Gambardella

Automatic recognition of gestures using computer vision is important for many real-world applications such as sign language recognition and human-robot interaction (HRI). Our goal is a real-time hand gesture-based HRI interface for mobile robots. We use a state-of-the-art big and deep neural network (NN) combining convolution and max-pooling (MPCNN) for supervised feature learning and classification of hand gestures given by humans to mobile robots using colored gloves. The hand contour is retrieved by color segmentation, then smoothed by morphological image processing, which eliminates noisy edges. Our big and deep MPCNN classifies 6 gesture classes with 96% accuracy, nearly three times better than the nearest competitor. Experiments with mobile robots using an ARM 11 533 MHz processor achieve real-time gesture recognition performance.
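
The color-segmentation and morphological-smoothing step can be sketched with OpenCV as below; the HSV range (a blue-ish glove) and kernel size are illustrative assumptions, and the resulting silhouette is what a classifier such as the MPCNN would consume.

```python
import cv2
import numpy as np

def glove_mask(frame_bgr, lo=(100, 80, 40), hi=(130, 255, 255), kernel_size=7):
    """Segment a colored glove by HSV thresholding, then smooth the contour with
    morphological opening/closing to remove noisy edges.

    The HSV range and kernel size are illustrative assumptions, not values
    from the paper.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask  # hand silhouette passed on to the gesture classifier
```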


International Conference on Image Processing | 2013

Fast image scanning with deep max-pooling convolutional neural networks

Alessandro Giusti; Dan Claudiu Ciresan; Jonathan Masci; Luca Maria Gambardella; Jürgen Schmidhuber

Deep Neural Networks now excel at image classification, detection and segmentation. When used to scan images by means of a sliding window, however, their high computational complexity can bring even the most powerful hardware to its knees. We show how dynamic programming can speed up the process by orders of magnitude, even when max-pooling layers are present.
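
The redundancy being removed can be illustrated with a single convolutional layer: scanning patch by patch recomputes the same products for overlapping windows, while filtering the whole image once shares them. The NumPy/SciPy toy below only covers that simplified case; the paper's contribution is carrying the sharing through max-pooling layers, where output strides no longer align.

```python
import numpy as np
from scipy.signal import convolve2d

def naive_scan(image, kernel, patch=9):
    """Slow baseline: crop a patch around every pixel and filter it independently,
    recomputing the same products for overlapping patches."""
    half = patch // 2
    padded = np.pad(image, half, mode="symmetric")
    out = np.empty(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            crop = padded[y:y + patch, x:x + patch]
            out[y, x] = convolve2d(crop, kernel, mode="same")[half, half]
    return out

def shared_scan(image, kernel):
    """Filter the whole image once: every output pixel reuses the shared products."""
    return convolve2d(image, kernel, mode="same", boundary="symm")

img, k = np.random.rand(32, 32), np.random.rand(3, 3)
assert np.allclose(naive_scan(img, k), shared_scan(img, k))
```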


International Conference on Robotics and Automation | 2016

A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots

Alessandro Giusti; Jerome Guzzi; Dan C. Ciresan; Fang-Lin He; Juan P. Rodriguez; Flavio Fontana; Matthias Faessler; Christian Forster; Jürgen Schmidhuber; Gianni A. Di Caro; Davide Scaramuzza; Luca Maria Gambardella

We study the problem of perceiving forest or mountain trails from a single monocular image acquired from the viewpoint of a robot traveling on the trail itself. Previous literature focused on trail segmentation and used low-level features such as image saliency or appearance contrast; we propose a different approach based on a deep neural network used as a supervised image classifier. By operating on the whole image at once, our system outputs the main direction of the trail relative to the viewing direction. Qualitative and quantitative results computed on a large real-world dataset (which we provide for download) show that our approach outperforms alternatives and yields an accuracy comparable to that of humans tested on the same image classification task. Preliminary results on using this information for quadrotor control on unseen trails are reported. To the best of our knowledge, this is the first work that describes an approach to perceiving forest trails which is demonstrated on a quadrotor micro aerial vehicle.
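
The classifier described above can be pictured as a small three-class network mapping a whole trail image to turn left / go straight / turn right; the PyTorch sketch below uses illustrative layer sizes, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TrailDirectionNet(nn.Module):
    """Minimal 3-class CNN sketch: maps a whole trail image to one of
    {turn left, go straight, turn right}. Layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = TrailDirectionNet()(torch.rand(1, 3, 101, 101))
# argmax over the 3 outputs -> 0: steer left, 1: keep heading, 2: steer right
```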


Medical Image Analysis | 2015

Assessment of algorithms for mitosis detection in breast cancer histopathology images

Mitko Veta; Paul J. van Diest; Stefan M. Willems; Haibo Wang; Anant Madabhushi; Angel Cruz-Roa; Fabio A. González; Anders Boesen Lindbo Larsen; Jacob Schack Vestergaard; Anders Bjorholm Dahl; Dan C. Ciresan; Jürgen Schmidhuber; Alessandro Giusti; Luca Maria Gambardella; F. Boray Tek; Thomas Walter; Ching-Wei Wang; Satoshi Kondo; Bogdan J. Matuszewski; Frédéric Precioso; Violet Snell; Josef Kittler; Teofilo de Campos; Adnan Mujahid Khan; Nasir M. Rajpoot; Evdokia Arkoumani; Miangela M. Lacle; Max A. Viergever; Josien P. W. Pluim

The proliferative activity of breast tumors, which is routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists.
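
Evaluation in this kind of challenge typically matches detected centroids to annotated mitotic figures within a fixed distance and reports an F1 score; the sketch below uses greedy matching and an assumed pixel threshold, not the official AMIDA13 protocol.

```python
import numpy as np

def detection_f1(pred, truth, max_dist=30.0):
    """Greedily match predicted to ground-truth mitosis centroids.

    A prediction counts as a true positive if it falls within `max_dist` pixels
    of a not-yet-matched annotation; the threshold is an illustrative assumption.
    """
    remaining = [np.asarray(t, dtype=float) for t in truth]
    tp = 0
    for p in pred:
        p = np.asarray(p, dtype=float)
        dists = [np.linalg.norm(p - t) for t in remaining]
        if dists and min(dists) <= max_dist:
            remaining.pop(int(np.argmin(dists)))
            tp += 1
    fp, fn = len(pred) - tp, len(remaining)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```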


International Conference on Image Processing | 2013

A fast learning algorithm for image segmentation with max-pooling convolutional networks

Jonathan Masci; Alessandro Giusti; Dan C. Ciresan; Gabriel Fricout; Jürgen Schmidhuber

We present a fast algorithm for training Max-Pooling Convolutional Networks to segment images. This type of network yields record-breaking performance in a variety of tasks, but is normally trained on a computationally expensive patch-by-patch basis. Our new method processes each training image in a single pass, which is vastly more efficient. We validate the approach in different scenarios and report a 1500-fold speed-up. In an application to automated steel defect detection and segmentation, we obtain excellent performance with short training times.
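
The gap between patch-by-patch training and a single-pass variant can be sketched with a fully convolutional forward pass that scores all pixels of a training image at once; this PyTorch toy is only an analogy for the speed-up, not the authors' algorithm, which extends the single-pass idea through max-pooling layers.

```python
import torch
import torch.nn as nn

# Toy fully convolutional pass (illustrative layer sizes): one forward/backward
# per training image, with the segmentation loss computed over all pixels at
# once, instead of running the network separately for each labeled pixel.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                     # two output maps: background / defect
)
image = torch.rand(1, 1, 128, 128)           # one grayscale training image
labels = torch.randint(0, 2, (1, 128, 128))  # per-pixel ground truth

logits = net(image)                          # (1, 2, 128, 128)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
```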


International Conference on Robotics and Automation | 2013

Human-friendly robot navigation in dynamic environments

Jerome Guzzi; Alessandro Giusti; Luca Maria Gambardella; Guy Theraulaz; Gianni A. Di Caro

The vision-based mechanisms that pedestrians in social groups use to navigate in dynamic environments, avoiding obstacles and each other, have been the subject of a large amount of research in social anthropology and the biological sciences. We build on recent results in these fields to develop a novel fully-distributed algorithm for robot local navigation, which implements the same heuristics for mutual avoidance adopted by humans. The resulting trajectories are human-friendly, because they can intuitively be predicted and interpreted by humans, making the algorithm suitable for use on robots sharing navigation spaces with humans. The algorithm is computationally light and simple to implement. We study its efficiency and safety in the presence of sensing uncertainty, and demonstrate its implementation on real robots. Through extensive quantitative simulations we explore various parameters of the system and demonstrate its good properties in scenarios of different complexity. When the algorithm is implemented on robot swarms, we observe emergent collective behaviors similar to those observed in human crowds.
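
The flavor of such a heuristic can be sketched as choosing, among a fan of candidate headings, the one that brings the agent closest to its goal while staying collision-free over a short horizon; the cost function and parameter values below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def choose_heading(pos, goal, obstacles, speed=1.0, horizon=2.0, n_dirs=36, radius=0.3):
    """Pick the heading whose short look-ahead endpoint is closest to the goal
    while staying clear of obstacles (a static snapshot of their positions).

    Parameter values and the cost are illustrative assumptions.
    """
    best, best_cost = None, np.inf
    for ang in np.linspace(-np.pi, np.pi, n_dirs, endpoint=False):
        step = speed * horizon * np.array([np.cos(ang), np.sin(ang)])
        candidate = np.asarray(pos, dtype=float) + step
        if any(np.linalg.norm(candidate - np.asarray(o, dtype=float)) < 2 * radius
               for o in obstacles):
            continue  # this heading would pass too close to an obstacle
        cost = np.linalg.norm(np.asarray(goal, dtype=float) - candidate)
        if cost < best_cost:
            best, best_cost = ang, cost
    return best  # None means no free heading: stop and wait

print(choose_heading(pos=(0, 0), goal=(5, 0), obstacles=[(1.0, 0.1)]))
```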


Human-Robot Interaction | 2014

Human Control of UAVs using Face Pose Estimates and Hand Gestures

Jawad Nagi; Alessandro Giusti; Gianni A. Di Caro; Luca Maria Gambardella

As a first step towards human and multiple-UAV interaction, we present a novel method for humans to interact with airborne UAVs using on-board video cameras. Using machine vision techniques, our approach enables human operators to command and control Parrot drones by giving them directions to move, using simple hand gestures. When a direction to move is given, the robot controller estimates the angle and distance to move with the help of a face score system and the estimated hand direction. This approach offers mobile robots the ability to localize themselves relative to human operators and provides UAVs/UGVs with a better perception of the environment around the human.
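
The face-based range and bearing estimate can be sketched with a pinhole-camera model: the apparent width of the detected face gives distance, and its horizontal offset gives the bearing. The focal length and average face width below are illustrative assumptions.

```python
import math

def face_relative_pose(bbox, image_width, focal_px=700.0, face_width_m=0.16):
    """Estimate range and bearing of the operator from a detected face box.

    Pinhole-model sketch of the idea in the abstract (face size -> distance,
    horizontal offset -> angle); the focal length and average face width are
    illustrative assumptions, and bbox = (x, y, w, h) is given in pixels.
    """
    x, y, w, h = bbox
    distance = focal_px * face_width_m / w                    # similar triangles
    face_center_x = x + w / 2.0
    bearing = math.atan2(face_center_x - image_width / 2.0, focal_px)
    return distance, bearing                                  # bearing > 0: operator to the right

print(face_relative_pose((300, 120, 80, 80), image_width=640))
```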


Intelligent Robots and Systems | 2014

Human-swarm interaction using spatial gestures

Jawad Nagi; Alessandro Giusti; Luca Maria Gambardella; Gianni A. Di Caro

This paper presents a machine vision based approach for human operators to select individuals and groups of autonomous robots from a swarm of UAVs. The angular distance between the robots and the human is estimated using measures of the detected human face, which helps determine human and multi-UAV localization and positioning. In turn, this is exploited to let the human effectively and naturally select spatially situated robots. Spatial gestures for selecting robots are presented by the human operator using tangible input devices (i.e., colored gloves). To select individuals and groups of robots, we formulate a vocabulary of two-handed spatial pointing gestures. With the use of a Support Vector Machine (SVM) trained in a cascaded multi-binary-class configuration, the spatial gestures are effectively learned and recognized by a swarm of UAVs.
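
A simplified stand-in for the multi-binary-class decomposition is scikit-learn's one-vs-rest wrapper around binary SVMs; the cascade described in the paper is a different arrangement of the same building block, and the feature vectors and class count below are placeholders.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Placeholder gesture descriptors and four hypothetical pointing-gesture classes.
X = np.random.rand(200, 64)
y = np.random.randint(0, 4, 200)

# One binary SVM per class; a cascaded configuration would chain such binary
# decisions instead of evaluating them independently.
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:5]))
```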


International Conference on Machine Learning and Applications | 2012

Convolutional Neural Support Vector Machines: Hybrid Visual Pattern Classifiers for Multi-robot Systems

Jawad Nagi; Gianni A. Di Caro; Alessandro Giusti; Farrukh Nagi; Luca Maria Gambardella

We introduce Convolutional Neural Support Vector Machines (CNSVMs), a combination of two heterogeneous supervised classification techniques, Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs). CNSVMs are trained using a Stochastic Gradient Descent approach that provides the computational capability of online incremental learning and is robust for typical learning scenarios in which training samples arrive in mini-batches. This is the case for visual learning and recognition in multi-robot systems, where each robot acquires a different image of the same sample. The experimental results indicate that the CNSVM can be successfully applied to visual learning and recognition of hand gestures as well as to measuring learning progress.
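
A hedged PyTorch sketch of the CNSVM idea: convolutional features topped by an SVM-style multi-class hinge loss, updated by mini-batch SGD so that newly arriving samples (e.g. one image per robot) can be folded in incrementally. Layer sizes, input resolution, and the six gesture classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Convolutional features with an SVM-style hinge loss on top; all sizes assumed.
model = nn.Sequential(
    nn.Conv2d(3, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 13 * 13, 6),          # 6 gesture classes, 64x64 input assumed
)
hinge = nn.MultiMarginLoss()             # multi-class SVM hinge loss
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def update(images, labels):
    """One incremental SGD step as a new mini-batch arrives."""
    opt.zero_grad()
    loss = hinge(model(images), labels)
    loss.backward()
    opt.step()
    return loss.item()

update(torch.rand(8, 3, 64, 64), torch.randint(0, 6, (8,)))
```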

Collaboration


Dive into Alessandro Giusti's collaborations.

Top Co-Authors

Luca Maria Gambardella (Dalle Molle Institute for Artificial Intelligence Research)
Gianni A. Di Caro (Dalle Molle Institute for Artificial Intelligence Research)
Jerome Guzzi (Dalle Molle Institute for Artificial Intelligence Research)
Jawad Nagi (Dalle Molle Institute for Artificial Intelligence Research)
Jürgen Schmidhuber (Dalle Molle Institute for Artificial Intelligence Research)
Giorgio Corani (Dalle Molle Institute for Artificial Intelligence Research)
Dan C. Ciresan (Dalle Molle Institute for Artificial Intelligence Research)
Andrea Emilio Rizzoli (Dalle Molle Institute for Artificial Intelligence Research)
Matteo Salani (Dalle Molle Institute for Artificial Intelligence Research)