Publication


Featured research published by Davide Valeriani.


PLOS ONE | 2014

Collaborative brain-computer interface for aiding decision-making.

Riccardo Poli; Davide Valeriani; Caterina Cinel

We look at the possibility of integrating the percepts from multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach involves the combination of a brain-computer interface with human behavioural responses. To test ideas in controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we used a majority rule and three rules which weigh the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies for improving decision making.
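The confidence-weighted voting idea at the heart of this line of work can be sketched as follows (a minimal illustration, not the authors' implementation; the vote encoding and confidence values are invented for the example):

```python
import numpy as np

def majority_decision(votes):
    """Group decision by simple majority: votes are +1 ('same') or -1 ('different')."""
    return int(np.sign(np.sum(votes)))

def confidence_weighted_decision(votes, confidences):
    """Group decision where each vote is scaled by an estimate of the voter's
    decision confidence (e.g. derived from response times and neural features).
    Higher confidence gives a vote more influence on the group decision."""
    votes = np.asarray(votes, dtype=float)
    weights = np.asarray(confidences, dtype=float)
    return int(np.sign(np.sum(weights * votes)))

# Three observers: two low-confidence votes for 'same', one high-confidence
# vote for 'different'. Majority and confidence weighting disagree.
votes = [+1, +1, -1]
confidences = [0.2, 0.3, 0.9]
print(majority_decision(votes))                           # 1
print(confidence_weighted_decision(votes, confidences))   # -1
```

This makes the mechanism concrete: a single confident observer can overturn a majority of unconfident ones, which is exactly the situation in which weighted rules outperform the plain majority rule.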


International IEEE/EMBS Conference on Neural Engineering | 2015

A collaborative Brain-Computer Interface to improve human performance in a visual search task

Davide Valeriani; Riccardo Poli; Caterina Cinel

In this paper we use a collaborative brain-computer interface to integrate the decision confidence of multiple non-communicating observers as a mechanism to improve group decisions. In recent research we tested this idea with the decisions associated with a simple visual matching task and found that a collaborative BCI can outperform group decisions made by a majority vote. Here we extend these initial findings in two ways. Firstly, we look at a more traditional (and more difficult) visual search task involving deciding whether a red vertical bar is present in a random set of 40 red and green, horizontal and vertical bars shown for a very short time. Secondly, to extract features from the neural signals we use spatial CSP filters instead of the spatio-temporal PCA we used in previous research, resulting in a significant reduction in the number of features and free parameters used in the system. Results obtained with 10 participants indicate that for almost all group sizes our new CSP-based collaborative BCI yields group decisions that are statistically significantly better than both traditional (majority-based) group decisions and group decisions made by a PCA-based collaborative BCI.
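As a rough illustration of the feature-extraction step mentioned above, the standard two-class CSP computation can be written in plain NumPy via whitening followed by an eigendecomposition (a sketch assuming (channels × samples) trials; the paper's actual pipeline, regularisation and filter count may differ):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common Spatial Patterns: spatial filters maximising the variance of
    class A while minimising that of class B (and vice versa).
    trials_*: lists of (channels x samples) EEG trials."""
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Whiten the composite covariance, then diagonalise the whitened cov_a.
    d, V = np.linalg.eigh(cov_a + cov_b)
    W = V @ np.diag(d ** -0.5) @ V.T             # whitening matrix
    vals, U = np.linalg.eigh(W @ cov_a @ W.T)    # eigenvalues in ascending order
    filters = U.T @ W                            # one spatial filter per row
    # The extreme eigenvalues carry the most class-discriminative filters.
    return np.vstack([filters[:n_filters], filters[-n_filters:]])

def csp_features(trial, filters):
    """Normalised log-variance of the filtered trial: a compact feature
    vector, far smaller than spatio-temporal PCA features."""
    var = np.var(filters @ trial, axis=1)
    return np.log(var / var.sum())
```

With, say, 64 EEG channels and two filters per end of the spectrum, each trial reduces to a 4-dimensional feature vector, which is the dimensionality reduction the abstract credits CSP with.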


International IEEE/EMBS Conference on Neural Engineering | 2015

A collaborative Brain-Computer Interface for improving group detection of visual targets in complex natural environments

Davide Valeriani; Riccardo Poli; Caterina Cinel

Detecting a target in a complex environment can be a difficult task, both for a single individual and for a group, especially if the scene is very rich in structure and there are strict time constraints. In recent research, we have demonstrated that collaborative Brain-Computer Interfaces (cBCIs) can use neural signals and response times to estimate the decision confidence of participants and use this to improve group decisions in visual-matching and visual-search tasks with artificial stimuli. This paper extends that work in two ways. Firstly, we use a much harder target-detection task where observers are presented with complex natural scenes in which targets are very difficult to identify. Secondly, we complement the neural and behavioural information used in our previous cBCIs with physiological features representing eye movements and eye blinks of participants in the period preceding their decisions. Results obtained with 10 participants indicate that the proposed cBCI reduces decision errors by up to 3.4% (depending on group size) over group decisions made by a majority vote. Furthermore, results show that providing the system with information about eye movements and blinks further significantly improves performance over our best previously reported method.


Genetic and Evolutionary Computation Conference | 2013

Segmentation of histological images using a metaheuristic-based level set approach

Pablo Mesejo; Stefano Cagnoni; Davide Valeriani

This paper presents a two-phase method to segment the hippocampus in histological images. The first phase represents a training stage where, from a training set of manually labelled images, the hippocampus representative shape and texture are derived. The second one, the proper segmentation, uses a metaheuristic to evolve the contour of a geometric deformable model using region and texture information. Three different metaheuristics (real-coded GA, Particle Swarm Optimization and Differential Evolution) and two classical segmentation algorithms (Chan & Vese model and Geodesic Active Contours) were compared over a test set of 10 histological images. The best results were attained by the real-coded GA, achieving an average and median Dice Coefficient of 0.72 and 0.77, respectively.
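The Dice Coefficient used for evaluation above is straightforward to compute; a minimal NumPy version (illustrative, using a toy 2×3 binary mask rather than real histological segmentations):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity between a binary segmentation and its ground truth:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(seg, gt).sum()
    total = seg.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / total if total else 1.0

seg = np.array([[1, 1, 0], [0, 1, 0]])
gt  = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(seg, gt))  # 2*2/(3+3) = 0.666...
```

A value of 0.72-0.77, as reported for the real-coded GA, thus indicates that roughly three quarters of the segmented and reference hippocampus pixels overlap.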


Computer Science and Electronic Engineering Conference | 2015

Towards a wearable device for controlling a smartphone with eye winks

Davide Valeriani; Ana Matran-Fernandez

The development of mobile technology over the last years and the consequent boom of available apps has enabled users to migrate a wide range of activities that were traditionally performed on computers to their smartphones. Despite this new freedom to work ubiquitously, there are circumstances in which operating the device becomes difficult, e.g., when the hands are not free due to driving or other activities. Even though there are voice-control alternatives for operating smartphones, these do not perform well in crowded or noisy environments. In this paper we present EyeWink: an innovative hands- and voice-free wearable device that allows users to operate the smartphones with eye winks. The system records the Electrooculography (EOG) signals on the forehead by means of two facial electrodes. Eye winks are detected by comparing the potentials recorded from the electrodes, which also helps avoid false actuations due to (unavoidable) eye blinks. The user can associate the action to perform with each eye by means of an app installed on the smartphone. The proposed device can be widely used, with customers ranging from runners to people with severe disabilities.
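The wink-versus-blink discrimination described above rests on the asymmetry between the two electrode channels: a blink deflects both similarly, a wink is strongly lateralised. A toy threshold-based sketch (the function, thresholds and lateralisation index are hypothetical, not the EyeWink implementation):

```python
import numpy as np

def classify_eye_event(left_eog, right_eog, amp_thresh=100.0, asym_thresh=0.5):
    """Toy wink detector over two forehead EOG channels (arbitrary units).
    Returns 'left wink', 'right wink', 'blink', or None if no event."""
    l = np.max(np.abs(left_eog))
    r = np.max(np.abs(right_eog))
    if max(l, r) < amp_thresh:        # deflection too small: no ocular event
        return None
    asym = (l - r) / max(l, r)        # lateralisation index in [-1, 1]
    if asym > asym_thresh:
        return "left wink"
    if asym < -asym_thresh:
        return "right wink"
    return "blink"                    # large but symmetric: an eye blink
```

Rejecting the symmetric case is what prevents false actuations from unavoidable blinks, which the abstract highlights as a key property of the electrode comparison.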


International IEEE/EMBS Conference on Neural Engineering | 2017

Augmenting group performance in target-face recognition via collaborative brain-computer interfaces for surveillance applications

Davide Valeriani; Caterina Cinel; Riccardo Poli

This paper presents a hybrid collaborative brain-computer interface (cBCI) to improve group-based recognition of target faces in crowded scenes recorded from surveillance cameras. The cBCI uses a combination of neural features extracted from EEG and response times to estimate the decision confidence of the users. Group decisions are then obtained by weighing individual responses according to these confidence estimates. Results obtained with 10 participants indicate that the proposed cBCI reduces decision errors by up to 7% over traditional group decisions based on majority. Moreover, the confidence estimates obtained by the cBCI are more accurate and robust than the confidence reported by the participants after each decision. These results show that cBCIs can be an effective means of human augmentation in realistic scenarios.


Journal of Automation, Mobile Robotics and Intelligent Systems | 2014

Lessons learned in a ball fetch-and-carry robotic competition

Dario Lodi Rizzini; Stefano Caselli; Davide Valeriani; Andrea Signifredi; Isabella Salsi; Marco Patander; Federico Parisi; Marco Cigolini

Robot competitions are effective means to learn the issues of autonomous systems on the field, by solving a complex problem end-to-end. In this paper, we illustrate Red Beard Button, the robotic system that we developed for the Sick Robot Day 2012 competition, and we highlight notions about design and implementation of robotic systems acquired through this experience. The aim of the contest was to detect, fetch and carry balls with an assigned color to a dropping area, similarly to a foraging navigation task. The developed robotic system was required to perceive colored balls, to grasp and transport balls, and to localize itself and navigate to assigned areas. Through extensive experiments the team developed an initial prototype, discovered pitfalls, revised the initial assumptions and design decisions, and took advantage of the iteration process to perform successfully at the competition.


bioRxiv | 2018

Cyborg Groups Enhance Face Recognition in Crowded Environments

Davide Valeriani; Riccardo Poli

Recognizing a person in a crowded environment is a challenging, yet critical, visual-search task for both humans and machine-vision algorithms. This paper explores the possibility of combining a residual neural network (ResNet), brain-computer interfaces (BCIs) and human participants to create "cyborgs" that improve decision making. Human participants and a ResNet undertook the same face-recognition experiment. BCIs were used to decode the decision confidence of humans from their EEG signals. Different types of cyborg groups were created, including either only humans (with or without the BCI) or groups of humans and the ResNet. Cyborg group decisions were obtained by weighing individual decisions by confidence estimates. Results show that groups of cyborgs are significantly more accurate (up to 35%) than the ResNet, the average participant, and equally-sized groups of humans not assisted by technology. These results suggest that melding humans, BCI, and machine-vision technology could significantly improve decision-making in realistic scenarios.


Scientific Reports | 2017

Group Augmentation in Realistic Visual-Search Decisions via a Hybrid Brain-Computer Interface

Davide Valeriani; Caterina Cinel; Riccardo Poli

Groups have increased sensing and cognition capabilities that typically allow them to make better decisions. However, factors such as communication biases and time constraints can lead to less-than-optimal group decisions. In this study, we use a hybrid Brain-Computer Interface (hBCI) to improve the performance of groups undertaking a realistic visual-search task. Our hBCI extracts neural information from EEG signals and combines it with response times to build an estimate of the decision confidence. This is used to weigh individual responses, resulting in improved group decisions. We compare the performance of hBCI-assisted groups with the performance of non-BCI groups using standard majority voting, and non-BCI groups using weighted voting based on reported decision confidence. We also investigate the impact on group performance of a computer-mediated form of communication between members. Results across three experiments suggest that the hBCI provides significant advantages over non-BCI decision methods in all cases. We also found that our form of communication increases individual error rates by almost 50% compared to non-communicating observers, which also results in worse group performance. Communication also makes reported confidence uncorrelated with decision correctness, thereby nullifying its value in weighing votes. In summary, the best decisions are achieved by hBCI-assisted, non-communicating groups.


IEEE Transactions on Biomedical Engineering | 2017

Enhancement of Group Perception via a Collaborative Brain–Computer Interface

Davide Valeriani; Riccardo Poli; Caterina Cinel
