Publication


Featured research published by Alberto Fornaser.


International Conference on Augmented Reality, Virtual Reality and Computer Graphics | 2017

Augmented Reality to Enhance the Clinician’s Observation During Assessment of Daily Living Activities

M. De Cecco; Alberto Fornaser; P. Tomasin; Matteo Zanetti; G. Guandalini; P. G. Ianes; F. Pilla; Giandomenico Nollo; M. Valente; T. Pisoni

In rehabilitation medicine, and in occupational therapy (OT) in particular, the assessment tool is essentially the human eye: the clinician observes the person performing activities of daily living to evaluate his/her level of independence, efficacy, effort, and safety, in order to design an individualized treatment program. By contrast, other clinical settings have very sophisticated diagnostic tools such as Computed Axial Tomography, 3D ultrasound, Functional Magnetic Resonance Imaging, Positron Emission Tomography and many others. It is now possible to fill this gap in rehabilitation using enabling technologies currently undergoing rapid growth, which make it possible to provide the rehabilitator, in addition to the evidence of the human eye, with a large amount of data describing the person's motion in 3D, the interaction with the environment (forces, contact pressure maps, motion parameters related to the manipulation of objects, etc.), and 'internal' parameters (heart rate, blood pressure, respiratory rate, sweating, etc.). This information can be fed back to the clinician in an animation that represents reality augmented with all the above parameters, using Augmented Reality (AR) methodologies. The main benefit of this new interaction methodology is twofold: the animated scenarios contain all the relevant parameters simultaneously, and the related data are well defined and contextualized. This new methodology changes rehabilitative evaluation: on one hand it increases the objectivity and effectiveness of clinical observation, and on the other it allows more reliable assessment scales and more effective, more user-centered rehabilitation programs to be defined.


International Conference on Augmented Reality, Virtual Reality and Computer Graphics | 2016

Development of Innovative HMI Strategies for Eye Controlled Wheelchairs in Virtual Reality

Luca Maule; Alberto Fornaser; Malvina Leuci; Nicola Conci; Mauro Da Lio; Mariolino De Cecco

This paper focuses on the development of a gaze-based control strategy for semiautonomous wheelchairs. Starting from the information gathered by an eye tracker, the work aims to develop a novel paradigm of Human Computer Interaction (HCI) by means of a Virtual Reality (VR) environment, where specific motion metrics are evaluated.


Proceedings of the 12th International A.I.VE.LA. Conference on Vibration Measurements by Laser and Noncontact Techniques: Advances and Applications | 2016

Inter-eye: Interactive error compensation for eye-tracking devices

Mariolino De Cecco; Matteo Zanetti; Alberto Fornaser; Malvina Leuci; Nicola Conci

This paper presents a new method for the compensation of systematic errors in modern eye-tracking devices. Systematic errors, together with repeatability errors, limit the use of eye trackers in several applications, such as moving through an indoor environment or enabling the user to indicate a target precisely with his/her eyes alone. The new method relies on an interactive procedure that lets the system accurately estimate the systematic effect in a few seconds and thus compensate for it quickly and accurately. Results show that, for a low-cost device on a 17-inch screen, the uncertainty can be dramatically decreased from 100 pixels to approximately 15 pixels.
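The compensation the abstract describes, estimating a systematic effect from a short interactive procedure and then removing it, can be illustrated as a per-user bias estimated from a handful of fixations on known targets. The sketch below is a minimal illustration of that idea, not the authors' actual algorithm; the function names and the constant-offset error model are assumptions.

```python
import numpy as np

def estimate_bias(targets_px, gaze_px):
    """Estimate the systematic gaze error as the mean offset between known
    on-screen target positions and the raw gaze samples recorded while the
    user fixates each target (both arrays are N x 2, in pixels)."""
    return (np.asarray(gaze_px, float) - np.asarray(targets_px, float)).mean(axis=0)

def compensate(raw_gaze_px, bias):
    """Subtract the estimated systematic offset from new raw gaze samples."""
    return np.asarray(raw_gaze_px, float) - bias

# Example: a short interactive pass over three targets on a 17-inch screen.
targets  = [(200, 200), (640, 512), (1080, 820)]
recorded = [(295, 232), (738, 546), (1172, 851)]   # raw gaze, offset by roughly (95, 32)
bias = estimate_bias(targets, recorded)
print(compensate((700, 500), bias))                # bias-corrected gaze point
```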


3rd International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 16-17 October 2012 | 2012

Low-Cost Garment-Based 3D Body Scanner

Nicolò Biasi; Francesco Setti; Mattia Tavernini; Alberto Fornaser; Massimo Lunardelli; Mauro Da Lio; Mariolino De Cecco

While in the last decade laser-based technologies became the reference for 3D body scanning, vision-based technologies became more and more important for motion capture. We propose a mixed approach, based on passive vision technology, able to scan 3D human bodies as well as capture motion. In this paper we present a low-cost multi-camera body scanning system based on a special wearable garment. The garment is a black suit with about 3000 colored markers uniformly distributed on a square grid over the body of the subject. Each colored marker is uniquely identified by an ID based on the color of the marker itself and its 8 neighbors. The sequence of 9 colors was designed to be unique (no repetitions), independent of rotation (a unique starting marker from which to read the color sequence), and non-symmetrical (the color sequence is unique whether read clockwise or counter-clockwise). This allows thousands of points to be handled with no matching outliers. The system is composed of 2 to 12 RGB cameras, allowing single-limb or full-body scanning. System calibration is performed in a single-step, semi-automatic procedure. Synchronous image acquisition is guaranteed by a triggering device, avoiding problems due to the subject moving during acquisition. Uniform illumination is provided by 6 neon lamps with high-frequency electrical ballast. With this system it is possible to achieve an accuracy of 1 mm on a planar surface.
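The marker identification scheme lends itself to a short sketch: the ID combines the center color with a rotation-invariant canonical form of the 8 neighbor colors, so the same physical marker is recognized regardless of camera orientation. The snippet below is an illustration under those assumptions; the actual palette, grid layout, and the uniqueness/asymmetry checks used to print the suit are not reproduced.

```python
def canonical_marker_id(center, ring):
    """Rotation-invariant ID for a marker: its own color plus the
    lexicographically smallest rotation of its 8 neighbor colors read
    clockwise. Choosing the smallest rotation fixes a unique starting
    marker, so the ID is independent of camera orientation. Colors are
    small integers indexing the palette printed on the suit (assumed)."""
    ring = list(ring)
    rotations = (tuple(ring[i:] + ring[:i]) for i in range(len(ring)))
    return (center,) + min(rotations)

# The same neighborhood seen from two orientations (ring shifted by 3
# positions) yields the same ID; matching outliers are avoided because
# the printed color sequences are designed to be unique.
ring_a = [2, 0, 1, 3, 0, 2, 1, 1]
ring_b = ring_a[3:] + ring_a[:3]
assert canonical_marker_id(1, ring_a) == canonical_marker_id(1, ring_b)
```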


Robotics and Autonomous Systems | 2017

Automatic graph based spatiotemporal extrinsic calibration of multiple Kinect V2 ToF cameras

Alberto Fornaser; P. Tomasin; M. De Cecco; Mattia Tavernini; Matteo Zanetti

This paper presents a novel technique for the automatic calibration of motion capture systems composed of multiple Time of Flight (ToF) cameras, or any equivalent device able to provide 3D point clouds together with chromatic information. The calibration procedure is designed to be simple, fast and unsupervised: a colored sphere moved freely by hand is used as the calibration tool. Three elements in particular distinguish the proposed method from other state-of-the-art solutions: it takes into account and propagates measurement uncertainties, it allows the identification and recovery of systematic time delays in camera synchronization, and it optimizes the extrinsic parameters by means of a graph-based approach. A widely used ToF camera, the Microsoft Kinect V2, was used for the comparison with state-of-the-art multi-ToF calibration techniques. Experimental results demonstrated that the proposed method is the most accurate both in terms of shape reconstruction and spatial consistency. Furthermore, the experiments showed that applying the global graph optimization, rather than a more standard pairwise unweighted matching, achieves a higher accuracy in the estimation of the extrinsic parameters, thus reducing both systematic errors and data dispersion in 3D point clouds of reference shapes. The comparison between acquisition systems composed of 3 and 6 devices showed that graph optimization becomes relevant when the number of devices grows. Across several 3D configurations calibrated with the proposed method, the uncertainty of the extrinsic parameters is generally lower than 2 mm and 10⁻² rad (0.6°). The resulting dense 3D reconstruction of objects, both static and in motion, achieved experimental errors lower than 10 mm, approximately half those of other state-of-the-art methods and, to a first approximation, homogeneous over the entire monitored area. The developed software can be found on the site of the MIRo lab (Measurement Instrumentation and Robotics lab) at link 1 under Creative Commons Attribution conditions.
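A building block of the calibration described here is the pairwise alignment of synchronized sphere-center trajectories seen by two cameras, which yields a rigid transform per camera pair; such pairwise estimates form the edges that the paper's graph optimization then weights by uncertainty and refines jointly. The sketch below shows only the pairwise step using the standard Kabsch/SVD alignment, as an illustration under stated assumptions rather than the paper's implementation.

```python
import numpy as np

def rigid_align(src, dst):
    """Rigid transform (R, t) mapping 3D points in one camera frame (src,
    N x 3) onto corresponding points in another camera frame (dst, N x 3),
    via the Kabsch/SVD method. In this setting the points would be
    synchronized centers of the hand-moved calibration sphere."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: recover a known 90-degree rotation and translation.
rng = np.random.default_rng(0)
pts_cam1 = rng.uniform(-1.0, 1.0, size=(50, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
pts_cam2 = pts_cam1 @ R_true.T + t_true
R_est, t_est = rigid_align(pts_cam1, pts_cam2)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```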


Journal of Physics: Conference Series | 2017

Eye tracker uncertainty analysis and modelling in real time

Alberto Fornaser; M. De Cecco; Malvina Leuci; Nicola Conci; M. Daldoss; A. Armanini; Luca Maule; F.G.B. De Natale; M. Da Lio

Techniques for tracking the eyes have been developed over several decades for applications ranging from military use to education, entertainment and clinics. Existing systems generally fall into two categories: precise but intrusive, or comfortable but less accurate. The idea of this work is to calibrate an eye tracker of the second category. In particular, we estimated the uncertainty both in nominal and in variable operating conditions. We took into consideration different influencing factors, such as head movement and rotation, the eyes detected, the target position on the screen, illumination, and objects in front of the eyes. Results showed that the 2D uncertainty can be modelled as a circular confidence region, since there are no stable principal directions in either the systematic or the repeatability effects. This confidence region was also modelled as a function of the current working conditions. In this way we obtain an uncertainty value that is a function of the operating conditions estimated in real time, opening the field to new applications that reconfigure the human-machine interface accordingly. Examples range from reshaping option buttons, dynamically adjusting local zoom, and regulating interface responsiveness, to taking into account the uncertainty associated with a particular interaction. Furthermore, in the analysis of visual scanning patterns, the resulting Point of Regard maps would be associated with proper confidence levels, thus allowing accurate conclusions to be drawn. We conducted an experimental campaign to estimate and validate the overall modelling procedure, obtaining valid results in 86% of the cases.
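The circular confidence region mentioned in the abstract can be illustrated by the radius that covers a chosen fraction of the observed 2D gaze errors, computed separately for each operating condition so the interface can pick the matching radius at run time. The snippet below is a minimal sketch under those assumptions; the coverage level, condition labels, and synthetic error samples are illustrative.

```python
import numpy as np

def circular_confidence_radius(errors_px, coverage=0.95):
    """Radius of the circle, centered on the mean 2D gaze error, that
    contains the requested fraction of the observed errors (N x 2, pixels).
    A circle is used because no stable principal direction was found in the
    systematic and repeatability effects."""
    errors_px = np.asarray(errors_px, float)
    radii = np.linalg.norm(errors_px - errors_px.mean(axis=0), axis=1)
    return float(np.quantile(radii, coverage))

# Radii per operating condition (labels and error samples are made up); an
# interface estimating the condition in real time can look up the matching
# confidence region and, for example, enlarge option buttons accordingly.
rng = np.random.default_rng(42)
radius_by_condition = {
    "nominal":      circular_confidence_radius(rng.normal(0.0, 15.0, size=(500, 2))),
    "head_rotated": circular_confidence_radius(rng.normal(0.0, 35.0, size=(500, 2))),
}
print(radius_by_condition)
```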


International Conference on Augmented Reality, Virtual Reality and Computer Graphics | 2017

Augmented Robotics for Electronic Wheelchair to Enhance Mobility in Domestic Environment

Luca Maule; Alberto Fornaser; Paolo Tomasin; Mattia Tavernini; Gabriele Minotto; Mauro Da Lio; Mariolino De Cecco

This paper focuses on the development of a novel Human Machine Interaction strategy based on Augmented Reality for the semi-autonomous navigation of a power wheelchair. The final goal is a shared control scheme, combining direct control by the user with the comfort of autonomous navigation based on augmented reality markers. A first evaluation has been performed on the real test bed.
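One way to read the shared-control goal is as a weighted blend of the user's command and the command of the autonomous navigation triggered by a recognized augmented reality marker. The sketch below only illustrates that blending idea with assumed names and a made-up weighting policy; the paper does not specify this formulation.

```python
def blend_commands(user_cmd, auto_cmd, alpha):
    """Shared-control blend of the user's joystick command and the
    autonomous planner's command, each a (linear, angular) velocity pair.
    alpha in [0, 1] weights the autonomous contribution; it could, for
    instance, be raised when an AR marker anchoring a known manoeuvre is
    detected (an assumed policy, not the paper's)."""
    return tuple((1.0 - alpha) * u + alpha * a for u, a in zip(user_cmd, auto_cmd))

# Example: the user pushes forward while the planner steers toward a doorway marker.
print(blend_commands(user_cmd=(0.4, 0.0), auto_cmd=(0.3, 0.35), alpha=0.7))
```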


International Symposium ELMAR | 2016

The use of INTER-EYE for 3D eye-tracking systematic error compensation

Matteo Zanetti; Mariolino De Cecco; Alberto Fornaser; Malvina Leuci; Nicola Conci

One of the most popular techniques for eye-gaze tracking is pupil-corneal reflection (PCR). It is a low-cost and non-invasive approach that uses infrared lights to illuminate the user's eye and a camera to capture the resulting images. The major problems of this technology are repeatability errors and the lack of accuracy. In this paper, we propose the use of an interactive method for reducing the systematic effects of PCR devices, and we evaluate how this method affects the selection of a point in a 3D scenario acquired with a Microsoft Kinect. Results show that, on a 17-inch screen with a resolution of 1280×1024 pixels, the systematic effects can be reduced from 100 to approximately 15 pixels, and that the uncertainty also decreases in the 3D case. This could be a good starting point for using PCR technology in applications where an accurate estimate of a position, also in 3D, is fundamental.
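The step from a bias-compensated gaze pixel to a point in the 3D scene acquired by the Kinect can be sketched as a standard pinhole back-projection through the depth map. The intrinsics and values below are placeholders rather than the Kinect's actual calibration, and the function is an illustration, not the paper's procedure.

```python
import numpy as np

def gaze_pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a (bias-compensated) gaze pixel (u, v) into a 3D point
    in the depth camera frame, given the depth at that pixel in metres and
    pinhole intrinsics (focal lengths fx, fy and principal point cx, cy)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with made-up intrinsics for a 1280x1024 image: a smaller gaze error
# in pixels maps to a proportionally smaller error in the selected 3D point.
point = gaze_pixel_to_3d(700, 540, depth_m=1.8, fx=1050.0, fy=1050.0, cx=640.0, cy=512.0)
print(point)
```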


Machine Vision and Applications | 2015

Garment-based motion capture (GaMoCap): high-density capture of human shape in motion

Nicolò Biasi; Francesco Setti; Alessio Del Bue; Mattia Tavernini; Massimo Lunardelli; Alberto Fornaser; Mauro Da Lio; Mariolino De Cecco


2018 Workshop on Metrology for Industry 4.0 and IoT | 2018

An Augmented Reality Virtual Assistant to Help Mild Cognitive Impaired Users in Cooking: A System Able to Recognize the User Status and Personalize the Support

J. Dagostini; L. Bonetti; A. Salee; L. Passerini; G. Fiacco; P. Lavanda; E. Motti; M. Stocco; K. T. Gashay; E.G. Abebe; S. M. Alemu; R. Haghani; A. Voltolini; C. Strobbe; N. Covre; G. Santolini; M. Armellini; T. Sacchi; D. Ronchese; C. Furlan; F. Facchinato; Luca Maule; P. Tomasin; Alberto Fornaser; M. De Cecco
