Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Giovanni Bellusci is active.

Publication


Featured research published by Giovanni Bellusci.


Sensors | 2016

Estimation of Ground Reaction Forces and Moments During Gait Using Only Inertial Motion Capture

Angelos Karatsidis; Giovanni Bellusci; H. Martin Schepers; Mark de Zee; Michael Skipper Andersen; Peter H. Veltink

Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which are often used to infer information about musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily-life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully-ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%) and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse GRF&M (ρ = 0.826, rRMSE = 18.2%). Sensitivity analysis was performed on the effect of the cut-off frequency used in the filtering of the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing accuracy comparable to optical motion capture-based prediction. This approach enables applications that require estimation of the kinetics during walking outside the gait laboratory.
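
The double-stance distribution step lends itself to a short illustration. Below is a minimal sketch of a smooth transition weighting, assuming a smoothstep shape function and known double-stance boundary times; the paper's exact transition function may differ, and moments are handled analogously.

```python
import numpy as np

def distribute_grf(f_total, t, t_ds_start, t_ds_end):
    """Split the total ground reaction force between the trailing and
    leading foot during double stance.

    Assumes a smoothstep transition (a hypothetical choice): the trailing
    foot's share decays smoothly from 1 to 0 over the double-stance
    interval, and the leading foot takes the remainder.
    """
    # Normalized time within double stance, clipped to [0, 1]
    s = np.clip((t - t_ds_start) / (t_ds_end - t_ds_start), 0.0, 1.0)
    # Smoothstep weight: 1 at leading-foot initial contact, 0 at trailing-foot toe-off
    w_trailing = 1.0 - (3.0 * s**2 - 2.0 * s**3)
    f_trailing = w_trailing * f_total
    f_leading = f_total - f_trailing
    return f_trailing, f_leading
```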


Robotics | 2014

IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning

Jacky C. K. Chow; Derek D. Lichti; Jeroen D. Hol; Giovanni Bellusci; Henk Luinge

Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
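
For intuition, the geometric core of point-to-plane ICP can be summarized in a few lines. The sketch below shows one standard Gauss-Newton step on matched point pairs; this is a simplification, since the paper embeds the alignment in an implicit iterated extended Kalman filter and minimizes the reprojection error of the camera-projector pair rather than this plain geometric cost.

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One Gauss-Newton step of point-to-plane ICP.

    Finds a small rotation (omega) and translation (t) minimizing
    sum(((R @ p + t - q) . n)^2), with the rotation linearized as
    R ~ I + skew(omega). src/dst are (N, 3) matched points, normals (N, 3).
    """
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6) Jacobian rows
    b = np.einsum('ij,ij->i', dst - src, normals)      # (N,) plane distances
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    omega, t = x[:3], x[3:]
    # First-order reconstruction of the rotation from the small-angle update
    K = np.array([[0, -omega[2], omega[1]],
                  [omega[2], 0, -omega[0]],
                  [-omega[1], omega[0], 0]])
    R = np.eye(3) + K
    return R, t
```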


Sensors | 2016

Estimation of full-body poses using only five inertial sensors: an eager or lazy learning approach?

Frank J. Wouda; Matteo Giuberti; Giovanni Bellusci; Petrus H. Veltink

Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, thereby allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups by using data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible, with an average joint position error of approximately 7 cm and an average joint angle error of 7°. Additionally, the effect of magnetic disturbances, which are typical in orientation tracking, on the estimation of full-body poses was investigated; nearest neighbor search showed better performance under such disturbances.
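
The lazy-learning variant essentially reduces to a database lookup. A minimal sketch, with random arrays standing in for a real motion capture database and hypothetical feature and pose dimensions (not the paper's):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Stand-ins for a motion capture database; dimensions are illustrative only.
rng = np.random.default_rng(0)
train_features = rng.random((10000, 20))   # e.g., orientation features of 5 sensors
train_poses = rng.random((10000, 69))      # e.g., 23 joints x 3 angles, flattened

nn = NearestNeighbors(n_neighbors=5).fit(train_features)

def estimate_pose(query_features):
    """Lazy-learning estimate: average the full-body poses of the k
    closest training frames in sensor-feature space."""
    _, idx = nn.kneighbors(query_features.reshape(1, -1))
    return train_poses[idx[0]].mean(axis=0)
```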


Frontiers in Physiology | 2018

Estimation of vertical ground reaction forces and sagittal knee kinematics during running using three inertial sensors

Frank J. Wouda; Matteo Giuberti; Giovanni Bellusci; Erik Maartens; Jasper Reenalda; Bernhard J.F. van Beijnum; Peter H. Veltink

Analysis of running mechanics has traditionally been limited to a gait laboratory using either force plates or an instrumented treadmill in combination with a full-body optical motion capture system. With the introduction of inertial motion capture systems, it becomes possible to measure kinematics in any environment. However, such technology cannot provide kinetic information. Furthermore, numerous body-worn sensors are required for a full-body motion analysis. The aim of this study is to examine the validity of a method to estimate sagittal knee joint angles and vertical ground reaction forces during running using an ambulatory minimal body-worn sensor setup. Two concatenated artificial neural networks were trained (using data from eight healthy subjects) to estimate the kinematics and kinetics of the runners. The first artificial neural network maps the information (orientation and acceleration) of three inertial sensors (placed at the lower legs and pelvis) to lower-body joint angles. The estimated joint angles in combination with measured vertical accelerations are input to a second artificial neural network that estimates vertical ground reaction forces. To validate our approach, estimated joint angles were compared to both inertial and optical references, while kinetic output was compared to measured vertical ground reaction forces from an instrumented treadmill. Performance was evaluated using two scenarios: training and evaluating on a single subject, and training on multiple subjects while evaluating on a different subject. The estimated kinematics and kinetics of most subjects show excellent agreement (ρ > 0.99) with the reference for single-subject training. Knee flexion/extension angles are estimated with a mean RMSE < 5°. Ground reaction forces are estimated with a mean RMSE < 0.27 BW. Additionally, peak vertical ground reaction force, loading rate, and maximal knee flexion during stance were compared; no significant differences were found. With multiple-subject training, the accuracy of estimating discrete and continuous outcomes decreases; however, good agreement (ρ > 0.9) is still achieved for seven of the eight evaluated subjects. The performance of multiple-subject learning depends on the diversity in the training dataset, as differences in accuracy were found across the evaluated subjects.
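
The concatenated-network structure can be sketched compactly. The following PyTorch outline is illustrative only, not the authors' architecture; the feature dimensions and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 3 IMUs x 9 (orientation + acceleration) features in,
# 8 lower-body joint angles out; layer sizes are illustrative only.
N_IMU_FEATURES, N_JOINT_ANGLES = 3 * 9, 8

angles_net = nn.Sequential(      # first network: IMU data -> joint angles
    nn.Linear(N_IMU_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_JOINT_ANGLES))
vgrf_net = nn.Sequential(        # second network: angles + vertical acc -> vGRF
    nn.Linear(N_JOINT_ANGLES + 3, 64), nn.ReLU(), nn.Linear(64, 1))

def estimate_vgrf(imu_features, vertical_acc):
    """Concatenated networks: the first maps sensor data to joint angles,
    the second maps those angles plus the measured vertical accelerations
    to vertical ground reaction force."""
    angles = angles_net(imu_features)
    return vgrf_net(torch.cat([angles, vertical_acc], dim=-1))
```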


Sensors | 2017

Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach

Fabian Girrbach; Jeroen D. Hol; Giovanni Bellusci; Moritz Diehl

The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filtering techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the common estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times, exposing a nearly linear behavior of the sensor fusion problem.
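
Stripped of orientation states, sensor biases, and the arrival cost, the moving-horizon idea reduces to a batch least-squares problem restricted to a sliding window of recent samples. A minimal sketch under those simplifying assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def mhe_step(x_window, imu_acc, gnss_pos, dt, w_proc=1.0, w_meas=10.0):
    """One moving-horizon update over a window of [position, velocity] states.

    x_window: (N, 6) initial guess over the horizon.
    imu_acc:  (N-1, 3) gravity-compensated accelerations in the world frame.
    gnss_pos: (N, 3) GNSS position fixes.
    Jointly minimizes weighted process residuals (constant-acceleration
    integration) and GNSS measurement residuals over the last N samples.
    """
    N = x_window.shape[0]

    def residuals(x_flat):
        x = x_flat.reshape(N, 6)
        p, v = x[:, :3], x[:, 3:]
        r_proc_p = p[1:] - (p[:-1] + v[:-1] * dt)   # position dynamics
        r_proc_v = v[1:] - (v[:-1] + imu_acc * dt)  # velocity dynamics
        r_meas = p - gnss_pos                       # GNSS fit
        return np.concatenate([w_proc * r_proc_p.ravel(),
                               w_proc * r_proc_v.ravel(),
                               w_meas * r_meas.ravel()])

    sol = least_squares(residuals, x_window.ravel())
    return sol.x.reshape(N, 6)
```

In a full implementation, the window slides forward each sample and the discarded states are summarized in an arrival cost; that bookkeeping is omitted here.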


Journal of Neuroengineering and Rehabilitation | 2018

Validation of wearable visual feedback for retraining foot progression angle using inertial sensors and an augmented reality headset

Angelos Karatsidis; R. Richards; Jason M. Konrath; Josien C. van den Noort; H. Martin Schepers; Giovanni Bellusci; J. Harlaar; Petrus H. Veltink

Background: Gait retraining interventions using real-time biofeedback have been proposed to alter the loading across the knee joint in patients with knee osteoarthritis. Despite the demonstrated benefits of these conservative treatments, their clinical adoption is currently obstructed by the high complexity, spatial demands, and cost of optical motion capture systems. In this study we propose and evaluate a wearable visual feedback system for gait retraining of the foot progression angle (FPA).

Methods: The primary components of the system are inertial measurement units, which track the human movement without spatial limitations, and an augmented reality headset used to project the visual feedback into the visual field. The adapted gait protocol contained five different target angles ranging from 15 degrees toe-out to 5 degrees toe-in. Eleven healthy participants walked on an instrumented treadmill, and the protocol was performed using both an established laboratory visual feedback system driven by optical motion capture and the proposed wearable system.

Results and conclusions: The wearable system tracked FPA with an accuracy of 2.4 degrees RMS and ICC = 0.94 across all target angles and subjects, when compared to an optical motion capture reference. In addition, the effectiveness of the biofeedback, reflected by the number of steps with an FPA value within ±2 degrees of the target, was found to be around 50% for both the wearable and laboratory approaches. These findings demonstrate that retraining of the FPA using wearable inertial sensing and visual feedback is feasible, with effectiveness closely matching an established laboratory method. The proposed wearable setup may reduce the complexity of gait retraining applications and facilitate their transfer to routine clinical practice.
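
The feedback variable itself is simple to compute once foot orientation is tracked: the FPA is the horizontal-plane angle between the foot's longitudinal axis and the direction of progression. A minimal sketch, with the toe-out sign convention assumed:

```python
import numpy as np

def foot_progression_angle(foot_axis, walking_dir):
    """Foot progression angle: the signed horizontal-plane angle between
    the foot's longitudinal axis and the direction of progression.
    Sign convention assumed here: positive = toe-out, negative = toe-in.
    Both inputs are 3D vectors in a world frame with a vertical z-axis.
    """
    # Project onto the ground plane by dropping the vertical component
    f = np.asarray(foot_axis, dtype=float)[:2]
    w = np.asarray(walking_dir, dtype=float)[:2]
    f /= np.linalg.norm(f)
    w /= np.linalg.norm(w)
    # Signed angle via the 2D cross and dot products
    cross_z = w[0] * f[1] - w[1] * f[0]
    return np.degrees(np.arctan2(cross_z, np.dot(w, f)))
```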


Archive | 2015

Motion Tracking with Reduced On-Body Sensors Set

Giovanni Bellusci; Hendrik J. Luinge; Jeroen D. Hol


arXiv: Medical Physics | 2018

Predicting kinetics using musculoskeletal modeling and inertial motion capture

Angelos Karatsidis; Moonki Jung; H. Martin Schepers; Giovanni Bellusci; Mark de Zee; Peter H. Veltink; Michael Skipper Andersen


IFAC-PapersOnLine | 2017

Towards robust sensor fusion for state estimation in airborne applications using GNSS and IMU

Fabian Girrbach; Jeroen D. Hol; Giovanni Bellusci; Moritz Diehl


Archive | 2016

Inertial Motion Capture Calibration

Jeroen D. Hol; Giovanni Bellusci

Collaboration


Dive into Giovanni Bellusci's collaboration.
