
Publication


Featured research published by Ricardo Chavarriaga.


International Conference on Networked Sensing Systems | 2010

Collecting complex activity datasets in highly rich networked sensor environments

Daniel Roggen; Alberto Calatroni; Mirco Rossi; Thomas Holleczek; Kilian Förster; Gerhard Tröster; Paul Lukowicz; David Bannach; Gerald Pirkl; Alois Ferscha; Jakob Doppler; Clemens Holzmann; Marc Kurz; Gerald Holl; Ricardo Chavarriaga; Hesam Sagha; Hamidreza Bayati; Marco Creatura; José del R. Millán

We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13000 object and 14000 environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization and curation. We report on the challenges and outline lessons learned and best practices for similar large-scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object-integrated wireless sensors; there is less than 2.5% packet loss after tuning. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. Eventually this dataset will be made public.
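The packet-loss figure above is, at its core, bookkeeping over received packet sequence numbers; the Python sketch below illustrates one common way to compute it (the function name, per-stream sequence numbers, and example figures are hypothetical, not taken from the paper's pipeline).

```python
def packet_loss(seq_numbers):
    """Estimate packet loss from the sequence numbers of received packets.

    Assumes monotonically increasing sequence numbers per sensor stream;
    gaps between consecutive numbers count as lost packets.
    """
    seq = sorted(seq_numbers)
    expected = seq[-1] - seq[0] + 1   # packets the sender emitted in this span
    received = len(set(seq))          # unique packets actually received
    return 1.0 - received / expected

# Hypothetical stream: 3 packets missing out of 200 -> 1.5% loss
stream = [n for n in range(200) if n not in (17, 84, 150)]
print(f"loss = {packet_loss(stream):.1%}")
```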


Pattern Recognition Letters | 2013

The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition

Ricardo Chavarriaga; Hesam Sagha; Alberto Calatroni; Sundara Tejaswi Digumarti; Gerhard Tröster; José del R. Millán; Daniel Roggen

There is growing interest in using ambient and wearable sensors for human activity recognition, fostered by several application domains and the wider availability of sensing technologies. This has triggered increasing attention on the development of robust machine learning techniques that exploit multimodal sensor setups. However, unlike in other fields, there are no established benchmarking problems for activity recognition. In fact, methods are usually tested on custom datasets acquired in very specific experimental setups, and data is seldom shared between different groups. Our goal is to address this issue by introducing a versatile human activity dataset recorded in a sensor-rich environment. This database was the basis of an open challenge on activity recognition. We report here the outcome of this challenge, as well as baseline performance using different classification techniques. We expect this benchmarking database will motivate other researchers to replicate and outperform the presented results, thus contributing to further advances in the state of the art of activity recognition methods.
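The challenge's actual baseline classifiers are reported in the paper; as a rough illustration of the kind of sliding-window pipeline typically applied to on-body sensor data, a minimal sketch follows (window length, step, the per-channel mean/std features, and the k-NN baseline are assumptions for illustration, not the challenge settings).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sliding_windows(signal, labels, win=30, step=15):
    """Cut a (time, channels) sensor stream into fixed-length windows.

    Each window is summarised by simple per-channel statistics (mean, std)
    and labelled with the activity occurring at its last sample.
    """
    X, y = [], []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        X.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        y.append(labels[start + win - 1])
    return np.array(X), np.array(y)

# Hypothetical data: 10 channels of body-worn sensor readings with activity labels
rng = np.random.default_rng(0)
signal = rng.normal(size=(3000, 10))
labels = rng.integers(0, 4, size=3000)   # 4 activity classes

X, y = sliding_windows(signal, labels)
split = len(X) // 2
clf = KNeighborsClassifier(n_neighbors=3).fit(X[:split], y[:split])
print("baseline accuracy:", clf.score(X[split:], y[split:]))
```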


Journal of Neural Engineering | 2011

A hybrid brain-computer interface based on the fusion of electroencephalographic and electromyographic activities

Robert Leeb; Hesam Sagha; Ricardo Chavarriaga; José del R. Millán

Hybrid brain-computer interfaces (BCIs) represent a recent approach to developing practical BCIs. In such a system, disabled users can use all their remaining functionalities as control channels in parallel with the BCI, and these users often retain residual muscular activity. In the presented hybrid BCI framework we therefore explore the parallel use of electroencephalographic (EEG) and electromyographic (EMG) activity, fusing the control abilities of both channels. Results showed that participants could achieve good control of the hybrid BCI independently of their level of muscular fatigue. The multimodal fusion of muscular and brain activity yielded better and more stable performance than either modality alone. Even with increasing muscular fatigue, good control (with moderate, graceful performance degradation compared to the non-fatigued case) and a smooth handover were achieved. Such systems therefore give users very reliable hybrid BCI control even as they become increasingly exhausted or fatigued over the course of the day.
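One simple way to realise the kind of EEG/EMG fusion described above is a weighted product of per-modality class probabilities, with the weights reflecting each channel's reliability; the sketch below assumes two classes and illustrative weights, and is not the exact fusion rule used in the study.

```python
import numpy as np

def fuse(p_eeg, p_emg, w_eeg=0.5, w_emg=0.5):
    """Fuse two per-class probability vectors with a weighted product rule.

    p_eeg, p_emg : arrays of class probabilities from each modality.
    w_eeg, w_emg : reliability weights; lowering w_emg models muscular fatigue.
    """
    fused = (p_eeg ** w_eeg) * (p_emg ** w_emg)
    return fused / fused.sum()

# Hypothetical trial: EEG mildly favours class 0, fatigued EMG is nearly uninformative
p_eeg = np.array([0.65, 0.35])
p_emg = np.array([0.52, 0.48])
print(fuse(p_eeg, p_emg, w_eeg=0.7, w_emg=0.3))
```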


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2010

Learning From EEG Error-Related Potentials in Noninvasive Brain-Computer Interfaces

Ricardo Chavarriaga; José del R. Millán

We describe error-related potentials generated while a human user monitors the performance of an external agent and discuss their use for a new type of brain-computer interaction. In this approach, single-trial detection of error-related electroencephalography (EEG) potentials is used to infer the optimal agent behavior by decreasing the probability of agent decisions that elicited such potentials. In contrast with traditional approaches, the user acts as a critic of an external autonomous system instead of continuously generating control commands. This establishes a cognitive monitoring loop in which the human directly provides information about the overall system performance that, in turn, can be used for its improvement. We show that it is possible to recognize erroneous and correct agent decisions from EEG (average recognition rates of 75.8% and 63.2%, respectively), and that the elicited signals are stable over long periods of time (from 50 to more than 600 days). Moreover, these performances make it possible to infer the optimal behavior of a simple agent in a brain-computer interaction paradigm after a few trials.
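The central mechanism, lowering the probability of agent decisions that elicited an error-related potential, can be written as a one-line update; the learning rate and renormalisation in the sketch below are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def update_policy(action_probs, action, errp_detected, lr=0.2):
    """Lower the probability of an action that elicited an ErrP, raise it otherwise.

    action_probs : current probability of each candidate agent decision.
    errp_detected: True if single-trial EEG decoding flagged the decision as erroneous.
    """
    probs = action_probs.copy()
    probs[action] *= (1 - lr) if errp_detected else (1 + lr)
    return probs / probs.sum()

# Hypothetical interaction: the agent tries action 1, the user's EEG signals an error
probs = np.array([0.25, 0.25, 0.25, 0.25])
probs = update_policy(probs, action=1, errp_detected=True)
print(probs)   # probability mass shifts away from the erroneous decision
```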


Frontiers in Neuroengineering | 2012

Detection of self-paced reaching movement intention from EEG signals

Eileen Lew; Ricardo Chavarriaga; Stefano Silvoni; José del R. Millán

Future neuroprosthetic devices, in particular for the upper limb, will require decoding not only the user's intended movement type, but also when the user intends to execute the movement. This work investigates the use of non-invasively recorded brain signals to detect the time before a self-paced reaching movement is initiated, which could contribute to the design of practical upper-limb neuroprosthetics. In particular, we show the detection of self-paced reaching movement intention in single trials using the readiness potential, an electroencephalography (EEG) slow cortical potential (SCP) computed in a narrow frequency range (0.1–1 Hz). Our experiments with 12 human volunteers, two of them stroke subjects, yield high detection rates prior to movement onset and low detection rates during the non-movement-intention period. With the proposed approach, movement intention was detected around 500 ms before actual onset, which matches previous literature on readiness potentials. Interestingly, the results obtained with one of the stroke subjects are consistent with those achieved in healthy subjects, with single-trial performance of up to 92% for the paretic arm. These results suggest that, apart from contributing to our understanding of voluntary motor control for designing more advanced neuroprostheses, our work could also have a direct impact on advancing robot-assisted neurorehabilitation.
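Detection hinges on slow cortical potentials in the 0.1–1 Hz band; the sketch below shows only the band-pass filtering and pre-movement windowing one would do before single-trial classification (sampling rate, filter order, and window length are illustrative assumptions, not the study's parameters).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def scp_band(eeg, fs=512.0, low=0.1, high=1.0, order=2):
    """Band-pass EEG into the 0.1-1 Hz slow-cortical-potential range.

    eeg : array of shape (samples, channels); fs is the sampling rate in Hz.
    """
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=0)

# Hypothetical pre-movement window: the last second of SCP-filtered EEG
rng = np.random.default_rng(0)
eeg = rng.normal(size=(5 * 512, 16))   # 5 s of 16-channel EEG at 512 Hz
scp = scp_band(eeg)
window = scp[-512:]                    # features for a single-trial classifier
print(window.shape)
```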


Psychological Review | 2009

Is there a geometric module for spatial orientation? Insights from a rodent navigation model

Denis Sheynikhovich; Ricardo Chavarriaga; Thomas Strösslin; Angelo Arleo; Wulfram Gerstner

Modern psychological theories of spatial cognition postulate the existence of a geometric module for reorientation. This concept is derived from experimental data showing that in rectangular arenas with distinct landmarks in the corners, disoriented rats often make diagonal errors, suggesting a preference for geometric (arena shape) over nongeometric (landmark) cues. Moreover, the sensitivity of hippocampal cell firing to changes in the environment layout was taken as support for the geometric module hypothesis. Using a computational model of rat navigation, the authors proposed and tested the alternative hypothesis that the influence of spatial geometry at both the behavioral and neuronal levels can be explained by the properties of visual features that constitute local views of the environment. Their modeling results suggest that the pattern of diagonal errors observed in reorientation tasks can be understood through an analysis of the sensory information processing that underlies the navigation strategy employed to solve the task. In particular, two navigation strategies were considered: (a) a place-based locale strategy that relies on a model of grid and place cells, and (b) a stimulus-response taxon strategy that involves direct association of local views with action choices. The authors showed that applying the two strategies in the reorientation tasks results in different patterns of diagonal errors, consistent with behavioral data. These results argue against the geometric module hypothesis by providing a simpler and biologically more plausible explanation for the related experimental data. Moreover, the same model also describes behavioral results in different types of water-maze tasks.


International Journal of Pattern Recognition and Artificial Intelligence | 2008

Non-Invasive Brain-Machine Interaction

José del R. Millán; Pierre W. Ferrez; Ferran Galán; Eileen Lew; Ricardo Chavarriaga

The promise of brain-computer interface (BCI) technology is to augment human capabilities by enabling interaction with computers through conscious and spontaneous modulation of brainwaves after a short training period. Indeed, by analyzing brain electrical activity online, several groups have designed brain-actuated devices that provide alternative channels for communication, entertainment and control. Thus, a person can write messages using a virtual keyboard on a computer screen and also browse the internet. Alternatively, subjects can operate simple computer games, or brain games, and interact with educational software. Work with humans has shown that it is possible for them to move a cursor and even to drive a wheelchair. This paper briefly reviews the field of BCI, with a focus on non-invasive systems based on electroencephalogram (EEG) signals. It also describes three brain-actuated devices we have developed: a virtual keyboard, a brain game, and a wheelchair. Finally, it briefly discusses current research directions we are pursuing to improve the performance and robustness of our BCI system, especially for real-time control of brain-actuated robots.


World of Wireless, Mobile and Multimedia Networks | 2009

OPPORTUNITY: Towards opportunistic activity and context recognition systems

Daniel Roggen; Kilian Förster; Alberto Calatroni; Thomas Holleczek; Yu Fang; Gerhard Tröster; Alois Ferscha; Clemens Holzmann; Andreas Riener; Paul Lukowicz; Gerald Pirkl; David Bannach; Kai S. Kunze; Ricardo Chavarriaga; José del R. Millán

Opportunistic sensing makes it possible to efficiently collect information about the physical world and the people acting in it. This may mainstream human context and activity recognition in wearable and pervasive computing by removing the requirement for a specific deployed infrastructure. In this paper we introduce the newly started European research project OPPORTUNITY, within which we develop mobile opportunistic activity and context recognition systems. We outline the project's objectives; the approach we follow across opportunistic sensing, data processing and interpretation, and autonomous adaptation and evolution to environmental and user changes; and preliminary results.


Robotics and Autonomous Systems | 2010

Brain-coupled interaction for semi-autonomous navigation of an assistive robot

Xavier Perrin; Ricardo Chavarriaga; Francis Colas; Roland Siegwart; José del R. Millán

This paper presents a novel semi-autonomous navigation strategy designed for low-throughput interfaces. A mobile robot (e.g. an intelligent wheelchair) proposes the most probable action, as inferred from the environment, to a human user who can either accept or reject the proposition. If the user refuses, the robot proposes another action, until both agree on what needs to be done. In an unknown environment, the robotic system first extracts features so as to recognize places of interest where a human-robot interaction should take place (e.g. crossings). Based on the local topology, relevant actions are then proposed, with the user providing answers by means of a button or a brain-computer interface (BCI). Our navigation strategy is successfully tested both in simulation and with a real robot, and a feasibility study confirms the potential of a BCI as the input interface.
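The propose/accept/reject protocol can be summarised in a few lines; in the sketch below the ranking of actions, the confirm callback standing in for the button or BCI input, and the action names are hypothetical placeholders.

```python
def negotiate_action(ranked_actions, confirm):
    """Propose actions from most to least probable until the user accepts one.

    ranked_actions : actions ordered by the probability inferred from the environment.
    confirm        : callable returning True/False, e.g. a button press or BCI decision.
    """
    for action in ranked_actions:
        if confirm(action):
            return action
    return None   # no proposal accepted; the robot stays put and re-analyses the scene

# Hypothetical crossing: the robot ranks "turn left" highest, the user rejects it
proposals = ["turn left", "go straight", "turn right"]
accepted = negotiate_action(proposals, confirm=lambda a: a == "go straight")
print(accepted)
```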


Ambient Intelligence | 2008

The use of brain-computer interfacing for ambient intelligence

Gangadhar Garipelli; Ferran Galán; Ricardo Chavarriaga; Pierre W. Ferrez; Eileen Lew; José del R. Millán

This book constitutes the refereed proceedings of the workshops of the First European Conference on Ambient Intelligence, AmI 2007, held in Darmstadt, Germany, in November 2007. The papers are organized in topical sections on AI methods for ambient intelligence, evaluating ubiquitous systems with users, model-driven software engineering for ambient intelligence applications, smart products, ambient assisted living, human aspects in ambient intelligence, Amigo, and WASP, as well as the joint PERSONA and SOPRANO workshops and the KDubiq workshop.

Collaboration


Dive into Ricardo Chavarriaga's collaborations.

Top Co-Authors

José del R. Millán, École Polytechnique Fédérale de Lausanne
Hesam Sagha, École Polytechnique Fédérale de Lausanne
Robert Leeb, École Polytechnique Fédérale de Lausanne
Denis Sheynikhovich, École Polytechnique Fédérale de Lausanne
Alois Ferscha, Johannes Kepler University of Linz
Huaijian Zhang, École Polytechnique Fédérale de Lausanne