Publication


Featured research published by Christian I. Penaloza.


IEEE Transactions on Automation Science and Engineering | 2014

Brain Machine Interface System Automation Considering User Preferences and Error Perception Feedback

Christian I. Penaloza; Yasushi Mae; Francisco F. Cuellar; Masaru Kojima; Tatsuo Arai

This paper addresses the problem of mental fatigue caused by prolonged use of Brain Machine Interface (BMI) systems. We propose a system that gradually becomes autonomous by learning user preferences and by considering error perception feedback. As a particular application, we show that our system allows patients to control electronic appliances in a hospital room, and learns the correlation of room sensor data, brain states, and user control commands. Moreover, error perception feedback based on a brain potential called error-related negativity (ERN), which spontaneously occurs when the user perceives an error made by the system, was used to correct the system's mistakes and improve its learning performance. Experimental results with volunteers demonstrate that our system reduces the level of mental fatigue and achieves over 90% overall learning performance when error perception feedback is considered. Note to Practitioners - This paper suggests a new approach for designing BMI systems that incorporate learning capabilities and error perception feedback in order to gradually become autonomous. The approach consists of learning the relationship between environment and brain sensing data and user actions when controlling robotic devices. Once trained, the system can predict control commands on behalf of the user under similar conditions. If the system makes a mistake, the user's error perception feedback is used to improve the system's learning performance. In this paper, we describe the methodologies to design and build hardware and software interfaces, acquire and process brain signals, and train the system using machine learning techniques. We then provide experimental evidence demonstrating the effectiveness of this approach for designing BMI systems that gradually become autonomous.
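The abstract does not specify the learning algorithm's internals. As a rough sketch of the idea only (all names and the tabular representation are hypothetical simplifications, not the paper's method), a learner can predict the user's preferred command per (environment, brain-state) context and down-weight a prediction whenever the user's ERN flags it as an error:

```python
from collections import Counter, defaultdict

class PreferenceLearner:
    """Hypothetical tabular learner: tracks which command the user
    issues in a given (environment, brain-state) context."""
    def __init__(self):
        self.counts = defaultdict(Counter)  # context -> command frequencies

    def observe(self, context, command):
        self.counts[context][command] += 1

    def predict(self, context):
        if not self.counts[context]:
            return None  # defer to the user until something is learned
        return self.counts[context].most_common(1)[0][0]

    def error_feedback(self, context, wrong_command):
        # An ERN was detected: the user perceived the prediction as an
        # error, so penalize that command in this context.
        self.counts[context][wrong_command] -= 2

learner = PreferenceLearner()
for _ in range(3):
    learner.observe(("hot", "awake"), "fan_on")
learner.observe(("hot", "awake"), "light_off")
assert learner.predict(("hot", "awake")) == "fan_on"
learner.error_feedback(("hot", "awake"), "fan_on")
learner.error_feedback(("hot", "awake"), "fan_on")
# Repeated error feedback flips the prediction to the alternative
assert learner.predict(("hot", "awake")) == "light_off"
```

The key property this illustrates is the one the abstract claims: the system acts autonomously once trained, and error perception feedback corrects its mistakes without explicit retraining by the user.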


International Conference on Robotics and Automation (ICRA) | 2013

BMI-based learning system for appliance control automation

Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tatsuo Arai

In this research we present a non-invasive Brain-Machine Interface (BMI) system that allows patients with motor paralysis to control electronic appliances in a hospital room. The novelty of our system compared to other BMI applications is that it gradually becomes autonomous by learning user actions (e.g. turning windows, lights, etc. on/off) under certain environmental conditions (temperature, illumination, etc.) and brain states (e.g. awake, sleepy, etc.). By providing learning capabilities to the system, patients are relieved from the mental fatigue or stress caused by continuously controlling appliances through a BMI. We present an interface that allows the user to select and control appliances using electromyogram (EMG) signals generated by muscle contractions such as eyebrow movement. Our learning approach consists of two steps: 1) monitoring user actions, input data from sensors distributed around the room, and electroencephalogram (EEG) data from the user, and 2) using an extended version of the Bayes Point Machine trained with Expectation Propagation to approximate a posterior probability from previously observed user actions under a similar combination of brain states and environmental conditions. Experimental results with volunteers demonstrate that our system provides a satisfactory user experience and achieves over 85% overall learning performance after only a few trials.
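The Bayes Point Machine with Expectation Propagation is beyond a short sketch, but the underlying idea, estimating a posterior over actions from previously observed (context, action) pairs, can be illustrated with a Laplace-smoothed frequency estimate. This is a toy stand-in for the paper's method, with hypothetical context and action names:

```python
from collections import Counter

def action_posterior(history, context, actions, alpha=1.0):
    """Estimate P(action | context) from observed (context, action)
    pairs with Laplace smoothing; a toy stand-in for the paper's
    Bayes Point Machine trained with Expectation Propagation."""
    counts = Counter(a for c, a in history if c == context)
    total = sum(counts.values()) + alpha * len(actions)
    return {a: (counts[a] + alpha) / total for a in actions}

# Four observations of "lights_off" and one of "tv_on" in the same
# (environment, brain-state) context
history = [(("warm", "sleepy"), "lights_off")] * 4 + [(("warm", "sleepy"), "tv_on")]
post = action_posterior(history, ("warm", "sleepy"), ["lights_off", "tv_on"])
assert post["lights_off"] > post["tv_on"]
```

The system would then execute the most probable action on the user's behalf once its posterior confidence is high enough, which is what relieves the patient of continuous BMI control.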


ACM/IEEE International Conference on Human-Robot Interaction (HRI) | 2012

Children's knowledge and expectations about robots: a survey for future user-centered design of social robots

Eduardo Benítez Sandoval; Christian I. Penaloza

This paper seeks to establish a precedent for future development and design of social robots by considering the knowledge and expectations about robots of a group of 296 children. Human-robot interaction experiments were conducted with a teleoperated anthropomorphic robot, and surveys were taken before and after the experiments. Children were also asked to draw a robot. An image analysis algorithm was developed to classify the drawings into four types: Anthropomorphic Mechanic/Non-Mechanic (AM/AnM) and Non-Anthropomorphic Mechanic/Non-Mechanic (nAM/nAnM). The image analysis algorithm was used in combination with human classification under a 2oo3 (two-out-of-three) voting scheme to find children's strongest stereotype about robots. Survey and image analysis results suggest that children in general have some knowledge about robots, and some children even have a deep understanding of and expectations for future robots. Moreover, children's strongest stereotype is directed toward mechanical anthropomorphic systems.
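The two-out-of-three voting rule that combines the image-analysis algorithm with human raters could be sketched as follows (labels taken from the paper's four drawing types; the function name and exact tie-breaking are hypothetical):

```python
from collections import Counter

def two_out_of_three(votes):
    """2oo3 voting: accept a drawing label only when at least two of
    the three raters agree; otherwise return None (no consensus)."""
    assert len(votes) == 3
    label, n = Counter(votes).most_common(1)[0]
    return label if n >= 2 else None

assert two_out_of_three(["AM", "AM", "nAM"]) == "AM"   # two raters agree
assert two_out_of_three(["AM", "nAM", "nAnM"]) is None  # no majority
```

Majority voting of this kind keeps a single mislabeling (by the algorithm or by one human) from deciding a drawing's category.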


ACM/IEEE International Conference on Human-Robot Interaction (HRI) | 2011

Web-based object category learning using human-robot interaction cues

Christian I. Penaloza; Yasushi Mae; Tatsuo Arai; Kenichi Ohara; Tomohito Takubo

We present our method for learning object categories from the Internet using cues obtained through human-robot interaction. Such cues include an object model acquired by observation and the name of the object. Our learning approach emulates the natural learning process of children when they observe their environment, encounter unknown objects and ask adults for the object's name. Using this learning approach, our robot is able to discover objects in a domestic environment by observing when humans naturally move objects as part of their daily activities. Using a speech interface, the robot directly asks humans the name of the object while showing an example of the acquired model. The name in text format and the previously learned model serve as input parameters to retrieve object category images from a search engine, select similar object images, and build a classifier. Preliminary results demonstrate the effectiveness of our learning approach.


Intelligent Service Robotics | 2013

Web-enhanced object category learning for domestic robots

Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tomohito Takubo; Tatsuo Arai

We present a system architecture for domestic robots that allows them to learn object categories after a single sample object has been learned. We explore the situation in which a human teaches a robot a novel object, and the robot enhances that learning by using a large amount of image data from the Internet. The main goal of this research is to provide a robot with capabilities to enhance its learning while minimizing the time and effort required for a human to train it. Our active learning approach consists of learning the object name through a speech interface and creating a visual object model using a depth-based attention model adapted to the robot's personal space. Given the object's name (keyword), a large number of object-related images are collected from two main image sources (Google Images and the LabelMe website). We deal with the problem of separating good training samples from noisy images in two steps: (1) similar image selection using a Simile Selector Classifier, and (2) non-real image filtering by implementing a variant of Gaussian Discriminant Analysis. After web image selection, object category classifiers are trained and tested using different objects of the same category. Our experiments demonstrate the effectiveness of our robot learning approach.
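The abstract does not detail the Gaussian Discriminant Analysis variant used to filter non-real images (drawings, clip art). A one-dimensional Gaussian discriminant over a single hypothetical feature (e.g. "texture richness" of photographs versus drawings) illustrates the filtering idea; the feature and all values below are invented for illustration:

```python
import math
from statistics import mean, pstdev

def fit_gaussian(xs):
    # Per-class Gaussian parameters; guard against zero variance
    return mean(xs), pstdev(xs) or 1e-9

def log_likelihood(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def is_real_image(feature, real_params, nonreal_params):
    """Keep an image only if the 'real photo' class explains its
    feature better; a 1-D stand-in for the paper's GDA variant."""
    return log_likelihood(feature, *real_params) >= log_likelihood(feature, *nonreal_params)

real = fit_gaussian([0.8, 0.9, 0.85, 0.95])     # hypothetical photo features
nonreal = fit_gaussian([0.1, 0.2, 0.15, 0.25])  # hypothetical drawing features
assert is_real_image(0.9, real, nonreal)
assert not is_real_image(0.12, real, nonreal)
```

The real method would operate on multi-dimensional image descriptors, but the decision rule, comparing class-conditional likelihoods, is the same.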


IEEE/SICE International Symposium on System Integration (SII) | 2012

Software interface for controlling diverse robotic platforms using BMI

Christian I. Penaloza; Yasushi Mae; Kenichi Ohara; Tatsuo Arai

This paper presents a software interface that allows a user to control different types of robotic systems through a Brain-Machine Interface. Unlike common device-specific BMI systems, our software architecture maps simple EEG-based commands to diverse functionalities depending on the robotic platform, so the user does not have to learn to generate new EEG commands for different robots. The graphical user interface provides a mechanism that allows the user to navigate through menus using EMG signals (e.g. eye blinks) and then execute robot commands using EEG signals. Our software is based on a modular design that allows the integration of new robotic platforms with easy customization. Our current prototype explores the controllability of a humanoid robot, a flying robot and a pan-tilt robot using the proposed software interface. Experimental evidence shows that the system achieves good user satisfaction as well as easy controllability of different types of robotic systems.
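Mapping a small, fixed EEG command set onto platform-specific actions resembles an adapter pattern. A minimal sketch of that modular design, with command and action names that are hypothetical rather than taken from the paper:

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Each platform plugs in by mapping the same fixed EEG commands
    to its own actions, so users never relearn commands."""
    @abstractmethod
    def execute(self, eeg_command: str) -> str: ...

class HumanoidAdapter(RobotAdapter):
    ACTIONS = {"cmd_a": "walk_forward", "cmd_b": "wave_arm"}
    def execute(self, eeg_command):
        return self.ACTIONS[eeg_command]

class FlyingRobotAdapter(RobotAdapter):
    ACTIONS = {"cmd_a": "ascend", "cmd_b": "hover"}
    def execute(self, eeg_command):
        return self.ACTIONS[eeg_command]

# The same EEG command drives different behavior on each platform
assert HumanoidAdapter().execute("cmd_a") == "walk_forward"
assert FlyingRobotAdapter().execute("cmd_a") == "ascend"
```

Integrating a new platform then only requires adding another adapter class; the EEG/EMG acquisition and menu navigation layers stay untouched.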


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2018

Android Feedback-Based Training Modulates Sensorimotor Rhythms During Motor Imagery

Christian I. Penaloza; Maryam Alimardani; Shuichi Nishio

EEG-based brain computer interface (BCI) systems have demonstrated potential to assist patients with devastating motor paralysis conditions. However, there is great interest in shifting the BCI trend toward applications aimed at healthy users. Although BCI operation depends on technological factors (i.e., the EEG pattern classification algorithm) and human factors (i.e., how well the person can generate good-quality EEG patterns), it is the latter that is least investigated. In order to control a motor imagery-based BCI, users need to learn to modulate their sensorimotor brain rhythms by practicing motor imagery using a classical training protocol with abstract visual feedback. In this paper, we investigate a different BCI training protocol that uses a human-like android robot (Geminoid HI-2) to provide realistic visual feedback. The proposed training protocol addresses deficiencies of the classical approach and takes advantage of able-bodied users' capabilities. Experimental results suggest that android feedback-based BCI training improves the modulation of sensorimotor rhythms during the motor imagery task. Moreover, we discuss how the body ownership transfer illusion toward the android might affect the modulation of event-related desynchronization/synchronization activity.
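The abstract does not define its modulation measure, but event-related desynchronization (ERD) is conventionally quantified as the relative band-power change during imagery versus a rest baseline. A minimal sketch under one common sign convention (this is the standard textbook formula, not necessarily the paper's exact computation):

```python
def erd_percent(baseline_power, task_power):
    """ERD as a percentage power decrease relative to a rest baseline
    (sign convention: positive values mean desynchronization)."""
    return 100.0 * (baseline_power - task_power) / baseline_power

# Band power in the sensorimotor (mu) band dropping from 10 to 7
# during motor imagery corresponds to a 30% ERD
assert erd_percent(10.0, 7.0) == 30.0
```

Stronger ERD during imagery, as reported after the android feedback-based training, indicates better modulation of the sensorimotor rhythm.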


ACM/IEEE International Conference on Human-Robot Interaction (HRI) | 2016

Educational Robots as Promotors of Cultural Development

Francisco Cuellar; Christian I. Penaloza; Alexander Lopez

The use of robots as educational tools has proven highly effective for attracting students to science- and technology-related academic fields. Although these academic fields are very important, we believe that other subjects such as language, music, arts, literature, history, etc., are also essential for future generations. For this reason, the goal of this research is to explore the potential use of robots as educational tools for non-technology fields such as history. We discuss an alternative approach for designing robots inspired by the traits and characteristics of historical figures that play an important role in the topic to be studied. We present examples of conceptual designs of robots inspired by ancient gods of Mesoamerican and South American cultures. We discuss how some of the traits of these ancient gods could serve as inspiration for the appearance design of commercial robots, and how such robots could be used in educational environments to attract students' attention to this history topic.


IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) | 2013

Robotics Education Initiative for Parent-Children Interaction

Francisco F. Cuellar; Christian I. Penaloza; Gustavo Kato

This paper presents a preliminary analysis of the first robotics education workshop in which parents and children interact by experimenting with concepts of robotics and developing problem-solving skills. Unlike traditional robotics education workshops targeted at children only, our initiative encourages parents and children to interact as a team and become interested in science and technology. Post-workshop analysis aims to describe the relation between the participants' filial interaction and the level of learning and new skills obtained during the robotics workshop. With this new type of workshop we expect students to gain more interest in technology and their parents to encourage their children toward engineering and science majors.


Latin American Robotics Symposium (LARS) | 2013

IREP: An Interactive Robotics Education Program for Undergraduate Students

Francisco F. Cuellar; Dante Arroyo; Eiji Onchi; Christian I. Penaloza

In this paper we present IREP, an Interactive Robotics Education Program that aims to integrate basic concepts of mechatronics, electronics and computer science for undergraduate engineering students in an interactive manner. Our methodology is inspired by professional project practice: project management techniques are used to define goals, milestone planning guides the implementation, scheduling techniques monitor project progress, and resource management guidelines assign human and monetary resources. We first present a case study describing the current state of the robotics program at the Pontifical Catholic University of Peru, then identify key problems through surveys of current undergraduate students, and finally propose potential solutions to each problem and integrate them into a new comprehensive education program. Using our new approach, students will enhance their hands-on experience with real robotics projects while performing preliminary research, documentation and oral presentations that will help them develop their intellectual and communication skills.

Collaboration

Top co-authors of Christian I. Penaloza:

Tatsuo Arai (Japanese Ministry of International Trade and Industry)
Francisco Cuellar (Pontifical Catholic University of Peru)