
Publications


Featured research published by Christian Balkenius.


Cybernetics and Systems | 2001

Emotional learning: A computational model of the amygdala

Jan Morén; Christian Balkenius

We describe work in progress aimed at constructing a computational model of emotional learning and processing inspired by neurophysiological findings. The main brain areas modeled are the amygdala and the orbitofrontal cortex, together with the interaction between them. We want to show that (1) there exists enough physiological data to suggest the overall architecture of a computational model, and (2) emotion plays a clear role in learning behavior. We review neurophysiological data and present a computational model that is subsequently tested in simulation.
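The core of the model is an amygdala pathway that can only acquire associations and an orbitofrontal pathway that learns to inhibit them when reinforcement is omitted. Below is a minimal Python sketch in that spirit; the update rules are a commonly cited rendering of the model rather than a guaranteed reproduction of the paper's exact equations, and the parameter values are illustrative.

```python
import numpy as np

class AmygdalaOFC:
    """Sketch of an amygdala-orbitofrontal learning model in the spirit
    of Moren & Balkenius. Update rules are a common rendering, not
    necessarily the paper's exact equations."""

    def __init__(self, n_stimuli, alpha=0.2, beta=0.2):
        self.V = np.zeros(n_stimuli)   # amygdala weights (acquisition)
        self.W = np.zeros(n_stimuli)   # orbitofrontal weights (inhibition)
        self.alpha, self.beta = alpha, beta

    def step(self, s, R):
        """One trial: s is the stimulus vector, R the reinforcement."""
        A = s * self.V                 # amygdala activations
        O = s * self.W                 # orbitofrontal activations
        E = A.sum() - O.sum()          # emotional output of the model
        # Amygdala weights can only grow: acquired associations persist,
        # which is why a separate inhibitory pathway is needed.
        self.V += self.alpha * s * max(0.0, R - A.sum())
        # The orbitofrontal pathway learns from reward omission and
        # inhibits the amygdala output (extinction).
        self.W += self.beta * s * (max(0.0, A.sum() - R) - O.sum())
        return E

# Acquisition, then extinction, of a conditioned emotional response:
model = AmygdalaOFC(n_stimuli=1)
for _ in range(50):
    model.step(np.array([1.0]), R=1.0)   # stimulus paired with reward
for _ in range(50):
    model.step(np.array([1.0]), R=0.0)   # reward omitted: output decays
```

With these rules the amygdala weight rises to the reward level and never decreases; during extinction the orbitofrontal weight grows until it cancels the amygdala output, so the response disappears without the acquired association being erased.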


Journal of Medical Engineering & Technology | 2006

Myoelectric control of a computer animated hand: A new concept based on the combined use of a tree-structured artificial neural network and a data glove

Fredrik Sebelius; Lars Eriksson; Christian Balkenius; Thomas Laurell

This paper proposes a new learning set-up in the field of control systems for multifunctional hand prostheses. Two male subjects, each with a traumatic one-hand amputation, performed simultaneous symmetric movements with the healthy and the phantom hand. A data glove on the healthy hand was used as a reference to train the system to perform natural movements. Instead of a physical prosthesis with limited degrees of freedom, a virtual (computer-animated) hand was used as the target tool. Both subjects successfully performed seven different motor actions with the fingers and wrist. To reduce the training time for the system, a tree-structured, self-organizing, artificial neural network was designed. The training time never exceeded 30 seconds for any of the configurations used, which is three to four times faster than most currently used artificial neural network (ANN) architectures.
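To make the set-up concrete, here is a minimal sketch of the training scheme the abstract describes: EMG features recorded from the residual limb are regressed onto data-glove joint angles recorded from the healthy hand during mirrored movements. A generic multilayer perceptron stands in for the paper's tree-structured self-organizing network, and all array shapes and the random data are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical shapes: 100 training windows, 8 EMG channels with 4
# features each, 18 data-glove joint angles as the teaching signal.
rng = np.random.default_rng(0)
emg_features = rng.normal(size=(100, 32))   # from the residual limb
glove_angles = rng.normal(size=(100, 18))   # from the healthy hand

# A generic regressor stands in for the paper's tree-structured
# self-organizing network; the point is the mirrored-movement set-up,
# not this particular model.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(emg_features, glove_angles)

# At run time, the predicted joint angles drive the virtual
# (computer-animated) hand instead of a physical prosthesis.
predicted_angles = net.predict(emg_features[:1])
```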


IEEE Transactions on Biomedical Engineering | 2009

A Novel Concept for a Prosthetic Hand With a Bidirectional Interface: A Feasibility Study

Christian Cipriani; Christian Antfolk; Christian Balkenius; Birgitta Rosén; Göran Lundborg; Maria Chiara Carrozza; Fredrik Sebelius

A conceptually novel prosthesis consisting of a mechatronic hand, an electromyographic classifier, and a tactile display has been developed and evaluated by addressing problems related to controllability in prosthetics: intention extraction, perception, and feeling of ownership. Experiments have been performed, and encouraging results for a young transradial amputee are reported.


Advanced Engineering Informatics | 2010

Ikaros: Building cognitive models for robots

Christian Balkenius; Jan Morén; Birger Johansson; Magnus Johnsson

The Ikaros project started in 2001 with the aim of developing an open infrastructure for system-level brain modeling. The system has developed into a general tool for cognitive modeling as well as robot control. Here we describe the main parts of the Ikaros system and how it has been used to implement various cognitive systems and to control a number of different robots ranging from robot arms and hands to active vision systems and mobile robots.
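Ikaros itself is a C++ framework in which modules with named inputs and outputs are wired into a graph and updated synchronously. The Python sketch below imitates that module-and-tick pattern purely for illustration; none of the names here are the actual Ikaros API.

```python
class Module:
    """Toy stand-in for an Ikaros-style module: named output buffers and
    a tick() that reads connected inputs. The real Ikaros is a C++
    framework; this is not its actual API."""
    def __init__(self, name):
        self.name, self.inputs, self.outputs = name, {}, {}
    def connect(self, port, source, source_port):
        self.inputs[port] = (source, source_port)

class Camera(Module):
    def tick(self):
        self.outputs["image"] = [[0.0] * 4 for _ in range(4)]  # dummy frame

class EdgeFilter(Module):
    def tick(self):
        source, port = self.inputs["input"]
        frame = source.outputs[port]
        self.outputs["edges"] = frame          # placeholder processing

# Wire modules into a graph and tick them synchronously, Ikaros-style.
cam, edge = Camera("cam"), EdgeFilter("edge")
edge.connect("input", cam, "image")
for step in range(10):                         # simulated system ticks
    for module in (cam, edge):                 # topological order
        module.tick()
```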


Scandinavian Journal of Plastic and Reconstructive Surgery and Hand Surgery | 2010

SmartHand tactile display: A new concept for providing sensory feedback in hand prostheses

Christian Antfolk; Christian Balkenius; Birgitta Rosén; Göran Lundborg; Fredrik Sebelius

A major drawback with myoelectric prostheses is that they do not provide the user with sensory feedback. Using a new principle for sensory feedback, we did a series of experiments involving 11 healthy subjects. The skin on the volar aspect of the forearm was used as the target area for sensory input. Experiments included discrimination of site of stimuli and pressure levels at a single stimulation point. A tactile display based on digital servomotors with one actuating element for each of the five fingers was used as a stimulator on the forearm. The results show that the participants were able to discriminate between three fingers with an accuracy of 97%, between five fingers with an accuracy of 82%, and between five pressure levels with an accuracy of 79%. The tactile display may prove a helpful tool in providing amputees with sensory feedback from a prosthetic hand by transferring tactile stimuli from the prosthetic hand to the skin at forearm level.
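As a concrete illustration of the display's operating principle, the sketch below quantizes a per-finger pressure value into one of a few discrete servo angles, one actuator per finger. The function name, the number of levels, and the angle range are all illustrative, not taken from the paper.

```python
def pressure_to_servo(pressures, levels=5, max_angle=90.0):
    """Quantize five fingertip pressures (0..1) into discrete servo
    angles, one actuator per finger. All parameter values here are
    illustrative assumptions."""
    angles = []
    for p in pressures:
        level = min(int(p * levels), levels - 1)   # discrete level 0..4
        angles.append(level * max_angle / (levels - 1))
    return angles

# One reading per finger, thumb to little finger:
print(pressure_to_servo([0.0, 0.2, 0.5, 0.8, 1.0]))
```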


Autonomous Robots | 1999

Dynamics of a Classical Conditioning Model

Christian Balkenius

Classical conditioning is a basic learning mechanism in animals and can be found in almost all organisms. If we want to construct robots with abilities matching those of their biological counterparts, this is one of the learning mechanisms that needs to be implemented first. This article describes a computational model of classical conditioning where the goal of learning is assumed to be the prediction of a temporally discounted reward or punishment based on the current stimulus situation. The model is well suited for robotic implementation as it models a number of classical conditioning paradigms, and learning in the model is guaranteed to converge with arbitrarily complex stimulus sequences. This is an essential feature once the step is taken beyond the simple laboratory experiment with two or three stimuli to the real world, where no such limitations exist. It is also demonstrated how the model can be included in a more complex system that includes various forms of sensory pre-processing, and how it can handle reinforcement learning and timing of responses and function as an adaptive world model.
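The stated goal, predicting a temporally discounted reinforcement signal from the current stimulus situation, is the same goal that temporal-difference learning pursues, so a TD(0)-style update gives a compact sketch of the idea. This illustrates the learning goal, not the paper's exact model.

```python
import numpy as np

def td_step(w, s_now, s_next, r, alpha=0.1, gamma=0.9):
    """One TD(0) update: move the prediction w @ s_now toward the
    discounted target r + gamma * (w @ s_next)."""
    delta = r + gamma * (w @ s_next) - (w @ s_now)   # prediction error
    return w + alpha * delta * s_now

# Delay conditioning: the CS (stimulus 0) at t=0 predicts the US
# (reward) delivered at t=1.
stimuli = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.0, 0.0])]
rewards = [0.0, 1.0, 0.0]
w = np.zeros(2)
for _ in range(500):                       # repeated conditioning trials
    for t in range(len(stimuli) - 1):
        w = td_step(w, stimuli[t], stimuli[t + 1], rewards[t])
print(w)  # w[0] converges toward gamma * 1.0, the discounted reward
```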


Robotics and Autonomous Systems | 2007

Neural network models of haptic shape perception

Magnus Johnsson; Christian Balkenius

Three different models of tactile shape perception inspired by the human haptic system were tested using an 8 d.o.f. robot hand with 45 tactile sensors. One model is based on the tensor product of different proprioceptive and tactile signals and a self-organizing map (SOM). The two other models replace the tensor product operation with a novel self-organizing neural network, the Tensor-Multiple Peak Self-Organizing Map (T-MPSOM). The two T-MPSOM models differ in the procedure employed to calculate the neural activation. The computational models were trained and tested with a set of objects consisting of hard spheres, blocks and cylinders. All the models learned to map different shapes to different areas of the SOM, and the tensor product model as well as one of the T-MPSOM models also learned to discriminate individual test objects.
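The first model combines the two sensory channels with a tensor (outer) product and feeds the result to a self-organizing map. The sketch below shows that pipeline with a minimal SOM; the grid size, schedules, and random input data are all illustrative, and the T-MPSOM variants are not reproduced here.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal self-organizing map; sizes and schedules are illustrative."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=(grid[0], grid[1], data.shape[1]))
    ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                         indexing="ij")
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in data:
            d = np.linalg.norm(w - x, axis=2)      # distance to each unit
            bi, bj = np.unravel_index(d.argmin(), d.shape)  # best match
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
            w += lr * h[..., None] * (x - w)       # pull the neighbourhood
    return w

# Tensor-product input: the outer product of a proprioceptive vector
# (e.g. 8 joint angles) and a tactile vector (e.g. 45 sensor values),
# flattened into one input per grasped object. Data here is random.
proprio = np.random.rand(50, 8)
tactile = np.random.rand(50, 45)
inputs = np.einsum("ni,nj->nij", proprio, tactile).reshape(50, -1)
som = train_som(inputs)
```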


Proceedings of the National Academy of Sciences of the United States of America | 2015

Biasing moral decisions by exploiting the dynamics of eye gaze

Philip Pärnamets; Petter Johansson; Lars Hall; Christian Balkenius; Michael J. Spivey; Daniel C. Richardson

Significance: Where people look generally reflects and reveals their moment-by-moment thought processes. This study introduces an experimental method whereby participants’ eye gaze is monitored and information about their gaze is used to change the timing of their decisions. Answers to difficult moral questions such as “Is murder justifiable?” can be influenced toward random alternatives based on looking patterns alone. We do this without presenting different arguments or response frames, as in other techniques of persuasion. Thus, the process of arriving at a moral decision is not only reflected in a participant’s eye gaze but can also be determined by it.

Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals’ decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees, the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants’ eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time, we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
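Because the manipulation is purely a matter of timing, the procedure can be stated as a short trial loop. The sketch below assumes hypothetical `get_gaze_region` and `prompt_choice` callbacks standing in for a real eye tracker and stimulus display; the dwell threshold is illustrative, not the paper's value.

```python
import random
import time

def run_trial(get_gaze_region, prompt_choice, dwell_threshold=0.75):
    """Gaze-contingent trial loop: once the randomly predetermined
    target alternative has accumulated `dwell_threshold` seconds of
    fixation, deliberation is cut short and the participant is
    prompted. `get_gaze_region` and `prompt_choice` are hypothetical
    stand-ins for an eye-tracker and display API."""
    target = random.choice(["left", "right"])  # predetermined target
    dwell, last = 0.0, time.monotonic()
    while dwell < dwell_threshold:
        now = time.monotonic()
        if get_gaze_region() == target:        # currently fixating target?
            dwell += now - last
        last = now
    return target, prompt_choice()             # terminate deliberation
```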


Archive | 1998

Neural Control of a Virtual Prosthesis

Lars Eriksson; Fredrik Sebelius; Christian Balkenius

The abilities of currently existing hand prostheses are typically limited to opening or closing the hand. This limits the usefulness of the prosthesis considerably compared to the many degrees of freedom of an intact hand. In order to develop more advanced hand prostheses, two main problems have to be solved. The first is to develop more advanced mechanical solutions that allow for more degrees of freedom. The second, which we address below, is to devise a way of controlling the additional dexterity of such a prosthesis. Until the second problem is solved, the development of more advanced prostheses will be severely hindered.


Psychological Science | 2014

Speakers’ Acceptance of Real-Time Speech Exchange Indicates That We Use Auditory Feedback to Specify the Meaning of What We Say

Andreas Lind; Lars Hall; Björn Breidegard; Christian Balkenius; Petter Johansson

Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one’s utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one’s own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops.

Collaboration


Dive into Christian Balkenius's collaborations.

Top Co-Authors

Anna Balkenius

Swedish University of Agricultural Sciences