Publication


Featured research published by Veikko Surakka.


International Journal of Psychophysiology | 1998

Facial and emotional reactions to Duchenne and non-Duchenne smiles

Veikko Surakka; Jari K. Hietanen

The purpose of the study was to investigate facial and emotional reactions while viewing two different types of smiles and the relation of emotional empathy to these reactions. Facial EMG was recorded from the orbicularis oculi and zygomaticus major muscle regions while subjects individually watched two blocks of stimuli. One block included posed facial expressions of the Duchenne smile (a felt smile) and a neutral face; the other block included expressions of another type of smile, the non-Duchenne smile (an unfelt smile), and a neutral face. Emotional experiences were rated after each stimulus block. Finally, a measure of emotional empathy was administered. Facial EMG reactions differentiated between the neutral face and the Duchenne smile but not between the neutral face and the non-Duchenne smile. The Duchenne smile block induced an experience of pleasure in the subjects who saw it as the first stimulus block. Empathy correlated with the rated experiences of pleasure and interest after the Duchenne smile block.
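
A minimal sketch of how facial EMG responses of this kind are commonly quantified: the mean rectified EMG during each stimulus is compared against a pre-stimulus baseline, separately for the two muscle regions. The function name, sampling rate, and window lengths below are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def emg_response(signal, fs, stim_onset_s, baseline_s=1.0, window_s=5.0):
    """Mean rectified EMG change from a pre-stimulus baseline.

    signal       : 1-D EMG trace in microvolts (hypothetical data)
    fs           : sampling rate in Hz (assumed)
    stim_onset_s : stimulus onset time in seconds
    """
    onset = int(stim_onset_s * fs)
    base = np.abs(signal[onset - int(baseline_s * fs):onset]).mean()
    resp = np.abs(signal[onset:onset + int(window_s * fs)]).mean()
    return resp - base  # positive values = increased muscle activity

# Illustrative use with synthetic traces for the two recorded muscle regions.
fs = 1000
rng = np.random.default_rng(0)
zygomaticus = rng.normal(0, 2, 10 * fs)   # cheek (smiling) muscle region
orbicularis = rng.normal(0, 2, 10 * fs)   # eye-region muscle

print(emg_response(zygomaticus, fs, stim_onset_s=2.0))
print(emg_response(orbicularis, fs, stim_onset_s=2.0))
```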


Human Factors in Computing Systems | 2008

Emotional and behavioral responses to haptic stimulation

Katri Salminen; Veikko Surakka; Jani Lylykangas; Jukka Raisamo; Rami Saarinen; Roope Raisamo; Jussi Rantala; Grigori Evreinov

A prototype of a friction-based, horizontally rotating fingertip stimulator was used to investigate emotional experiences and behavioral responses to haptic stimulation. The rotation style of 12 different stimuli was varied by burst length (i.e., 20, 50, or 100 ms), continuity (i.e., continuous or discontinuous), and direction (e.g., forward or backward). Using these stimuli, 528 stimulus pairs were presented to 12 subjects, who were asked to judge whether the stimuli in each pair were the same or different. They then rated the stimuli on four scales measuring the pleasantness, arousal, approachability, and dominance qualities of the 12 stimuli. The results showed that continuous forward-backward rotating stimuli were rated as significantly more unpleasant, arousing, avoidable, and dominating than other types of stimulation (e.g., discontinuous forward rotation). The reaction times to these stimuli were also significantly faster than reaction times to discontinuous forward and backward rotating stimuli. The results clearly suggest that even simple haptic stimulation can carry emotional information, and they can be utilized when applying haptics in human-technology interaction.
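
The 12 stimulus types follow directly from crossing the three parameters named in the abstract, at least under its simplest reading. The sketch below just enumerates that parameter space; the field names are illustrative and do not reflect the prototype's actual control interface.

```python
from itertools import product

# Parameter space described in the abstract: 3 burst lengths x 2 continuity
# modes x 2 rotation directions = 12 stimulus types.
burst_lengths_ms = (20, 50, 100)
continuity = ("continuous", "discontinuous")
direction = ("forward", "backward")

stimuli = [
    {"burst_ms": b, "continuity": c, "direction": d}
    for b, c, d in product(burst_lengths_ms, continuity, direction)
]
assert len(stimuli) == 12

# All ordered pairs of the 12 stimuli would be 144; the 528 pairs reported
# in the abstract imply a repetition/pairing scheme that is not spelled out
# there, so it is not reproduced here.
for s in stimuli[:3]:
    print(s)
```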


IEEE Transactions on Haptics | 2009

Methods for Presenting Braille Characters on a Mobile Device with a Touchscreen and Tactile Feedback

Jussi Rantala; Roope Raisamo; Jani Lylykangas; Veikko Surakka; Jukka Raisamo; Katri Salminen; Toni Pakkanen; Arto Hippula

Three novel interaction methods were designed for reading six-dot Braille characters from the touchscreen of a mobile device. A prototype device with a piezoelectric actuator embedded under the touchscreen was used to create tactile feedback. The three interaction methods, scan, sweep, and rhythm, enabled users to read Braille characters one at a time, either by exploring the characters dot by dot or by sensing a rhythmic pattern presented on the screen. The methods were tested with five blind Braille readers as a proof of concept. The results of the first experiment showed that all three methods can be used to convey information, as the participants could accurately (91-97 percent) recognize individual characters. In the second experiment the presentation rate of the most efficient and preferred method, rhythm, was varied. A mean recognition accuracy of 70 percent was found when the speed of presenting a single character was nearly doubled from the first experiment. The results showed that temporal tactile feedback and Braille coding can be used to transmit single-character information, while further studies are still needed to evaluate the presentation of serial information, i.e., multiple Braille characters.
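
The rhythm method presents the six dots of a Braille cell as a temporal pattern, one pulse per dot position. A minimal sketch of one such encoding, assuming raised and lowered dots are distinguished by pulse length, is shown below; the durations and the exact mapping are illustrative assumptions, not the parameters used in the study.

```python
# Six-dot Braille cells as tuples of raised dot positions (1-6).
BRAILLE = {"a": (1,), "b": (1, 2), "c": (1, 4), "l": (1, 2, 3)}

def rhythm_pattern(char, long_ms=360, short_ms=120, gap_ms=150):
    """Encode one Braille character as a sequence of (pulse_ms, gap_ms)
    vibration pulses, one pulse per dot position 1..6.

    A raised dot is a long pulse and a lowered dot a short pulse; the
    actual durations used in the study are not given in the abstract.
    """
    raised = set(BRAILLE[char])
    return [((long_ms if dot in raised else short_ms), gap_ms)
            for dot in range(1, 7)]

print(rhythm_pattern("b"))
```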


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Feature-based detection of facial landmarks from neutral and expressive facial images

Yulia Gizatdinova; Veikko Surakka

A feature-based method for detecting landmarks from facial images was designed. The method was based on extracting oriented edges and constructing edge maps at two resolution levels. Edge regions with a characteristic edge pattern formed landmark candidates. Eye detection was invariant to facial expressions, whereas nose and mouth detection was degraded by expressions of happiness and disgust.
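
A minimal sketch of the kind of pipeline the abstract describes: oriented edges are extracted at two resolution levels and regions with dense edge responses become landmark candidates. The filter choice, grid size, and thresholds below are assumptions for illustration, not the published method.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude and orientation via simple Sobel kernels (numpy only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def landmark_candidates(img, cell=16, density_thresh=0.2):
    """Grid cells whose edge density exceeds a threshold become candidates
    (assumed grouping rule; the original method's grouping is more specific)."""
    mag, _ = sobel_edges(img)
    edges = mag > mag.mean() + mag.std()
    cands = []
    for r in range(0, img.shape[0] - cell, cell):
        for c in range(0, img.shape[1] - cell, cell):
            if edges[r:r + cell, c:c + cell].mean() > density_thresh:
                cands.append((r, c))
    return cands

# Two resolution levels: full image and a 2x-downsampled copy (assumed scheme).
img = np.random.default_rng(1).random((128, 128))
coarse = img[::2, ::2]
print(len(landmark_candidates(img)), len(landmark_candidates(coarse)))
```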


Speech Communication | 1998

McGurk effect in Finnish syllables, isolated words, and words in sentences: effects of word meaning and sentence context

Mikko Sams; Petri Manninen; Veikko Surakka; Pia Helin; Riitta Kättö

The "McGurk effect" is a robust illusion in which subjects' perception of an acoustical syllable is modified by the sight of the talker's articulation. The effect is not universal, but it is experienced by the majority of subjects. For example, if the acoustical syllable /ba/ is presented in synchrony with a face articulating /ga/, English-speaking subjects typically perceive /da/ and less frequently /ga/. We studied the McGurk effect in Finnish syllables, isolated words, and words presented in sentence context in 65 subjects. Audiovisual combinations expected to be perceived either as meaningful words or as nonwords were used. Words were also presented in various positions of three-word sentences in which the expected word could match or mismatch the sentence context. A strong McGurk effect was obtained with each stimulus type. In addition, the strength of the McGurk effect did not appear to be influenced by word meaning or sentence context. These findings support the idea that audiovisual speech integration occurs at a phonetic, perceptual level before word meaning is extracted.


Cognitive Brain Research | 1998

Modulation of human auditory information processing by emotional visual stimuli

Veikko Surakka; Mirja Tenhunen-Eskelinen; Jari K. Hietanen; Mikko Sams

Auditory event-related potentials, the mismatch negativity (MMN) and the N100, were recorded from seven subjects while they read text and watched emotionally negative, neutral, and positive pictures varying in valence and arousal. The MMN reflects automatic detection of change in the auditory stimulus stream, whereas the functionally different N100 is triggered by the onset of various auditory stimuli. The N100 was stable during all visual conditions. The MMN was very similar during text reading and during neutral and negative slide viewing, but it was significantly attenuated during viewing of positively valenced slides. We suggest that visual emotional information of high positive valence and low arousal signals a nonthreatening and nonappetitive environment. Such an environment probably reduces the need for auditory change detection.
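
The MMN is conventionally quantified as a difference wave: the averaged response to deviant sounds minus the averaged response to standard sounds. The sketch below shows that computation on synthetic epochs; the epoch counts, lengths, and the simulated negativity are illustrative assumptions, not the study's data.

```python
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """MMN-style difference wave: mean deviant ERP minus mean standard ERP.

    Both inputs: arrays of shape (n_trials, n_samples), baseline-corrected.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Synthetic example: 200 standard and 40 deviant epochs, 600 samples each.
rng = np.random.default_rng(0)
standards = rng.normal(0, 1, (200, 600))
deviants = rng.normal(0, 1, (40, 600))
deviants[:, 150:250] -= 1.5  # simulated negativity in a post-change window

mmn = difference_wave(standards, deviants)
print(mmn[150:250].mean())   # more negative than the surrounding samples
```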


European Journal of Cognitive Psychology | 2004

Evidence for the integration of audiovisual emotional information at the perceptual level of processing

Jari K. Hietanen; Jukka M. Leppänen; Marko Illi; Veikko Surakka

The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were simultaneously presented as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or the auditory (Experiment 2) channel and to recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the nonattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted as evidence for the view that facial and vocal emotional signals are integrated at the perceptual level of information processing and not at later response-selection stages.
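
A minimal sketch of the congruence analysis implied by the abstract: recognition accuracy and mean response time for correct trials are computed separately for congruent and incongruent face-voice combinations. The trial records and field order below are invented for illustration.

```python
from statistics import mean

# Hypothetical trial records: (congruent?, correct?, response time in ms).
trials = [
    (True, True, 640), (True, True, 655), (False, True, 710),
    (False, False, 760), (True, False, 690), (False, True, 735),
]

def summarize(congruent):
    """Accuracy and mean RT of correct responses for one congruence condition."""
    subset = [t for t in trials if t[0] == congruent]
    accuracy = mean(1.0 if correct else 0.0 for _, correct, _ in subset)
    rt_correct = mean(rt for _, correct, rt in subset if correct)
    return accuracy, rt_correct

print("congruent:  ", summarize(True))
print("incongruent:", summarize(False))
```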


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2013

Touch gestures in communicating emotional intention via vibrotactile stimulation

Jussi Rantala; Katri Salminen; Roope Raisamo; Veikko Surakka

Remote communication between people typically relies on audio and vision, although current mobile devices are increasingly based on detecting different touch gestures such as swiping. These gestures could be adapted to interpersonal communication by using tactile technology capable of producing touch stimulation to a user's hand. It has been suggested that such mediated social touch would allow for new forms of emotional communication. The aim was to study whether vibrotactile stimulation that imitates human touch can convey intended emotions from one person to another. For this purpose, devices were used that converted the touch gestures of squeeze and finger touch to vibrotactile stimulation. When one user squeezed his device or touched it with finger(s), another user felt corresponding vibrotactile stimulation on her device via four vibrating actuators. In an experiment, participant dyads comprising a sender and a receiver were to communicate variations in the affective dimensions of valence and arousal using the devices. The sender's task was to create stimulation that would convey an unpleasant, pleasant, relaxed, or aroused emotional intention to the receiver. Both the sender and the receiver rated the stimulation using scales for valence and arousal so that the match between the sender's intended emotions and the receiver's interpretations could be measured. The results showed that squeeze was better at communicating unpleasant and aroused emotional intention, while finger touch was better at communicating pleasant and relaxed emotional intention. The results can be used in developing technology that enables people to communicate via touch by choosing a touch gesture that matches the desired emotion.
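
A minimal sketch of the kind of gesture-to-vibration mapping described above: a squeeze or a finger touch is converted into drive amplitudes for four actuators. The scaling, the actuator layout, and the assumption that squeeze drives all actuators while a finger touch drives a single one are illustrative, not the behaviour of the devices used in the study.

```python
def gesture_to_amplitudes(gesture, intensity):
    """Map a touch gesture to drive amplitudes (0..1) for four actuators.

    gesture   : "squeeze" or "finger" (assumed gesture labels)
    intensity : normalized pressure or contact area in [0, 1] (assumed input)
    """
    intensity = max(0.0, min(1.0, intensity))
    if gesture == "squeeze":
        return [intensity] * 4             # uniform vibration across the hand
    if gesture == "finger":
        return [intensity, 0.0, 0.0, 0.0]  # localized vibration (assumed layout)
    raise ValueError(f"unknown gesture: {gesture}")

print(gesture_to_amplitudes("squeeze", 0.8))
print(gesture_to_amplitudes("finger", 0.5))
```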


Archive | 2002

Auditory Emotional Feedback Facilitates Human-Computer Interaction

Anne Aula; Veikko Surakka

The present study investigated psychophysiological responses to emotional feedback in a computerised problem-solving experiment. Forty subjects solved 27 series of mathematical tasks. After each series, a speech synthesiser gave emotionally negative, neutral, or positive feedback. Response times and error rates for calculations following the different feedback categories were analysed. Pupil size was measured during and for five seconds after each feedback category, and ratings of the feedback categories were also collected. The ratings showed that the different feedback categories were effective in eliciting congruent emotions in the subjects. Task times were significantly shorter after positive than after negative feedback. Error rates were not affected by the feedback. Pupil size was significantly smaller after the feedback than during it, and after positive feedback the pupil diameter decreased significantly faster than after the other feedback categories. Thus, positive emotional feedback had beneficial effects on human behaviour and physiology in human-computer interaction.
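
A minimal sketch of the pupil measures implied by the abstract: mean diameter during feedback, mean diameter over the five seconds after it, and the rate of change in that after-window. The sampling rate, window boundaries, and synthetic trace are assumptions for illustration only.

```python
import numpy as np

def pupil_summary(pupil, fs, feedback_start_s, feedback_end_s, after_s=5.0):
    """Mean pupil diameter during feedback, mean over the window after it,
    and the linear rate of change (units/s) in that after-window."""
    d0, d1 = int(feedback_start_s * fs), int(feedback_end_s * fs)
    a1 = d1 + int(after_s * fs)
    during = pupil[d0:d1].mean()
    after = pupil[d1:a1].mean()
    t = np.arange(a1 - d1) / fs
    slope = np.polyfit(t, pupil[d1:a1], 1)[0]   # negative = shrinking pupil
    return during, after, slope

# Synthetic trace: dilation during feedback, decay afterwards (illustrative).
fs = 60
trace = np.concatenate([np.full(2 * fs, 3.0),
                        np.full(2 * fs, 3.4),
                        3.4 - 0.05 * np.arange(5 * fs) / fs])
print(pupil_summary(trace, fs, feedback_start_s=2.0, feedback_end_s=4.0))
```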


The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research | 2003

Chapter 22 – Voluntary Eye Movements in Human-Computer Interaction

Veikko Surakka; Marko Illi; Poika Isokoski

This chapter presents an overview of human–computer interaction (HCI) techniques that utilize voluntary eye movements for interaction with computers. Interaction among humans, as well as interaction between a human and a computer, can be classified as purely unimodal or multimodal. In unimodal interaction, communicative signals are transmitted through one modality only; an example of unimodal human interaction is a discussion over the telephone, in which the sender uses speech only and the receiver uses hearing only. In multimodal interaction, communicative signals are transmitted through several different modalities. In comparison to human interaction, HCI is more limited in both input and output methods. Most systems allow input to be transmitted through one modality only, usually the hands, which means that other input modalities that people use naturally in human–human interaction remain unused in HCI. It is, however, possible to utilize other modalities in HCI as well. Several modalities, for example speech, eye movements, and facial muscle activity, offer promising and interesting alternative input methods for a computer. If information from the user to the computer, and vice versa, could be transmitted through several modalities, this would conceivably result in more efficient, versatile, flexible, and eventually more natural interaction between the user and the computer, closer to human–human interaction.

Collaboration


Dive into Veikko Surakka's collaboration.

Top Co-Authors

Jukka Lekkala

Tampere University of Technology

Ville Rantanen

Tampere University of Technology
