Michael Kai Petersen
Technical University of Denmark
Publications
Featured research published by Michael Kai Petersen.
PLOS ONE | 2014
Arkadiusz Stopczynski; Carsten Stahlhut; Jakob Eg Larsen; Michael Kai Petersen; Lars Kai Hansen
Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. Here we present the technical details and validation of a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system – Smartphone Brain Scanner – combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully portable system for real-time 3D EEG imaging. We discuss the benefits and challenges, including technical limitations as well as details of real-time reconstruction of 3D images of brain activity. We present examples of brain activity captured in a simple experiment involving imagined finger tapping, which shows that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using an off-the-shelf consumer neuroheadset is lower than the signal obtained using high-density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.
affective computing and intelligent interaction | 2011
Michael Kai Petersen; Carsten Stahlhut; Arkadiusz Stopczynski; Jakob Eg Larsen; Lars Kai Hansen
Combining a wireless EEG headset with a smartphone offers new opportunities to capture brain imaging data reflecting our everyday social behavior in a mobile context. However, processing the data on a portable device will require novel approaches to analyze and interpret significant patterns in order to make them available for runtime interaction. Applying a Bayesian approach to reconstruct the neural sources, we demonstrate the ability to distinguish among emotional responses reflected in different scalp potentials when viewing pleasant and unpleasant pictures compared to neutral content. Rendering the activations in a 3D brain model on a smartphone may not only facilitate differentiation of emotional responses but also provide an intuitive interface for touch-based interaction, allowing both for modeling the mental state of users and for providing a basis for novel bio-feedback applications.
International Journal of Psychophysiology | 2014
Arkadiusz Stopczynski; Carsten Stahlhut; Michael Kai Petersen; Jakob Eg Larsen; Camilla Birgitte Falk Jensen; Marieta Georgieva Ivanova; Tobias Andersen; Lars Kai Hansen
Mobile brain imaging solutions, such as the Smartphone Brain Scanner, which combines low-cost wireless EEG sensors with open source software for real-time neuroimaging, may transform neuroscience experimental paradigms. Experiments normally bound by the physical constraints of the lab can move into dynamic environments, allowing brain signals to be captured in everyday contexts. Using smartphones or tablets to access text or images may enable experimental designs capable of tracing emotional responses when shopping or consuming media, incorporating sensorimotor responses reflecting our actions into brain machine interfaces, and facilitating neurofeedback training over extended periods. Even though the quality of consumer neuroheadsets is still lower than that of laboratory equipment and susceptible to environmental noise, we show that mobile neuroimaging solutions, like the Smartphone Brain Scanner, complemented by 3D reconstruction or source separation techniques, may support a range of neuroimaging applications and thus become a valuable addition to high-end neuroimaging solutions.
affective computing and intelligent interaction | 2011
Arkadiusz Stopczynski; Jakob Eg Larsen; Carsten Stahlhut; Michael Kai Petersen; Lars Kai Hansen
We demonstrate a fully functional handheld brain scanner consisting of a low-cost 14-channel EEG headset with a wireless connection to a smartphone, enabling minimally invasive EEG monitoring in naturalistic settings. The smartphone provides a touch-based interface with real-time brain state decoding and 3D reconstruction.
international conference on universal access in human-computer interaction | 2014
Andrea Cuttone; Michael Kai Petersen; Jakob Eg Larsen
In this paper we discuss how to facilitate the process of reflection in Personal Informatics and Quantified Self systems through interactive data visualizations. Four heuristics for the design and evaluation of such systems have been identified through analysis of self-tracking devices and apps. Dashboard interface paradigms in specific self-tracking devices (Fitbit and Basis) are discussed as representative examples of the state of the art in feedback and reflection support. Drawing on existing work in other domains, such as event-related representation of multivariate time series data in financial analytics, we discuss how the heuristics could guide designs that would further facilitate reflection in self-tracking personal informatics systems.
european conference on interactive tv | 2007
Andrius Butkus; Michael Kai Petersen
The large amount of TV, radio, games, music tracks and other IP-based content becoming available in DVB-H mobile digital broadcast, offering more than 50 channels when adapted to the screen size of a handheld device, requires that the selection of media can be personalized according to user preferences. This paper presents an approach to modeling user preferences that could be used as a foundation for filtering content listed in the ESG electronic service guide, based on the TVA TV-Anytime metadata associated with the consumed content. The semantic modeling capabilities are assessed based on examples of BBC program listings using TVA classification schema vocabularies. Similarities between programs are identified using attributes from different knowledge domains, and the potential for increasing similarity knowledge through second-level associations between terms belonging to separate TVA domain-specific vocabularies is demonstrated.
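A simple way to ground similarity between programs described by classification-scheme terms is a set-overlap measure such as Jaccard similarity; a minimal sketch, assuming each programme is reduced to a set of classification term IDs (the IDs and genre labels below are illustrative placeholders, not actual TVA vocabulary):

```python
def genre_similarity(program_a, program_b):
    """Jaccard similarity over classification-scheme term sets.

    program_a, program_b : sets of classification term IDs.
    Returns a score in [0, 1]; 1 means identical term sets.
    """
    if not program_a and not program_b:
        return 0.0
    shared = program_a & program_b
    return len(shared) / len(program_a | program_b)

# Hypothetical genre terms for two programmes
news = {"3.1.1", "3.1.1.9"}          # e.g. news, current affairs
docu = {"3.1.1.9", "3.1.6"}          # e.g. current affairs, documentary
sim = genre_similarity(news, docu)   # 1 shared term out of 3 -> 1/3
```

Second-level associations between vocabularies could then be modeled by mapping related terms from separate schemes into the same set before computing the overlap.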
international conference of the ieee engineering in medicine and biology society | 2012
Carsten Stahlhut; Hagai Attias; Arkadiusz Stopczynski; Michael Kai Petersen; Jakob Eg Larsen; Lars Kai Hansen
EEG source reconstruction involves solving an inverse problem that is highly ill-posed and dependent on a generally fixed forward propagation model. In this contribution we compare how low- and high-density EEG setups depend on correct forward modeling. Specifically, we examine how different forward models affect the source estimates obtained using four inverse solvers: Minimum-Norm, LORETA, Minimum-Variance Adaptive Beamformer, and Sparse Bayesian Learning.
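Of the four solvers, the Minimum-Norm estimate has the simplest closed form: a Tikhonov-regularized least-squares solution built directly from the forward (lead field) matrix. A minimal sketch with a random toy forward model, where the dimensions and regularization value are assumptions for illustration, not settings from the paper:

```python
import numpy as np

def minimum_norm_estimate(L, y, alpha=1e-2):
    """Tikhonov-regularized minimum-norm source estimate.

    L     : (n_channels, n_sources) forward (lead field) matrix
    y     : (n_channels,) measured scalp potentials
    alpha : regularization strength (assumed value for this toy example)

    Implements s_hat = L^T (L L^T + alpha * I)^{-1} y.
    """
    n_channels = L.shape[0]
    gram = L @ L.T + alpha * np.eye(n_channels)
    return L.T @ np.linalg.solve(gram, y)

# Toy setup: 8 channels, 32 candidate sources, one active source
rng = np.random.default_rng(0)
L = rng.standard_normal((8, 32))
s_true = np.zeros(32)
s_true[5] = 1.0
y = L @ s_true
s_hat = minimum_norm_estimate(L, y)
```

The sensitivity to the forward model examined in the paper can be probed with this sketch by perturbing `L` between simulation and estimation and observing how `s_hat` degrades.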
4th International Workshop on Cognitive Information Processing (CIP) | 2014
Per Bækgaard; Michael Kai Petersen; Jakob Eg Larsen
Achieving robust adaptive synchronization of multimodal biometric inputs: The recent arrival of wireless EEG headsets that enable mobile real-time 3D brain imaging on smartphones, and low-cost eye trackers that provide gaze control of tablets, will radically change how biometric sensors might be integrated into next generation user interfaces. In experimental lab settings EEG neuroimaging and eye tracking data are traditionally combined using external triggers to synchronize the signals. However, with biometric sensors increasingly being applied in everyday usage scenarios, there will be a need for solutions providing a continuous alignment of signals. In the present paper we propose using spontaneous eye blinks as a means to achieve near real-time synchronization of EEG and eye tracking. Analyzing key parameters that define eye blink signatures across the two domains, we outline a probability-function-based algorithm to correlate the signals. Comparing the accuracy of the method against the state-of-the-art EYE-EEG plug-in for offline analysis of EEG and eye tracking data, we propose that our approach could be applied for robust synchronization of biometric sensor data collected in a mobile context.
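The core idea, recovering the clock offset between two streams by correlating blink events detected in each, can be sketched as follows. This is a plain cross-correlation over discretized event trains, not the probability-function algorithm of the paper; the sampling resolution and lag window are assumptions:

```python
import numpy as np

def estimate_lag(eeg_blinks, gaze_blinks, fs=128.0, max_lag_s=2.0):
    """Estimate the clock offset between two blink event lists.

    eeg_blinks, gaze_blinks : blink onset times in seconds, per stream
    fs        : resolution of the discretized event trains (assumed)
    max_lag_s : largest offset searched (assumed)

    Returns the offset (s) to add to the gaze timestamps so they
    align with the EEG timestamps.
    """
    t_max = max(max(eeg_blinks), max(gaze_blinks)) + max_lag_s
    n = int(t_max * fs) + 1
    a = np.zeros(n)
    b = np.zeros(n)
    a[(np.asarray(eeg_blinks) * fs).astype(int)] = 1.0
    b[(np.asarray(gaze_blinks) * fs).astype(int)] = 1.0
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # Cross-correlate the event trains; the best lag aligns most blinks
    scores = [np.dot(a, np.roll(b, k)) for k in lags]
    return lags[int(np.argmax(scores))] / fs

# The gaze stream runs 0.25 s behind the EEG stream
eeg = [1.0, 3.2, 5.7, 8.1]
gaze = [t + 0.25 for t in eeg]
lag = estimate_lag(eeg, gaze)  # -0.25: shift gaze back to align
```

Blink durations and amplitudes (the "signatures" analyzed in the paper) would in practice weight the correlation rather than treating every blink equally.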
2010 2nd International Workshop on Cognitive Information Processing | 2010
Michael Kai Petersen; Morten Mørup; Lars Kai Hansen
Cognitive component analysis, defined as an unsupervised learning of features resembling human comprehension, suggests that the sensory structures we perceive might often be modeled by reducing dimensionality and treating objects in space and time as linear mixtures incorporating sparsity and independence. In music as well as language, the patterns we come across become part of our mental workspace when the bottom-up sensory input rises above the background noise of core affect, and top-down trigger distinct feelings reflecting a shift of our attention. And as both low-level semantics and our emotional responses can be encoded in words, we propose a simplified cognitive approach to model how we perceive media. Representing song lyrics in a vector space of reduced dimensionality using LSA, we combine bottom-up defined term distances with affective adjectives that top-down constrain the latent semantics according to the psychological dimensions of valence and arousal. Subsequently we apply a Tucker tensor decomposition combined with re-weighted l1 regularization and a Bayesian automatic relevance determination (ARD) approach to derive a sparse representation of complementary affective mixtures, which we suggest function as cognitive components for perceiving the underlying structure in lyrics.
computer music modeling and retrieval | 2009
Michael Kai Petersen; Lars Kai Hansen; Andrius Butkus
Outlining a high-level cognitive approach to how we select media based on affective user preferences, we model the latent semantics of lyrics as patterns of emotional components. Using a selection of affective last.fm tags as top-down emotional buoys, we apply latent semantic analysis (LSA) to represent, bottom-up, the correlation of terms and song lyrics in a vector space that reflects the emotional context. Analyzing the resulting patterns of affective components by comparing them against last.fm tag clouds describing the corresponding songs, we propose that it might be feasible to automatically generate affective user preferences based on song lyrics.
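Once tags and lyrics live in the same latent space, matching a song to its affective tags reduces to cosine similarity. A minimal sketch with hypothetical latent vectors; none of the tag choices or vector values come from the paper, and real vectors would be produced by the LSA step:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_tags(lyric_vec, tag_vecs):
    """Rank affective tags by cosine similarity to a lyric's LSA vector.

    lyric_vec : latent-space vector for one song's lyrics
    tag_vecs  : dict mapping tag name -> latent-space vector
    Returns tag names ordered from most to least similar.
    """
    scored = {tag: cosine(lyric_vec, v) for tag, v in tag_vecs.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical 3-d latent vectors for three tags and one song
tags = {"melancholic": np.array([0.1, 0.9, 0.0]),
        "upbeat":      np.array([0.9, 0.1, 0.2]),
        "angry":       np.array([0.0, 0.2, 0.9])}
song = np.array([0.2, 0.8, 0.1])
ranking = rank_tags(song, tags)  # most similar tag first
```

Comparing such a ranking against the song's actual last.fm tag cloud is the kind of evaluation the abstract describes.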