Maarten A. Hogervorst
Netherlands Organisation for Applied Scientific Research
Publication
Featured research published by Maarten A. Hogervorst.
Journal of Neural Engineering | 2012
Anne-Marie Brouwer; Maarten A. Hogervorst; Jan B. F. van Erp; Tobias Heffelaar; Patrick H Zimmerman; Robert Oostenveld
Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular the alpha and theta band) and event-related potentials (ERPs) (in particular the P300) can be used as a measure of mental workload or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one (n instances) before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features or a combination (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter) as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
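The fusion idea can be illustrated with a toy sketch: a single nearest-mean classifier trained on a concatenated vector of one spectral-power feature and one ERP feature. The feature distributions, the classifier, and all numbers below are illustrative assumptions, not the authors' actual pipeline.

```python
import random

def nearest_mean_train(X, y):
    # Average the feature vectors of each class label.
    means = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def nearest_mean_predict(means, x):
    # Assign x to the class whose mean is closest (squared Euclidean).
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda label: dist(means[label], x))

random.seed(0)

def sample(load):
    # Hypothetical features: spectral power rises with workload,
    # P300-like ERP amplitude drops with workload.
    return [random.gauss(1.0 + load, 0.5), random.gauss(2.0 - load, 0.5)]

train_X = [sample(load) for load in (0, 1) for _ in range(50)]
train_y = [load for load in (0, 1) for _ in range(50)]
test_X = [sample(load) for load in (0, 1) for _ in range(50)]
test_y = [load for load in (0, 1) for _ in range(50)]

# "Fusion" here simply means one model over the concatenated features.
model = nearest_mean_train(train_X, train_y)
acc = sum(nearest_mean_predict(model, x) == y
          for x, y in zip(test_X, test_y)) / len(test_y)
print(f"fusion accuracy: {acc:.2f}")
```

With both feature types in one vector, the class means separate along two dimensions at once, which is why fusing can help when single features are noisy.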
Frontiers in Neuroscience | 2014
Maarten A. Hogervorst; Anne-Marie Brouwer; Jan B. F. van Erp
While studies exist that compare different physiological variables with respect to their association with mental workload, it is still largely unclear which variables supply the best information about the momentary workload of an individual and what the benefit of combining them is. We investigated workload using the n-back task, controlling for body movements and visual input. We recorded EEG, skin conductance, respiration, ECG, pupil size and eye blinks of 14 subjects. Various variables were extracted from these recordings and used as features in individually tuned classification models. Online classification was simulated by using the first part of the data as training set and the last part of the data for testing the models. The results indicate that EEG performs best, followed by eye-related measures and peripheral physiology. Combining variables from different sensors did not significantly improve workload assessment over the best performing sensor alone. Best classification accuracy, a little over 90%, was reached for distinguishing between high and low workload on the basis of 2-min segments of EEG and eye-related variables. A similar and not significantly different performance of 86% was reached using only EEG from the single electrode location Pz.
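The simulated online evaluation described above amounts to a strictly chronological train/test split. A minimal sketch (segment count and split fraction are made up for illustration):

```python
def chronological_split(segments, train_fraction=0.6):
    # Train only on the earliest data and test on the latest,
    # never shuffling across time, so no future data leaks into training.
    cut = int(len(segments) * train_fraction)
    return segments[:cut], segments[cut:]

segments = list(range(10))   # ten segments in recording order
train, test = chronological_split(segments)
print(train, test)           # [0, 1, 2, 3, 4, 5] [6, 7, 8, 9]
```

This mirrors real-time use more honestly than a random split, because physiological signals drift over a session and a shuffled split would let the model peek at that drift.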
Information Fusion | 2010
Maarten A. Hogervorst; Alexander Toet
We present a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers, thermal cameras) in natural daytime colors. The color mapping is derived from the combination of a multi-band image and a corresponding natural color daytime reference image. The mapping optimizes the match between the multi-band image and the reference image, and yields a nightvision image with a natural daytime color appearance. The lookup-table-based mapping procedure is extremely simple and fast and provides object color constancy. Once it has been derived, the color mapping can be deployed in real time to different multi-band image sequences of similar scenes. Displaying night-time imagery in natural colors may help human observers to process this type of imagery faster and better, thereby improving situational awareness and reducing detection and recognition times.
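The core of a lookup-table approach like this can be sketched in a few lines: quantize each multi-band pixel into a table key, and map each key to the mean daytime color observed at the same locations in the co-registered reference image. The two-band pixels, quantization scheme, and color values below are toy assumptions, not the published implementation.

```python
def build_color_lut(multiband, reference, levels=8):
    # Accumulate, per quantized multiband value, the sum of the
    # daytime reference colors seen at the same pixel positions.
    sums, counts = {}, {}
    for mb_px, ref_px in zip(multiband, reference):
        key = tuple(v * levels // 256 for v in mb_px)   # quantize each band
        acc = sums.setdefault(key, [0, 0, 0])
        for i, c in enumerate(ref_px):
            acc[i] += c
        counts[key] = counts.get(key, 0) + 1
    return {k: tuple(v // counts[k] for v in acc) for k, acc in sums.items()}

def apply_color_lut(multiband, lut, levels=8):
    # Once derived, colorization is a single table lookup per pixel.
    return [lut[tuple(v * levels // 256 for v in px)] for px in multiband]

mb = [(200, 30), (40, 220), (210, 25)]               # (visible, thermal) samples
ref = [(30, 160, 40), (120, 80, 70), (34, 156, 44)]  # daytime RGB at same spots
lut = build_color_lut(mb, ref)
colors = apply_color_lut(mb, lut)
print(colors)  # [(32, 158, 42), (120, 80, 70), (32, 158, 42)]
```

Note that pixels 0 and 2 (the same material, slightly different sensor readings) quantize to the same key and therefore receive an identical color, which is the object color constancy property the abstract mentions.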
Optical Engineering | 2012
Alexander Toet; Maarten A. Hogervorst
We present an overview of our recent progress and the current state-of-the-art techniques of color image fusion for night vision applications. Inspired by previously developed color opponent fusing schemes, we initially developed a simple pixel-based false color-mapping scheme that yielded fused false color images with large color contrast and preserved the identity of the input signals. This method has been successfully deployed in different areas of research. However, since this color mapping did not produce realistic colors, we continued to develop a statistical color-mapping procedure that would transfer the color distribution of a given example image to a multiband nighttime image. This procedure yields a realistic color rendering. However, it is computationally expensive and achieves no color constancy since the mapping depends on the relative amounts of the different materials in the scene. By applying the statistical mapping approach in a color look-up-table framework, we finally achieved both color constancy and computational simplicity. This sample-based color transfer method is specific for different types of materials in a scene and can be easily adapted for the intended operating theatre and the task at hand. The method can be implemented as a look-up-table transform and is highly suitable for real-time implementations.
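The statistical color-mapping step described above is, per channel, a shift-and-scale so that the multiband image's statistics match those of the example image. A minimal one-channel sketch (the channel values are made up; real implementations typically work in a perceptual color space):

```python
from statistics import mean, stdev

def match_statistics(source, target):
    # Shift and scale one channel so its mean and standard deviation
    # equal those of the corresponding channel of the example image.
    mu_s, sd_s = mean(source), stdev(source)
    mu_t, sd_t = mean(target), stdev(target)
    return [(v - mu_s) * sd_t / sd_s + mu_t for v in source]

band = [10.0, 20.0, 30.0]         # one channel of the multiband image
example = [100.0, 120.0, 140.0]   # same channel of the daytime example
out = match_statistics(band, example)
print(out)  # [100.0, 120.0, 140.0]
```

Because the mapping depends on the mean and standard deviation of whatever happens to be in view, the same material gets different colors in different scenes, which is exactly the lack of color constancy the abstract points out and which the look-up-table framework resolves.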
Dasarathy, B.V. (Ed.), Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2008, Proc. SPIE 6974, Orlando, FL, 18–20 March 2008 | 2008
Maarten A. Hogervorst; Alexander Toet
We present a fast and efficient method to derive and apply natural colors to nighttime imagery from multiband sensors. The color mapping is derived from the combination of a multiband image and a corresponding natural color reference image. The mapping optimizes the match between the multiband image and the reference image, and yields a nightvision image with colors similar to those of the daytime image. The mapping procedure is simple and fast. Once it has been derived, the color mapping can be deployed in real time. Different color schemes can be used, tailored to the environment and the application. The expectation is that by displaying nighttime imagery in natural colors human observers will be able to interpret the imagery better and faster, thereby improving situational awareness and reducing reaction times.
International Journal of Psychophysiology | 2014
Anne-Marie Brouwer; Maarten A. Hogervorst; Michael Holewijn; Jan B. F. van Erp
Learning to master a task is expected to be accompanied by a decrease in effort during task execution. We examine the possibility to monitor learning using physiological measures that have been reported to reflect effort or workload. Thirty-five participants performed different difficulty levels of the n-back task while a range of physiological and performance measurements were recorded. In order to dissociate non-specific time-related effects from effects of learning, we used the easiest level as a baseline condition. This condition is expected to only reflect non-specific effects of time. Performance and subjective measures confirmed more learning for the difficult level than for the easy level. The difficulty levels affected the physiological variables in the expected way, thereby demonstrating their sensitivity. However, while most of the physiological variables were also affected by time, time-related effects were generally the same for the easy and the difficult level. Thus, in a well-controlled experiment that enabled the dissociation of general time effects from learning, we did not find physiological variables to indicate decreasing effort associated with learning. Theoretical and practical implications are discussed.
Frontiers in Neuroscience | 2014
Anne-Marie Brouwer; Maarten A. Hogervorst
We here introduce a new experimental paradigm to induce mental stress in a quick and easy way while adhering to ethical standards and controlling for potential confounds resulting from sensory input and body movements. In our Sing-a-Song Stress Test, participants are presented with neutral messages on a screen, interleaved with 1-min time intervals. The final message is that the participant should sing a song aloud after the interval has elapsed. Participants sit still during the whole procedure. We found that heart rate and skin conductance during the 1-min intervals following the sing-a-song stress message are substantially higher than during intervals following neutral messages. The order of magnitude of the rise is comparable to that achieved by the Trier Social Stress Test. Skin conductance increase correlates positively with experienced stress level as reported by participants. We also simulated stress detection in real time. When using both skin conductance and heart rate, stress is detected for 18 out of 20 participants, approximately 10 s after onset of the sing-a-song message. In conclusion, the Sing-a-Song Stress Test provides a quick, easy, controlled and potent way to induce mental stress and could be helpful in studies ranging from examining physiological effects of mental stress to evaluating interventions to reduce stress.
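The real-time detection simulated above can be sketched as a simple baseline-deviation test on one channel: estimate the mean and spread of the signal during a quiet baseline, then flag the first sample that rises well above it. The trace, sampling rate, and threshold below are illustrative assumptions, not the authors' detector, and the real study combined skin conductance with heart rate.

```python
from statistics import mean, stdev

def detect_onset(samples, baseline_len=10, threshold=2.0):
    # Flag the first index where the signal exceeds the running
    # baseline mean by more than `threshold` standard deviations.
    base = samples[:baseline_len]
    mu, sd = mean(base), stdev(base)
    for i in range(baseline_len, len(samples)):
        if samples[i] > mu + threshold * sd:
            return i
    return None

# Synthetic skin-conductance trace (arbitrary units, 1 Hz): a flat
# baseline, then a rise after a hypothetical stress message at t = 15 s.
trace = ([5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0]
         + [5.0] * 5 + [6.5, 7.0, 7.5])
onset = detect_onset(trace)
print(onset)  # 15
```

A per-participant baseline like this is what makes detection feasible despite large individual differences in absolute skin conductance levels.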
In B.V. Dasarathy (Ed.), Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2008, Proc. SPIE 6974, pp. 1–12. Bellingham, WA: The International Society for Optical Engineering | 2008
Alexander Toet; Maarten A. Hogervorst
We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup-table transform. The resulting colorized video streams can be displayed in real time on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
Holst, G.C. (Ed.), Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVII, Proc. SPIE 6207 | 2006
Piet Bijl; Klamer Schutte; Maarten A. Hogervorst
Current end-to-end sensor performance measures such as the TOD, MRT, DMRT and MTDP were developed to describe Target Acquisition performance for static imaging. Recent developments in sensor technology (e.g. microscan) and image enhancement techniques (e.g. Super Resolution and Scene-Based Non-Uniformity Correction) require that a sensor performance measure can be applied to dynamic imaging as well. We evaluated the above-mentioned measures using static, dynamic (moving) and different types of enhanced imagery of thermal 4-bar and triangle test patterns. Both theoretical and empirical evidence is provided that the bar-pattern based methods are not suited for dynamic imaging. On the other hand, the TOD method can be applied easily without adaptation to any of the above-mentioned conditions, and the resulting TOD data are in correspondence with the expectations. We conclude that the TOD is the only current end-to-end measure that is able to quantify sensor performance for dynamic imaging and dynamic image enhancement techniques.
Kadar, I. (Ed.), Signal Processing, Sensor Fusion, and Target Recognition XII, Proc. SPIE 5096, pp. 552–561 | 2003
Alexander Toet; Maarten A. Hogervorst
We applied the recently introduced universal image quality index Q, which quantifies the distortion of a processed image relative to its original version, to assess the performance of different graylevel image fusion schemes. The method is as follows. First, we adopt an original test image as the reference image. Second, we produce several distorted versions of this reference image. The distortions in the individual images are complementary, meaning that the same distortion should not occur at the same location in all images (it should be absent in at least one image). Thus, the information content of the overall set of distorted images should equal the information content of the original test image. Third, we apply the image fusion process to the set of distorted images. Fourth, we quantify the similarity of the fused image to the reference image by computing the universal image quality index Q. The method can also be used to optimize image fusion schemes for different types of distortions, by maximizing Q through repeated application of steps two and three for different parameter settings of the fusion scheme.
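The universal image quality index Q of Wang and Bovik combines correlation, luminance similarity and contrast similarity in one closed-form expression: Q = 4·cov(x,y)·x̄·ȳ / ((σx² + σy²)(x̄² + ȳ²)). A minimal sketch over flattened pixel lists (real use computes Q in a sliding window and averages; the sample values here are made up):

```python
from statistics import mean

def quality_index_q(x, y):
    # Universal image quality index Q (Wang & Bovik, 2002):
    # Q = 1 only when x and y are identical; lower values mean
    # more loss of correlation, luminance or contrast.
    n = len(x)
    mx, my = mean(x), mean(y)
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

ref = [1.0, 2.0, 3.0, 4.0]
q_same = quality_index_q(ref, ref)              # identical image -> 1.0
q_dist = quality_index_q(ref, [1.0, 2.0, 3.0, 5.0])  # distorted -> below 1.0
print(q_same, q_dist)
```

In the evaluation scheme above, the fused image produced from the complementary distorted inputs would take the place of the second argument, so Q directly scores how much of the reference the fusion recovered.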