Johan Hedström
Swedish Defence Research Agency
Publications
Featured research published by Johan Hedström.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008
Per-Anders Oskarsson; Lars Eriksson; Patrik Lif; Björn Lindahl; Johan Hedström
We investigated three types of display combinations for threat cueing in a simulated combat vehicle. The display combinations consisted of two bimodal combinations, a visual head-up display (HUD) combined with 3D audio and a tactile torso belt combined with 3D audio, and a multimodal combination: the HUD, tactile belt, and 3D audio combined. The participants' main task was to align the heading of the combat vehicle with the displayed direction to a threat as fast as possible. To increase general task difficulty and provide a secondary measure of mental workload, the participants were also required to identify radio calls. Threat localization accuracy was highest and reaction time shortest with both the HUD combined with 3D audio and the multimodal display. Subjective ratings of perception of initial threat direction were most positive for both the tactile belt combined with 3D audio and the multimodal display. The ratings of perceived threat direction in the final phase of threat alignment, however, were most positive for the HUD combined with 3D audio and the multimodal display. Thus, the multimodal display with HUD, tactile belt, and 3D audio combined proved beneficial on all measures.
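The alignment measure described here boils down to a signed angular difference between vehicle heading and cued threat bearing, with wrap-around at 360 degrees. A minimal sketch of that computation (illustrative only; the function name and sign convention are assumptions, not taken from the paper):

```python
def heading_error_deg(vehicle_heading, threat_bearing):
    """Signed shortest-arc difference between vehicle heading and
    threat bearing, in degrees (positive = threat to the right)."""
    return (threat_bearing - vehicle_heading + 180.0) % 360.0 - 180.0

# Vehicle facing 350 deg, threat cued at 10 deg: the shortest arc
# is +20 deg to the right, not -340 deg.
print(heading_error_deg(350.0, 10.0))  # 20.0
```

The modulo trick keeps the error in (-180, 180], so reaction-time and accuracy analyses always work with the shortest turn.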
international conference on engineering psychology and cognitive ergonomics | 2007
Patrik Lif; Johan Hedström; Peter Svenmarck
There is an interest in using multiple unmanned ground vehicles (UGVs). The Swedish Army Combat School has evaluated a UGV called SNOOKEN II in a number of field studies. To investigate the possibility of handling multiple vehicles, a simulated setting was set up in which the operator simultaneously managed one, two, or three UGVs with limited autonomy. The task was to navigate the UGVs to designated inspection points as fast as possible. The results showed that more inspections were made with multiple UGVs (p < .05). Analysis of the use of autonomous mode, route selection, and interviews also showed that the subjects managed to operate two vehicles with increased performance, but that a third vehicle did not provide any extra benefit.
international conference on human interface and management of information | 2011
Patrik Lif; Per-Anders Oskarsson; Björn Lindahl; Johan Hedström; Jonathan Svensson
We investigated four display configurations for threat cueing in a simulated combat vehicle. The display configurations were a tactile belt only; the tactile belt combined with 3D audio; two visual displays combined with 3D audio; and a multimodal configuration (the visual displays, the tactile belt, and 3D audio combined). The tactile display was also used for navigation information. The participants' main task was to drive according to the navigation information and, when threat cueing onsets occurred, to align the heading of the combat vehicle with the displayed direction of the threat as fast as possible. The tactile display thus switched between navigation and threat cueing information. Performance was overall best with the multimodal display. Threat localization error was smallest with the visual and multimodal displays. The response time was somewhat longer with the tactile belt only, especially in the front sector. This indicates interference between the two tasks when threat cueing onsets occurred at the same position as the navigation information. This should, however, not be a problem in a real combat vehicle, since the sound alert will most likely not be excluded. Thus, if coded correctly, tactile information may be presented for both navigation and threat cueing.
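Directional cueing on a tactile torso belt amounts to mapping a threat bearing onto the nearest tactor around the waist. A minimal sketch of such a mapping (the eight-tactor count and the clockwise-from-front convention are assumptions for illustration; the paper does not specify the belt layout here):

```python
def bearing_to_tactor(bearing_deg, n_tactors=8):
    """Map a threat bearing (0 = straight ahead, clockwise) to the
    index of the nearest tactor on an n-tactor torso belt."""
    sector = 360.0 / n_tactors
    return int(((bearing_deg % 360.0) + sector / 2) // sector) % n_tactors

print(bearing_to_tactor(0))   # 0 (front tactor)
print(bearing_to_tactor(95))  # 2 (right side)
```

Each tactor then owns a 45-degree sector centered on its position, which is why localization with a belt alone is coarser than with a visual display.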
international conference on human-computer interaction | 2014
Patrik Lif; Per-Anders Oskarsson; Johan Hedström; Peter Andersson; Björn Lindahl; Christopher Palm
Brownout during helicopter landing and takeoff is a serious problem and has caused numerous accidents. Development of displays indicating drift is one part of the solution, and since the visual modality is already saturated, one possibility is to use a tactile display. The main purpose of this study was to investigate how tactile displays should be coded to maintain or increase the ability to control lateral drift. Two different tactile drift display configurations were compared, each with three different onset rates to indicate the speed of lateral drift. A visual drift display was used as the control condition. The results show that the best performance is obtained with the basic display with slow onset and with the complex display with a constant onset rate. The results also showed that performance with the best tactile drift display configurations was equal to the already validated visual display.
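Coding drift speed by tactile onset rate, as studied here, can be pictured as a clamped linear mapping from drift speed to pulse rate. A sketch under assumed parameters (the rate range, maximum drift speed, and linearity are all illustrative choices, not values from the study):

```python
def drift_to_onset_rate(drift_speed, slow=2.0, fast=8.0, v_max=3.0):
    """Map lateral drift speed (m/s) to a tactor pulse onset rate (Hz),
    linearly interpolated and clamped to [slow, fast]."""
    frac = min(max(drift_speed / v_max, 0.0), 1.0)
    return slow + frac * (fast - slow)

print(drift_to_onset_rate(0.0))  # 2.0 Hz at hover
print(drift_to_onset_rate(1.5))  # 5.0 Hz at moderate drift
```

The study's finding that a constant onset rate worked best for the complex display suggests that richer spatial coding may not need rate coding on top of it.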
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013
Per-Anders Oskarsson; Patrik Lif; Johan Hedström; Peter Andersson; Björn Lindahl; Anna Tullberg
Helicopter landing and take-off in degraded visibility caused by blowing sand or dust (brown-out) may distort the pilot's comprehension of the helicopter's position. This is a serious problem that may lead to unattended lateral drift or descent rate. We have previously shown advantages of redundant tactile and multimodal information in a simulated combat vehicle. To investigate whether lateral drift in a helicopter can be reduced by use of a drift display, an experiment with a simulated helicopter was performed. Three types of drift displays were tested, visual, tactile, and bimodal, and compared with a primary display that did not present lateral drift. Compared with the primary display, lateral drift was reduced with all three drift display configurations. This indicates the value of a drift display in the helicopter and the possibility of freeing the pilot's vision for parallel tasks by use of tactile or bimodal drift displays.
Aviation, Space, and Environmental Medicine | 2008
Lars Eriksson; Claes von Hofsten; Arne Tribukait; Ola Eiken; Peter Andersson; Johan Hedström
INTRODUCTION The somatogravic illusion (SGI) is easily broken when the pilot looks out the aircraft window during daylight flight, but it has proven difficult to break or even reduce the SGI in non-pilots in simulators using synthetic visual scenes. Could visual-flow scenes that accommodate compensatory head movement reduce the SGI in naive subjects? METHODS We investigated the effects of visual cues on the SGI induced by a human centrifuge. The subject was equipped with a head-tracked, head-mounted display (HMD) and was seated in a fixed gondola facing the center of rotation. The angular velocity of the centrifuge increased from near zero until a 0.57-G centripetal acceleration was attained, resulting in a tilt of the gravitoinertial force vector corresponding to a pitch-up of 30 degrees. The subject indicated perceived horizontal continuously by means of a manual adjustable-plate system. We performed two experiments with within-subjects designs. In Experiment 1, the subjects (N = 13) viewed a darkened HMD and a presentation of simple visual flow beneath a horizon. In Experiment 2, the subjects (N = 12) viewed a darkened HMD, a scene including symbology superimposed on simple visual flow and horizon, and this scene without visual flow (static). RESULTS In Experiment 1, visual flow reduced the SGI from 12.4 ± 1.4 degrees (mean ± SE) to 8.7 ± 1.5 degrees. In Experiment 2, the SGI was smaller in the visual flow condition (9.3 ± 1.8 degrees) than with the static scene (13.3 ± 1.7 degrees) and without HMD presentation (14.5 ± 2.3 degrees), respectively. CONCLUSION It is possible to reduce the SGI in non-pilots by means of a synthetic horizon and simple visual flow conveyed by a head-tracked HMD. This may reflect the power of a more intuitive display for reducing the SGI.
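The stated 30-degree pitch-up follows directly from the 0.57-G centripetal acceleration: the gravitoinertial force vector tilts away from true vertical by arctan(a_c / g). A quick check of that arithmetic:

```python
import math

G = 9.81        # gravitational acceleration, m/s^2
a_c = 0.57 * G  # centripetal acceleration from the abstract

# Tilt of the gravitoinertial force vector away from true vertical:
tilt_deg = math.degrees(math.atan2(a_c, G))
print(round(tilt_deg, 1))  # 29.7, i.e. the ~30-degree pitch-up
```

Because only the ratio a_c/g enters the arctangent, the result is independent of the exact value used for G.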
international conference on human interface and management of information | 2017
Patrik Lif; Fredrik Näsström; Gustav Tolt; Johan Hedström; Jonas Allvar
In many situations it is important to detect and identify people and vehicles. The purpose of this study was to investigate subjects' ability to detect and estimate the number of stationary people on the ground. The unmanned aerial vehicle used visual and infrared sensors, wide and narrow fields of view, and ground speeds of 8 m/s and 12 m/s. Participants watched synthetic video sequences captured from the unmanned aerial vehicle. The results demonstrated that the ability to detect people was affected by the type of sensor and the field of view. It took significantly longer to detect targets with the infrared sensor than with the visual sensor, and significantly longer with the wide field of view than with the narrow field of view. The ability to assess the number of targets was affected by the type of sensor and the speed, with the infrared sensor causing more problems than the visual sensor. Performance also decreased at the higher speed.
international conference on engineering psychology and cognitive ergonomics | 2009
Peter Svenmarck; Dennis Andersson; Björn Lindahl; Johan Hedström; Patrik Lif
This paper investigates how one operator can control a multi-robot system for tactical reconnaissance using partly autonomous UGVs. Instead of controlling individual UGVs, the operator uses supervisory control to allocate the partly autonomous UGVs into suitable groups and define areas for search. A state-of-the-art pursuit-evasion algorithm then performs the detailed control of the available UGVs. The supervisory control was evaluated by letting subjects control either six or twelve UGVs for tactical reconnaissance along the route of advance of a convoy traveling through an urban environment with mobile threats. The results show that increasing the number of UGVs improved the subjects' situation awareness, increased the number of threats detected, and reduced the number of hits on the convoy. More importantly, these benefits were achieved without any increase in mental workload. The results support the common belief in autonomous functions as an approach to reducing the operator-to-vehicle ratio in military applications.
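The supervisory-control step, splitting a pool of UGVs across operator-defined search areas, can be pictured as a proportional allocation. A toy sketch of one such rule (this is an illustration of the allocation idea only, not the paper's pursuit-evasion algorithm; the largest-remainder rounding is an assumption):

```python
def allocate_ugvs(num_ugvs, area_sizes):
    """Split a pool of UGVs across search areas in proportion to
    area size, using largest-remainder rounding so that all UGVs
    are assigned."""
    total = sum(area_sizes)
    shares = [num_ugvs * a / total for a in area_sizes]
    counts = [int(s) for s in shares]
    # Hand out the leftover UGVs to the areas with the largest
    # fractional remainders.
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - counts[i],
                   reverse=True)
    for i in order[:num_ugvs - sum(counts)]:
        counts[i] += 1
    return counts

print(allocate_ugvs(6, [100, 50, 50]))  # [3, 2, 1]
```

With the operator working at this level, doubling the fleet from six to twelve changes the arguments but not the operator's task, which is consistent with the paper's finding that workload did not increase.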
Archive | 2007
Charlotte Sennersten; Jens Alfredson; Martin Castor; Johan Hedström; Björn Lindahl; Craig A. Lindley; Erland Svensson
ieee symposium series on computational intelligence | 2017
Gustav Tolt; Johan Hedström; Solveig Bruvoll; Martin Asprusten