Publication


Featured research published by Jaka Sodnik.


Sensors | 2014

An Analysis of the Precision and Reliability of the Leap Motion Sensor and Its Suitability for Static and Dynamic Tracking

Jože Guna; Grega Jakus; Matevž Pogačnik; Sašo Tomažič; Jaka Sodnik

We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. A linear correlation analysis revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, it cannot be used as a professional tracking system in its current configuration.
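The precision figures above come down to simple per-axis sample statistics. As a rough illustration (a minimal sketch of our own, not the authors' analysis code), the standard deviation of repeated position samples at one static reference location could be computed like this, with hypothetical sample values:

```java
/** Minimal sketch (not the authors' code): per-axis standard deviation
 *  of repeated 3D position samples taken at one reference location. */
public class TrackingPrecision {

    /** Sample-based standard deviation of one coordinate axis. */
    static double stdDev(double[] values) {
        double mean = 0;
        for (double v : values) mean += v;
        mean /= values.length;
        double ss = 0;
        for (double v : values) ss += (v - mean) * (v - mean);
        return Math.sqrt(ss / (values.length - 1));
    }

    public static void main(String[] args) {
        // Hypothetical samples (mm) reported by the sensor for a static target.
        double[] x = {100.02, 100.05, 99.98, 100.01, 100.04};
        double[] y = {200.11, 200.09, 200.14, 200.10, 200.12};
        double[] z = {150.30, 150.27, 150.33, 150.29, 150.31};
        System.out.printf("std dev (mm): x=%.3f y=%.3f z=%.3f%n",
                stdDev(x), stdDev(y), stdDev(z));
    }
}
```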


Sensors | 2015

Exploring direct 3D interaction for full horizontal parallax light field displays using Leap Motion Controller

Vamsi Kiran Adhikarla; Jaka Sodnik; Péter Szolgay; Grega Jakus

This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources, as if they were emitted from scene points. Each scene point is rendered individually, resulting in a more realistic and accurate 3D visualization than other 3D display technologies provide. We propose an interaction setup combining the visualization of objects within the field of view (FOV) of a light field display with their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study. The results of the study revealed a high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Our results also revealed some limitations of the proposed setup and adjustments to be addressed in future work.
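As a rough illustration of the selection step (a minimal sketch under our own assumptions, not the paper's implementation), picking the displayed object nearest to a tracked fingertip could look like this:

```java
/** Minimal sketch (assumptions, not the paper's implementation): selecting
 *  the scene object closest to a tracked fingertip position. */
public class GestureSelection {

    record Vec3(double x, double y, double z) {
        double dist(Vec3 o) {
            double dx = x - o.x, dy = y - o.y, dz = z - o.z;
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }

    /** Returns the index of the object nearest to the fingertip,
     *  or -1 if none is within the selection radius. */
    static int select(Vec3 fingertip, Vec3[] objects, double radius) {
        int best = -1;
        double bestDist = radius;
        for (int i = 0; i < objects.length; i++) {
            double d = fingertip.dist(objects[i]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        Vec3[] scene = { new Vec3(0, 0, 200), new Vec3(50, 10, 220) };
        Vec3 finger = new Vec3(48, 12, 215);   // hypothetical tracker sample (mm)
        System.out.println("selected object: " + select(finger, scene, 30));
    }
}
```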


Applied Ergonomics | 2015

A User Study of Auditory, Head-Up and Multi-modal Displays in Vehicles

Grega Jakus; Christina Dicke; Jaka Sodnik

This paper describes a user study on interaction with an in-vehicle information system (IVIS). The motivation for conducting this research was to investigate the subjectively and objectively measured impact of using a single- or multi-modal IVIS while driving. A hierarchical, list-based menu was presented using a windshield projection (head-up display), an auditory display and a combination of both interfaces. The users were asked to navigate a vehicle in a driving simulator and simultaneously perform a set of tasks of varying complexity. The experiment showed that interaction with the visual and audio-visual head-up displays is faster and more efficient than with the audio-only display. All the interfaces had a similar impact on overall driving performance. There was no significant difference between the visual-only and audio-visual displays in terms of their efficiency and safety; however, the majority of test subjects clearly preferred to use the multi-modal interface while driving.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2011

Multiple spatial sounds in hierarchical menu navigation for visually impaired computer users

Jaka Sodnik; Grega Jakus; Saso Tomazic

This paper describes a user study on the benefits and drawbacks of simultaneous spatial sounds in auditory interfaces for visually impaired and blind computer users. Two different auditory interfaces in spatial and non-spatial conditions were proposed to represent the hierarchical menu structure of a simple word processing application. In the horizontal interface, the sound sources (the menu items) were located in the horizontal plane on a virtual ring surrounding the user's head, while the sound sources in the vertical interface were aligned one above the other in front of the user. In the vertical interface, the central pitch of the sound sources at different elevations was changed in order to improve the otherwise relatively low localization performance in the vertical dimension. The interaction with the interfaces was based on a standard computer keyboard for input and a pair of studio headphones for output. Twelve blind or visually impaired test subjects were asked to perform ten different word processing tasks within four experiment conditions. Task completion times, navigation performance, overall satisfaction and cognitive workload were evaluated. The initial hypothesis, i.e. that the spatial auditory interfaces with multiple simultaneous sounds would prove to be faster and more efficient than the non-spatial ones, was not confirmed. On the contrary, the spatial auditory interfaces proved to be significantly slower due to the high cognitive workload and temporal demand. The majority of users did in fact finish tasks with less navigation and key pressing; however, they required much more time. They reported the spatial auditory interfaces to be hard to use for a longer period of time due to the high temporal and mental demand, especially with regard to the comprehension of multiple simultaneous sounds. The comparison between the horizontal and vertical interfaces showed no significant differences between the two. It is important to point out that all participants were novice users of the system; it is therefore possible that the overall performance could change with more extensive use of the interfaces and an increased number of trials or experiment sets. Our interviews with visually impaired and blind computer users showed that they are used to sharing their auditory channel in order to perform multiple simultaneous tasks such as listening to the radio, talking to somebody and using the computer. As the perception of multiple simultaneous sounds requires the entire capacity of the auditory channel and the total concentration of the listener, it does not enable such multitasking.
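As an illustration of the two source layouts (a minimal sketch under our own assumptions, not the study's code), the ring positions of the horizontal interface and the per-elevation pitch factors of the vertical interface could be computed like this:

```java
/** Minimal sketch (our assumptions, not the study's code): placing N menu
 *  items as sound sources, either on a horizontal ring around the listener's
 *  head or stacked vertically in front of the listener. */
public class AuditoryMenuLayout {

    /** Horizontal layout: items spread evenly on a ring of the given radius. */
    static double[][] horizontalRing(int n, double radius) {
        double[][] pos = new double[n][3];           // {x, y, z} per item
        for (int i = 0; i < n; i++) {
            double azimuth = 2 * Math.PI * i / n;    // evenly spaced angles
            pos[i][0] = radius * Math.sin(azimuth);  // x: left-right
            pos[i][1] = 0;                           // y: ear height
            pos[i][2] = -radius * Math.cos(azimuth); // z: front-back
        }
        return pos;
    }

    /** Vertical layout aid: higher items get a higher central pitch to
     *  compensate for poor elevation localization. */
    static double[] verticalPitchFactors(int n, double semitonesPerStep) {
        double[] factors = new double[n];
        for (int i = 0; i < n; i++) {
            factors[i] = Math.pow(2, i * semitonesPerStep / 12.0);
        }
        return factors;
    }

    public static void main(String[] args) {
        double[][] ring = horizontalRing(6, 1.0);
        System.out.printf("item 0 at (%.2f, %.2f, %.2f)%n",
                ring[0][0], ring[0][1], ring[0][2]);
        double[] pitch = verticalPitchFactors(6, 2); // hypothetical 2-semitone steps
        System.out.printf("top item pitch factor: %.2f%n", pitch[5]);
    }
}
```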


Conference on Computer as a Tool | 2003

Spatial sound generation using HRTF created by the use of recursive filters

Rudolf Susnik; Jaka Sodnik; Anton Umek; Saso Tomazic

Generating spatial sound and playing it through headphones is a demanding task, since two important factors, the interaural level difference (ILD) and the interaural time difference (ITD), need to be taken into consideration. The problem can be solved by the use of head-related transfer functions (HRTF), which represent a set of empirically measured functions, one for each spatial direction. The complete reconstruction of an HRTF is possible through the use of finite impulse response (FIR) filters with 512 coefficients each. Since the spectrum of an HRTF consists of distinct maxima and minima, it can be approximated by the use of resonators and notch filters. The approximation of the complete spectrum (20 Hz to 20 kHz) can be done with six resonators and one notch filter. Our approach to spatial sound generation using HRTF created by the use of recursive (IIR) filters presents a practical and computationally effective solution. It also indicates a way to uniformly model all factors connected to spatial sound perception.
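To make the filter structure concrete, here is a small illustrative sketch (our code, not the authors' implementation) of such an IIR cascade: second-order peaking sections act as the resonators and a second-order notch section models a spectral dip. The coefficient formulas follow the widely used Audio EQ Cookbook biquad designs; all center frequencies, gains and Q values below are hypothetical, not measured HRTF data:

```java
/** Minimal sketch (illustrative, not the authors' implementation): a cascade
 *  of second-order IIR sections, i.e. resonators (peaking filters) and a
 *  notch filter, approximating the peaks and dips of an HRTF spectrum. */
public class HrtfIirSketch {

    /** One biquad section, direct form I. */
    static final class Biquad {
        final double b0, b1, b2, a1, a2;   // normalized (a0 = 1)
        double x1, x2, y1, y2;             // filter state

        Biquad(double b0, double b1, double b2, double a0, double a1, double a2) {
            this.b0 = b0 / a0; this.b1 = b1 / a0; this.b2 = b2 / a0;
            this.a1 = a1 / a0; this.a2 = a2 / a0;
        }

        double process(double x) {
            double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x; y2 = y1; y1 = y;
            return y;
        }
    }

    /** Resonator: peaking filter boosting (or cutting) gainDb around f0. */
    static Biquad peak(double fs, double f0, double q, double gainDb) {
        double A = Math.pow(10, gainDb / 40);
        double w0 = 2 * Math.PI * f0 / fs, alpha = Math.sin(w0) / (2 * q);
        return new Biquad(1 + alpha * A, -2 * Math.cos(w0), 1 - alpha * A,
                          1 + alpha / A, -2 * Math.cos(w0), 1 - alpha / A);
    }

    /** Notch filter removing a narrow band around f0. */
    static Biquad notch(double fs, double f0, double q) {
        double w0 = 2 * Math.PI * f0 / fs, alpha = Math.sin(w0) / (2 * q);
        return new Biquad(1, -2 * Math.cos(w0), 1,
                          1 + alpha, -2 * Math.cos(w0), 1 - alpha);
    }

    public static void main(String[] args) {
        double fs = 44100;
        // Hypothetical cascade: three resonators plus one notch.
        Biquad[] cascade = {
            peak(fs, 3000, 2, 6), peak(fs, 5500, 3, -4),
            peak(fs, 9000, 4, 5), notch(fs, 7500, 8)
        };
        double[] signal = new double[64];  // unit impulse -> impulse response
        signal[0] = 1;
        for (Biquad b : cascade)
            for (int i = 0; i < signal.length; i++)
                signal[i] = b.process(signal[i]);
        System.out.printf("first IR samples: %.4f %.4f %.4f%n",
                signal[0], signal[1], signal[2]);
    }
}
```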


Advances in Computer-Human Interaction | 2008

Spatial Auditory Interface for an Embedded Communication Device in a Car

Jaka Sodnik; Saso Tomazic; Christina Dicke; Mark Billinghurst

In this paper we evaluate the safety of the driver when using an embedded communication device while driving. As part of our research, four different tasks were performed with the device in order to evaluate the efficiency and safety of the drivers under three different conditions: one visual and two auditory conditions. In the visual condition, various menu items were shown on a small LCD screen attached to the dashboard. In the auditory conditions, the same menu items were presented with spatial sounds distributed on a virtual ring around the user's head. The same custom-made interaction device attached to the steering wheel was used in all three conditions, enabling simple and safe interaction with the device while driving. The auditory interface proved to be as fast as the visual one, while at the same time enabling significantly safer driving and higher user satisfaction. The measured workload also appeared to be lower when using the auditory interfaces.


International Conference on Human-Computer Interaction | 2014

Evaluation of Leap Motion Controller with a High Precision Optical Tracking System

Grega Jakus; Jože Guna; Sašo Tomažič; Jaka Sodnik

The paper presents an evaluation of the performance of a Leap Motion Controller. A professional optical tracking system was used as a reference system. 37 stationary points were tracked in 3D space in order to evaluate the consistency and accuracy of the Controller’s measurements. The standard deviation of these measurements varied from 8.1 μm to 490 μm, mainly depending on the azimuth and distance from the Controller. In the second part of the experiment, a constant distance was provided between two points, which were then moved and tracked within the entire sensory space. The deviation of the measured distance changed significantly with the height above the Controller. The sampling frequency also proved to be very non-uniform. The Controller represents a revolution in the field of gesture-based human-computer interaction; however, it is currently unsuitable as a replacement for professional motion tracking systems.


Applied Ergonomics | 2018

An Analysis of the Suitability of a Low-Cost Eye Tracker for Assessing the Cognitive Load of Drivers

Kristina Stojmenova; Grega Jakus; Jaka Sodnik

This paper presents a driving simulator study in which we investigated whether the Eye Tribe eye tracker (ET) is capable of assessing changes in the cognitive load of drivers through oculography and pupillometry. In the study, participants were asked to drive a simulated vehicle and simultaneously perform a set of secondary tasks with different cognitive complexity levels. We measured changes in eye properties such as pupil size, blink rate and fixation time. We also performed a measurement with a Detection Response Task (DRT) to validate the results and to confirm a steady increase in cognitive load with increasing secondary task difficulty. The results showed that the ET reliably detects an increasing pupil diameter with increasing secondary task difficulty. In addition, the ET shows increasing blink rates, decreasing fixation times and a narrowing of the attention field with increasing secondary task difficulty. The results were validated with the DRT method and the secondary task performance. We conclude that the Eye Tribe ET is a suitable device for assessing a driver's cognitive load.
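As a rough illustration of how such oculometric indicators can be derived (a minimal sketch under our own assumptions, not the study's processing pipeline), the mean pupil diameter and the blink rate could be computed from a stream of timestamped tracker samples like this:

```java
import java.util.List;

/** Minimal sketch (our assumptions, not the study's pipeline): deriving two
 *  simple cognitive-load indicators, mean pupil diameter and blink rate,
 *  from eye-tracker samples. A blink is approximated as a run of samples
 *  in which the pupil was not detected. */
public class GazeMetrics {

    /** One tracker sample: pupil diameter in mm, or -1 if not detected. */
    record Sample(double timeSec, double pupilMm) {}

    static double meanPupil(List<Sample> s) {
        double sum = 0; int n = 0;
        for (Sample x : s) if (x.pupilMm() >= 0) { sum += x.pupilMm(); n++; }
        return n > 0 ? sum / n : Double.NaN;
    }

    /** Blinks per minute: count gaps (runs of undetected pupil). */
    static double blinkRate(List<Sample> s) {
        int blinks = 0; boolean inGap = false;
        for (Sample x : s) {
            if (x.pupilMm() < 0 && !inGap) { blinks++; inGap = true; }
            else if (x.pupilMm() >= 0) inGap = false;
        }
        double durationMin = (s.get(s.size() - 1).timeSec() - s.get(0).timeSec()) / 60;
        return blinks / durationMin;
    }

    public static void main(String[] args) {
        // Hypothetical samples spanning one minute.
        List<Sample> s = List.of(new Sample(0.00, 3.1), new Sample(0.033, 3.2),
                new Sample(0.066, -1), new Sample(0.10, -1), new Sample(60.0, 3.3));
        System.out.printf("mean pupil: %.2f mm, blink rate: %.1f /min%n",
                meanPupil(s), blinkRate(s));
    }
}
```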


Advances in Computer-Human Interaction | 2010

Enhanced Synthesized Text Reader for Visually Impaired Users

Jaka Sodnik; Grega Jakus; Sašo Tomažič

In this paper we propose a prototype of a spatialized text reader for visually impaired users. The basic functions of the system are reading arbitrary files, converting text into speech using different synthesized voices and spatializing the synthesized speech. Visually impaired users can thus listen to the content of a file read by various synthesized voices at different spatial positions. Some metadata (e.g. pre-inserted tags) has to be added to the file before processing in order to define the voice, pitch, reading rate and originating spatial position for any part of the content. We believe such a way of reading electronic books can be a significant improvement for visually impaired users compared to mundane and dull screen readers. The core of the system is based on the Java platform and uses the FreeTTS speech synthesizer and the JOAL positioning library, the latter augmented with the external MIT Head-Related Impulse Response (HRIR) library. Headphones are required in order to perceive the spatial sound correctly. The system is a work in progress and is currently under evaluation by twelve visually impaired test subjects.
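The paper does not specify the tag syntax, so the following sketch invents one purely for illustration: it splits an annotated file into segments, each carrying the voice, pitch, reading rate and spatial position that a synthesizer and a positioning library could then apply:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal sketch with an invented tag syntax (the paper does not specify
 *  one): splitting annotated text into segments, each carrying the voice,
 *  pitch, reading rate and spatial position to use for synthesis. */
public class TaggedTextParser {

    record Segment(String voice, double pitch, int wordsPerMin,
                   double x, double y, double z, String text) {}

    // Hypothetical tag: [voice=kevin16 pitch=1.1 rate=150 pos=1.0,0.0,-1.0]
    private static final Pattern TAG = Pattern.compile(
            "\\[voice=(\\S+) pitch=([\\d.]+) rate=(\\d+) " +
            "pos=([-\\d.]+),([-\\d.]+),([-\\d.]+)\\]([^\\[]*)");

    static java.util.List<Segment> parse(String annotated) {
        java.util.List<Segment> out = new java.util.ArrayList<>();
        Matcher m = TAG.matcher(annotated);
        while (m.find()) {
            out.add(new Segment(m.group(1), Double.parseDouble(m.group(2)),
                    Integer.parseInt(m.group(3)), Double.parseDouble(m.group(4)),
                    Double.parseDouble(m.group(5)), Double.parseDouble(m.group(6)),
                    m.group(7).trim()));
        }
        return out;
    }

    public static void main(String[] args) {
        String file = "[voice=kevin16 pitch=1.0 rate=150 pos=0.0,0.0,-1.0]"
                + "Chapter one. [voice=alan pitch=1.2 rate=130 pos=1.0,0.0,0.0]"
                + "A different narrator speaks from the right.";
        for (Segment s : parse(file))
            System.out.println(s.voice() + " @ (" + s.x() + "," + s.y() + ","
                    + s.z() + "): " + s.text());
    }
}
```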


Advances in Computer-Human Interaction | 2009

Spatial Auditory Interface for Word Processing Application

Jaka Sodnik; Sašo Tomažič

In this paper we evaluate two different auditory interfaces for a word processing application. The interfaces take the form of hierarchical menu structures and use spatial sounds in two different spatial configurations. The first menu (AH) has a ring-shaped horizontal configuration of the sound sources, whereas the second (AV) has a vertical configuration of the sound sources. Spatial sounds are used to increase the information flow between the user and the application; in this way, multiple sources (i.e. menu commands) can be played and perceived simultaneously. The main goal of the experiment was to choose the most efficient interface based on a user study with 16 test subjects. The test subjects were asked to perform five different tasks with the two auditory interfaces and a normal visual GUI. The variables observed in the user study were task completion times, navigation performance and various subjective evaluations. The AV interface proved to be the most efficient and user-friendly and will therefore be used in further experiments.

Collaboration


Dive into Jaka Sodnik's collaboration.

Top Co-Authors

Grega Jakus, University of Ljubljana
Sašo Tomažič, University of Ljubljana
Jože Guna, University of Ljubljana
Mark Billinghurst, University of South Australia
Erik Dovgan, University of Ljubljana