
Publication


Featured research published by Grega Jakus.


Sensors | 2014

An Analysis of the Precision and Reliability of the Leap Motion Sensor and Its Suitability for Static and Dynamic Tracking

Jože Guna; Grega Jakus; Matevž Pogačnik; Sašo Tomažič; Jaka Sodnik

We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, it cannot currently be used as a professional tracking system.
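
As an illustration of how per-location precision can be computed from repeated position samples, the sketch below derives per-axis and radial standard deviations around each reference point's centroid. The data layout and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def per_location_precision(samples):
    """samples: dict mapping reference-location id -> (N, 3) array of
    repeated Leap Motion position readings (mm) for one static point."""
    stats = {}
    for loc, pts in samples.items():
        pts = np.asarray(pts, dtype=float)
        centroid = pts.mean(axis=0)
        # Per-axis standard deviation and the spread of the radial
        # distances from the centroid, both in millimetres.
        per_axis_sd = pts.std(axis=0, ddof=1)
        radial_sd = np.linalg.norm(pts - centroid, axis=1).std(ddof=1)
        stats[loc] = {"per_axis_sd_mm": per_axis_sd, "radial_sd_mm": radial_sd}
    return stats
```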


Sensors | 2015

Exploring direct 3D interaction for full horizontal parallax light field displays using Leap Motion Controller

Vamsi Kiran Adhikarla; Jaka Sodnik; Péter Szolgay; Grega Jakus

This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were evaluated in a user study. The results of the study revealed a high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Further, the results revealed some limitations of the proposed setup and suggested adjustments to be addressed in future work.
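
The selection step can be pictured as a nearest-neighbour test between the tracked fingertip and the displayed scene points. The sketch below is a minimal illustration under assumed names and a hypothetical Leap-to-display calibration matrix; it is not the paper's implementation.

```python
import numpy as np

def select_object(fingertip_mm, objects, transform, max_dist_mm=20.0):
    """Select the scene object closest to the tracked fingertip.

    fingertip_mm: (3,) fingertip position in Leap Motion coordinates.
    objects: dict mapping object id -> (3,) position in display space.
    transform: (4, 4) calibration matrix from Leap to display space.
    Returns the id of the nearest object within max_dist_mm, else None.
    """
    transform = np.asarray(transform, dtype=float)
    p = transform @ np.append(np.asarray(fingertip_mm, dtype=float), 1.0)
    p = p[:3] / p[3]  # homogeneous -> Cartesian
    best_id, best_d = None, max_dist_mm
    for oid, pos in objects.items():
        d = np.linalg.norm(p - np.asarray(pos, dtype=float))
        if d <= best_d:
            best_id, best_d = oid, d
    return best_id
```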


Applied Ergonomics | 2015

A User Study of Auditory, Head-Up and Multi-modal Displays in Vehicles

Grega Jakus; Christina Dicke; Jaka Sodnik

This paper describes a user study on the interaction with an in-vehicle information system (IVIS). The motivation for conducting this research was to investigate the subjectively and objectively measured impact of using a single- or multi-modal IVIS while driving. A hierarchical, list-based menu was presented using a windshield projection (head-up display), an auditory display and a combination of both interfaces. The users were asked to navigate a vehicle in a driving simulator and simultaneously perform a set of tasks of varying complexity. The experiment showed that the interaction with the visual and audio-visual head-up displays is faster and more efficient than with the audio-only display. All the interfaces had a similar impact on the overall driving performance. There was no significant difference between the visual-only and audio-visual displays in terms of their efficiency and safety; however, the majority of test subjects clearly preferred to use the multi-modal interface while driving.
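
For illustration, a hierarchical, list-based menu of the kind used in the study can be modelled as a simple tree; the sketch below shows navigation along a path of child indices. The node labels are invented examples, not the study's actual menu.

```python
class MenuNode:
    """One item in a hierarchical, list-based menu (illustrative only)."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def navigate(node, path):
    """Follow a sequence of child indices from the root, returning the
    labels visited, i.e. the prompts a head-up or auditory display
    would present at each level."""
    visited = [node.label]
    for i in path:
        node = node.children[i]
        visited.append(node.label)
    return visited

root = MenuNode("Main", [
    MenuNode("Navigation", [MenuNode("Set destination"), MenuNode("Cancel route")]),
    MenuNode("Media", [MenuNode("Radio"), MenuNode("Playlist")]),
])
print(navigate(root, [0, 1]))  # ['Main', 'Navigation', 'Cancel route']
```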


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2011

Multiple spatial sounds in hierarchical menu navigation for visually impaired computer users

Jaka Sodnik; Grega Jakus; Sašo Tomažič

This paper describes a user study on the benefits and drawbacks of simultaneous spatial sounds in auditory interfaces for visually impaired and blind computer users. Two different auditory interfaces in spatial and non-spatial conditions were proposed to represent the hierarchical menu structure of a simple word processing application. In the horizontal interface, the sound sources representing the menu items were located in the horizontal plane on a virtual ring surrounding the user's head, while the sound sources in the vertical interface were aligned one above the other in front of the user. In the vertical interface, the central pitch of the sound sources at different elevations was changed in order to improve the otherwise relatively low localization performance in the vertical dimension. The interaction with the interfaces was based on a standard computer keyboard for input and a pair of studio headphones for output. Twelve blind or visually impaired test subjects were asked to perform ten different word processing tasks within four experiment conditions. Task completion times, navigation performance, overall satisfaction and cognitive workload were evaluated. The initial hypothesis, i.e. that the spatial auditory interfaces with multiple simultaneous sounds would prove to be faster and more efficient than the non-spatial ones, was not confirmed. On the contrary, the spatial auditory interfaces proved to be significantly slower due to the high cognitive workload and temporal demand. The majority of users did in fact finish tasks with less navigation and key pressing; however, they required much more time. They reported the spatial auditory interfaces to be hard to use for a longer period of time due to the high temporal and mental demand, especially with regard to the comprehension of multiple simultaneous sounds. The comparison between the horizontal and vertical interfaces showed no significant differences between the two. It is important to point out that all participants were novice users of the system; it is therefore possible that the overall performance would change with more extensive use of the interfaces and an increased number of trials or experiment sets. Our interviews with visually impaired and blind computer users showed that they are used to sharing their auditory channel in order to perform multiple simultaneous tasks, such as listening to the radio, talking to somebody and using the computer. As the perception of multiple simultaneous sounds requires the entire capacity of the auditory channel and the total concentration of the listener, it does not enable such multitasking.
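
The horizontal interface's placement of menu items on a virtual ring around the user's head can be sketched as follows. The coordinate convention (listener at the origin, facing the negative z axis, as in OpenAL-style audio APIs) and the function name are assumptions, not the paper's code.

```python
import math

def ring_positions(n_items, radius_m=1.0):
    """Place n_items sound sources evenly on a horizontal ring around
    the listener's head. Returns one (x, y, z) position per menu item,
    in metres; item 0 is straight ahead."""
    positions = []
    for i in range(n_items):
        azimuth = 2.0 * math.pi * i / n_items  # 0 rad = straight ahead
        x = radius_m * math.sin(azimuth)
        z = -radius_m * math.cos(azimuth)
        positions.append((x, 0.0, z))
    return positions
```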


International Conference on Telecommunication in Modern Satellite, Cable and Broadcasting Services | 2009

Long Term Evolution: Towards 4th Generation of Mobile Telephony and Beyond

Sašo Tomažič; Grega Jakus

Long Term Evolution (LTE) is the next major step towards the 4th generation of mobile communications. LTE introduces a new radio access network with technologies that offer higher data rates, efficiency and quality of service, as well as lower costs and integration with existing open standards.


International Conference on Human-Computer Interaction | 2014

Evaluation of Leap Motion Controller with a High Precision Optical Tracking System

Grega Jakus; Jože Guna; Sašo Tomažič; Jaka Sodnik

The paper presents an evaluation of the performance of the Leap Motion Controller. A professional optical tracking system was used as a reference system. 37 stationary points were tracked in 3D space in order to evaluate the consistency and accuracy of the Controller's measurements. The standard deviation of these measurements varied from 8.1 μm to 490 μm, depending mainly on the azimuth and distance from the Controller. In the second part of the experiment, a constant distance was maintained between two points, which were then moved and tracked within the entire sensory space. The deviation of the measured distance changed significantly with the height above the Controller. The sampling frequency also proved to be highly non-uniform. The Controller represents a revolution in the field of gesture-based human-computer interaction; however, it is currently unsuitable as a replacement for professional motion tracking systems.
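
A minimal sketch of the second part of the analysis, assuming recorded pairs of simultaneous tip positions: it bins samples by height above the controller and reports the mean absolute error of the measured tip separation per bin. The names and the bin width are assumptions.

```python
import numpy as np

def distance_error_by_height(p1, p2, true_dist_mm, bin_mm=50):
    """p1, p2: (N, 3) simultaneous positions of the two tool tips (mm);
    true_dist_mm: the tool's fixed tip separation.
    Groups samples into height bins (y axis, mm above the controller)
    and returns {bin_start_mm: mean absolute distance error in mm}."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    measured = np.linalg.norm(p1 - p2, axis=1)
    heights = (p1[:, 1] + p2[:, 1]) / 2.0
    bins = (heights // bin_mm).astype(int)
    errors = {}
    for b in np.unique(bins):
        mask = bins == b
        errors[int(b) * bin_mm] = float(np.abs(measured[mask] - true_dist_mm).mean())
    return errors
```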


Applied Ergonomics | 2018

An Analysis of the Suitability of a Low-Cost Eye Tracker for Assessing the Cognitive Load of Drivers

Kristina Stojmenova; Grega Jakus; Jaka Sodnik

This paper presents a driving simulator study in which we investigated whether the Eye Tribe eye tracker (ET) is capable of assessing changes in the cognitive load of drivers through oculography and pupillometry. In the study, participants were asked to drive a simulated vehicle and simultaneously perform a set of secondary tasks with different cognitive complexity levels. We measured changes in eye properties, such as the pupil size, blink rate and fixation time. We also performed a measurement with a Detection Response Task (DRT) to validate the results and to confirm a steady increase in cognitive load with increasing secondary task difficulty. The results showed that the ET precisely recognizes an increasing pupil diameter with increasing secondary task difficulty. In addition, the ET shows increasing blink rates, decreasing fixation time and narrowing of the attention field with increasing secondary task difficulty. The results were validated with the DRT method and the secondary task performance. We conclude that the Eye Tribe ET is a suitable device for assessing a driver's cognitive load.
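
The aggregation behind such results can be sketched as a per-difficulty summary of the measured eye properties. The record fields below are hypothetical names for the quantities the abstract mentions, not the study's data format.

```python
import numpy as np

def summarize_by_difficulty(records):
    """records: iterable of per-trial dicts with keys 'difficulty',
    'pupil_mm', 'blinks_per_min' and 'fixation_ms'.
    Returns the mean pupil diameter, blink rate and fixation time for
    each secondary-task difficulty level."""
    by_level = {}
    for r in records:
        by_level.setdefault(r["difficulty"], []).append(r)
    summary = {}
    for level, rows in sorted(by_level.items()):
        summary[level] = {
            "mean_pupil_mm": float(np.mean([r["pupil_mm"] for r in rows])),
            "mean_blinks_per_min": float(np.mean([r["blinks_per_min"] for r in rows])),
            "mean_fixation_ms": float(np.mean([r["fixation_ms"] for r in rows])),
        }
    return summary
```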


Advances in Computer-Human Interaction | 2010

Enhanced Synthesized Text Reader for Visually Impaired Users

Jaka Sodnik; Grega Jakus; Sašo Tomažič

In this paper we propose a prototype of a spatialized text reader for visually impaired users. The basic functions of the system are reading arbitrary files, converting text into speech using different synthesized voices and spatializing the synthesized speech. Visually impaired users can thus listen to the content of a file read by various synthesized voices at different spatial positions. Some metadata (e.g. pre-inserted tags) has to be added to the file before processing in order to define the voice, pitch, reading rate and originating spatial position for any part of the content. We believe such a way of electronic book reading can be a significant improvement for visually impaired users compared to mundane and dull screen readers. The core of the system is based on the Java platform, using the FreeTTS speech synthesizer and the JOAL positioning library; the latter is extended with the external MIT Head-Related Impulse Response (HRIR) library. The use of headphones is obligatory in order to perceive spatial sound correctly. The system is a work in progress and is currently under evaluation by twelve visually impaired test subjects.
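
A sketch of the pre-processing step, assuming a hypothetical tag syntax (the paper does not specify its markup): each tag carries the voice, pitch, reading rate and spatial position for the text that follows, which would then be handed to the synthesizer and the audio positioning layer.

```python
import re

# Hypothetical tag format, not the paper's actual markup:
#   [voice=kevin pitch=1.2 rate=150 pos=(-1.0,0.0,-2.0)] text ...
TAG = re.compile(r"\[voice=(\w+)\s+pitch=([\d.]+)\s+rate=(\d+)\s+"
                 r"pos=\(([-\d.]+),([-\d.]+),([-\d.]+)\)\]\s*([^\[]+)")

def parse_segments(text):
    """Split annotated text into segments, each carrying the voice,
    pitch, speaking rate and 3D source position for its content."""
    segments = []
    for m in TAG.finditer(text):
        segments.append({
            "voice": m.group(1),
            "pitch": float(m.group(2)),
            "rate_wpm": int(m.group(3)),
            "position": tuple(float(m.group(i)) for i in (4, 5, 6)),
            "text": m.group(7).strip(),
        })
    return segments
```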


Archive | 2013

Trends and Outlook

Grega Jakus; Veljko Milutinovic; Sanida Omerovic; Sašo Tomažič

The field of knowledge representation has already extended beyond the academic and research spheres into practical use. Moreover, it has also extended beyond the field of its origin, artificial intelligence, into other fields of computer science. One of the important factors that stimulated the thriving of ontologies in particular was the World Wide Web, especially its recent evolution, the so-called Semantic Web. The idea of the Semantic Web is consistent with some of the basic goals of knowledge representation. The vision of the Semantic Web is to enable semantic interoperability and machine interpretability of data sets from various sources, and to provide the mechanisms that enable such data to be used to support the user in an automated and intelligent way.
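
As a concrete illustration of machine-interpretable data of the kind the Semantic Web vision describes, here is a minimal RDF example using the rdflib Python library (the library and the example statement are not from the chapter):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, FOAF

# One machine-interpretable statement: a resource, typed with a shared
# vocabulary (FOAF), carrying a literal property.
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.grega_jakus, RDF.type, FOAF.Person))
g.add((EX.grega_jakus, FOAF.name, Literal("Grega Jakus")))
print(g.serialize(format="turtle"))
```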


Multimedia Tools and Applications | 2017

A system for efficient motor learning using multimodal augmented feedback

Grega Jakus; Kristina Stojmenova; Sašo Tomažič; Jaka Sodnik

Numerous studies have established that using various forms of augmented feedback improves human motor learning. In this paper, we present a system that enables real-time analysis of motion patterns and provides users with objective information on their performance of an executed set of motions. This information can be used to identify individual segments of improper motion early in the learning process, thus preventing improperly learned motion patterns that can be difficult to correct once fully learned. The primary purpose of the proposed system is to serve as a general tool in research on the impact of different feedback modalities on the process of motor learning, for example in sports or rehabilitation. The key advantages of the system are high-speed and high-accuracy tracking, as well as its flexibility, as it supports various types of feedback (auditory and visual, concurrent or terminal). The practical application of the proposed system is demonstrated through the example of learning a golf swing.
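
The per-segment analysis can be pictured as comparing an executed trajectory against a reference and reporting the RMS deviation per segment, so feedback can point at the specific part of the motion that is off. The sketch below is an assumed simplification, not the system's actual algorithm.

```python
import numpy as np

def segment_errors(executed, reference, n_segments=8):
    """executed, reference: (N, 3) arrays of tracked positions,
    resampled to the same length. Splits the trajectory into
    n_segments and returns the RMS deviation (in the tracker's
    units) for each segment."""
    executed = np.asarray(executed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    errs = np.linalg.norm(executed - reference, axis=1)
    chunks = np.array_split(errs, n_segments)
    return [float(np.sqrt(np.mean(c ** 2))) for c in chunks]
```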

Collaboration


Top co-authors of Grega Jakus.

Jaka Sodnik, University of Ljubljana
Sašo Tomažič, University of Ljubljana
Jože Guna, University of Ljubljana
Vamsi Kiran Adhikarla, Pázmány Péter Catholic University