Shahram Jalaliniya
IT University of Copenhagen
Publication
Featured research published by Shahram Jalaliniya.
Ubiquitous Computing | 2013
Shahram Jalaliniya; Jeremiah Smith; Miguel Sousa; Lars Büthe; Thomas Pederson
Sterility restrictions in surgical settings make touch-less interaction an interesting solution for surgeons to interact directly with digital images. The HCI community has already explored several methods for touch-less interaction, including camera-based gesture tracking and voice control. In this paper, we present a system for gesture-based interaction with medical images based on a single wristband sensor and capacitive floor sensors, allowing for hand and foot gesture input. A first, limited evaluation of the system showed an acceptable level of accuracy for 12 different hand and foot gestures; users also found the combined hand and foot gestures intuitive for providing input.
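The paper does not publish its recognizer, so the following is only a minimal sketch of one plausible approach: a nearest-centroid classifier over combined hand (wristband) and foot (capacitive floor) feature vectors. All feature names and values here are illustrative assumptions, not the authors' implementation.

```python
import math

def euclidean(a, b):
    # distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_centroids(samples):
    """samples: dict mapping gesture label -> list of feature vectors."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(col) / n for col in zip(*vecs)]
    return centroids

def classify(centroids, features):
    # pick the gesture whose centroid is closest to the observed features
    return min(centroids, key=lambda lbl: euclidean(centroids[lbl], features))

# toy example: 2-D features = (mean wrist acceleration, mean floor capacitance)
training = {
    "hand_swipe": [(0.9, 0.1), (1.1, 0.2)],
    "foot_tap":   [(0.1, 0.8), (0.2, 1.0)],
}
model = train_centroids(training)
print(classify(model, (1.0, 0.15)))  # -> hand_swipe
```

A real system would of course extract richer features (e.g. frequency-domain statistics over sensor windows) and use a trained model, but the fusion of two heterogeneous sensor streams into one feature vector is the core idea.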
Human Factors in Computing Systems | 2016
Shahram Jalaliniya; Diako Mardanbegi
EyeGrip proposes a novel yet simple technique for analysing eye movements to automatically detect a user's objects of interest in a sequence of visual stimuli moving horizontally or vertically in front of the user's view. We assess the viability of this technique in a scenario where the user looks at a sequence of images moving horizontally on the display while the user's eye movements are tracked by an eye tracker. We conducted an experiment that shows the performance of the proposed approach. We also investigated the influence of the scrolling speed and the maximum number of visible images on the screen on the accuracy of EyeGrip. Based on the experimental results, we propose guidelines for designing EyeGrip-based interfaces. EyeGrip can be considered an implicit gaze interaction technique with potential use in a broad range of applications such as large screens, mobile devices, and eyewear computers. In this paper, we demonstrate the rich capabilities of EyeGrip with two example applications: 1) a mind-reading game, and 2) a picture selection system. Our study shows that, by selecting an appropriate speed and maximum number of visible images on the screen, the proposed method can be used in a fast scrolling task where the system accurately (87%) detects the moving images that are visually appealing to the user, stops the scrolling, and brings the item(s) of interest back to the screen.
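As an illustrative sketch only (the authors' implementation is not reproduced here), the core intuition can be approximated by correlating the gaze trajectory with each moving item's trajectory: the item the eye smoothly pursues correlates strongly with the gaze signal, while ignored items do not. The function names and the threshold below are assumptions.

```python
def pearson(xs, ys):
    # Pearson correlation between two equal-length signals
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def detect_item_of_interest(gaze_x, item_tracks, threshold=0.8):
    """gaze_x: gaze x-positions over a time window;
    item_tracks: item id -> x-positions over the same window."""
    best_id, best_r = None, threshold
    for item_id, track in item_tracks.items():
        r = pearson(gaze_x, track)
        if r > best_r:           # keep the most strongly pursued item
            best_id, best_r = item_id, r
    return best_id

# the gaze follows item "B" as it moves left across the screen
gaze = [900, 800, 700, 600, 500]
tracks = {"A": [500, 500, 500, 500, 500],   # static distractor
          "B": [910, 805, 702, 598, 495]}   # pursued item
print(detect_item_of_interest(gaze, tracks))  # -> B
```

Returning `None` when no track clears the threshold gives the scrolling interface a natural "keep scrolling" signal; only a confidently pursued item triggers the stop-and-bring-back behaviour described above.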
International Symposium on Wearable Computers | 2015
Shahram Jalaliniya; Diako Mardanbegi; Ioannis Sintos; Daniel Garcia Garcia
In this paper we report on the development and evaluation of a video-based mobile gaze tracker for eyewear computers. Unlike most previous work, our system performs all of its processing workload on an Android device and sends the coordinates of the gaze point to an eyewear device over a wireless connection. We propose a lightweight software architecture for Android that increases the efficiency of the image processing needed for eye tracking. The evaluation of the system indicated an accuracy of 1.06 degrees and a battery lifetime of approximately 4.5 hours.
IEEE Pervasive Computing | 2015
Shahram Jalaliniya; Thomas Pederson
The design of general-purpose wearable computers demands particular care for how human perception, cognition, and action work and work together. The authors propose a human body-and-mind centric (egocentric as opposed to device-centric) design framework and present initial findings from deploying it in the design of a wearable personal assistant (WPA) for orthopedic surgeons. The result is a Google Glass-based prototype system aimed at facilitating touchless interaction with x-ray images, browsing of electronic patient records (EPR) when on the move, and synchronized ad hoc remote collaboration. This article is part of a special issue on digitally enhanced reality.
International Symposium on Wearable Computers | 2014
Shahram Jalaliniya; Thomas Pederson; Steven Houben
Wearable camera and display technology allows remote collaborators to guide activities performed by human agents located elsewhere. This kind of technology augments the range of human perception and actuation. In this paper we quantitatively determine whether wearable laser pointers are viable alternatives to head-mounted displays (HMDs) for indicating where in the physical environment the local agent should direct her/his attention. The potential benefit of the laser pointer is reduced eye fatigue, since the documented refocusing challenges associated with HMD use would be completely eliminated. 10 participants were asked to perform a short tele-guided pick-and-drop task using both approaches. The quantitative analysis indicates that user performance in the laser pointer condition is higher than in the HMD condition (p = .064, α = 0.1). While all 10 participants found the task easy in both conditions, 8 of 10 found the laser pointer system more convenient.
International Symposium on Wearable Computers | 2015
Thomas Pederson; Shahram Jalaliniya
In this position paper we present our take on the possibilities that emerge from a mix of recent ideas in interaction design, wearable computing, and context-aware systems which, taken together, could bring us closer to Mark Weiser's vision of calm computing. Multisensory user experience plays an important role in this approach.
Archive | 2017
Shahram Jalaliniya; Thomas Pederson
In this paper, we report on the utility of a wearable personal assistant (WPA) for orthopedic surgeons in hospitals. A prototype of the WPA was developed on the Google Glass platform for supporting surgeons in three different scenarios: (1) touch-less interaction with medical images in the operating room, (2) tele-presence colleague consultation during surgeries, and (3) mobile access to the Electronic Patient Records (EPR) during ward rounds. We evaluated the system in a hospital simulation facility with two real orthopedic surgeons. The results of our study showed that while the WPA can be a viable solution for touch-less interaction with medical images and remote collaboration during surgeries, using the WPA in ward rounds can have a negative impact on social interaction between surgeons and patients.
EAI Endorsed Transactions on Pervasive Health and Technology | 2017
Shahram Jalaliniya; Thomas Pederson; Diako Mardanbegi
In this paper, we present our body-and-mind-centric approach for the design of wearable personal assistants (WPAs), motivated by the fact that such devices are likely to play an increasing role in everyday life. We also report on the utility of such a device for orthopedic surgeons in hospitals. A prototype of the WPA was developed on Google Glass for supporting surgeons in three different scenarios: (1) touch-less interaction with medical images, (2) tele-presence during surgeries, and (3) mobile access to Electronic Patient Records (EPR) during ward rounds. We evaluated the system in a clinical simulation facility and found that while the WPA can be a viable solution for touch-less interaction and remote collaboration during surgeries, using the WPA in ward rounds might interfere with social interaction between clinicians and patients. Finally, we present our ongoing exploration of gaze and gesture as alternative input modalities for WPAs, inspired by the hospital study.
International Conference on Global Software Engineering | 2015
Paolo Tell; Shahram Jalaliniya; Kristian S. M. Andersen; Mads D. Christensen; Anders B. Mellson; Jakob E. Bardram
Assessing the presence and availability of a remote colleague is key to coordination in global software development, but is not easily done using existing computer-mediated channels. Previous research has shown that automated estimation of interruptibility is feasible and can achieve a precision close to, or even better than, human judgment. However, existing approaches to assessing interruptibility have been designed to rely on external sensors. In this paper, we present Approximator, a system that estimates the interruptibility of a user based exclusively on the sensing abilities of commodity laptops. Experimental results show that the information aggregated from several activity monitors (i.e., key-logger, mouse-logger, and face-detection) provides useful data which, once combined with machine learning techniques, can automatically estimate the interruptibility of users with 78% accuracy. These early but promising results are a starting point for designing interruptibility-aware tools capable of improving distributed awareness and cooperation in global software development.
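Approximator's trained models are not published, so the following is a hedged sketch of the aggregation step only: combining keystroke rate, mouse activity, and face presence into a single interruptibility estimate via a hand-set logistic model. The weights, feature names, and 0.5 decision boundary are all assumptions for illustration, not the paper's learned parameters.

```python
import math

def interruptibility(keys_per_min, mouse_events_per_min, face_present):
    # High input activity plus a detected face suggests focused work,
    # i.e. low interruptibility; the weights here are illustrative only.
    z = (1.5
         - 0.02 * keys_per_min
         - 0.01 * mouse_events_per_min
         - 1.0 * face_present)
    return 1.0 / (1.0 + math.exp(-z))  # probability the user is interruptible

busy = interruptibility(keys_per_min=120, mouse_events_per_min=60, face_present=1)
idle = interruptibility(keys_per_min=0, mouse_events_per_min=0, face_present=0)
print(busy < 0.5 < idle)  # -> True
```

In the actual system such weights would be learned from labeled activity-monitor logs; the point of the sketch is that heterogeneous commodity-laptop signals reduce to one score a coordination tool can threshold.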
International Symposium on Wearable Computers | 2015
Shahram Jalaliniya; Diako Mardanbegi; Thomas Pederson