
Publication


Featured research published by Khalad Hasan.


Human Factors in Computing Systems | 2013

AD-Binning: leveraging around-device space for storing, browsing and retrieving mobile device content

Khalad Hasan; David Ahlström; Pourang Irani

Exploring information content on mobile devices can be tedious and time consuming. We present Around-Device Binning, or AD-Binning, a novel mobile user interface that allows users to off-load mobile content in the space around the device. We informed our implementation of AD-Binning by exploring various design factors, such as the minimum around-device target size, suitable item selection methods, and techniques for placing content in off-screen space. In a task requiring exploration, we find that AD-Binning improves browsing efficiency by avoiding the minute selection and flicking mechanisms needed for on-screen interaction. We conclude with design guidelines for off-screen content storage and browsing.
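The core idea above, mapping around-device space into discrete content bins, can be illustrated with a minimal sketch. All numbers and names here are assumptions for illustration, not the paper's implementation:

```python
import math

# A minimal sketch of around-device binning (hypothetical layout, not the
# AD-Binning implementation): items live in a grid of fixed-size bins
# surrounding the device, and an around-device finger position selects a bin.

DEVICE_W, DEVICE_H = 60, 110   # device footprint in mm (assumed)
BIN_SIZE = 40                  # assumed minimum around-device target size, mm

def bin_for_position(x, y):
    """Map an around-device position (mm, relative to the device's top-left
    corner) to a (col, row) bin index, or None while over the device itself."""
    if 0 <= x < DEVICE_W and 0 <= y < DEVICE_H:
        return None  # finger is over the screen: fall back to touch input
    return (math.floor(x / BIN_SIZE), math.floor(y / BIN_SIZE))
```

With a layout like this, each stored item is keyed by its bin index, so retrieval is a single dictionary lookup once the finger position is tracked.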


Human Factors in Computing Systems | 2011

Comet and target ghost: techniques for selecting moving targets

Khalad Hasan; Tovi Grossman; Pourang Irani

Numerous applications such as simulations, air traffic control systems, and video surveillance systems are inherently composed of spatial objects that move in a scene. In many instances, users can benefit from tools that allow them to select these targets in real-time, without having to pause the dynamic display. However, selecting moving objects is considerably more difficult and error prone than selecting stationary targets. In this paper, we evaluate the effectiveness of several techniques that assist in selecting moving targets. We present Comet, a technique that enhances targets based on their speed and direction. We also introduce Target Ghost, which allows users to select a static proxy of the target, while leaving the motion uninterrupted. We found a speed benefit for the Comet in a 1D selection task in comparison to other cursor and target enhancements. For 2D selection, Comet outperformed Bubble cursor but only when Target Ghost was not available. We conclude with guidelines for design.


Human-Computer Interaction with Mobile Devices and Services | 2012

How to position the cursor?: an exploration of absolute and relative cursor positioning for back-of-device input

Khalad Hasan; Xing-Dong Yang; Hai-Ning Liang; Pourang Irani

Observational studies indicate that most people use one hand to interact with their mobile devices. Interaction on the back of the device (BoD) has been proposed to enhance one-handed input for various tasks, including selection and gesturing. However, we do not possess a good understanding of some fundamental issues related to one-handed BoD input. In this paper, we attempt to fill this gap by conducting three studies. The first study explores suitable selection techniques; the second study investigates the performance and suitability of the two main modes of cursor movement: Relative and Absolute; and the last study examines solutions to the problem of reaching the lower part of the device. Our results indicate that for BoD interaction, relative input is more efficient and accurate for cursor positioning and target selection than absolute input. Based on these findings, we provide guidelines for designing BoD interactions for mobile devices.


Human Factors in Computing Systems | 2012

A-coord input: coordinating auxiliary input streams for augmenting contextual pen-based interactions

Khalad Hasan; Xing-Dong Yang; Andrea Bunt; Pourang Irani

The human hand can naturally coordinate multiple finger joints, and simultaneously tilt, press and roll a pen to write or draw. For this reason, digital pens are now embedded with auxiliary input sensors to capture these actions. Prior research on auxiliary input channels has mainly investigated them in isolation of one another. In this work, we explore the coordinated use of two auxiliary channels, a class of interaction techniques we refer to as a-coord input. Through two separate experiments, we explore the design space of a-coord input. In the first study we identify whether users can successfully coordinate two auxiliary channels. We found a strong degree of coordination between channels. In a second experiment, we evaluate the effectiveness of a-coord input in a task with multiple steps, such as multi-parameter selection and manipulation. We find that a-coord input facilitates coordination even in a complex sequential task. Overall, our results indicate that users can control at least two auxiliary input channels in conjunction, which can facilitate a number of common pen-based tasks.


Human-Computer Interaction with Mobile Devices and Services | 2012

EdgeSplit: facilitating the selection of off-screen objects

Zahid Hossain; Khalad Hasan; Hai-Ning Liang; Pourang Irani

Devices with small viewports (e.g., smartphones or GPS units) result in interfaces where objects of interest can easily reside outside the view, in off-screen space. Researchers have addressed this challenge and have proposed visual cues to assist users in perceptually locating off-screen objects. However, little attention has been placed on methods for selecting those objects. Current designs of off-screen cues can result in overlaps that make it difficult to use the cues as handles through which users can select the off-screen objects they represent. In this paper, we present EdgeSplit, a technique that facilitates both the visualization and selection of off-screen objects on small devices. EdgeSplit exploits the space around the device's borders to display proxies of off-screen objects and then partitions the border regions into non-overlapping areas that make selection of objects easier. We present an effective algorithm that provides such partitioning and demonstrate the effectiveness of EdgeSplit for selecting off-screen objects.
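The partitioning idea, ordering off-screen proxies by their angle from the viewport center and giving each a non-overlapping stretch of the border, can be sketched roughly as follows. This is a simplified equal-width policy for illustration, not the paper's actual algorithm:

```python
import math

def edge_split(objects, viewport_w, viewport_h):
    """Assign each off-screen object a non-overlapping interval of border
    arc-length (measured along the viewport perimeter), ordered by the
    object's angle from the viewport center.

    objects: dict mapping object id -> (x, y) in workspace coordinates.
    Returns: dict mapping object id -> (start, end) along the perimeter.
    """
    cx, cy = viewport_w / 2, viewport_h / 2
    perimeter = 2 * (viewport_w + viewport_h)
    # Sort proxies by their angle around the viewport center.
    keyed = sorted(
        (math.atan2(y - cy, x - cx), obj_id) for obj_id, (x, y) in objects.items()
    )
    # Simplified policy: equal-width, non-overlapping slots in angular order.
    slot = perimeter / len(keyed)
    return {obj_id: (i * slot, (i + 1) * slot) for i, (_, obj_id) in enumerate(keyed)}
```

A fuller implementation would size each slot by proxy density and enforce a minimum selectable width, but the invariant is the same: the intervals tile the border without overlapping, so each proxy remains an unambiguous selection handle.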


Human-Computer Interaction with Mobile Devices and Services | 2015

SAMMI: A Spatially-Aware Multi-Mobile Interface for Analytic Map Navigation Tasks

Khalad Hasan; David Ahlström; Pourang Irani

Motivated by a rise in the variety and number of mobile devices that users carry, we investigate scenarios when operating these devices in a spatially interlinked manner can lead to interfaces that generate new advantages. Our exploration is focused on the design of SAMMI, a spatially-aware multi-device interface to assist with analytic map navigation tasks, where, in addition to browsing the workspace, the user has to make a decision based on the content embedded in the map. We focus primarily on the design space for spatially interlinking a smartphone with a smartwatch. As both smart devices are spatially tracked, the user can browse information by moving either device in the workspace. We identify several design factors for SAMMI and through a first study we explore how best to combine these for efficient map navigation. In a second study we compare SAMMI to the common Flick-&-Pinch gestures for an analytic map navigation task. Our results reveal that SAMMI is an efficient spatial navigation interface, and by means of an additional spatially tracked display, can facilitate quick information retrieval and comparisons. We finally demonstrate other potential use cases for SAMMI that extend beyond map navigation to facilitate interaction with spatial workspaces.


Human Factors in Computing Systems | 2017

AirPanes: Two-Handed Around-Device Interaction for Pane Switching on Smartphones

Khalad Hasan; David Ahlström; Junhyeok Kim; Pourang Irani

In recent years, around-device input has emerged as a complement to standard touch input, albeit in limited tasks and contexts, such as item selection or map navigation. We push the boundaries of around-device interaction to facilitate an entire smartphone application: browsing through large information lists to make a decision. To this end, we present AirPanes, a novel technique that allows two-handed in-air interactions, conjointly with touch input, to perform analytic tasks such as making a purchase decision. AirPanes resolves the inefficiencies of having to switch between multiple views or panes in common smartphone applications. We explore the design factors that make AirPanes efficient. In a controlled study, we find that AirPanes is on average 50% more efficient than standard touch input for an analytic task. We offer recommendations for implementing AirPanes in a broad range of applications.


Symposium on Spatial User Interaction | 2015

Comparing Direct Off-Screen Pointing, Peephole, and Flick & Pinch Interaction for Map Navigation

Khalad Hasan; David Ahlström; Pourang Irani

Navigating large workspaces with mobile devices often requires users to access information that lies beyond the device's viewport. To browse information on such workspaces, two prominent spatially-aware navigation techniques, peephole and direct off-screen pointing, have been proposed as alternatives to the standard on-screen flick and pinch gestures. Previous studies have shown that both techniques can outperform on-screen gestures in various user tasks, but no prior study has compared the three techniques in a map-based analytic task. In this paper, we examine these two spatially-aware techniques and compare their efficiency to on-screen gestures in a map navigation and exploration scenario. Our study demonstrates that peephole and direct off-screen pointing allow for 30% faster navigation times between workspace locations, and that on-screen flick and pinch is superior for accurate retrieval of workspace content.


User Interface Software and Technology | 2017

SoundCraft: Enabling Spatial Interactions on Smartwatches using Hand Generated Acoustics

Teng Han; Khalad Hasan; Keisuke Nakamura; Randy Gomez; Pourang Irani

We present SoundCraft, a smartwatch prototype embedded with a microphone array that angularly localizes, in azimuth and elevation, acoustic signatures: non-vocal acoustics that are produced using our hands. Acoustic signatures are common in our daily lives, such as when snapping or rubbing our fingers, tapping on objects, or even when using an auxiliary object to generate the sound. We demonstrate that we can capture and leverage the spatial location of such naturally occurring acoustics using our prototype. We describe our algorithm, adopted from the MUltiple SIgnal Classification (MUSIC) technique [31], that enables robust localization and classification of the acoustics when the microphones must be placed in close proximity. SoundCraft enables a rich set of spatial interaction techniques, including quick access to smartwatch content, rapid command invocation, in-situ sketching, and multi-user around-device interaction. Via a series of user studies, we validate SoundCraft's localization and classification capabilities in non-noisy and noisy environments.
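SoundCraft adapts MUSIC to a closely spaced smartwatch microphone array for azimuth and elevation. As a rough illustration of the underlying idea only, here is a minimal narrowband MUSIC sketch for a uniform linear array estimating a single azimuth angle; the array geometry, grid, and parameters are assumptions, not SoundCraft's:

```python
import numpy as np

def music_doa(X, n_sources, d_over_lambda=0.5):
    """Estimate a direction of arrival (degrees) with narrowband MUSIC for a
    uniform linear array. X has shape (mics, snapshots), complex baseband."""
    M, N = X.shape
    R = X @ X.conj().T / N                  # spatial covariance estimate
    _, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]        # noise-subspace eigenvectors
    grid = np.linspace(-90.0, 90.0, 721)    # 0.25-degree scan grid (assumed)
    m = np.arange(M)[:, None]
    # Steering vectors for every candidate angle on the grid.
    A = np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(grid)))
    # MUSIC pseudospectrum peaks where steering vectors are orthogonal
    # to the noise subspace.
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid[np.argmax(1.0 / denom)]
```

The paper's setting is harder than this sketch: closely spaced microphones shrink the phase differences between channels, which is exactly the regime the authors address with their adapted MUSIC variant and classification stage.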


Canadian Conference on Computer and Robot Vision | 2013

Enabling User Interactions with Video Contents

Khalad Hasan; Yang Wang; Wing Kwong; Pourang Irani

Many people spend countless hours watching videos online or on TV. Current smart TV systems provide some basic two-way communication where users can interact with features (e.g., browsing the web, accessing social media) provided by service providers. We would like to move beyond such primitive interactions and explore the possibility of allowing users to interact with video contents. For example, users can select objects shown in videos and place further queries on them. We start by exploring different state-of-the-art object detection and tracking techniques to obtain an object's location in the video. Using the best performing tracking technique, we extract an object's location in each frame and allow users to interact with the object using Microsoft Kinect. Finally, we have developed and compared a set of selection techniques that assist users in selecting moving objects in video. We conclude with guidelines for designing such interaction systems.

Collaboration


Dive into Khalad Hasan's collaborations.

Top Co-Authors

David Ahlström (Alpen-Adria-Universität Klagenfurt)
Andrea Bunt (University of Manitoba)
Teng Han (University of Manitoba)
Hai-Ning Liang (Xi'an Jiaotong-Liverpool University)
Ali Neshati (University of Manitoba)
Barrett Ens (University of Manitoba)