Publication


Featured research published by Ehud Sharlin.


ubiquitous computing | 2004

On tangible user interfaces, humans and spatiality

Ehud Sharlin; Benjamin Watson; Yoshifumi Kitamura; Fumio Kishino; Yuichi Itoh

Like the prehistoric twig and stone, tangible user interfaces (TUIs) are objects manipulated by humans. The success of TUIs will depend on how well they exploit spatiality, the intuitive spatial skills humans have with the objects they use. In this paper, we carefully examine the relationship between humans and physical objects, along with related previous research. From this examination, we distill a set of observations and turn these into heuristics for incorporating spatiality into TUI application design, a cornerstone of their success. Following this line of thought, we identify “spatial TUIs,” the subset of TUIs that mediate interaction with shape, space and structure. We then examine several existing spatial TUIs using our heuristics.


user interface software and technology | 2005

Predictive interaction using the Delphian Desktop

Takeshi Asano; Ehud Sharlin; Yoshifumi Kitamura; Kazuki Takashima; Fumio Kishino

This paper details the design and evaluation of the Delphian Desktop, a mechanism for online spatial prediction of cursor movements in a Windows-Icons-Menus-Pointers (WIMP) environment. Interaction with WIMP-based interfaces often becomes a spatially challenging task when the physical interaction mediators are the common mouse and a high-resolution, physically large display screen. These spatial challenges are especially evident in overly crowded Windows desktops. The Delphian Desktop integrates simple yet effective predictive spatial tracking and selection paradigms into ordinary WIMP environments in order to simplify and ease pointing tasks. Predictions are calculated by tracking cursor movements and estimating spatial intentions using a computationally inexpensive online algorithm based on estimating the movement direction and peak velocity. In testing, the Delphian Desktop effectively shortened pointing time to faraway icons and reduced the overall physical distance the mouse (and the user's hand) had to mechanically traverse.
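
A minimal sketch of this style of endpoint prediction, not the authors' implementation: it assumes a roughly linear relation between movement amplitude and peak cursor speed, with a hypothetical constant k that would be fit per user in practice.

import math

def predict_endpoint(samples, k=0.12):
    """samples: list of (t, x, y) cursor readings, oldest first.
    Returns a predicted (x, y) endpoint once the speed peak has passed."""
    if len(samples) < 3:
        return None
    # Instantaneous speed between consecutive samples.
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0)
    if speeds[-1] >= max(speeds):
        return None  # still accelerating: peak velocity not yet observed
    v_peak = max(speeds)
    # Movement direction from the gesture's overall displacement so far.
    (_, xs, ys), (_, xe, ye) = samples[0], samples[-1]
    dx, dy = xe - xs, ye - ys
    norm = math.hypot(dx, dy) or 1.0
    d_total = k * v_peak  # assumed amplitude ~ peak-speed relation
    return (xs + dx / norm * d_total, ys + dy / norm * d_total)

# Synthetic 400-pixel rightward movement sampled every 10 ms.
samples = [(i * 0.01, 200 * (1 - math.cos(math.pi * i / 20)), 300.0)
           for i in range(21)]
print(predict_endpoint(samples))  # roughly (375, 300) with these data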


human factors in computing systems | 2002

Cognitive cubes: a tangible user interface for cognitive assessment

Ehud Sharlin; Yuichi Itoh; Benjamin Watson; Yoshifumi Kitamura; Steve Sutphen; Lili Liu

Assessments of spatial, constructional ability are used widely in cognitive research and in clinical diagnosis of disease or injury. Some believe that three-dimensional (3D) forms of these assessments would be particularly sensitive, but difficulties with consistency in administration and scoring have limited their use. We describe Cognitive Cubes, a novel computerized tool for 3D constructional assessment that increases consistency and promises improvements in flexibility, reliability, sensitivity and control. Cognitive Cubes makes use of ActiveCube, a novel tangible user interface for describing 3D shape. In testing, Cognitive Cubes was sensitive to differences in cognitive ability and task, and correlated well with a standard paper-and-pencil 3D spatial assessment.
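
A minimal sketch, and not the paper's scoring method, of how a constructed block shape could be compared against a target: each shape is a set of occupied unit-cube cells (as a connected-cube interface like ActiveCube could report), translation-normalized and scored by overlap. A fuller version would also test rotations of the built shape.

def normalize(cells):
    """Translate (x, y, z) cells so the bounding box starts at the origin."""
    xs, ys, zs = zip(*cells)
    x0, y0, z0 = min(xs), min(ys), min(zs)
    return {(x - x0, y - y0, z - z0) for x, y, z in cells}

def similarity(built, target):
    """Jaccard overlap of the two cell sets after normalization."""
    a, b = normalize(built), normalize(target)
    return len(a & b) / len(a | b)

target = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)}  # target construction
built = {(5, 5, 0), (6, 5, 0), (6, 6, 0), (6, 5, 1)}   # one cube misplaced
print(f"similarity: {similarity(built, target):.2f}")  # 0.60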


robot and human interactive communication | 2011

Collocated interaction with flying robots

Wai Shan (Florence) Ng; Ehud Sharlin

We introduce a socially motivated interaction technique with collocated flying robots (a quadrotor in our current prototype). Instead of the traditional remote controllers often used when interacting with flying robots and UAVs, we explore the collocated interaction space and suggest a direct interaction technique motivated by social human-robot interaction themes. Our approach is inspired by the ways humans interact with birds, specifically falconry, and is facilitated by gesture-based interaction while the user is within the field of view of the flying robot. This paper outlines our research goals, task examples, and our overall design approach. The paper also discusses our current prototyping efforts, as well as a preliminary evaluation of our approach, performed through two design critiques, examining our collocated interaction concept and its potential, drawbacks, and benefits for users.
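
A minimal sketch of the gesture-to-command dispatch such an interface implies; the gesture vocabulary and commands below are illustrative assumptions, not the authors' design.

COMMANDS = {
    "extended_arm": "approach_and_perch",  # falconry-style recall
    "raised_arm": "take_off",
    "wave_off": "retreat_and_hover",
    "point_away": "fly_to_pointed_area",
}

def dispatch(gesture, user_in_view):
    """Act on gestures only while the user is in the robot's field of view."""
    if not user_in_view:
        return "hover_in_place"
    return COMMANDS.get(gesture, "hover_in_place")

print(dispatch("extended_arm", user_in_view=True))   # approach_and_perch
print(dispatch("extended_arm", user_in_view=False))  # hover_in_place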


virtual reality software and technology | 2008

Napkin sketch: handheld mixed reality 3D sketching

Min Xin; Ehud Sharlin; Mario Costa Sousa

This paper describes Napkin Sketch, a 3D sketching interface which attempts to support sketch-based artistic expression in 3D, mimicking some of the qualities of conventional sketching media and tools both in terms of physical properties and interaction experience. A portable tablet PC is used as the sketching platform, and handheld mixed reality techniques are employed to allow 3D sketches to be created on top of a physical napkin. Intuitive manipulation and navigation within the 3D design space is achieved by visually tracking the tablet PC with a camera and mixed reality markers. For artistic expression using sketch input, we improve upon the projective 3D sketching approach with a one-stroke sketch plane definition technique. This, coupled with the hardware setup, produces a natural and fluid sketching experience.
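
A minimal sketch of one plausible geometry for one-stroke plane definition, assumed rather than taken from the paper: the stroke's endpoints are cast onto the napkin plane (z = 0), a vertical plane through the two hit points becomes the sketch plane, and later strokes are projected onto it by ray-plane intersection.

import numpy as np

def ray_plane(origin, direction, p0, n):
    """Intersect a ray with the plane through point p0 with normal n."""
    denom = direction @ n
    if abs(denom) < 1e-9:
        return None  # ray parallel to plane
    t = ((p0 - origin) @ n) / denom
    return origin + t * direction if t > 0 else None

cam = np.array([0.0, 0.0, 5.0])  # camera pose from marker tracking
up = np.array([0.0, 0.0, 1.0])   # napkin plane normal (z = 0)

def define_sketch_plane(ray_a, ray_b):
    """Vertical plane through the ground hits of the stroke's endpoint rays."""
    a = ray_plane(cam, ray_a, np.zeros(3), up)
    b = ray_plane(cam, ray_b, np.zeros(3), up)
    n = np.cross(b - a, up)  # plane normal: stroke edge x up
    return a, n / np.linalg.norm(n)

p0, n = define_sketch_plane(np.array([0.1, 0.2, -1.0]),
                            np.array([0.3, 0.2, -1.0]))
print(ray_plane(cam, np.array([0.2, 0.2, -1.0]), p0, n))  # point on the plane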


interactive tabletops and surfaces | 2009

The Haptic Tabletop Puck: tactile feedback for interactive tabletops

Nicolai Marquardt; Miguel A. Nacenta; James Everett Young; M. Sheelagh T. Carpendale; Saul Greenberg; Ehud Sharlin

In everyday life, our interactions with objects on real tables include how our fingertips feel those objects. In comparison, current digital interactive tables present a uniform touch surface that feels the same regardless of what it presents visually. In this paper, we explore how tactile interaction can be used with digital tabletop surfaces. We present a simple and inexpensive device -- the Haptic Tabletop Puck -- that incorporates dynamic, interactive haptics into tabletop interaction. We created several applications that explore tactile feedback in the areas of haptic information visualization, haptic graphical interfaces, and computer-supported collaboration. In particular, we focus on how a person may interact with the friction, height, texture and malleability of digital objects.
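
A minimal sketch of the kind of mapping such a device supports: a data value under the puck drives the height of its rod, and the local gradient drives friction so steep regions feel sticky. The scaling is an assumption, not the toolkit's actual interface.

def haptic_params(value, gradient, h_max=10.0, f_max=1.0):
    """Map a normalized data value in [0, 1] to rod height (mm) and the
    local gradient magnitude to brake friction in [0, 1]."""
    height = max(0.0, min(1.0, value)) * h_max
    friction = max(0.0, min(1.0, gradient)) * f_max
    return height, friction

# e.g., a hot cell on a temperature map next to a steep edge
print(haptic_params(value=0.8, gradient=0.6))  # (8.0, 0.6)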


human-robot interaction | 2007

Robot expressionism through cartooning

James Everett Young; Min Xin; Ehud Sharlin

We present a new technique for human-robot interaction called robot expressionism through cartooning. We suggest that robots utilise cartoon-art techniques such as simplified and exaggerated facial expressions, stylised text, and icons for intuitive social interaction with humans. We discuss practical mixed reality solutions that allow robots to augment themselves or their surroundings with cartoon-art content. Our effort is part of what we call robot expressionism, a conceptual approach to the design and analysis of robotic interfaces that focuses on providing intuitive insight into robotic states as well as the artistic quality of interaction. Our paper discusses a variety of ways that allow robots to use cartoon art and details a test bed design, implementation, and exploratory evaluation. We describe our test bed, Jeeves, which uses a Roomba, an iRobot vacuum-cleaning robot, and a mixed-reality system as a platform for rapid prototyping of cartoon-art interfaces. Finally, we present a set of interaction content scenarios which use the Jeeves prototype: trash Roomba, the recycle police, and clean tracks, as well as an initial exploratory evaluation of our approach.
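
A minimal sketch of state-to-cartoon mapping in the spirit of such a test bed; the state names and art assets below are illustrative assumptions, not the Jeeves implementation.

OVERLAYS = {
    "cleaning": ("happy_face.png", None),
    "stuck": ("frustrated_face.png", "Help, I'm stuck!"),
    "battery_low": ("tired_face.png", "zzz..."),
}

def render_overlay(robot_state):
    """Pick the cartoon face and stylized text to composite over the robot
    in the mixed-reality view."""
    face, text = OVERLAYS.get(robot_state, ("neutral_face.png", None))
    return {"sprite": face, "speech_bubble": text}

print(render_overlay("stuck"))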


robot and human interactive communication | 2011

Exploring the affect of abstract motion in social human-robot interaction

John Harris; Ehud Sharlin

We present our exploration of the emotional impact that abstract robot motion has on human-robot interaction (HRI). We argue for the importance of designing for the fundamental characteristics of physical robot motion, as distinct from designing the robot's visual appearance or functional context. We discuss our design approach, the creation of an abstract robotic motion platform that is nearly formless and affordance-less, and our evaluation of the affect abstract motion had on more than thirty participants who interacted with our robotic platform in a series of studies. We detail our results and explain how different styles of robot motion were mapped to emotional responses in human observers. We believe that our findings can inform and provide important insight into the purposeful use of motion as a design tool in social human-robot interaction.
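
A minimal sketch of how abstract motion style can be parameterized; the speed and jitter parameters are generic, and the calm/agitated labels are illustrative rather than the study's findings.

import math, random

def motion_profile(duration, steps, speed, jitter, seed=0):
    """1-D positions over time: speed scales a smooth oscillation, and
    jitter adds the abrupt changes that tend to read as agitated motion."""
    rng = random.Random(seed)
    dt = duration / steps
    return [speed * math.sin(2 * math.pi * i * dt) + jitter * rng.uniform(-1, 1)
            for i in range(steps)]

calm = motion_profile(4.0, 40, speed=0.3, jitter=0.0)      # slow and smooth
agitated = motion_profile(4.0, 40, speed=1.0, jitter=0.4)  # fast and jerky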


interactive tabletops and surfaces | 2011

Point it, split it, peel it, view it: techniques for interactive reservoir visualization on tabletops

Nicole Sultanum; Sowmya Somanath; Ehud Sharlin; Mario Costa Sousa

Reservoir engineers rely on virtual representations of oil reservoirs to make crucial decisions relating, for example, to the modeling and prediction of fluid behavior, or to the optimal locations for drilling wells. They are therefore in constant pursuit of better virtual representations of reservoir models, improved user awareness of their embedded data, and more intuitive ways to explore them, all ultimately leading to more informed decision making. Tabletops have great potential to provide powerful interactive representations to reservoir engineers, as well as to enhance the flexibility, immediacy and overall capabilities of their analysis, consequently bringing more confidence into the decision-making process. In this paper, we present a collection of 3D reservoir visualization techniques on tabletop interfaces applied to the domain of reservoir engineering, and argue that these provide greater insight into reservoir models. We support our claims with findings from a qualitative user study conducted with 12 reservoir engineers, which gave us insight into our techniques, as well as a discussion of the potential of tabletop-based visualization solutions for the domain of reservoir engineering.
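
A minimal sketch of routing tabletop touch input to the four techniques named in the title; the gesture names and operation descriptions are illustrative assumptions, not the study's implementation.

def handle_gesture(gesture, touch):
    """Route a recognized tabletop gesture to a reservoir-view operation."""
    ops = {
        "tap": f"point it: probe cell values at {touch}",
        "two_finger_drag": "split it: cut the reservoir along the drag axis",
        "edge_drag": "peel it: strip away the layers above the touch",
        "pinch": "view it: zoom and orbit the 3D model",
    }
    return ops.get(gesture, "ignored")

print(handle_gesture("edge_drag", (120, 340)))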


human-robot interaction | 2009

Using bio-electrical signals to influence the social behaviours of domesticated robots

Paul Saulnier; Ehud Sharlin; Saul Greenberg

Several emerging computer devices read bio-electrical signals (e.g., electro-corticographic signals, skin biopotential or facial muscle tension) and translate them into computer-understandable input. We investigated how one low-cost, commercially available device could be used to control a domestic robot. First, we used the device to issue direct motion commands; while we could control the robot somewhat, it proved difficult to do so reliably. Second, we interpreted one class of signals as suggestive of emotional stress, and used that as an emotional parameter to influence (but not directly control) robot behaviour. In this case, the robot would react to human stress by staying out of the person's way. Our work suggests that affecting behaviour may be a reasonable way to leverage such devices.
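
A minimal sketch of the influence-not-control idea, assuming a headset that reports a noisy stress estimate in [0, 1]; the smoothing constant and distance scaling are hypothetical.

def update_stress(prev, raw, alpha=0.1):
    """Exponential smoothing keeps a noisy biosignal from jerking behaviour."""
    return (1 - alpha) * prev + alpha * raw

def standoff_distance(stress, base=0.5, gain=2.0):
    """Higher stress -> the robot keeps further out of the person's way (m)."""
    return base + gain * max(0.0, min(1.0, stress))

stress = 0.0
for raw in [0.2, 0.9, 0.8, 0.7]:  # successive headset readings
    stress = update_stress(stress, raw)
print(f"keep {standoff_distance(stress):.2f} m away")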

Collaboration


Ehud Sharlin's top co-authors include Min Xin (University of Calgary).