Joel Lanir
University of Haifa
Publications
Featured research published by Joel Lanir.
Interacting with Computers | 2013
Joel Lanir; Tsvi Kuflik; Eyal Dim; Alan J. Wecker; Oliviero Stock
Many museums offer their visitors the use of a mobile guide to enhance their visit experience. Novel mobile guides have the potential to provide personalized, context-aware, rich content to museum visitors. However, they might also affect the way visitors behave and interact. While many studies have examined novel features that these guides can provide to enhance the visit experience, few have looked into the impact that a mobile guide might have on the actual behavior of the visitors. We describe a field study conducted with 403 actual museum visitors over a period of 10 months, comparing the behavior of visitors who used a mobile multimedia location-aware guide during their visit with that of visitors who did not use any electronic aid. Results indicate that visitors’ behavior was altered considerably when using a mobile guide. Visitors using a mobile guide stayed in the museum longer and were attracted to, and spent more time at, exhibits where they could get information from the guide. In addition, we provide empirical evidence of the decoupling effect that a mobile guide has on pairs of visitors: using a mobile guide caused visitors to stay less close to, and interact less with, their fellow group members. Finally, we discuss what may be done to reduce this negative social effect.
Information Technology & Tourism | 2015
Tsvi Kuflik; Alan J. Wecker; Joel Lanir; Oliviero Stock
The cultural heritage experience at the museum begins before the actual on-site visit and continues with memories and reflections after the visit. In considering the potential of novel information and communication technology to enhance the entire visit experience, one envisioned scenario is extending the boundaries of the on-site visit: helping visitors access information about exhibits of primary interest to them during pre-visit planning, providing relevant information during the visit itself, and following up with post-visit memories and reflections. All this can be done using today’s state-of-the-art mobile and web-based applications, as well as any foreseeable emerging technology. So far, research on applying novel information and communication technology in the cultural heritage domain has focused primarily on exploring specific aspects of the technology and its capability to support the individual visitor, mainly during the physical, on-site visit (and in some cases in additional phases such as before or after the visit). This paper suggests a novel, integrative framework for supporting the pre-visit, during-visit and post-visit phases in a personalized manner. It is based on a set of standard, common models: a visitor model, a site model and a visit model, all of which enable a large variety of services to store, update and reuse data across the three phases of the visit. Our contribution is a framework architecture with its underlying infrastructure, together with a case study showing how this framework supports the various visit phases in an actual museum. The suggested framework is generic: it is not limited to a specific setting or scenario, and it is open, so practitioners and researchers can easily adopt it and implement it at different sites and settings. As such, it provides a further step in extending the cultural heritage experience beyond the on-site visit and towards linking individual episodes into complete, memorable personal experiences.
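To make the three-model idea concrete, here is a minimal sketch of how a visitor model, a site model and a visit model might be represented and combined by a post-visit service. All class and field names are illustrative assumptions rather than the authors' actual schema.

```python
# Illustrative sketch of the three-model framework (assumed names, not the paper's schema).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


@dataclass
class SiteModel:
    """Static description of the museum: exhibits and their metadata."""
    exhibits: Dict[str, dict] = field(default_factory=dict)  # exhibit_id -> attributes


@dataclass
class VisitorModel:
    """Per-visitor profile, persisted across pre-, on-site and post-visit phases."""
    visitor_id: str
    interests: List[str] = field(default_factory=list)            # set during pre-visit planning
    bookmarked_exhibits: List[str] = field(default_factory=list)


@dataclass
class VisitEvent:
    exhibit_id: str
    timestamp: datetime
    duration_s: float


@dataclass
class VisitModel:
    """Log of what actually happened during the on-site visit."""
    visitor_id: str
    events: List[VisitEvent] = field(default_factory=list)

    def visited_exhibits(self) -> List[str]:
        return [e.exhibit_id for e in self.events]


def post_visit_summary(visit: VisitModel, site: SiteModel) -> List[dict]:
    """Example post-visit service: metadata for every exhibit the visitor saw."""
    return [site.exhibits[eid] for eid in visit.visited_exhibits() if eid in site.exhibits]
```

Because all three models are shared by the services, data entered in one phase (for example, interests captured during pre-visit planning) can be reused in the others.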
Symposium on Geometry Processing | 2013
Yanir Kleiman; Noa Fish; Joel Lanir; Daniel Cohen-Or
Large datasets of 3D objects require an intuitive way to browse and quickly explore shapes from the collection. We present a dynamic map of shapes where similar shapes are placed next to each other. Similarity between 3D models exists in a high-dimensional space that cannot be accurately expressed in a two-dimensional map. We resolve this discrepancy by providing a local map with pan capabilities and a user interface that resembles the familiar experience of navigating online geographical maps. As the user navigates through the map, new shapes appear that correspond to the user's specific navigation tendencies and interests, while maintaining a continuous browsing experience. In contrast with state-of-the-art methods, which typically reduce the search space by selecting constraints or employing relevance feedback, our method enables exploration of large sets without constraining the search space, allowing the user greater creativity and serendipity. A user study showed that users strongly preferred our method over a standard relevance-feedback method.
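The abstract describes the approach at a high level; as a rough, illustrative sketch (not the authors' algorithm), a local similarity map with panning could be built from precomputed high-dimensional shape descriptors roughly as follows. The function names and the grid-based placement rule are assumptions made for illustration.

```python
# Sketch of a local similarity map with panning over precomputed shape descriptors.
import numpy as np


def nearest_neighbours(descriptors: np.ndarray, focus: int, k: int) -> np.ndarray:
    """Return indices of the k shapes most similar to the focus shape."""
    dists = np.linalg.norm(descriptors - descriptors[focus], axis=1)
    dists[focus] = np.inf                      # exclude the focus shape itself
    return np.argsort(dists)[:k]


def local_map(descriptors: np.ndarray, focus: int, grid: int = 3) -> np.ndarray:
    """Lay out the focus shape and its neighbours on a small grid x grid map."""
    neighbours = list(nearest_neighbours(descriptors, focus, grid * grid - 1))
    neighbours.insert(len(neighbours) // 2, focus)   # put the focus in the centre cell
    return np.array(neighbours).reshape(grid, grid)


def pan(descriptors: np.ndarray, current_map: np.ndarray, row: int, col: int) -> np.ndarray:
    """Panning towards a cell makes that shape the new focus and rebuilds the map."""
    return local_map(descriptors, int(current_map[row, col]), grid=current_map.shape[0])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.normal(size=(500, 64))   # 500 shapes with synthetic 64-D descriptors
    m = local_map(demo, focus=0)
    m = pan(demo, m, 0, 2)              # user pans towards the top-right shape
```

Only a small window of the collection is ever laid out, which is what keeps the map local while still letting the user wander through the whole set.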
Human Factors in Computing Systems | 2013
Joel Lanir; Ran Stone; Benjamin Cohen; Pavel Gurevich
In this paper, we investigate user performance and behavior related to the question of who controls the point of view in a remote assistance scenario. We describe an experiment in which users completed two different tasks with the aid of a remote gesturing device under two conditions: when control of the camera and gesturing point of view was in the hands of the remote helper, and when it was in the hands of the worker. Results indicate that, in general, when most of the knowledge lies with the helper, it is preferable to leave control in the helper's hands. However, these results may depend on the situation and the task at hand.
Human-Computer Interaction with Mobile Devices and Services | 2011
Alan J. Wecker; Joel Lanir; Tsvi Kuflik; Oliviero Stock
In this paper, we describe in-progress work on the Pathlight navigation system for groups and individuals. Pathlight provides indoor navigation support in the museum using a handheld projector. We describe some of the advantages the system provides, review relevant background, briefly describe some system features, and pose open questions for further investigation.
Human Factors in Computing Systems | 2015
Yanir Kleiman; Joel Lanir; Dov Danon; Yasmin Felberbaum; Daniel Cohen-Or
We present a novel system for browsing through a very large set of images according to similarity. The images are dynamically placed on a 2D canvas next to their nearest neighbors in a high-dimensional feature space. The layout and choice of images are generated on the fly during user interaction, reflecting the user's navigation tendencies and interests. This intuitive solution for image browsing provides a continuous experience of navigating through an infinite 2D grid arranged by similarity. In contrast to common multidimensional embedding methods, our solution does not entail the upfront creation of a full global map. Image map generation is dynamic, fast and scalable, independent of the number of images in the dataset, and seamlessly supports online updates to the dataset. Thus, the technique is a viable solution for massive and constantly varying datasets consisting of millions of images. Evaluation of our approach shows that when using DynamicMaps, users viewed many more images per minute than with a standard relevance-feedback interface, suggesting that it supports more fluid and natural interaction that enables easier and faster movement in the image space. Most users preferred DynamicMaps, indicating that it is more exploratory, better supports serendipitous browsing, and is more fun to use.
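As a simplified, hypothetical sketch of the on-the-fly map growth described above (the paper's actual placement and indexing scheme is not given here), an empty grid cell revealed by panning could be filled with the not-yet-shown image whose features are closest to those of its already-placed neighbours:

```python
# Sketch: fill newly revealed grid cells based on the features of placed neighbours.
import numpy as np


def fill_cell(features: np.ndarray, placed: dict, cell: tuple, shown: set) -> int:
    """Pick an image for an empty cell from its placed 4-neighbourhood."""
    r, c = cell
    neighbour_ids = [placed[(r + dr, c + dc)]
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if (r + dr, c + dc) in placed]
    target = features[neighbour_ids].mean(axis=0)      # "average look" of the neighbourhood
    dists = np.linalg.norm(features - target, axis=1)
    dists[list(shown)] = np.inf                        # never repeat an image already on the map
    best = int(np.argmin(dists))
    placed[cell] = best
    shown.add(best)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = rng.normal(size=(10_000, 128))     # synthetic image features
    placed, shown = {(0, 0): 0}, {0}           # seed the map with image 0
    for cell in [(0, 1), (1, 0), (1, 1)]:      # cells revealed as the user pans
        fill_cell(feats, placed, cell, shown)
```

A linear scan like the one above grows with collection size; reaching the scalability the paper reports would require something like an approximate nearest-neighbour index in place of the `np.argmin` step.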
Conference on Computer Supported Cooperative Work | 2015
Pavel Gurevich; Joel Lanir; Benjamin Cohen
TeleAdvisor is a versatile projection-based augmented reality system designed for remote collaboration. It allows a remote expert to naturally guide a local user who needs assistance in carrying out physical tasks around real-world objects. The system consists of a small projector and two cameras mounted on top of a tele-operated robotic arm at the worker’s side, and an interface at the remote expert’s side for viewing the camera stream, controlling the point of view, and gesturing using projected annotations. TeleAdvisor provides a hands-free, mobile, low-cost solution that supports gesturing by the remote expert while minimizing the cognitive overhead of the local worker. We describe the challenges, design considerations and implementation details of the two phases of the TeleAdvisor prototype, as well as its evaluation and deployment at an industrial manufacturing center. We summarize the lessons learned during the project and discuss the general implications for the design of augmented reality remote collaboration systems.
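The abstract gives no protocol details; purely as an illustrative assumption, the link between the expert's interface and the worker-side projector/arm unit could carry messages of roughly this shape (all names and fields are hypothetical, not TeleAdvisor's actual protocol):

```python
# Hypothetical message types for an expert-to-worker link in a projection-based
# remote assistance setup: view control for the robotic arm, and annotations
# to be projected onto the workspace.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Tuple


@dataclass
class ViewCommand:
    kind: str = "view"
    pan_deg: float = 0.0        # rotate the arm / camera head horizontally
    tilt_deg: float = 0.0       # tilt it vertically


@dataclass
class Annotation:
    kind: str = "annotation"
    shape: str = "arrow"                                   # e.g. arrow, circle, freehand stroke
    points: List[Tuple[float, float]] = field(default_factory=list)  # normalised image coords
    ttl_s: float = 5.0                                     # how long the projection stays visible


def encode(msg) -> bytes:
    """Serialise a command for the network link to the worker-side unit."""
    return json.dumps(asdict(msg)).encode("utf-8")


# The expert points at a screw hole; the worker-side unit would map these image
# coordinates to projector coordinates and project the arrow onto the object.
wire = encode(Annotation(points=[(0.42, 0.37), (0.45, 0.41)]))
wire_view = encode(ViewCommand(pan_deg=15.0))
```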
Ubiquitous Computing | 2017
Joel Lanir; Tsvi Kuflik; Julia Sheidin; Nisan Yavin; Kate Leiderman; Michael Segal
Museum curators and personnel are interested in understanding what is happening at their museum: which exhibitions and exhibits visitors attend to, which exhibits visitors spend the most time at, which hours of the day are busiest in particular areas of the museum, and more. We use automatic tracking of visitors’ positions, movements and interactions at the museum to log visitor information. Using this information, we provide an interface that visualizes individual and small-group movement patterns, presentations watched, and aggregated information about overall visitor engagement at the museum. We followed a user-centered design approach in which we gathered requirements, iteratively designed and implemented a working prototype, and evaluated it with the help of domain experts (museum curators and other museum personnel). We describe our efforts, provide insights from the design and evaluation of our system, and outline how it might be generalized to other indoor domains such as supermarkets or shopping malls.
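A minimal sketch of the kind of aggregation such a dashboard relies on, assuming a simplified log schema of our own invention (visitor, exhibit, timestamp, dwell duration), is shown below:

```python
# Turn raw position samples into dwell time per exhibit and busiest hours (assumed schema).
from collections import defaultdict
from datetime import datetime

# Each record: (visitor_id, exhibit_id, timestamp, seconds spent near the exhibit)
log = [
    ("v1", "sarcophagus", datetime(2017, 5, 3, 10, 15), 40.0),
    ("v1", "sarcophagus", datetime(2017, 5, 3, 10, 16), 95.0),
    ("v2", "mosaic_hall", datetime(2017, 5, 3, 14, 2), 60.0),
]

dwell_per_exhibit = defaultdict(float)   # total seconds spent near each exhibit
samples_per_hour = defaultdict(int)      # how busy each hour of the day is

for visitor, exhibit, ts, duration in log:
    dwell_per_exhibit[exhibit] += duration
    samples_per_hour[ts.hour] += 1

busiest_hour = max(samples_per_hour, key=samples_per_hour.get)
```

The visualization layer then renders these aggregates (per exhibit, per hour, per visitor group) alongside individual movement traces.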
Ubiquitous Computing | 2016
Joel Lanir; Alan J. Wecker; Tsvi Kuflik; Yasmin Felberbaum
We conducted an exploratory study that examines the use of shared mobile displays, such as mobile projectors and tablets, to support group activities. We compare how a small group of visitors uses either a shared display or personal individual devices in a museum visit context, in both a navigation task and a media viewing task. Group proximity, decision making, leadership patterns, and interaction between group members, as well as attitudes, are analyzed. We report on various usage patterns observed with group use of shared displays and discuss user preferences in comparison with the non-shared handheld alternative. Results show how mobile shared displays can support and enhance the group experience by providing a shared mobile environment. Mobile shared displays increased group cohesiveness, as shown by increased proximity and more discussion among participants. Users perceive the use of shared displays as both useful and enjoyable, with the caveat that many users still want to retain individual control. We discuss this trade-off between groupness and individual control, and provide an analysis of the relative advantages of each shared display option.
Human Factors in Computing Systems | 2015
Ilya Efanov; Joel Lanir
This work in progress aims at making indirect multi-touch interaction more usable by providing 3D visualizations of the hands and fingers, so the user can continuously know their positions before an interaction occurs. We use depth-sensing cameras to track the user's hands above the surface and to recognize the point of interaction with a plain horizontal surface at a predefined height. This allows us to support various visual augmentation techniques, such as visualizations of 3D hand contours, skeletons, and fingertips, that provide visual cues for depth estimation when the hand is above the surface as well as cues for when it touches the surface. The purpose is to provide users with effective and intuitive indirect multi-touch interaction on a regular desktop PC.
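As a minimal sketch of the hover/touch decision implied above (the threshold values and function names are illustrative assumptions, not the system's actual parameters):

```python
# Classify tracked fingertips as hovering or touching the interaction plane,
# using an assumed plane height and touch threshold.
SURFACE_HEIGHT_MM = 120.0   # predefined height of the horizontal interaction surface
TOUCH_EPSILON_MM = 8.0      # a fingertip this close to the plane counts as a touch


def classify_fingertip(height_above_desk_mm: float) -> str:
    """Label a tracked fingertip so the UI can render the matching visual cue."""
    dist_to_surface = height_above_desk_mm - SURFACE_HEIGHT_MM
    if dist_to_surface <= TOUCH_EPSILON_MM:
        return "touch"   # register an indirect multi-touch contact
    return "hover"       # draw the 3D contour/skeleton cue with depth feedback


# Example: fingertip heights reported by the hand tracker for one frame.
for tip_mm in (126.0, 180.0, 121.5):
    print(classify_fingertip(tip_mm))
```

The hover branch is where the 3D contour, skeleton and fingertip visualizations would be drawn, giving the user the depth cues described above before a touch actually occurs.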