Publication


Featured research published by Limin Zeng.


COST'11: Proceedings of the 2011 International Conference on Cognitive Behavioural Systems | 2011

ATMap: annotated tactile maps for the visually impaired

Limin Zeng; Gerhard Weber

Visually impaired people, especially those blind from birth, face various challenges in building cognitive spatial maps. Although a few existing audio-haptic maps make geographic data accessible, most of them fail to offer convenient interactive services. We developed an interactive tactile map system, called ATMap, which allows users to create and share geo-referenced annotations on a 2D tactile map in order to learn more about relevant places beyond the static information in a GIS database. Five blind users were recruited to evaluate the system in a pilot study.


Interacting with Computers | 2015

Interactive Audio-haptic Map Explorer on a Tactile Display

Limin Zeng; Mei Miao; Gerhard Weber

In this article, we propose and evaluate a novel audio-haptic map system to improve map accessibility for visually impaired people when reading city maps and planning pre-journey routes. The system employs a touch-sensitive pin-matrix display and represents various map elements through a pre-defined set of tactile map symbols consisting of raised and lowered pins. The interactive user interface allows users not only to acquire auditory and Braille geographic information by touching the relevant map elements, but also to plan pre-journey routes by exploring large-scale areas through panning and zooming. The positive results of an evaluation conducted with 10 blind users indicate that they are able to use the proposed map system to read street maps and plan pre-journey routes in unknown areas, acquiring both configurational and route knowledge. Two-thirds of the participants preferred swell-paper maps with only two zoom levels.


Conference on Computers and Accessibility | 2012

Exploration and avoidance of surrounding obstacles for the visually impaired

Limin Zeng; Denise Prescher; Gerhard Weber

Proximity-based interaction through a long cane is essential for blind and visually impaired people. We designed and implemented an obstacle detector consisting of a 3D Time-of-Flight (TOF) camera and a planar tactile display to extend the interaction range and provide rich non-visual information about the environment. After acquiring the spatial layout of obstacles, users choose a better path than with a white cane alone. A user study with six blind people showed that extra time is needed to ensure safe walking while reading the layout; both hanging and ground-based obstacles were circumvented. The tactile map representation was designed to convey precise spatial information about the area around a blind user.


Human Computer Interaction with Mobile Devices and Services | 2015

A Pilot Study of Collaborative Accessibility: How Blind People Find an Entrance

Limin Zeng; Gerhard Weber

Due to inaccurate GPS data and the complex environment close to buildings, it is often hard for blind people to find entrances in unknown areas. In this paper, we propose an enhanced method that combines crowd-sourcing technology and location-based services to help blind pedestrians find entrances independently in unknown regions. Besides collecting environmental accessibility information collaboratively, this method requires a reference point to be created for each entrance. Users are guided to the reference point by current GPS-based navigation systems, and then rely on the environmental accessibility information to walk from the reference point to the target entrance. A pilot study with 10 subjects (5 blind and 5 blindfolded) indicated that the proposed method can help blind people find entrances in unfamiliar areas; we also found that blind and blindfolded individuals performed differently due to their different levels of orientation and mobility skills. Overall, it is promising to integrate the proposed approach into existing navigation systems for blind and visually impaired people.


Universal Access in the Information Society | 2017

Improvement in environmental accessibility via volunteered geographic information: a case study

Limin Zeng; Romina Kühn; Gerhard Weber

Although geo-crowdsourcing approaches provide an opportunity to collect and share environmental accessibility information for people with disabilities, it is not clear whether individuals from different user groups behave similarly when contributing volunteered geographic information about environmental accessibility. In this paper, we present a case study investigating how users (including elderly people, wheelchair users, blind and visually impaired people, as well as volunteers) annotate environmental accessibility information during their journeys. We found that subjects from different user groups behaved differently while annotating accessibility information, and that volunteers without a disability are not good at spotting environmental accessibility issues. From these findings, we derive a series of insights for designers and developers on how to collect environmental accessibility information collaboratively.


Human Computer Interaction with Mobile Devices and Services | 2017

Range-IT: detection and multimodal presentation of indoor objects for visually impaired people

Limin Zeng; Gerhard Weber; Markus Simros; Peter Conradie; Jelle Saldien; Ilse Ravyse; Jan B. F. van Erp; Tina Mioch

In this paper we present our Range-IT prototype, a 3D depth camera based electronic travel aid (ETA) that assists visually impaired people in obtaining detailed information about surrounding objects. In addition to detecting indoor obstacles and identifying several objects of interest (e.g., walls, open doors and stairs) at distances of up to 7 meters, the Range-IT system employs a multimodal audio-vibrotactile user interface to present this spatial information.


Universal Access in the Information Society | 2017

Externalizing cognitive maps via map reconstruction and verbal description

Mei Miao; Limin Zeng; Gerhard Weber

Helping blind people build cognitive maps of an environment is one of the aims of several assistive systems. In order to evaluate such assistive technologies during the development process, users are often asked to externalize their cognitive maps, and using appropriate externalization methods is of considerable importance for both blind and sighted people. In this paper, two externalization methods, reconstruction with magnetic bars and verbal description, were investigated with blind and sighted people. The investigation focused on three issues: (1) the applicability of the two methods at different knowledge levels; (2) the effect of sensory inputs (e.g. tactile, audio and audio-tactile) on how blind people externalize cognitive maps; and (3) the suitability of the two methods for blind versus sighted people. Experimental results from ten blind and ten sighted subjects show that reconstruction with magnetic bars is suitable regardless of knowledge level and sensory input, whereas verbal description is suitable for route knowledge only when the sensory input is tactile-only or audio-tactile. Future studies should consider how to help blind people externalize route and landmark knowledge when audio-proprioceptive input is provided.


Archive | 2014

Examples of Haptic System Development

Limin Zeng; Gerhard Weber; Ingo Zoller; Peter Lotz; T. A. Kern; Jörg Reisinger; Thorsten Meiss; Thomas Opitz; Tim Rossner; Nataliya Stefanova

In this section, several examples of task-specific haptic systems are given. They give an insight into the process of defining haptic interactions for a given purpose and illustrate the development and evaluation process outlined in this book so far. Examples were chosen by the editors to cover different basic system structures. Section 14.1—Tactile You-Are-Here-Maps illustrates the usage of a tactile display in an assistive manner, enabling a more autonomous movement of people with visual impairments. Section 14.2—User Interface for Automotive Applications presents the development of a haptic interface for a new kind of user interaction in a car. It incorporates touch input and is able to simulate different key characteristics for intuitive haptic feedback. Section 14.3—HapCath describes a comanipulation system to provide additional haptic feedback in cardiovascular interventions. The feedback is intended to reduce exposure for both patient and physician and to permit new kinds of diagnosis during an intervention.


International Conference on Computers Helping People with Special Needs | 2018

HapticRein: Design and Development of an Interactive Haptic Rein for a Guidance Robot

Limin Zeng; Björn Einert; Alexander Pitkin; Gerhard Weber

In daily life, a guide dog assists blind and visually impaired people (BVIP) as a companion on independent and safe journeys. However, for several reasons (e.g., cost and long-term training) only a few BVIP own a guide dog. To give many more BVIP access to guidance services in unfamiliar public or private areas, we plan to develop an interactive guide dog robot prototype. In this paper, we present one of the most important components of a guidance robot for BVIP: an interactive haptic rein that consists of force-sensing sensors to control and balance the walking speed between BVIP and the robot in a natural way, and vibration actuators under the fingers to convey haptic information, e.g., turning left/right. The results of preliminary user studies indicate that the proposed haptic rein allows BVIP to control and communicate with a guidance robot via natural haptic interaction.


International Conference on Computers Helping People with Special Needs | 2018

Virtual Navigation Environment for Blind and Low Vision People

Andreas Kunz; Klaus Miesenberger; Limin Zeng; Gerhard Weber

For comprehensive participation in society, independent and safe mobility is an important skill for many daily activities. Spatial cognition is one of the most important human capabilities, covering the acquisition, processing and utilization of knowledge about the spatial layout of environments. Humans predominantly rely on the visual sense for this, and for blind and low vision people the lack of spatial perception reduces their quality of life and their ability to live independently. Spatial navigation in unknown environments poses particular challenges, since there is no possibility to train navigation tasks in advance. Today, blind and visually impaired people still rely on traditional navigation aids such as a cane for micro-navigation, which, however, does not help develop orientation at a larger scale or support route planning. To overcome this problem, this paper introduces the concept of a virtual environment that allows users to experience unknown locations by real walking while staying in a safe, controlled environment. Since this virtual environment can be controlled in its complexity, it can be adjusted from an abstract training scenario to a real-life situation such as a train station or airport.

Collaboration


Dive into Limin Zeng's collaborations.

Top Co-Authors

Gerhard Weber, Dresden University of Technology
Alexander Fickel, Dresden University of Technology
Markus Simros, Dresden University of Technology
Mei Miao, Dresden University of Technology
Alexander Pitkin, Dresden University of Technology
Björn Einert, Dresden University of Technology
Denise Prescher, Dresden University of Technology
Nataliya Stefanova, Technische Universität Darmstadt
Romina Kühn, Dresden University of Technology