Zunaid Kazi
University of Delaware
Publication
Featured research published by Zunaid Kazi.
Robotica | 1997
William S. Harwin; Raymond G. Gosine; Zunaid Kazi; David S. Lees; John L. Dallaway
There is wide diversity in the functioning and programming of robots designed to assist individuals with disabilities. The planning and structure of four rehabilitation robot implementations are presented. The first is the CURL language, developed for the human interface and the most widely used in this field. The second, MUSIIC, explores methods for direct manipulation of objects. RoboGlyph uses symbolic constructs to assist with the direction and programming of rehabilitation robots. Finally, a multi-tasking operating executive that controls a bilateral head-operated telerobot is discussed. These four implementations reflect a wide range of interface concepts for the intended users.
southeastcon | 1996
Shoupu Chen; Zunaid Kazi; Matthew Beitler; Marcos Salganicoff; Daniel L. Chester; Richard A. Foulds
One of the most challenging problems in rehabilitation robotics is the design of an efficient human-machine interface (HMI) that allows the user with a disability considerable freedom and flexibility. A multimodal user-direction approach combining command and control methods is a very promising way to achieve this goal. This multimodal design is motivated by the idea of minimizing the user's burden of operating a robot manipulator while utilizing the user's intelligence and available mobility. With this design, the user with a physical disability simply uses a gesture (pointing with a laser pointer) to indicate a location or a desired object and uses speech to activate the system. Recognition of the spoken input also supplants the need for general-purpose object recognition and performs the critical function of disambiguation. The robot system is designed to operate in an unstructured environment containing objects that are reasonably predictable. A novel reactive planning mechanism, of which the user is an active integral component, in conjunction with a stereo-vision system and an object-oriented knowledge base, provides the robot system with 3-D information about the surrounding world as well as with motion strategies.
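The disambiguation step described in this abstract can be sketched as follows. This is a hypothetical illustration, not the MUSIIC implementation: the spoken object name selects among candidate objects near the pointed-at 3-D location, replacing general-purpose object recognition. All names, positions, and the `radius` threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class WorldObject:
    name: str       # label known to the object knowledge base
    position: tuple # 3-D position, e.g. from a stereo-vision system

def resolve_target(spoken_name, pointed_at, objects, radius=0.15):
    """Return the object whose name matches the speech input and whose
    position lies closest to the laser-pointer gesture, or None if the
    ambiguity cannot be resolved (the user would re-point or re-speak)."""
    candidates = [o for o in objects
                  if o.name == spoken_name
                  and dist(o.position, pointed_at) <= radius]
    if not candidates:
        return None
    return min(candidates, key=lambda o: dist(o.position, pointed_at))

# Two cups in the scene: speech alone is ambiguous, pointing alone is
# imprecise; combining them picks out the intended cup.
scene = [WorldObject("cup", (0.30, 0.10, 0.05)),
         WorldObject("cup", (0.80, 0.40, 0.05)),
         WorldObject("book", (0.32, 0.12, 0.02))]

target = resolve_target("cup", pointed_at=(0.31, 0.11, 0.04), objects=scene)
```

The design point this illustrates is the one the abstract makes: neither modality alone identifies the object, so fusing speech (what) with gesture (where) avoids a general object-recognition problem.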
Lecture Notes in Computer Science | 1998
Zunaid Kazi; Shoupu Chen; Matthew Beitler; Daniel L. Chester; Richard A. Foulds
The Multimodal User Supervised Interface and Intelligent Control (MUSIIC) project addresses the issue of telemanipulation of everyday objects in an unstructured environment. Telerobot control by individuals with physical limitations poses a set of challenging problems that need to be resolved. MUSIIC addresses these problems by integrating a speech- and gesture-driven human-machine interface with a knowledge-driven planner and a 3-D vision system. The resultant system offers the opportunity to study unstructured-world telemanipulation by people with physical disabilities and provides a means of generalizing to effective manipulation techniques for real-world unstructured tasks in domains where direct physical control may be limited due to time delay, lack of sensation, or impaired coordination.
Image and Vision Computing | 1998
Shoupu Chen; Zunaid Kazi; Richard A. Foulds; Daniel L. Chester
This paper presents a general-purpose assistive telerobot system designed to manipulate objects in a three-dimensional environment using color and stereo vision. The incorporated vision system allows the user to operate the robot remotely simply by gesturing (pointing with a laser pointer). In this vision-based telemanipulation system, the user is in the control loop, actively interacting with the reactive robot motion planner through a multimodal interface (vision and speech). Because of this unique feature, the robot system is simplified and relieved of the complex object-recognition tasks often required of autonomous systems.
Robotica | 1998
Zunaid Kazi; Richard A. Foulds
The Multimodal User Supervised Interface and Intelligent Control (MUSIIC) project focuses on a multimodal human-machine interface that addresses users' need to manipulate familiar objects in an unstructured environment. The control of a robot by individuals with significant physical limitations presents a challenging telemanipulation problem. This is addressed by a unique user interface integrating the user's commands (speech) and gestures (pointing) with autonomous planning techniques (knowledge bases and 3-D vision). The resultant test bed offers the opportunity to study telemanipulation by individuals with physical disabilities, and can be generalized to an effective technique for other forms of telemanipulation, including remote and time-delayed operation. This paper focuses on the knowledge-driven planning mechanism that is central to the MUSIIC system.
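The knowledge-driven planning idea in this abstract can be sketched roughly as follows. This is an illustrative assumption about the approach, not the actual MUSIIC code: an object knowledge base maps an identified object type to a parameterized manipulation strategy, and unknown objects fall back to the user, who remains in the supervisory control loop. All entries, field names, and parameters are hypothetical.

```python
# Hypothetical object knowledge base: object type -> manipulation strategy.
KNOWLEDGE_BASE = {
    "cup":  {"grasp": "side", "approach_height": 0.10},
    "book": {"grasp": "top",  "approach_height": 0.05},
}

def plan_manipulation(object_name, position):
    """Select a grasp plan from the knowledge base; for objects the base
    does not know, defer to the user instead of failing autonomously."""
    entry = KNOWLEDGE_BASE.get(object_name)
    if entry is None:
        # User-supervised fallback: the operator refines the plan interactively.
        return {"action": "ask_user", "object": object_name}
    x, y, z = position
    return {"action": "grasp",
            "object": object_name,
            "strategy": entry["grasp"],
            # Approach from above the object by a strategy-specific offset.
            "waypoint": (x, y, z + entry["approach_height"])}

plan = plan_manipulation("cup", (0.30, 0.10, 0.05))
```

The fallback branch reflects the point the abstract emphasizes: the user is an integral part of the planner, so gaps in the knowledge base become interaction steps rather than hard failures.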
Archive | 1995
Matthew Beitler; Zunaid Kazi; Marcos Salganicoff; Richard A. Foulds; Shoupu Chen; Daniel L. Chester
Archive | 1995
Matthew Beitler; Richard A. Foulds; Zunaid Kazi; Daniel L. Chester; Shoupu Chen; Marcos Salganicoff
Archive | 1995
Zunaid Kazi; Marcos Salganicoff; Matthew Beitler; Shoupu Chen; Daniel L. Chester; Richard A. Foulds
Proceedings of SPIE | 1995
Zunaid Kazi; Matthew Beitler; Marcos Salganicoff; Shoupu Chen; Daniel L. Chester; Richard A. Foulds