J. Pablo Munoz
City University of New York
Publications
Featured research published by J. Pablo Munoz.
Adaptive Agents and Multi-Agent Systems | 2011
Elizabeth Sklar; A. Tuna Ozgelen; J. Pablo Munoz; Joel Gonzalez; Mark Manashirov; Susan L. Epstein; Simon Parsons
In this workshop paper, we share the design and ongoing implementation of our HRTeam framework, which is constructed to support multiple robots working with a human operator in a dynamic environment. The team comprises one human plus a heterogeneous set of inexpensive, limited-function robots. Although each individual robot has restricted mobility and sensing capabilities, together the team members constitute a multi-function, multi-robot facility. We describe low-level system architecture details and explain how we have integrated a popular robotic control and simulation environment into our framework to support the application of multi-agent techniques in a hardware-based environment. We highlight lessons learned regarding the integration of multiple varying robot platforms into our system, from both hardware and software perspectives. Our aim is to generate discussion among multi-robot researchers concerning issues that are of particular interest, and that present particular difficulties, to the multi-robot systems community.
Systems, Man and Cybernetics | 2015
Xiaochen Zhang; Bing Li; Samleo L. Joseph; Jizhong Xiao; Yi Sun; Yingli Tian; J. Pablo Munoz; Chucai Yi
This paper proposes a novel assistive navigation system based on simultaneous localization and mapping (SLAM) and semantic path planning to help visually impaired users navigate in indoor environments. The system integrates multiple wearable sensors and feedback devices, including an RGB-D sensor and an inertial measurement unit (IMU) on the waist, a head-mounted camera, a microphone, and an earplug/speaker. We develop a visual odometry algorithm based on RGB-D data to estimate the user's position and orientation, and refine the orientation error using the IMU. We employ the head-mounted camera to recognize door numbers and the RGB-D sensor to detect major landmarks such as corridor corners. By matching the detected landmarks against the corresponding features on the digitized floor map, the system localizes the user and provides verbal instructions to guide the user to the desired destination. The software modules of our system are implemented in the Robot Operating System (ROS). The prototype of the proposed assistive navigation system is evaluated by blindfolded sighted persons. The field tests confirm the feasibility of the proposed algorithms and the system prototype.
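The paper does not publish code, but the "refine the orientation error using the IMU" step can be pictured as a simple blended correction. The sketch below is a minimal illustration under assumed details (a yaw-only state and a complementary-style blending factor), not the authors' implementation:

```python
import numpy as np

def refine_yaw(vo_yaw, imu_yaw, alpha=0.98):
    """Blend RGB-D visual-odometry yaw with the IMU's yaw reading.

    A minimal sketch: the IMU measurement pulls the drifting VO
    orientation back toward it. The blending factor alpha is an
    illustrative assumption, not a value from the paper.
    """
    # Wrap the angular difference to [-pi, pi] before blending.
    diff = np.arctan2(np.sin(imu_yaw - vo_yaw), np.cos(imu_yaw - vo_yaw))
    return vo_yaw + alpha * diff

# Example: VO estimates 30 degrees, the IMU reads 33 degrees.
print(np.degrees(refine_yaw(np.radians(30.0), np.radians(33.0))))  # ~32.94
```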
European Conference on Computer Vision | 2016
Bing Li; J. Pablo Munoz; Xuejian Rong; Jizhong Xiao; Yingli Tian; Aries Arditi
This paper presents a novel mobile wearable context-aware indoor maps and navigation system with obstacle avoidance for the blind. The system includes an indoor map editor and an app on Tango devices with multiple modules. The indoor map editor parses spatial semantic information from a building architectural model and represents it as a high-level semantic map to support context awareness. An obstacle avoidance module detects objects in front of the user using a depth sensor. Based on ego-motion tracking within the Tango, localization alignment on the semantic map, and obstacle detection, the system automatically generates a safe path to a desired destination. A speech-audio interface handles user input and delivers guidance and alert cues in real time, using a priority-based mechanism to reduce the user's cognitive load. Field tests involving blindfolded and blind subjects demonstrate that the proposed prototype performs context-aware and safe indoor assistive navigation effectively.
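The priority-based speech-audio mechanism is described only at a high level. One plausible reading is a priority queue in which obstacle alerts preempt queued route guidance; the class and priority names below are illustrative assumptions, not the authors' design:

```python
import heapq
import itertools

# Lower number = higher priority: obstacle alerts preempt route guidance.
PRIORITY = {"alert": 0, "guidance": 1, "status": 2}

class SpeechAudioInterface:
    """Minimal sketch of a priority-based cue scheduler (an assumed
    design; the paper describes the mechanism but not its code)."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # FIFO tie-breaker within a level

    def push(self, kind, message):
        heapq.heappush(self._queue, (PRIORITY[kind], next(self._counter), message))

    def next_utterance(self):
        # Return the highest-priority pending cue, or None when idle.
        return heapq.heappop(self._queue)[2] if self._queue else None

ui = SpeechAudioInterface()
ui.push("guidance", "Turn left in three meters")
ui.push("alert", "Obstacle ahead, stop")
print(ui.next_utterance())  # "Obstacle ahead, stop" comes out first
```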
Robotics and Biomimetics | 2015
Bing Li; Xiaochen Zhang; J. Pablo Munoz; Jizhong Xiao; Xuejian Rong; Yingli Tian
A wearable Obstacle Stereo Feedback (OSF) system for blind people, based on 3D obstacle detection, is presented to assist navigation. The OSF system embeds a depth sensor to perceive the 3D spatial information in front of the user in the form of point clouds. We downsample the perceived point cloud and apply the Random Sample Consensus (RANSAC) algorithm to detect the obstacles in front of the user. Finally, Head-Related Transfer Functions (HRTFs) are applied to create virtual stereo sound that represents each obstacle according to its coordinates in 3D space. Experiments show that the OSF system can detect obstacles in indoor environments effectively and provides feasible auditory cues that indicate the safe zone in front of the blind user.
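As a rough illustration of the downsample-then-RANSAC step, the sketch below uses the Open3D library as a stand-in for the authors' point-cloud pipeline (the paper does not name its toolkit); the voxel size and distance threshold are assumed values:

```python
import open3d as o3d  # stand-in library; not specified in the paper

def detect_obstacles(pcd, voxel_size=0.05, dist_threshold=0.03):
    """Downsample the depth-sensor point cloud, remove the dominant
    (floor) plane with RANSAC, and keep the remaining points as
    obstacle candidates. A sketch under assumed parameters, not the
    authors' implementation."""
    down = pcd.voxel_down_sample(voxel_size=voxel_size)
    _, floor_idx = down.segment_plane(distance_threshold=dist_threshold,
                                      ransac_n=3, num_iterations=200)
    # Everything that is not on the floor plane is a potential obstacle.
    return down.select_by_index(floor_idx, invert=True)
```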
International Symposium on Visual Computing | 2016
Xuejian Rong; Bing Li; J. Pablo Munoz; Jizhong Xiao; Aries Arditi; Yingli Tian
Scene text in indoor environments usually preserves and communicates important contextual information, which can significantly enhance the independent travel of blind and visually impaired people. In this paper, we present an assistive text-spotting navigation system based on an RGB-D mobile device for blind or severely visually impaired people. Specifically, a novel spatial-temporal text localization algorithm is proposed to localize and prune text regions by integrating stroke-specific features with a subsequent text tracking process. The density of extracted text-specific feature points serves as an efficient text indicator to guide the user closer to text-likely regions for better recognition performance. Next, detected text regions are binarized and recognized by off-the-shelf optical character recognition methods. Significant non-text signage can also be matched to provide additional environmental information. Both kinds of recognized results are then converted to speech feedback for user interaction. Our proposed video text localization approach is quantitatively evaluated on the ICDAR 2013 dataset, and the experimental results demonstrate the effectiveness of our proposed method.
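The "density of text-specific feature points as a text indicator" idea can be sketched as coarse grid binning: the densest cell suggests where to steer the user. Frame size, grid shape, and the function name below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def text_likelihood_grid(keypoints_xy, frame_w, frame_h, cols=4, rows=3):
    """Bin stroke-specific feature points into a coarse image grid; the
    densest cell hints at a text-likely region to guide the user toward.
    An illustrative sketch of the density-as-text-indicator idea."""
    grid = np.zeros((rows, cols), dtype=int)
    for x, y in keypoints_xy:
        c = min(int(x / frame_w * cols), cols - 1)
        r = min(int(y / frame_h * rows), rows - 1)
        grid[r, c] += 1
    best = np.unravel_index(grid.argmax(), grid.shape)
    return grid, best

# Example on a 640x480 frame: most points cluster in the upper right.
pts = [(600, 100), (620, 110), (590, 95), (50, 400)]
grid, cell = text_likelihood_grid(pts, 640, 480)
print(cell)  # (0, 3): the upper-right cell has the densest text features
```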
Archive | 2012
Susan L. Epstein; Eric Schneider; A. Tuna Ozgelen; J. Pablo Munoz; Michael Costantino; Elizabeth Sklar; Simon Parsons
Enabling Intelligence through Middleware | 2012
Elizabeth Sklar; Simon Parsons; Susan L. Epstein; Arif Tuna Ozgelen; Joel Gonzalez; Jesse Lopez; Mitch Lustig; Linda Ma; Mark Manashiro; J. Pablo Munoz; S. Bruno Salazar; Miriam Schwartz
International Joint Conference on Artificial Intelligence | 2016
J. Pablo Munoz; Bing Li; Xuejian Rong; Jizhong Xiao; Yingli Tian; Aries Arditi
IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems | 2017
J. Pablo Munoz; Bing Li; Xuejian Rong; Jizhong Xiao; Yingli Tian; Aries Arditi
National Conference on Artificial Intelligence | 2011
J. Pablo Munoz; Arif Tuna Ozgelen; Elizabeth Sklar