Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Joydeep Biswas is active.

Publications


Featured research published by Joydeep Biswas.


International Conference on Robotics and Automation (ICRA) | 2010

WiFi localization and navigation for autonomous indoor mobile robots

Joydeep Biswas; Manuela M. Veloso

Building upon previous work that demonstrates the effectiveness of WiFi localization information per se, in this paper we contribute a mobile robot that autonomously navigates in indoor environments using WiFi sensory data. We model the world as a WiFi signature map with geometric constraints and introduce a continuous perceptual model of the environment generated from the discrete graph-based WiFi signal strength sampling. We contribute our WiFi localization algorithm which continuously uses the perceptual model to update the robot location in conjunction with its odometry data. We then briefly introduce a navigation approach that robustly uses the WiFi location estimates. We present the results of our exhaustive tests of the WiFi localization independently and in conjunction with the navigation of our custom-built mobile robot in extensive long autonomous runs.
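
The abstract describes two ingredients that lend themselves to a compact illustration: a continuous perceptual model interpolated from discrete WiFi signal-strength samples, and a localization update that scores pose hypotheses against it. The Python sketch below shows one plausible form of that scoring step; the map values, the inverse-distance interpolation, and the Gaussian noise model are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical discrete WiFi signature map: sampled locations (meters)
# and mean RSSI per access point (dBm), standing in for the paper's
# graph-based signal-strength sampling.
sample_points = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
mean_rssi = np.array([
    [-40.0, -70.0],
    [-55.0, -60.0],
    [-60.0, -55.0],
    [-70.0, -40.0],
])
RSSI_STD = 4.0  # assumed Gaussian noise on signal strength, in dB

def interpolate_signature(xy):
    """Continuous perceptual model: inverse-distance-weighted blend of the
    discrete samples (one simple way to make the map continuous)."""
    d = np.linalg.norm(sample_points - xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** 2
    return (w[:, None] * mean_rssi).sum(axis=0) / w.sum()

def observation_likelihood(xy, observed_rssi):
    """Gaussian likelihood of an observed RSSI vector at a hypothesized
    location; usable as a particle weight alongside odometry updates."""
    residual = observed_rssi - interpolate_signature(xy)
    return float(np.exp(-0.5 * np.sum((residual / RSSI_STD) ** 2)))

# Weight two pose hypotheses against one WiFi observation.
obs = np.array([-50.0, -62.0])
for pose in [np.array([2.0, 1.0]), np.array([4.5, 4.5])]:
    print(pose, observation_likelihood(pose, obs))
```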


International Conference on Robotics and Automation (ICRA) | 2012

Depth camera based indoor mobile robot localization and navigation

Joydeep Biswas; Manuela M. Veloso

The sheer volume of data generated by depth cameras provides a challenge to process in real time, in particular when used for indoor mobile robot localization and navigation. We introduce the Fast Sampling Plane Filtering (FSPF) algorithm to reduce the volume of the 3D point cloud by sampling points from the depth image, and classifying local grouped sets of points as belonging to planes in 3D (the “plane filtered” points) or points that do not correspond to planes within a specified error margin (the “outlier” points). We then introduce a localization algorithm based on an observation model that down-projects the plane filtered points onto 2D, and assigns correspondences for each point to lines in the 2D map. The full sampled point cloud (consisting of both plane filtered as well as outlier points) is processed for obstacle avoidance for autonomous navigation. All our algorithms process only the depth information, and do not require additional RGB data. The FSPF, localization and obstacle avoidance algorithms run in real time at full camera frame rates (30Hz) with low CPU requirements (16%). We provide experimental results demonstrating the effectiveness of our approach for indoor mobile robot localization and navigation. We further compare the accuracy and robustness in localization using depth cameras with FSPF vs. alternative approaches that simulate laser rangefinder scans from the 3D data.
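
FSPF as published has several parameters and a careful neighborhood-sampling scheme; the Python sketch below only conveys the core loop the abstract describes: sample a few nearby pixels, fit a local plane, and split further local points into plane-filtered and outlier sets. The camera intrinsics, window sizes, and thresholds are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def deproject(depth, u, v, fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with its depth into a 3D point, using
    assumed pinhole intrinsics (typical Kinect-like values)."""
    z = depth[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def sample_plane_filter(depth, n_samples=200, window=20, n_local=30, eps=0.02):
    """Sketch of the plane-filtering idea: sample a pixel plus two neighbors,
    fit a plane through them, then keep nearby points within eps of it."""
    h, w = depth.shape
    plane_points, outliers = [], []
    for _ in range(n_samples):
        u0 = rng.integers(window, w - window)
        v0 = rng.integers(window, h - window)
        du = rng.integers(-window, window, 2)
        dv = rng.integers(-window, window, 2)
        p0 = deproject(depth, u0, v0)
        p1 = deproject(depth, u0 + du[0], v0 + dv[0])
        p2 = deproject(depth, u0 + du[1], v0 + dv[1])
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate sample; skip
        n /= np.linalg.norm(n)
        # Test additional points from the local window against the plane.
        for _ in range(n_local):
            u = int(np.clip(u0 + rng.integers(-window, window), 0, w - 1))
            v = int(np.clip(v0 + rng.integers(-window, window), 0, h - 1))
            p = deproject(depth, u, v)
            (plane_points if abs(np.dot(n, p - p0)) < eps else outliers).append(p)
    return np.array(plane_points), np.array(outliers)

# Synthetic flat-wall depth image, just to exercise the sketch.
depth = np.full((480, 640), 2.0)
planar, other = sample_plane_filter(depth)
print(len(planar), "plane-filtered points,", len(other), "outliers")
```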


The International Journal of Robotics Research | 2013

Localization and navigation of the CoBots over long-term deployments

Joydeep Biswas; Manuela M. Veloso

For the last three years, we have developed and researched multiple collaborative robots, CoBots, which have been autonomously traversing our multi-floor buildings. We pursue the goal of long-term autonomy for indoor service mobile robots as the ability for them to be deployed indefinitely while they perform tasks in an evolving environment. The CoBots include several levels of autonomy, and in this paper we focus on their localization and navigation algorithms. We present the Corrective Gradient Refinement (CGR) algorithm, which refines the proposal distribution of the particle filter used for localization with sensor observations using analytically computed state space derivatives on a vector map. We also present the Fast Sampling Plane Filtering algorithm that extracts planar regions from depth images in real time. These planar regions are then projected onto the 2D vector map of the building, and along with the laser rangefinder observations, used with CGR for localization. For navigation, we present a hierarchical planner, which computes a topological policy using a graph representation of the environment, computes motion commands based on the topological policy, and then modifies the motion commands to side-step perceived obstacles. We started logging the deployments of the CoBots one and a half years ago, and have since collected logs of the CoBots traversing more than 130 km over 1082 deployments and a total run time of 182 h, which we publish as a dataset consisting of more than 10 million laser scans. The logs show that although there have been continuous changes in the environment, the robots are robust to most of them, and there exist only a few locations where changes in the environment cause increased uncertainty in localization.
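
Among the components above, the topological policy is the easiest to illustrate: running Dijkstra's algorithm outward from the goal over the building graph yields, for every node, the next node to move toward, so the robot can recover from deviations without replanning. The Python sketch below uses invented node names and edge costs; it is a toy rendering of the idea, not the CoBots' planner.

```python
import heapq

# Hypothetical topological graph of a building: node -> {neighbor: cost}.
# Nodes might be corridor intersections, elevators, and office doorways.
graph = {
    "office_201": {"corridor_2a": 3.0},
    "corridor_2a": {"office_201": 3.0, "elevator_2": 10.0},
    "elevator_2": {"corridor_2a": 10.0, "elevator_3": 5.0},
    "elevator_3": {"elevator_2": 5.0, "corridor_3a": 10.0},
    "corridor_3a": {"elevator_3": 10.0, "office_305": 4.0},
    "office_305": {"corridor_3a": 4.0},
}

def topological_policy(goal):
    """Dijkstra from the goal outward; the result maps every node to the
    neighbor to move to next, i.e. a policy rather than a single path."""
    dist = {goal: 0.0}
    policy = {goal: goal}
    pq = [(0.0, goal)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist[node]:
            continue
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                policy[nbr] = node  # from nbr, step toward node
                heapq.heappush(pq, (nd, nbr))
    return policy

policy = topological_policy("office_305")
node = "office_201"
while node != "office_305":  # follow the policy from any start node
    print(node, "->", policy[node])
    node = policy[node]
```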


Intelligent Robots and Systems (IROS) | 2011

Corrective Gradient Refinement for mobile robot localization

Joydeep Biswas; Brian Coltin; Manuela M. Veloso

Particle filters for mobile robot localization must balance computational requirements and accuracy of localization. Increasing the number of particles in a particle filter improves accuracy, but also increases the computational requirements. Hence, we investigate a different paradigm to better utilize particles than to increase their numbers. To this end, we introduce the Corrective Gradient Refinement (CGR) algorithm that uses the state space gradients of the observation model to improve accuracy while maintaining low computational requirements. We develop an observation model for mobile robot localization using point cloud sensors (LIDAR and depth cameras) with vector maps. This observation model is then used to analytically compute the state space gradients necessary for CGR. We show experimentally that the resulting complete localization algorithm is more accurate than the Sampling/Importance Resampling Monte Carlo Localization algorithm, while requiring fewer particles.
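
To make the idea concrete, here is a deliberately tiny 1D sketch of the refinement step: particles drawn from the motion-model proposal climb the analytic gradient of the observation log-likelihood before weighting. The wall-range observation model, step size, and iteration count are illustrative assumptions; the full CGR algorithm also retains both the original and refined samples and corrects the importance weights accordingly.

```python
import numpy as np

# Toy 1D observation model: the robot measures range to a wall at x = 10.
WALL_X, RANGE_STD = 10.0, 0.1

def log_likelihood(x, observed_range):
    """Gaussian log-likelihood of a range measurement from pose x."""
    return -0.5 * ((WALL_X - x - observed_range) / RANGE_STD) ** 2

def grad_log_likelihood(x, observed_range):
    """Analytic state-space derivative of the log-likelihood: the
    ingredient CGR exploits instead of simply adding more particles."""
    return (WALL_X - x - observed_range) / RANGE_STD ** 2

def refine(particles, observed_range, step=0.005, iters=10):
    """Gradient-ascent refinement of the proposal distribution: each
    particle climbs toward a local optimum of the observation model."""
    x = particles.copy()
    for _ in range(iters):
        x += step * grad_log_likelihood(x, observed_range)
    return x

rng = np.random.default_rng(1)
particles = rng.normal(6.5, 0.5, size=20)        # motion-model proposal
refined = refine(particles, observed_range=3.0)  # true pose is x = 7
print("mean before:", particles.mean(), "mean after:", refined.mean())
```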


Intelligent Robots and Systems (IROS) | 2012

CoBots: Collaborative robots servicing multi-floor buildings

Manuela M. Veloso; Joydeep Biswas; Brian Coltin; Stephanie Rosenthal; Thomas Kollar; Çetin Meriçli; Mehdi Samadi; Susana Brandão; Rodrigo Ventura

In this video we briefly illustrate the progress and contributions made with our mobile, indoor, service robots CoBots (Collaborative Robots), since their creation in 2009. Many researchers, present authors included, aim for autonomous mobile robots that robustly perform service tasks for humans in our indoor environments. The efforts towards this goal have been numerous and successful, and we build upon them. However, there are clearly many research challenges remaining until we can experience intelligent mobile robots that are fully functional and capable in our human environments.


Robot Soccer World Cup (RoboCup) | 2013

Multi-sensor Mobile Robot Localization for Diverse Environments

Joydeep Biswas; Manuela M. Veloso

Mobile robot localization with different sensors and algorithms is a widely studied problem, and there have been many approaches proposed, with considerable degrees of success. However, every sensor and algorithm has limitations, due to which we believe no single localization algorithm can be “perfect,” or universally applicable to all situations.


International Conference on Robotics and Automation (ICRA) | 2013

Fast human detection for indoor mobile robots using depth images

Benjamin Choi; Çetin Meriçli; Joydeep Biswas; Manuela M. Veloso

A human detection algorithm running on an indoor mobile robot has to address challenges including occlusions due to cluttered environments, changing backgrounds due to the robot's motion, and limited on-board computational resources. We introduce a fast human detection algorithm for mobile robots equipped with depth cameras. First, we segment the raw depth image using a graph-based segmentation algorithm. Next, we apply a set of parameterized heuristics to filter and merge the segmented regions to obtain a set of candidates. Finally, we compute a Histogram of Oriented Depth (HOD) descriptor for each candidate, and test for human presence with a linear SVM. We experimentally evaluate our approach on a publicly available dataset of humans in an open area as well as our own dataset of humans in a cluttered cafe environment. Our algorithm performs comparably well on a single CPU core against another HOD-based algorithm that runs on a GPU even when the number of training examples is decreased by half. We discuss the impact of the number of training examples on performance, and demonstrate that our approach is able to detect humans in different postures (e.g. standing, walking, sitting) and with occlusions.
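
As a rough illustration of the final stage of this pipeline, the Python sketch below computes a single orientation histogram from depth gradients and scores it with a linear decision function. The real HOD descriptor is computed over a grid of cells with block normalization, and the SVM weights come from training, so everything here (bin count, window size, weights) is a stand-in.

```python
import numpy as np

N_BINS = 8  # orientation bins over [0, pi), as in HOG-style descriptors

def hod_descriptor(depth_window):
    """Very reduced HOD-style descriptor: one histogram of depth-gradient
    orientations weighted by gradient magnitude. (The actual HOD descriptor
    uses a grid of cells with block normalization.)"""
    gy, gx = np.gradient(depth_window)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=N_BINS, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def svm_score(descriptor, w, b):
    """Linear SVM decision value; positive means 'human' for trained w, b."""
    return float(np.dot(w, descriptor) + b)

# Exercise the sketch with a synthetic candidate window and random weights.
rng = np.random.default_rng(2)
window = rng.normal(2.0, 0.05, size=(64, 32))  # depth values in meters
w, b = rng.normal(size=N_BINS), -0.5           # stand-in trained parameters
print("decision value:", svm_score(hod_descriptor(window), w, b))
```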


Robot Soccer World Cup (RoboCup) | 2012

Effective semi-autonomous telepresence

Brian Coltin; Joydeep Biswas; Dean A. Pomerleau; Manuela M. Veloso

We investigate mobile telepresence robots to address the lack of mobility in traditional videoconferencing. To operate these robots, intuitive and powerful interfaces are needed. We present CoBot-2, an indoor mobile telepresence robot with autonomous capabilities, and a browser-based interface to control it. CoBot-2 and its web interface have been used extensively to remotely attend meetings and to guide local visitors to destinations in the building. From the web interface, users can control CoBot-2's camera, and drive with directional commands, by clicking on a point on the floor of the camera image, or by clicking on a point in a map. We conduct a user study in which we examine preferences among the three control interfaces for novice users. The results suggest that the three control interfaces together cover well the control preferences of different users, and that users often prefer to use a combination of control interfaces. CoBot-2 also serves as a tour guide robot, and has been demonstrated to safely navigate through dense crowds in a long-term trial.


International Conference on Robotics and Automation (ICRA) | 2014

Episodic non-Markov localization: Reasoning about short-term and long-term features

Joydeep Biswas; Manuela M. Veloso

Markov localization and its variants are widely used for localization of mobile robots. These methods assume Markov independence of observations, implying that observations made by a robot correspond to a static map. However, in real human environments, observations include occlusions due to unmapped objects like chairs and tables, and dynamic objects like humans. We introduce an episodic non-Markov localization algorithm that maintains estimates of the belief over the trajectory of the robot while explicitly reasoning about observations and their correlations arising from unmapped static objects, moving objects, as well as objects from the static map. Observations are classified as arising from long-term features, short-term features, or dynamic features, which correspond to mapped objects, unmapped static objects, and unmapped dynamic objects respectively. By detecting time steps along the robot's trajectory where unmapped observations prior to such time steps are unrelated to those afterwards, non-Markov localization limits the history of observations and pose estimates to “episodes” over which the belief is computed. We demonstrate non-Markov localization in challenging real world indoor and outdoor environments over multiple datasets, comparing it with alternative state-of-the-art approaches, showing it to be robust as well as accurate.
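
A minimal sketch of the observation-classification idea, with invented thresholds: points well explained by the static map become long-term features (LTFs), unmapped points that recur near previously seen unmapped points become short-term features (STFs), and the remainder are dynamic features (DFs). The episode test here is a toy reduction of the paper's correlation analysis.

```python
import numpy as np

MAP_TOL = 0.2  # meters: residual below this is "explained by the map"

def classify_observations(points, dist_to_map, prev_unmapped):
    """Toy version of the LTF / STF / DF split. dist_to_map holds each
    point's residual against the static map; prev_unmapped holds unmapped
    points observed in the previous frame."""
    labels = []
    for p, d in zip(points, dist_to_map):
        if d < MAP_TOL:
            labels.append("LTF")  # consistent with the static map
        elif len(prev_unmapped) and np.min(
                np.linalg.norm(np.asarray(prev_unmapped) - p, axis=1)) < MAP_TOL:
            labels.append("STF")  # unmapped but persistent: static clutter
        else:
            labels.append("DF")   # unmapped and transient: moving object
    return labels

def new_episode(labels):
    """If nothing in the current scan correlates with earlier unmapped
    observations, the history can be truncated: a new 'episode' begins."""
    return "STF" not in labels

points = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, -1.0]])
dist_to_map = np.array([0.05, 0.8, 0.9])  # residuals vs. the static map
prev_unmapped = np.array([[2.1, 1.05]])   # clutter seen in the last frame
labels = classify_observations(points, dist_to_map, prev_unmapped)
print(labels, "new episode?", new_episode(labels))
```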


Intelligent Robots and Systems (IROS) | 2012

Planar polygon extraction and merging from depth images

Joydeep Biswas; Manuela M. Veloso

There has been considerable interest recently in building 3D maps of environments using inexpensive depth cameras like the Microsoft Kinect sensor. We exploit the fact that typical indoor scenes have an abundance of planar features by modeling environments as sets of plane polygons. To this end, we build upon the Fast Sampling Plane Filtering (FSPF) algorithm that extracts points belonging to local neighborhoods of planes from depth images, even in the presence of clutter. We introduce an algorithm that uses the FSPF-generated plane filtered point clouds to generate convex polygons from individual observed depth images. We then contribute an approach of merging these detected polygons across successive frames while accounting for a complete history of observed plane filtered points without explicitly maintaining a list of all observed points. The FSPF and polygon merging algorithms run in real time at full camera frame rates with low CPU requirements: in a real world indoor environment scene, the FSPF and polygon merging algorithms take 2.5 ms on average to process a single 640 × 480 depth image. We provide experimental results demonstrating the computational efficiency of the algorithm and the accuracy of the detected plane polygons by comparing with ground truth.
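
The claim that a complete history of plane-filtered points can be accounted for without storing the points admits a compact illustration: keep only a running count, sum, and second-moment matrix per plane, which suffice to refit the plane at any time. The Python class below is a sketch of that bookkeeping under assumed data, not the paper's polygon-merging algorithm (which also maintains the convex polygon boundaries).

```python
import numpy as np

class PlaneAccumulator:
    """Running summary of all points ever assigned to a plane: count, sum,
    and second-moment matrix. This is enough to refit the plane at any time
    without keeping the points themselves."""

    def __init__(self):
        self.n = 0
        self.sum = np.zeros(3)
        self.outer = np.zeros((3, 3))  # running sum of p p^T

    def add_points(self, pts):
        """Fold a new frame's plane-filtered points into the summary."""
        self.n += len(pts)
        self.sum += pts.sum(axis=0)
        self.outer += pts.T @ pts

    def fit(self):
        """Least-squares plane: the normal is the eigenvector of the
        scatter (covariance) matrix with the smallest eigenvalue."""
        mean = self.sum / self.n
        scatter = self.outer / self.n - np.outer(mean, mean)
        eigvals, eigvecs = np.linalg.eigh(scatter)
        return eigvecs[:, 0], mean  # eigh sorts eigenvalues ascending

# Merge two frames' worth of noisy points from the same physical plane.
rng = np.random.default_rng(3)
acc = PlaneAccumulator()
for _ in range(2):
    xy = rng.uniform(-1, 1, size=(100, 2))
    z = 0.01 * rng.normal(size=100)  # plane z ~= 0 plus sensor noise
    acc.add_points(np.column_stack([xy, z]))
normal, mean = acc.fit()
print("normal:", np.round(normal, 3), "point on plane:", np.round(mean, 3))
```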

Collaboration


Dive into Joydeep Biswas's collaborations.

Top Co-Authors

Manuela M. Veloso, Carnegie Mellon University
Brian Coltin, Carnegie Mellon University
Danny Zhu, Carnegie Mellon University
Stefan Zickler, Carnegie Mellon University
Samer B. Nashed, University of Massachusetts Amherst
Steven D. Klee, Carnegie Mellon University
Benjamin Choi, Carnegie Mellon University
Jarrett Holtz, University of Massachusetts Amherst