Mohan Sridharan
University of Auckland
Publications
Featured research published by Mohan Sridharan.
International Conference on Robotics and Automation | 2005
Mohan Sridharan; Gregory Kuhlmann; Peter Stone
Mobile robot localization, the ability of a robot to determine its global position and orientation, continues to be a major research focus in robotics. In most past cases, such localization has been studied on wheeled robots with range finding sensors such as sonar or lasers. In this paper, we consider the more challenging scenario of a legged robot localizing with a limited field-of-view camera as its primary sensory input. We begin with a baseline implementation adapted from the literature that provides a reasonable level of competence, but that exhibits some weaknesses in real-world tests. We propose a series of practical enhancements designed to improve the robot’s sensory and actuator models that enable our robots to achieve a 50% improvement in localization accuracy over the baseline implementation. We go on to demonstrate how the accuracy improvement is even more dramatic when the robot is subjected to large unmodeled movements. These enhancements are each individually straightforward, but together they provide a roadmap for avoiding potential pitfalls when implementing Monte Carlo Localization on vision-based and/or legged robots.
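The enhancements above build on the standard Monte Carlo Localization loop of predict, weight, and resample. A minimal sketch of that loop follows; the unicycle motion model, beacon-range observation, noise levels, and field dimensions are illustrative assumptions, not the paper's learned sensor and actuator models.

```python
# Minimal Monte Carlo Localization sketch (predict / weight / resample).
# Motion model, observation model, and all parameters are assumptions.
import numpy as np

N = 1000  # number of particles; pose = (x, y, theta)
particles = np.random.uniform([0.0, 0.0, -np.pi], [4.0, 6.0, np.pi], size=(N, 3))
weights = np.full(N, 1.0 / N)

def predict(particles, v, w, dt, noise=(0.02, 0.02, 0.05)):
    """Propagate each particle through a noisy motion model."""
    eps = np.random.randn(*particles.shape) * noise
    particles[:, 0] += v * dt * np.cos(particles[:, 2]) + eps[:, 0]
    particles[:, 1] += v * dt * np.sin(particles[:, 2]) + eps[:, 1]
    particles[:, 2] += w * dt + eps[:, 2]
    return particles

def update(weights, particles, beacon_xy, measured_range, sigma=0.3):
    """Reweight particles by the likelihood of an observed range to a beacon."""
    expected = np.linalg.norm(particles[:, :2] - beacon_xy, axis=1)
    weights = weights * np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    weights += 1e-300                     # avoid a degenerate all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic resampling: concentrate particles on high-weight poses."""
    u = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```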
Artificial Intelligence | 2010
Mohan Sridharan; Jeremy L. Wyatt; Richard Dearden
Flexible, general-purpose robots need to autonomously tailor their sensing and information processing to the task at hand. We pose this challenge as the task of planning under uncertainty. In our domain, the goal is to plan a sequence of visual operators to apply on regions of interest (ROIs) in images of a scene, so that a human and a robot can jointly manipulate and converse about objects on a tabletop. We pose visual processing management as an instance of probabilistic sequential decision making, and specifically as a Partially Observable Markov Decision Process (POMDP). The POMDP formulation uses models that quantitatively capture the unreliability of the operators and enable a robot to reason precisely about the trade-offs between plan reliability and plan execution time. Since planning in practical-sized POMDPs is intractable, we partially ameliorate this intractability for visual processing by defining a novel hierarchical POMDP based on the cognitive requirements of the corresponding planning task. We compare our hierarchical POMDP planning system (HiPPo) with a non-hierarchical POMDP formulation and the Continual Planning (CP) framework that handles uncertainty in a qualitative manner. We show empirically that HiPPo and CP outperform the naive application of all visual operators on all ROIs. The key result is that the POMDP methods produce more robust plans than CP or the naive visual processing. In summary, visual processing problems represent a challenging domain for planning techniques and our hierarchical POMDP-based approach for visual processing management opens up a promising new line of research.
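The heart of any such POMDP formulation, hierarchical or flat, is the Bayesian belief update over hidden state after each operator is applied. The sketch below shows that update with a toy two-state ROI model (object absent vs. present, one "detect" operator); the transition and observation matrices are invented for illustration and are not HiPPo's models.

```python
# Discrete POMDP belief update: b'(s') ∝ O(s', a, o) * sum_s T(s, a, s') b(s).
import numpy as np

def belief_update(b, a, o, T, O):
    """b: belief over states; T[a][s, s']: transition; O[a][s', o]: observation."""
    predicted = b @ T[a]                    # sum_s T(s, a, s') b(s)
    posterior = O[a][:, o] * predicted      # weight by observation likelihood
    return posterior / posterior.sum()

T = {"detect": np.eye(2)}                   # applying a detector leaves the scene unchanged
O = {"detect": np.array([[0.9, 0.1],        # P(reading | absent):  mostly negative
                         [0.2, 0.8]])}      # P(reading | present): mostly positive
b = belief_update(np.array([0.5, 0.5]), "detect", o=1, T=T, O=O)  # positive reading
print(b)  # belief shifts toward "object present"
```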
Intelligent Robots and Systems | 2008
Aniket Murarka; Mohan Sridharan; Benjamin Kuipers
A mobile robot operating in an urban environment has to navigate around obstacles and hazards. Though a significant amount of work has been done on detecting obstacles, not much attention has been given to the detection of drop-offs, e.g., sidewalk curbs, downward stairs, and other hazards where an error could lead to disastrous consequences. In this paper, we propose algorithms for detecting both obstacles and drop-offs (also called negative obstacles) in an urban setting using stereo vision and motion cues. We propose a global color segmentation stereo method and compare its performance at detecting hazards against prior work using a local correlation stereo method. Furthermore, we introduce a novel drop-off detection scheme based on visual motion cues that adds to the performance of the stereo-vision methods. All algorithms are implemented and evaluated on data obtained by driving a mobile robot in urban environments.
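As an illustration of the classification involved, the sketch below labels cells of a stereo-derived height map as free, obstacle, or drop-off; the grid representation and thresholds are assumptions for illustration, not the paper's segmentation-stereo or motion-cue methods.

```python
# Illustrative hazard labelling over a stereo-derived height map: cells well
# above the ground plane are obstacles, cells well below it (or with no
# stereo return) are candidate drop-offs. All thresholds are assumptions.
import numpy as np

def classify_hazards(height_map, ground_z=0.0, obstacle_thr=0.15, dropoff_thr=-0.10):
    """Label each grid cell: 0 = free, 1 = obstacle, 2 = drop-off."""
    labels = np.zeros(height_map.shape, dtype=np.uint8)
    rel = height_map - ground_z
    labels[rel > obstacle_thr] = 1
    labels[rel < dropoff_thr] = 2
    labels[np.isnan(height_map)] = 2   # missing returns often coincide with drop-offs
    return labels
```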
Robot Soccer World Cup | 2004
Mohan Sridharan; Peter Stone
To date, RoboCup games have all been played under constant, bright lighting conditions. However, in order to meet the overall goal of RoboCup, robots will need to be able to seamlessly handle changing, natural light. One method for doing so is to be able to identify colors regardless of illumination: color constancy. Color constancy is a relatively recent, but increasingly important, topic in vision research. Most approaches so far have focused on stationary cameras. In this paper we propose a methodology for color constancy on mobile robots. We describe a technique that we have used to solve a subset of the problem, in real-time, based on color space distributions and the KL-divergence measure. We fully implement our technique and present detailed empirical results in a robot soccer scenario.
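A minimal sketch of the kind of comparison the abstract describes: store one color-space histogram per known illumination, then match the current image's histogram to the nearest one under KL divergence. The histogram binning and the nearest-match rule are illustrative assumptions.

```python
# Illumination identification by KL divergence between color histograms.
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """D_KL(p || q) between two (possibly unnormalized) histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def identify_illumination(image_hist, reference_hists):
    """Return the illumination label whose stored histogram is closest in KL."""
    return min(reference_hists,
               key=lambda lbl: kl_divergence(image_hist, reference_hists[lbl]))
```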
Intelligent Robots and Systems | 2005
Mohan Sridharan; Peter Stone
Computer vision is a broad and significant ongoing research challenge, even when performed on an individual image or on streaming video from a high-quality stationary camera with abundant computational resources. When faced with streaming video from a lower-quality, rapidly moving camera and limited computational resources, the challenge increases. We present our implementation of a vision system on a mobile robot platform that uses a camera image as the primary sensory input. Having to perform all processing, including segmentation and object detection, in real-time on-board the robot, eliminates the possibility of using some state-of-the-art methods that otherwise might apply. We describe the methods that we developed to achieve a practical vision system within these constraints. Our approach is fully implemented and tested on a team of Sony AIBO robots.
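One standard way to meet such real-time, on-board constraints, and the style of technique used on AIBO-class robots, is lookup-table color segmentation: a table trained offline maps each (sub-sampled) pixel value directly to a color class, so classification costs one lookup per pixel. The YCbCr input format and 64x64x64 table below are assumptions for illustration.

```python
# Lookup-table color segmentation sketch; table layout is an assumption.
import numpy as np

# Trained offline (e.g., from hand-labelled images): maps binned pixel values
# to a color class such as 0 = unknown, 1 = ball, 2 = field, 3 = goal.
LUT = np.zeros((64, 64, 64), dtype=np.uint8)

def segment(frame_ycbcr):
    """Classify every pixel with a single table lookup."""
    y, cb, cr = (frame_ycbcr[..., i] >> 2 for i in range(3))  # 8-bit -> 6-bit bins
    return LUT[y, cb, cr]
```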
Robotics and Autonomous Systems | 2009
Mohan Sridharan; Peter Stone
Recent developments in sensor technology have made it feasible to use mobile robots in several fields, but robots still lack the ability to accurately sense the environment. A major challenge to the widespread deployment of mobile robots is the ability to function autonomously, learning useful models of environmental features, recognizing environmental changes, and adapting the learned models in response to such changes. This article focuses on such learning and adaptation in the context of color segmentation on mobile robots in the presence of illumination changes. The main contribution of this article is a survey of vision algorithms that are potentially applicable to color-based mobile robot vision. We therefore look at algorithms for color segmentation, color learning and illumination invariance on mobile robot platforms, including approaches that tackle just the underlying vision problems. Furthermore, we investigate how the inter-dependencies between these modules and high-level action planning can be exploited to achieve autonomous learning and adaptation. The goal is to determine the suitability of the state-of-the-art vision algorithms for mobile robot domains, and to identify the challenges that still need to be addressed to enable mobile robots to learn and adapt models for color, so as to operate autonomously in natural conditions.
Robotics and Autonomous Systems | 2006
Peter Stone; Mohan Sridharan; Daniel Stronger; Gregory Kuhlmann; Nate Kohl; Peggy Fidelman; Nicholas K. Jong
Mobile robots must cope with uncertainty from many sources along the path from interpreting raw sensor inputs to behavior selection to execution of the resulting primitive actions. This article identifies several such sources and introduces methods for (i) reducing uncertainty and (ii) making decisions in the face of uncertainty. We present a complete vision-based robotic system that includes several algorithms for learning models that are useful and necessary for planning, and then place particular emphasis on the planning and decision-making capabilities of the robot. Specifically, we present models for autonomous color calibration, autonomous sensor and actuator modeling, and an adaptation of particle filtering for improved localization on legged robots. These contributions enable effective planning under uncertainty for robots engaged in goal-oriented behavior within a dynamic, collaborative and adversarial environment. Each of our algorithms is fully implemented and tested on a commercial off-the-shelf vision-based quadruped robot.
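As a hedged illustration of one ingredient mentioned above, actuator modeling, the sketch below fits a linear map from commanded to achieved velocity by least squares and inverts it to correct future commands; the linear, single-axis form is an assumption, not the authors' learned model.

```python
# Actuator-model calibration sketch: observed ≈ a * commanded + b.
import numpy as np

def fit_actuator_model(commanded, observed):
    """Least-squares fit of a linear commanded-to-achieved velocity map."""
    A = np.vstack([commanded, np.ones_like(commanded)]).T
    (a, b), *_ = np.linalg.lstsq(A, observed, rcond=None)
    return a, b

def corrected_command(desired, a, b):
    """Ask for the command that the fitted model says achieves `desired`."""
    return (desired - b) / a
```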
IEEE Transactions on Robotics | 2015
Shiqi Zhang; Mohan Sridharan; Jeremy L. Wyatt
Deployment of robots in practical domains poses key knowledge representation and reasoning challenges. Robots need to represent and reason with incomplete domain knowledge, acquiring and using sensor inputs based on need and availability. This paper presents an architecture that exploits the complementary strengths of declarative programming and probabilistic graphical models as a step toward addressing these challenges. Answer Set Prolog (ASP), a declarative language, is used to represent, and perform inference with, incomplete domain knowledge, including default information that holds in all but a few exceptional situations. A hierarchy of partially observable Markov decision processes (POMDPs) probabilistically models the uncertainty in sensor input processing and navigation. Nonmonotonic logical inference in ASP is used to generate a multinomial prior for probabilistic state estimation with the hierarchy of POMDPs. It is also used with historical data to construct a beta (meta) density model of priors for metareasoning and early termination of trials when appropriate. Robots equipped with this architecture automatically tailor sensor input processing and navigation to tasks at hand, revising existing knowledge using information extracted from sensor inputs. The architecture is empirically evaluated in simulation and on a mobile robot visually localizing objects in indoor domains.
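A minimal sketch of one bridge described above: logical inference rules out some states and default knowledge prefers others, and the result seeds a multinomial prior for probabilistic state estimation. The weighting scheme and the `boost` factor are invented for illustration, not the paper's construction.

```python
# Turning logical inference plus defaults into a multinomial prior (sketch).
import numpy as np

def multinomial_prior(states, consistent, default_preferred, boost=4.0):
    """Zero mass for logically impossible states; extra mass for defaults."""
    w = np.array([(boost if s in default_preferred else 1.0)
                  if s in consistent else 0.0 for s in states])
    return w / w.sum()

# e.g., possible locations of a textbook: inference rules out the kitchen, and
# a default ("textbooks are usually in the library") biases the prior.
rooms = ["library", "office", "kitchen"]
print(multinomial_prior(rooms, {"library", "office"}, {"library"}))  # [0.8, 0.2, 0.0]
```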
International Symposium on Software Reliability Engineering | 2010
Mohan Sridharan; Akbar Siami Namin
Mutation testing is a fault-based testing technique for measuring the adequacy of a test suite. Test suites are assigned scores based on their ability to expose synthetic faults (i.e., mutants) generated by a range of well-defined mathematical operators. The test suites can then be augmented to expose the mutants that remain undetected and are not semantically equivalent to the original code. However, the mutation score can be increased superfluously by mutants that are easy to expose. In addition, it is infeasible to examine all the mutants generated by a large set of mutation operators. Existing approaches have therefore focused on determining the sufficient set of mutation operators and the set of equivalent mutants. Instead, this paper proposes a novel Bayesian approach that prioritizes operators whose mutants are likely to remain unexposed by the existing test suites. Probabilistic sampling methods are adapted to iteratively examine a subset of the available mutants and direct focus towards the more informative operators. Experimental results show that the proposed approach identifies more than 90% of the important operators by examining approximately 20% of the available mutants, and causes a 6% increase in the importance measure of the selected mutants.
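The adaptive-sampling idea can be illustrated with a Thompson-sampling-style sketch: keep a Beta posterior per mutation operator over the chance its mutants survive the existing test suite, and repeatedly examine a mutant from the operator with the highest sampled rate. This is a hedged reconstruction of the flavor of the approach, not the paper's exact sampler.

```python
# Beta-posterior operator prioritization (Thompson-sampling-style sketch).
import random

class OperatorSampler:
    def __init__(self, operators):
        self.ab = {op: [1.0, 1.0] for op in operators}  # Beta(1, 1) priors

    def next_operator(self):
        """Sample a survival rate for each operator; pick the highest draw."""
        return max(self.ab, key=lambda op: random.betavariate(*self.ab[op]))

    def record(self, op, survived):
        """Posterior update after running the suite on one of op's mutants."""
        self.ab[op][0 if survived else 1] += 1.0
```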
International Conference on Social Robotics | 2014
Shiqi Zhang; Mohan Sridharan; Michael Gelfond; Jeremy L. Wyatt
This paper describes an architecture that combines the complementary strengths of probabilistic graphical models and declarative programming to enable robots to represent and reason with qualitative and quantitative descriptions of uncertainty and domain knowledge. An action language is used for the architecture’s low-level (LL) and high-level (HL) system descriptions, and the HL definition of recorded history is expanded to allow prioritized defaults. For any given objective, tentative plans created in the HL using commonsense reasoning are implemented in the LL using probabilistic algorithms, and the corresponding observations are added to the HL history. Tight coupling between the levels helps automate the selection of relevant variables and the generation of policies in the LL for each HL action, and supports reasoning with violation of defaults, noisy observations and unreliable actions in complex domains. The architecture is evaluated in simulation and on robots moving objects in indoor domains.
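Schematically, the coupling described above can be read as a plan-execute-observe loop: the high level produces a tentative plan by commonsense reasoning, each high-level action is carried out by a low-level probabilistic policy, and the resulting observation is appended to the high-level history before replanning. The sketch below is purely structural; every name is a placeholder, not the authors' API.

```python
# Structural sketch of the HL/LL loop; hl_planner and ll_executor are
# placeholder objects standing in for the commonsense planner and the
# per-action POMDP policies. Not the authors' implementation.
def run_to_goal(goal, hl_planner, ll_executor, history, max_steps=100):
    for _ in range(max_steps):
        plan = hl_planner.plan(goal, history)      # tentative HL plan from current history
        if not plan:                               # goal satisfied or proven unreachable
            return history
        action = plan[0]                           # execute one step, then replan
        observation = ll_executor.execute(action)  # LL probabilistic policy for this action
        history.append((action, observation))      # observations may violate defaults
    return history
```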