Danish Shaikh
Maersk
Publications
Featured research published by Danish Shaikh.
Distributed Autonomous Robotic Systems | 2009
David Johan Christensen; Mirko Bordignon; Ulrik Pagh Schultz; Danish Shaikh; Kasper Stoy
Hand-coding locomotion controllers for modular robots is difficult due to their polymorphic nature. Instead, we propose to use a simple and distributed reinforcement learning strategy. ATRON modules with identical controllers can be assembled in any configuration. To optimize the robot’s locomotion speed, its modules independently and in parallel adjust their behavior based on a single global reward signal. In simulation, we study the learning strategy’s performance on different robot configurations. On the physical platform, we perform learning experiments with ATRON robots learning to move as fast as possible. We conclude that the learning strategy is effective and may be a practical approach to gait design.
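The core idea of independent per-module learners sharing a single global reward can be sketched compactly. The code below is a minimal, hypothetical illustration rather than the authors' exact ATRON controller; `measure_speed` is a placeholder for the shared locomotion-speed reward and the candidate actions are assumed gait phase offsets.

```python
import random

ACTIONS = [0.0, 0.25, 0.5, 0.75]   # candidate phase offsets per module (assumed)
EPSILON, ALPHA = 0.1, 0.2          # exploration rate, learning rate

class ModuleLearner:
    """One learner per module; all modules run identical controllers."""
    def __init__(self):
        self.values = {a: 0.0 for a in ACTIONS}
        self.current = random.choice(ACTIONS)

    def choose(self):
        if random.random() < EPSILON:                         # explore
            self.current = random.choice(ACTIONS)
        else:                                                 # exploit best estimate
            self.current = max(self.values, key=self.values.get)
        return self.current

    def update(self, reward):
        # every module receives the same global reward (locomotion speed)
        self.values[self.current] += ALPHA * (reward - self.values[self.current])

def learning_step(modules, measure_speed):
    actions = [m.choose() for m in modules]   # modules act independently, in parallel
    reward = measure_speed(actions)           # single global reward signal
    for m in modules:
        m.update(reward)
    return reward
```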
International Work-Conference on the Interplay Between Natural and Artificial Computation | 2009
Danish Shaikh; John Hallam; Jakob Christensen-Dalsgaard; Lei Zhang
The peripheral auditory system of a lizard is structured as a pressure difference receiver with strong broadband directional sensitivity. Previous work has demonstrated that this system can be implemented as a set of digital filters generated by considering the lumped-parameter model of the auditory system, and can be used successfully for step control steering of mobile robots. We extend the work to the continuous steering case, implementing the same model on a Braitenberg vehicle-like robot. The performance of the robot is evaluated in a phonotaxis task. The robot shows strong directional sensitivity and successful phonotaxis for a sound frequency range of 1400 Hz to 1900 Hz. We conclude that the performance of the model in the continuous control task is comparable to that in the step control task.
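As a rough illustration of this kind of pipeline, the sketch below mixes the two microphone signals through a pair of placeholder band-pass filters standing in for the lumped-parameter ear model, and cross-couples the resulting dB difference to the wheel speeds in Braitenberg fashion. The filter coefficients and gains are assumptions for illustration, not the published model.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100.0
# Placeholder filters for the ipsilateral and contralateral paths; the real
# model derives these from the physical parameters of the lizard ear.
b_ipsi, a_ipsi = butter(2, [1200 / (fs / 2), 2000 / (fs / 2)], btype="band")
b_contra, a_contra = butter(2, [900 / (fs / 2), 1500 / (fs / 2)], btype="band")

def direction_cue(left, right):
    # pressure-difference coupling: each ear output mixes both microphone signals
    out_l = lfilter(b_ipsi, a_ipsi, left) - lfilter(b_contra, a_contra, right)
    out_r = lfilter(b_ipsi, a_ipsi, right) - lfilter(b_contra, a_contra, left)
    rms_l = np.sqrt(np.mean(out_l ** 2))
    rms_r = np.sqrt(np.mean(out_r ** 2))
    return 20.0 * np.log10(rms_l / rms_r)      # positive: source towards the left

def wheel_speeds(cue, v_base=0.2, gain=0.01):
    # continuous Braitenberg-style steering: the cue is cross-coupled so the
    # robot turns towards the louder side
    return v_base - gain * cue, v_base + gain * cue   # (left wheel, right wheel)
```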
Biological Cybernetics | 2016
Danish Shaikh; John Hallam; Jakob Christensen-Dalsgaard
The peripheral auditory system of lizards has been extensively studied because of its remarkable directionality. In this paper, we review the research that has been performed on this system using a biorobotic approach. The various robotic implementations of the auditory model developed to date, both wheeled and legged, exhibit strong phonotactic performance for two types of steering mechanisms: a simple threshold decision model and Braitenberg sensorimotor cross-couplings. The Braitenberg approach removed the need for a decision model, but produced relatively inefficient robot trajectories. Introducing various asymmetries in the auditory model reduced the efficiency of the robot trajectories, but successful phonotaxis was maintained. Relatively loud noise distractors degraded the trajectory efficiency, and above-threshold noise resulted in unsuccessful phonotaxis. Machine learning techniques were applied to successfully compensate for asymmetries as well as noise distractors. Such techniques were also successfully used to construct a representation of auditory space, which is crucial for sound localisation while remaining stationary, as opposed to phonotaxis-based localisation. The peripheral auditory model was furthermore found to adhere to an auditory scaling law governing the variation in frequency response with respect to physical ear separation. Overall, the research to date paves the way towards investigating the more fundamental topic of auditory metres versus auditory maps, and the existing robotic implementations can act as tools to compare the two approaches.
Simulation of Adaptive Behavior | 2016
Danish Shaikh; Poramate Manoonpong
Acoustic tracking of a moving sound source is relevant in many domains including robotic phonotaxis and human-robot interaction. Typical approaches rely on processing time-difference-of-arrival cues obtained via multi-microphone arrays with Kalman or particle filters, or other computationally expensive algorithms. We present a novel bio-inspired solution to acoustic tracking that uses only two microphones. The system is based on a neural mechanism coupled with a model of the peripheral auditory system of lizards. The peripheral auditory model provides sound direction information which the neural mechanism uses to learn the target’s velocity via fast correlation-based unsupervised learning. Simulation results for tracking a pure tone acoustic target moving along a semi-circular trajectory validate our approach. Three different angular velocities in three separate trials were employed for the validation. A comparison with a Braitenberg vehicle-like steering strategy shows the improved performance of our learning-based approach.
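A much reduced sketch of the idea follows, under the assumption that a direction estimate arrives once per time step from the peripheral auditory model (here a hypothetical `get_direction` callback). It is a simplified correlation-style update, not the exact learning rule from the paper.

```python
import numpy as np

def track_velocity(get_direction, steps=500, eta=0.05, dt=1.0):
    """Adapt a single weight w, interpreted as the target's angular velocity,
    from successive sound-direction estimates."""
    w = 0.0
    theta_prev = get_direction(0)
    history = []
    for t in range(1, steps):
        theta = get_direction(t)              # direction cue from the ear model
        delta = (theta - theta_prev) / dt     # observed rate of change of direction
        w += eta * (delta - w)                # move w towards the observed rate
        theta_prev = theta
        history.append(w)
    return np.array(history)
```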
Simulation of Adaptive Behavior | 2010
Danish Shaikh; John Hallam; Jakob Christensen-Dalsgaard
The peripheral auditory system of a lizard is strongly directional. This directionality is created by acoustical coupling of the two eardrums and is strongly dependent on characteristics of the middle ear, such as the interaural distance and the resonance frequencies of the middle ear cavity and of the tympanum. Therefore, directionality should be strongly influenced by their scaling. In the present study, we have used an FPGA-based mobile robot implementing a model of the lizard ear to investigate the influence of scaling on the directional response, in terms of the robot’s performance in a phonotaxis task. The results clearly indicate that the model’s frequency response scales proportionally with the model parameters.
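The scaling effect can be illustrated with a generic second-order resonator rather than the full ear model: if the acoustic inertance and compliance of a lumped model both grow in proportion to a linear scale factor, the resonance frequency falls in inverse proportion. All numbers below are placeholders chosen only to land near the model's operating range, not measured parameters.

```python
import numpy as np

def resonance_frequency(inertance, compliance):
    # generic lumped second-order resonator
    return 1.0 / (2.0 * np.pi * np.sqrt(inertance * compliance))

base_inertance, base_compliance = 1.0e-3, 1.0e-5   # placeholder reference values

for k in (0.5, 1.0, 2.0):                          # linear scale factor on the ear
    f0 = resonance_frequency(k * base_inertance, k * base_compliance)
    print(f"scale {k}: resonance ~ {f0:.0f} Hz")   # smaller ears resonate higher
```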
International Symposium on Robotics | 2017
Danish Shaikh; Michael Kjær Schmidt
Three-dimensional acoustic localisation is relevant in personal and social robot platforms. Conventional approaches extract interaural time difference cues via impractically large, stationary two-dimensional multi-microphone grids with at least four microphones, or spectral cues via head-related transfer functions of stationary KEMAR dummy heads equipped with two microphones. We present a preliminary approach using two sound sensors, whose directed movements resolve the location of a stationary acoustic target in three dimensions. A model of the peripheral auditory system of lizards provides sound direction information in a single plane, which by itself is insufficient to localise the acoustic target in three dimensions. Rotating the sound sensors by −45° and +45° about the sagittal axis places this plane in two spatial orientations, generating a pair of measurements, each encoding the location of the acoustic target with respect to one plane of rotation. A multi-layer perceptron neural network is trained via supervised learning to translate the combination of the two measurements into an estimate of the relative location of the acoustic target in terms of its azimuth and elevation. The acoustic localisation performance of the system is evaluated in simulation for noiseless as well as noisy sinusoidal auditory signals with a 20 dB signal-to-noise ratio, for four different sound frequencies of 1450 Hz, 1650 Hz, 1850 Hz and 2050 Hz that span the response frequency range of the peripheral auditory model. Three different neural networks, with respectively one hidden layer of ten neurons, one hidden layer of twenty neurons and two hidden layers of ten neurons each, are comparatively evaluated. The neural networks are evaluated for varying locations of the acoustic target on the surface of the frontal spherical section in space defined by an azimuth and elevation range of [−90°, +90°], with a resolution of 1° in both planes.
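The supervised-learning step can be sketched with scikit-learn on synthetic data, as below. Here `simulate_measurements` is a hypothetical stand-in for the auditory model's response at the two sensor rotations, not the actual lizard-ear model, and the network mirrors the smallest of the three architectures evaluated above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_measurements(azimuth_deg, elevation_deg):
    # hypothetical placeholder: project the source direction onto the two
    # planes obtained by rotating the sensors by -45 and +45 degrees
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    m_minus = np.sin(az) * np.cos(el) - np.sin(el) / np.sqrt(2.0)
    m_plus = np.sin(az) * np.cos(el) + np.sin(el) / np.sqrt(2.0)
    return m_minus, m_plus

rng = np.random.default_rng(0)
angles = rng.uniform(-90.0, 90.0, size=(2000, 2))          # (azimuth, elevation) pairs
features = np.array([simulate_measurements(a, e) for a, e in angles])

# one hidden layer with ten neurons, trained to map measurement pairs to angles
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(features, angles)

print(net.predict([simulate_measurements(30.0, -20.0)]))   # estimated (azimuth, elevation)
```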
International Conference on Engineering Applications of Neural Networks | 2017
Danish Shaikh; Poramate Manoonpong
Reactive spatial robot navigation in goal-directed tasks such as phonotaxis requires generating consistent and stable trajectories towards an acoustic target while avoiding obstacles. High-level goal-directed steering behaviour can steer a robot towards the target by mapping sound direction information to appropriate wheel velocities. However, low-level obstacle avoidance behaviour based on distance sensors may significantly alter wheel velocities and temporarily direct the robot away from the sound source, creating conflict between the two behaviours. How can such a conflict in reactive controllers be resolved in a manner that generates consistent and stable robot trajectories? We propose a neural circuit that minimises this conflict by learning sensorimotor mappings as neuronal transfer functions between the perceived sound direction and the wheel velocities of a simulated non-holonomic mobile robot. These mappings constitute the high-level goal-directed steering behaviour. Sound direction information is obtained from a model of the lizard peripheral auditory system. The parameters of the transfer functions are learned via an online unsupervised correlation learning algorithm, through interactions with obstacles mediated by the low-level obstacle avoidance behaviour in the environment. The simulated robot is able to navigate towards a virtual sound source placed 3 m away that continuously emits a tone of frequency 2.2 kHz, while avoiding randomly placed obstacles in the environment. We demonstrate through two independent trials in simulation that, in both cases, the neural circuit produces consistent and stable trajectories compared to navigation without learning.
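The conflict-minimising update can be caricatured in a few lines. This is a simplified sketch under assumed linear mappings and gains, not the circuit used in the paper.

```python
def steering_step(w, sound_direction, avoidance_turn, eta=0.01, v_base=0.15):
    """One control step: the goal-directed turn is a learned linear mapping of
    the sound-direction cue, adapted by correlating the cue with the
    obstacle-avoidance correction."""
    goal_turn = w * sound_direction          # high-level goal-directed steering
    total_turn = goal_turn + avoidance_turn  # low-level avoidance adds its correction
    # correlation-style learning: persistent corrections that are correlated
    # with the cue reshape the mapping so that future conflicts shrink
    w += eta * sound_direction * avoidance_turn
    v_left, v_right = v_base - total_turn, v_base + total_turn
    return w, (v_left, v_right)
```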
Frontiers in Neurorobotics | 2017
Danish Shaikh; Poramate Manoonpong
Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we have developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information, obtained via specific motor behaviour, to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz, in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking.
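One way a learned angular velocity can bridge an occlusion is sketched below; it is simplified relative to the mechanism in the article, with a hypothetical `measurement` argument that is None while the stimulus is occluded.

```python
def perceive_step(theta_prev, w, measurement, eta=0.05):
    """Update the direction estimate theta and the learned angular velocity w."""
    if measurement is not None:                       # stimulus audible
        w += eta * ((measurement - theta_prev) - w)   # adapt velocity estimate
        theta = measurement
    else:                                             # stimulus occluded
        theta = theta_prev + w                        # extrapolate with learned velocity
    return theta, w
```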
CLAWAR 2017: 20th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines | 2017
Danish Shaikh; Poramate Manoonpong; Gervase Tuxworth; Leon Bodenhagen
Biological systems often combine cues from two different sensory modalities to execute goal-oriented sensorimotor tasks, which otherwise cannot be accurately executed with either sensory stream in isolation. When auditory cues alone are not sufficient to accurately localise an audio-visual target by orienting towards it, visual cues can complement their auditory counterparts and improve localisation accuracy. We present a multisensory goal-oriented locomotion control architecture that uses visual feedback to adaptively improve the acoustomotor orientation response of the hexapod robot AMOS II. The robot is tasked with localising an audio-visual target by turning towards it. The architecture extracts sound direction information with a model of the peripheral auditory system of lizards to modulate the locomotion control parameters driving the turning behaviour. The visual information adaptively changes the strength of the acoustomotor coupling to adjust the turning speed of the robot. Our experiments demonstrate improved orientation towards the audio-visual target, which emits a tone of frequency 2.2 kHz and is located at an angular offset of 45° from the robot.
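The adaptive coupling can be sketched as below; the update rule and gains are assumptions for illustration rather than the AMOS II controller itself.

```python
def orient_step(gain, sound_direction, visual_error, eta=0.02):
    """One orientation step: the turn speed is the auditory cue scaled by an
    acoustomotor coupling gain, and visual feedback adapts that gain."""
    turn_speed = gain * sound_direction
    # if the robot undershoots (visual error has the same sign as the cue)
    # the coupling is strengthened; opposite-signed error weakens it
    gain += eta * visual_error * sound_direction
    return gain, turn_speed
```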
Archive | 2008
David Brandt; Jørgen Christian Larsen; David Johan Christensen; Ricardo Franco Mendoza Garcia; Danish Shaikh; Ulrik Pagh Schultz; Kasper Stoy