Publication


Featured research published by Affan Shaukat.


IEEE Transactions on Human-Machine Systems | 2013

Characterizing Driver Intention via Hierarchical Perception–Action Modeling

David Windridge; Affan Shaukat; Erik Hollnagel

We seek a mechanism for the classification of the intentional behavior of a cognitive agent, specifically a driver, in terms of a psychological Perception-Action (P-A) model, such that the resulting system would be potentially suitable for use in intelligent driver assistance. P-A models of human intentionality assume that a cognitive agent's perceptual domain is learned in response to the outcome of the agent's actions rather than vice versa. In this way, the perceptual domain is maintained at an appropriate level of complexity in relation to the agent's embodied motor capabilities, greatly simplifying visual processing. A subsumptive P-A model further captures the hierarchical nature of the subtask structure implicit in human actions and assumes that a parallel hierarchical structuring exists within the perceptual domain. Adopting this model enables us to characterize intentions at each level of the P-A hierarchy in terms of a range of descriptors derived from the U.K. Highway Code by examining their correlation with driver gaze behavior. The problem of classifying intentions thus becomes one of reconciling high-level protocols (i.e., Highway Code rules) with low-level perceptual features. We perform a “proof-of-concept” assessment of the model by comparative evaluation of a number of logic-based methods (both stochastic and deductive) for carrying out this classification utilizing the control, signal, and motor inputs of an instrumented vehicle driven by a single driver, and find that a deductive model gives superior intentional classification performance due to the strongly protocol-governed nature of the driving environment.
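
To make the deductive step concrete, here is a minimal sketch of protocol-governed intention classification, assuming hypothetical feature extractors and hand-written Highway-Code-style rules rather than the paper's actual descriptors or rule set:

```python
# Minimal sketch of deductive intention classification. The VehicleState
# fields and the rules below are illustrative assumptions, not the
# descriptors or protocol rules used in the paper.
from dataclasses import dataclass

@dataclass
class VehicleState:
    indicator_left: bool        # turn-signal inputs
    indicator_right: bool
    brake_pressed: bool
    gaze_on_right_mirror: bool  # derived from driver gaze tracking
    steering_angle: float       # degrees, positive = right

def classify_intention(s: VehicleState) -> str:
    """Apply Highway-Code-style rules deductively (mirror -> signal ->
    manoeuvre), returning a high-level intention label."""
    if s.gaze_on_right_mirror and s.indicator_right:
        return "overtake/right-lane-change"
    if s.indicator_left and s.brake_pressed:
        return "left-turn"
    if s.brake_pressed and abs(s.steering_angle) < 2.0:
        return "stop/slow-down"
    return "lane-keeping"

print(classify_intention(VehicleState(False, True, False, True, 4.0)))
```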


IEEE Transactions on Systems, Man, and Cybernetics | 2013

A Framework for Hierarchical Perception–Action Learning Utilizing Fuzzy Reasoning

David Windridge; Michael Felsberg; Affan Shaukat

Perception-action (P-A) learning is an approach to cognitive system building that seeks to reduce the complexity associated with conventional environment-representation/action-planning approaches. Instead, actions are directly mapped onto the perceptual transitions that they bring about, eliminating the need for intermediate representation and significantly reducing training requirements. We here set out a very general learning framework for cognitive systems in which online learning of the P-A mapping may be conducted within a symbolic processing context, so that complex contextual reasoning can influence the P-A mapping. By utilizing a variational calculus approach to define a suitable objective function, the P-A mapping can be treated as an online learning problem via gradient descent using partial derivatives. Our central theoretical result is to demonstrate top-down modulation of low-level perceptual confidences via the Jacobian of the higher levels of a subsumptive P-A hierarchy. Thus, the separation of the Jacobian as a multiplying factor between levels within the objective function naturally enables the integration of abstract symbolic manipulation in the form of fuzzy deductive logic into the P-A mapping learning. We experimentally demonstrate that the resulting framework achieves significantly better accuracy than using P-A learning without top-down modulation. We also demonstrate that it permits novel forms of context-dependent multilevel P-A mapping, applying the mechanism in the context of an intelligent driver assistance system.
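
As a rough illustration of the top-down modulation described above, the block below sketches how the higher level's Jacobian enters the low-level gradient-descent update. The notation (objective E, low-level confidences c, higher-level mapping h, learning rate eta) is generic, not the paper's:

```latex
% Low-level confidence update modulated by the Jacobian J of the
% higher hierarchy level h(c); generic notation, not the paper's.
\[
\frac{\partial E}{\partial \mathbf{c}}
  = \frac{\partial E}{\partial \mathbf{h}}\,
    \underbrace{\frac{\partial \mathbf{h}}{\partial \mathbf{c}}}_{J(\mathbf{c})},
\qquad
\mathbf{c} \leftarrow \mathbf{c} - \eta\, \frac{\partial E}{\partial \mathbf{c}}
\]
```

The separation of J as a multiplying factor between levels is what allows symbolic (here, fuzzy-deductive) reasoning at the higher level to scale the low-level perceptual confidences during learning.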


Journal of Field Robotics | 2016

Planetary Monocular Simultaneous Localization and Mapping

Abhinav Bajpai; Guy Burroughes; Affan Shaukat; Yang Gao

Planetary monocular simultaneous localization and mapping (PM-SLAM), a modular, monocular SLAM system for use in planetary exploration, is presented. The approach incorporates a biologically inspired visual saliency model (i.e., semantic feature detection) for visual perception in order to improve robustness in the challenging operating environment of planetary exploration. A novel method of generating hybrid-salient features, using point-based descriptors to track the products of the visual saliency models, is introduced. The tracked features are used for rover and map state estimation using a SLAM filter, resulting in a system suitable for long-distance autonomous microrover navigation within the inherent hardware constraints of planetary rovers. Monocular images are used as the input to the system, as a major motivation is to reduce system complexity and optimize for microrover platforms. This paper sets out the various components of the modular SLAM system and then assesses their comparative performance using simulated data from the Planetary and Asteroid Natural Scene Generation Utility (PANGU), as well as real-world datasets from the West Wittering field trials performed by the STAR Lab and the SEEKER field trials in Chile performed by the European Space Agency. The system as a whole was shown to perform reliably, with the best performance observed using a combination of Hou saliency and speeded-up robust features (SURF) descriptors with an extended Kalman filter, which performed with higher accuracy than a state-of-the-art, independently optimized visual odometry localization system on a challenging real-world dataset.
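
The filtering stage of such a pipeline can be sketched as a standard predict/update loop; the linear Kalman skeleton below stands in for the paper's EKF, with placeholder matrices rather than PM-SLAM's actual motion and measurement models:

```python
# Generic Kalman predict/update skeleton standing in for the EKF used in
# PM-SLAM; F, Q, H, R are placeholder models, not the paper's.
import numpy as np

def kf_predict(x, P, u, F, Q):
    """Propagate the rover/map state with motion model F and control u
    (u assumed already mapped into state space)."""
    x = F @ x + u
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the state with measurements z of tracked hybrid-salient
    features (H maps state to expected measurements)."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```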


Sensors | 2016

Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis

Affan Shaukat; Peter C. Blacker; Conrad Spiteri; Yang Gao

In recent decades, terrain modelling and reconstruction techniques have attracted increasing research interest within field robotics, for precise short- and long-distance autonomous navigation, localisation and mapping. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper first reviews current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; it then proposes camera-LIDAR fusion as a feasible technique to overcome the limitations of either individual sensor for planetary exploration. A comprehensive analysis is presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.
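
A core step in any camera-LIDAR fusion scheme of this kind is projecting LIDAR returns into the camera image so that per-pixel appearance can be paired with range. The sketch below shows that step using assumed placeholder calibration matrices (K, T_cam_lidar); it illustrates the fusion geometry, not the paper's pipeline:

```python
# Projecting LIDAR points into a camera image; K and T_cam_lidar are
# hypothetical calibration values for illustration only.
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_cam_lidar = np.eye(4)                 # assumed LIDAR->camera extrinsics

def project_lidar_to_image(points_lidar: np.ndarray):
    """points_lidar: (N, 3) XYZ in the LIDAR frame. Returns (M, 2) pixel
    coordinates and (M,) depths for the points in front of the camera."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]   # keep points ahead of camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]            # perspective divide
    return uv, pts_cam[:, 2]
```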


IEEE Intelligent Systems | 2018

Autonomous nuclear waste management

Jonathan M. Aitken; Affan Shaukat; Elisa Cucco; Louise A. Dennis; Sandor M. Veres; Yang Gao; Michael Fisher; Jeff Kuo; Tom Robinson; Paul Mort

Redundant and nonoperational buildings at nuclear sites are decommissioned over a period of time. The process involves demolition of physical infrastructure, resulting in large quantities of residual waste material. The resulting waste materials are packed into import containers, containing either sealed canisters or assortments of miscellaneous objects, to be delivered for postprocessing. At present, postprocessing does not take place within the United Kingdom. Sellafield Ltd. and the National Nuclear Laboratory are developing a process for future operation so that, upon an initial inspection, imported waste materials undergo two stages of postprocessing before being packed into export containers, namely sort and segregate or sort and disrupt. The postprocessing facility will remotely treat and export a wide range of wastes before downstream encapsulation. Certain wastes require additional treatment, such as disruption, before export to ensure suitability for long-term disposal. This paper focuses on the design, development, and demonstration of a reconfigurable rational-agent-based robotic system that aims to highly automate these processes, removing the need for close human supervision. The proposed system is being demonstrated through a downsized, lab-based setup incorporating a small-scale robotic arm, a time-of-flight camera, and a high-level rational-agent-based decision-making and control framework.
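
As a toy illustration of the high-level choice the rational agent makes, the sketch below routes a detected item to one of the two postprocessing streams; the category names and rules are invented for illustration and are not Sellafield's actual classification criteria:

```python
# Toy routing decision for a waste item detected by the vision subsystem;
# categories and rules are invented, not Sellafield's actual criteria.
def plan_postprocessing(item: dict) -> str:
    """Choose a postprocessing route for one imported waste item."""
    if item["category"] == "sealed_canister":
        return "sort_and_segregate"
    if item["requires_disruption"]:   # e.g. unsuitable for disposal as-is
        return "sort_and_disrupt"
    return "sort_and_segregate"

print(plan_postprocessing({"category": "miscellaneous",
                           "requires_disruption": True}))
```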


Robotics and Autonomous Systems | 2017

Structure Augmented Monocular Saliency for Planetary Rovers

Conrad Spiteri; Affan Shaukat; Yang Gao

This paper proposes a novel object detection method based on the visual saliency model in order to reliably detect objects such as rocks from single monocular planetary images. The algorithm takes advantage of the relatively homogeneous and distinct albedos present in planetary environments such as Mars or the Moon to extract a Digital Terrain Model of a scene using photoclinometry. The Digital Terrain Model is then incorporated into a bottom-up visual saliency algorithm to augment objects that protrude out of the ground. This Structure Augmented Monocular Saliency (SAMS) algorithm improves the accuracy and reliability of detecting objects in a planetary environment, with no training requirements and with greater robustness and lower computational complexity than 3D saliency models. Comprehensive analysis of the proposed method is performed using three challenging benchmark datasets. The results show that the SAMS algorithm performs better than commonly used visual saliency models on the same datasets.
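
The structure-augmentation idea can be sketched in a few lines: weight a conventional bottom-up saliency map by a protrusion map derived from the photoclinometric Digital Terrain Model. The smoothing window and combination rule below are assumptions, not the published SAMS implementation:

```python
# Sketch of structure-augmented saliency; the ground estimator and the
# multiplicative combination are illustrative choices, not the SAMS code.
import numpy as np
from scipy.ndimage import uniform_filter

def structure_augmented_saliency(saliency: np.ndarray,
                                 dtm: np.ndarray) -> np.ndarray:
    """saliency, dtm: same-shape 2D arrays. Protrusion = local height
    above a smoothed ground estimate; boosts saliency of raised objects."""
    ground = uniform_filter(dtm, size=31)          # crude ground estimate
    protrusion = np.clip(dtm - ground, 0.0, None)  # height above ground
    protrusion /= protrusion.max() + 1e-9          # normalise to [0, 1]
    return saliency * (1.0 + protrusion)           # augment, don't replace
```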


Towards Autonomous Robotic Systems (TAROS) | 2016

Agent-based autonomous systems and abstraction engines: Theory meets practice

Louise A. Dennis; Jonathan M. Aitken; Joe Collenette; Elisa Cucco; Maryam Kamali; Owen McAree; Affan Shaukat; Katie Atkinson; Yang Gao; Sandor M. Veres; Michael Fisher

We report on experiences in the development of hybrid autonomous systems where high-level decisions are made by a rational agent. This rational agent interacts with other sub-systems via an abstraction engine. We describe three systems we have developed using the EASS BDI agent programming language and framework which supports this architecture. As a result of these experiences we recommend changes to the theoretical operational semantics that underpins the EASS framework and present a fourth implementation using the new semantics.
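
To illustrate the abstraction engine's role, the sketch below maps continuous telemetry onto the discrete predicates a BDI-style agent reasons over; the predicate names and thresholds are invented for illustration, and this is not the EASS API:

```python
# Toy abstraction engine: continuous sub-system data in, symbolic beliefs
# out. Predicates and thresholds are invented; not the EASS framework.
def abstract_percepts(sensors: dict) -> set:
    """Map raw telemetry to the symbolic beliefs the rational agent consumes."""
    beliefs = set()
    if sensors["battery_v"] < 11.0:
        beliefs.add("low_battery")
    if sensors["obstacle_range_m"] < 0.5:
        beliefs.add("obstacle_close")
    if sensors["localisation_error_m"] > 2.0:
        beliefs.add("localisation_degraded")
    return beliefs

print(abstract_percepts({"battery_v": 10.4,
                         "obstacle_range_m": 3.2,
                         "localisation_error_m": 0.3}))
```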


Archive | 2016

Self-Reconfiguring Robotic Framework Using Fuzzy and Ontological Decision Making

Affan Shaukat; Guy Burroughes; Yang Gao

Advanced automation requires complex robotic systems that are susceptible to mechanical, software and sensory failures. While bespoke solutions exist to avoid such situations, there is a need for a generic robotic framework that allows autonomous recovery from anomalous conditions through hardware or software reconfiguration. This paper presents a novel robotic architecture that combines fuzzy reasoning with ontology-based deliberative decision making to enable self-reconfigurability within a complex robotic system architecture. The fuzzy reasoning module incorporates multiple types of fuzzy inference models that passively monitor the constituent sub-systems for any anomalous changes. Based on this monitoring process, a response is generated and sent to an ontology-based rational agent in order to perform system reconfiguration. A reconfiguration routine is generated to maintain optimal performance within such complex architectures. The current work applies the proposed framework to the problem of autonomous visual navigation of unmanned ground vehicles. An increase in system performance is observed every time a reconfiguration routine is triggered. Experimental analysis is carried out using real-world data, concluding that the proposed system concept gives superior performance compared with non-reconfigurable robotic frameworks.
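
A minimal sketch of the passive fuzzy monitoring step is given below, assuming a triangular membership function and an invented sub-system signal; a high anomaly degree triggers a (stubbed) reconfiguration request to the rational agent:

```python
# Hand-rolled fuzzy monitor; membership shape, signal, and threshold are
# illustrative assumptions, not the paper's inference models.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def anomaly_degree(feature_track_ratio: float) -> float:
    """Degree to which visual navigation looks anomalous (low ratio of
    successfully tracked features)."""
    return tri(1.0 - feature_track_ratio, 0.2, 0.6, 1.0)

def monitor(feature_track_ratio: float) -> None:
    if anomaly_degree(feature_track_ratio) > 0.5:
        print("anomaly -> request reconfiguration from ontology agent")
    else:
        print("nominal")

monitor(0.25)   # poor tracking -> triggers a reconfiguration request
```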


Archive | 2013

Quasi-thematic feature detection and tracking for future rover long-distance autonomous navigation

Affan Shaukat; Conrad Spiteri; Yang Gao; S. Al-Milli; Abhinav Bajpai


Robotics and Autonomous Systems | 2016

Visual classification of waste material for nuclear decommissioning

Affan Shaukat; Yang Gao; Jeffrey A. Kuo; Bob A. Bowen; Paul Mort

Collaboration


Dive into Affan Shaukat's collaborations.

Top Co-Authors

Yang Gao
University of Surrey

Erik Hollnagel
University of Southern Denmark

Elisa Cucco
University of Liverpool