Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ali Nasir is active.

Publications


Featured research published by Ali Nasir.


Journal of Aerospace Information Systems | 2015

Human intent prediction using Markov decision processes

Catharine L. R. McGhan; Ali Nasir; Ella M. Atkins

This paper describes a modeling method for predicting a human's task-level intent through the use of Markov Decision Processes. Intent prediction can be used by a robot to improve decision-making when a human and a robot operate in a shared physical space. This work presumes human and robot goals are independent, such that the robot seeks to avoid interfering with the human rather than directly assisting the human. The proposed human intent prediction system transforms goal sequences the human is expected to complete, a limited past action history, and a correlation of observed behaviors with actions into a prediction of the in-progress or next action the human is most likely to take. An intra-vehicle activity space robotics application example is presented.
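
The paper's model details are not reproduced here, but the idea of combining an expected goal sequence, a limited action history, and an observation-to-action correlation into a next-action prediction can be sketched roughly as follows. The goal sequence, the scoring rule, and all function names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: rank the not-yet-completed steps of an assumed goal
# sequence, weighting earlier steps more heavily and scaling by how well the
# current observations match each candidate action.
def predict_next_action(goal_sequence, action_history, obs_likelihood):
    """Return the most likely in-progress or next human action.

    goal_sequence: ordered task steps the human is expected to complete (assumed known).
    action_history: limited window of recently completed actions.
    obs_likelihood: dict mapping action -> P(current observations | action).
    """
    remaining = [a for a in goal_sequence if a not in action_history]
    scores = {a: obs_likelihood.get(a, 1e-3) / (1 + k)   # earlier remaining steps get a higher prior
              for k, a in enumerate(remaining)}
    return max(scores, key=scores.get) if scores else None

steps = ["fetch_tool", "open_panel", "replace_unit", "close_panel"]
print(predict_next_action(steps, ["fetch_tool"], {"open_panel": 0.7, "replace_unit": 0.2}))
```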


AIAA Infotech at Aerospace Conference and Exhibit 2012 | 2012

Human Intent Prediction Using Markov Decision Processes.

Catharine L. R. McGhan; Ali Nasir; Ella M. Atkins

This paper describes a system for modeling human task-level intent through the use of Markov Decision Processes (MDPs). To maintain safety and efficiency during physically proximal human-robot collaboration, it is necessary for both human and robot to communicate or otherwise deconflict physical actions. Human-state-aware robot intelligence is necessary to facilitate this. However, physical action deconfliction without explicit communication requires a robot to estimate a human (or robotic) companion’s current action(s) and goal priorities, and then use this information to predict their intended future action sequence. Models tailored to a particular human can also enable online human intent prediction. We call the former a ‘simulated human’ model – one that is non-specific and generalized to statistical norms of human reaction obtained from human subject testing. The latter we call a ‘human matching’ model – one that attempts to produce the same output as a particular human subject, requiring online learning for improved accuracy. We propose the creation of ‘simulated human’ and ‘human matching’ models in this manuscript as a means for a robot to intelligently predict a human companion’s intended future actions. We develop a Human Intent Prediction (HIP) system, which can model human choice, to satisfy these needs. Given a time history of previous actions as input, this system predicts, for a robot’s task scheduling system, the most likely action a human agent will take next. Our HIP system is applied to an intra-vehicle activity (IVA) space robotics application. We use data from preliminary human subject testing to formulate and populate our models in an offline learning process that illustrates how the models can adapt to better predict intent as new training data is incorporated.
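
As a rough illustration of the offline-learning step for a 'human matching' model, the sketch below fits an action-transition model from recorded human-subject trials by frequency counting with additive smoothing. The data format, smoothing constant, and function name are assumptions; the paper's actual MDP construction is not shown here.

```python
from collections import defaultdict

def fit_transition_model(action_sequences, smoothing=1.0):
    """Estimate P(next_action | current_action) from recorded human-subject trials.

    action_sequences: list of per-trial action label sequences (assumed data format).
    smoothing: additive (Laplace) smoothing so unseen transitions keep a small mass.
    """
    counts = defaultdict(lambda: defaultdict(float))
    actions = set()
    for seq in action_sequences:
        actions.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    model = {}
    for a in actions:
        total = sum(counts[a].values()) + smoothing * len(actions)
        model[a] = {b: (counts[a][b] + smoothing) / total for b in actions}
    return model

# Refitting (or incrementally updating the counts) as new trials arrive is the
# sense in which prediction accuracy improves with additional training data.
trials = [["grasp", "unscrew", "stow"], ["grasp", "unscrew", "inspect"]]
model = fit_transition_model(trials)
print(max(model["grasp"], key=model["grasp"].get))   # most likely action after "grasp"
```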


AIAA Guidance, Navigation, and Control Conference | 2010

Fault Tolerance for Spacecraft Attitude Management

Ali Nasir; Ella M. Atkins

We present an autonomy architecture called Fault Tolerant Remote Agent that integrates symbolic reasoning from AI planning/scheduling with physics-based fault-tolerant control. Application to spacecraft attitude management in the presence of diverse failure classes is studied. We first review fault tolerance in AI and control-theoretic contexts and introduce an architecture in which the capabilities of each can be integrated into a more comprehensive fault management framework. We then present fault identification and reconfiguration algorithms for a spacecraft attitude control case study. Simulation results demonstrate good recovery by the spacecraft for situations in which controllability is not lost. These simulations also illustrate how logic-based and physics-based algorithms cooperatively achieve a more comprehensive fault management capability than would be possible with either algorithm class alone.
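
To make the division of labor concrete, the toy loop below pairs a rule-based (symbolic) fault classifier with a table of reconfigured controller settings, in the spirit of the architecture described above. The fault classes, symptom patterns, and gain values are invented for illustration and are not the paper's algorithms.

```python
# Illustrative two-layer fault-management step (not the paper's algorithms): a
# symbolic layer maps observed symptoms to a failure class, and a physics-based
# layer swaps in controller settings that remain valid for that class.
FAULT_RULES = {                       # symptom pattern -> failure class (assumed)
    ("wheel_current_zero",): "reaction_wheel_failure",
    ("gyro_bias_drift",): "gyro_degradation",
}

GAIN_TABLE = {                        # failure class -> reconfigured controller (assumed)
    "nominal": {"kp": 2.0, "kd": 0.8, "actuators": "reaction_wheels"},
    "reaction_wheel_failure": {"kp": 1.2, "kd": 0.5, "actuators": "thrusters"},
    "gyro_degradation": {"kp": 1.5, "kd": 1.0, "actuators": "reaction_wheels"},
}

def reconfigure(symptoms):
    """Classify the fault symbolically, then return the matching controller setup."""
    fault = FAULT_RULES.get(tuple(sorted(symptoms)), "nominal")
    return fault, GAIN_TABLE[fault]

print(reconfigure(["wheel_current_zero"]))
```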


AIAA Infotech at Aerospace Conference and Exhibit 2012 | 2012

A Mission Based Fault Reconfiguration Framework for Spacecraft Applications

Ali Nasir; Ella M. Atkins; Ilya V. Kolmanovsky

We present a Markov Decision Process (MDP) framework for computing post-fault reconfiguration policies that are optimal with respect to a discounted cost. Our cost function penalizes states that are unsuitable to achieve the remaining objectives of the given mission. The cost function also penalizes states where the necessary goal achievement actions cannot be executed. We incorporate probabilities of missed detections and false alarms for a given fault condition into our cost to encourage the selection of policies that minimize the likelihood of incorrect reconfiguration. To illustrate the implementation of our proposed framework, we present an example inspired by the Far Ultraviolet Spectroscopic Explorer (FUSE) spacecraft with a mission to collect scientific data from five targets. Using this example, we also demonstrate that there is a design tradeoff between safe operation and mission completion. Simulation results are presented to illustrate this tradeoff and show how it can be managed through the selection of optimization parameters.
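
A minimal numerical sketch of the kind of discounted-cost MDP solve described above is given below, with value iteration used as one standard solver. The three-state model, the cost values, and the way missed-detection and false-alarm probabilities are folded into the cost are placeholders, not the FUSE-inspired case study.

```python
import numpy as np

# Toy reconfiguration MDP (placeholder numbers, not the FUSE case study):
# states 0..2 are spacecraft configurations, actions 0..1 are reconfiguration choices.
n_s, gamma = 3, 0.95
P = np.array([                         # P[a, s, s']: transition probabilities
    [[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.3, 0.7]],
    [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],
])
# Base cost penalizes states unsuitable for the remaining mission objectives; as a
# crude stand-in, it is inflated by assumed missed-detection / false-alarm rates.
base_cost = np.array([0.0, 1.0, 5.0])
p_missed, p_false = 0.05, 0.02
cost = base_cost * (1 + p_missed) + p_false

V = np.zeros(n_s)
for _ in range(500):                   # value iteration on the discounted cost
    Q = cost[None, :] + gamma * P @ V  # Q[a, s]
    V = Q.min(axis=0)
policy = Q.argmin(axis=0)              # reconfiguration action to take in each state
print(V.round(2), policy)
```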


AIAA Infotech at Aerospace Conference and Exhibit 2011 | 2011

Conflict resolution algorithms for fault detection and diagnosis

Ali Nasir; Ella M. Atkins; Ilya V. Kolmanovsky

We present two approaches for conflict resolution between two fault detection schemes detecting the same fault, via optimization with bounded adjustment of detection thresholds. In our first method, we assume initially that there is no conflict and optimize the thresholds of both schemes with respect to a partial cost function that penalizes false alarms and missed detections. We then continuously update the thresholds based on a comprehensive cost function that penalizes conflicts in addition to false alarms and missed detections. Our updates are bounded and controlled in such a way that the cost function always assumes the lowest possible value as a function of the thresholds. We make use of residual signals to minimize computational complexity. In our second method, we present a more general solution to the conflict resolution problem using a Markov Decision Process framework that generates an optimal policy for the fault detection thresholds. This method is computationally more complex, but it is more general, does not require knowledge of residuals, and does not require initial optimization of the thresholds. We introduce an error signal that indicates failure to resolve the conflict through threshold updating, in which case a supervisor (human or computer) can be alerted and prompted to take corrective action. We implemented our methods on a spacecraft attitude control thruster-valve system simulation with high noise. Our results show good performance and a substantial reduction in conflicts under highly uncertain conditions.

Nomenclature:
a⁺_i = penalty weight for a missed detection of the fault by detection scheme i
a⁻_i = penalty weight for a false alarm of the fault by detection scheme i
b_i = binary flag indicating the presence of the fault detected by scheme i (depends on the thresholds and the input to the fault detection scheme)
v_i = threshold value for fault detection in scheme i
v*_i = optimal value of the threshold based on the receiver operating characteristics of detection scheme i
v̄_i = upper bound on the threshold value based on penalties in the cost function
v̲_i = lower bound on the threshold value based on penalties in the cost function
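
For intuition about the first method, the sketch below performs one bounded update of the two thresholds against a cost that penalizes false alarms, missed detections, and disagreement between the schemes. The quadratic proxy cost shapes, the coordinate-wise search, and all numerical values are assumptions made only for illustration.

```python
import numpy as np

def update_thresholds(v, residuals, a_plus, a_minus, c_conflict, v_lo, v_hi, step=0.01):
    """One bounded update of the two detection thresholds v = [v1, v2].

    residuals: current residual magnitudes seen by the two schemes.
    a_plus / a_minus: penalty weights for missed detections / false alarms.
    c_conflict: penalty weight applied when the two schemes disagree.
    """
    def cost(t):
        flags = residuals > t                   # b_i: scheme i declares a fault
        missed = a_plus * t ** 2                # miss rate grows with threshold (assumed proxy)
        false = a_minus * (v_hi - t) ** 2       # false-alarm rate grows as threshold drops (assumed proxy)
        conflict = c_conflict * float(flags[0] != flags[1])
        return float(np.sum(missed + false) + conflict)

    best = v.copy()
    for dv in ([step, 0], [-step, 0], [0, step], [0, -step]):
        cand = np.clip(v + np.array(dv), v_lo, v_hi)     # bounded, controlled adjustment
        if cost(cand) < cost(best):
            best = cand
    return best

v = np.array([0.50, 0.60])
print(update_thresholds(v, residuals=np.array([0.55, 0.58]),
                        a_plus=np.array([1.0, 1.0]), a_minus=np.array([0.5, 0.5]),
                        c_conflict=2.0, v_lo=np.array([0.1, 0.1]), v_hi=np.array([1.0, 1.0])))
```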


IFAC Proceedings Volumes | 2011

Science Optimal Spacecraft Attitude Maneuvering While Accounting for Failure Mode

Ali Nasir; Ella M. Atkins; Ilya V. Kolmanovsky

We present a framework to generate an optimal sequence of actions for spacecraft missions. Our framework is based on an application of the theory of Markov decision processes and stochastic dynamic programming. While this framework is general, we present our approach in the context of a specific spacecraft mission requiring spacecraft attitude control for pointing to collect science data from a number of celestial objects. Our generated sequences are optimal in the sense that the expected reward of the science data collected in the presence of possible failures is maximized.
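
A toy version of the stochastic dynamic program suggested by this abstract is sketched below: a backward recursion over a fixed target sequence in which each attempted slew can fail and a failure degrades subsequent success probabilities. The rewards, probabilities, and slew cost are invented for the example.

```python
from functools import lru_cache

rewards = (10.0, 6.0, 8.0)   # science value of each target, in visit order (assumed)
slew_cost = 1.0              # cost of attempting a slew (assumed)

@lru_cache(maxsize=None)
def expected_value(k, degraded):
    """Max expected remaining science reward from target k; `degraded` marks a prior failure."""
    if k == len(rewards):
        return 0.0
    p_ok = 0.85 if not degraded else 0.50       # success probability drops after a failure (assumed)
    skip = expected_value(k + 1, degraded)
    attempt = (-slew_cost
               + p_ok * (rewards[k] + expected_value(k + 1, degraded))
               + (1 - p_ok) * expected_value(k + 1, True))   # a failed attempt leaves the system degraded
    return max(skip, attempt)

print(expected_value(0, False))   # expected science return under the optimal attempt/skip sequence
```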


International Bhurban Conference on Applied Sciences and Technology | 2017

Optimal control for stochastic model of epidemic infections

Ali Nasir; Huma Rehman

This paper discusses the development of a discrete stochastic model for SIR epidemic infection and the calculation of the optimal control policy for the proposed model. Specifically, a Markov Decision Process based modeling approach is proposed, as opposed to traditional state-space modeling. The proposed model consists of a set of discrete states, actions, and transition probabilities. The selection of an optimality criterion for computing the optimal control policy is discussed. The behavior of the optimal policy and the tradeoffs involved in the selection of the optimality criterion are examined through a case study and graphical representations, respectively. Furthermore, the concept of scaling the population size is introduced in order to tackle large-scale problems.
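
As a rough picture of such a formulation, the sketch below builds a scaled-down (S, I) state space, treats the intervention level as the action, and runs value iteration. For brevity the transitions are collapsed to rounded expected dynamics rather than the full stochastic transition probabilities used in the paper, and every rate, cost, and the population scaling are assumed values.

```python
import numpy as np

N = 20                        # scaled-down population size (assumed)
actions = [0.0, 0.5]          # intervention levels that reduce the contact rate (assumed)
beta, rec_rate, discount = 0.3, 0.1, 0.95

def step(s, i, a):
    """Rounded expected SIR dynamics for one period (deterministic stand-in)."""
    new_inf = min(s, round(beta * (1 - a) * s * i / N))
    new_rec = min(i, round(rec_rate * i))
    return s - new_inf, i + new_inf - new_rec

states = [(s, i) for s in range(N + 1) for i in range(N + 1) if s + i <= N]
index = {st: k for k, st in enumerate(states)}
V = np.zeros(len(states))
for _ in range(300):          # value iteration over the discretized (S, I) state space
    V_new = np.empty_like(V)
    policy = {}
    for (s, i) in states:
        costs = []
        for a in actions:
            step_cost = i + 5.0 * a            # infection burden plus intervention cost (assumed)
            costs.append(step_cost + discount * V[index[step(s, i, a)]])
        V_new[index[(s, i)]] = min(costs)
        policy[(s, i)] = actions[int(np.argmin(costs))]
    V = V_new
print(policy[(15, 3)])        # recommended intervention for 15 susceptible, 3 infected
```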


International Journal of Electronics | 2017

Study of Piezoelectric Vibration Energy Harvester with non-linear conditioning circuit using an integrated model

Ali Manzoor; Sajid Rafique; Muhammad Usman Iftikhar; Khalid Mahmood Ul Hassan; Ali Nasir

A piezoelectric vibration energy harvester (PVEH) consists of a cantilever bimorph with piezoelectric layers pasted on its top and bottom, which can harvest power from vibrations and feed low-power wireless sensor nodes through a power conditioning circuit. In this paper, a non-linear conditioning circuit, consisting of a full-bridge rectifier followed by a buck–boost converter, is employed to investigate the electrical side of the energy harvesting system. An integrated mathematical model of the complete electromechanical system has been developed. Previously, researchers have studied PVEHs with sophisticated piezo-beam models but employed simplistic linear circuits, such as a resistor, as the electrical load. In contrast, other researchers have worked on more complex non-linear circuits but with over-simplified piezo-beam models. Such models neglect aspects of the system that result from complex interactions between its electrical and mechanical subsystems. In this work, the authors integrate the distributed-parameter piezo-beam model presented in the literature with a real-world non-linear electrical load. The developed integrated model is then employed to analyse the stability of the complete energy harvesting system. This work provides a more realistic and useful electromechanical model with a realistic non-linear electrical load, unlike the simplistic linear circuit elements employed by many researchers.
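
For readers who want a concrete starting point, the script below simulates a heavily simplified lumped-parameter, single-mode harvester with a plain resistive load, i.e. exactly the kind of simplification the paper argues against, rather than its distributed-parameter beam model with a rectifier and buck-boost converter. All parameter values are assumptions chosen only to make the example run.

```python
import numpy as np

# Single-mode piezoelectric harvester with a purely resistive load (simplified stand-in):
#   m*x'' + c*x' + k*x + theta*v = -m*a_base(t)        (mechanical)
#   Cp*v' + v/R = theta*x'                              (electrical)
m, c, k = 0.01, 0.05, 100.0       # modal mass [kg], damping [N s/m], stiffness [N/m] (assumed)
theta, Cp, R = 1e-3, 1e-7, 1e5    # coupling [N/V], capacitance [F], load resistance [ohm] (assumed)
dt, T = 1e-5, 0.5
omega = np.sqrt(k / m)            # excite near the short-circuit resonance

x = xd = v = 0.0
power = []
for n in range(int(T / dt)):
    a_base = 2.0 * np.sin(omega * n * dt)                  # base acceleration [m/s^2]
    xdd = (-c * xd - k * x - theta * v - m * a_base) / m   # mechanical equation
    vd = (theta * xd - v / R) / Cp                         # electrical equation
    x, xd, v = x + dt * xd, xd + dt * xdd, v + dt * vd     # explicit Euler step
    power.append(v * v / R)
print("mean harvested power [W]:", np.mean(power[len(power) // 2:]))
```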


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Robust Science-Optimal Spacecraft Control for Circular Orbit Missions

Ali Nasir; Ella M. Atkins; Ilya V. Kolmanovsky

This paper describes a Markov decision process approach to a robust spacecraft mission control policy that maximizes the expected value of the science reward, assuming a circular orbit. The control policy that governs mission steps can be computed off-board or on-board, depending upon the availability of communication bandwidth and on-board computational resources. This paper considers a sample science mission in which the spacecraft collects data from celestial objects viewable only within a certain orbit true anomaly window. Science data collection requires the spacecraft to slew its instrument(s) toward each target and continue pointing in the direction of the target while the spacecraft traverses its orbit. Robustness and stochastic optimization of the scientific reward are achieved at the cost of computational complexity. Approximate dynamic programming (ADP) is exploited to reduce the computational time and effort to manageable levels and to treat larger problem sizes. The proposed ADP algorithm partitions the state space based on true anomaly regions, enabling the grouping of adjacent science targets. Results of a simulation case study demonstrate that our proposed ADP approach performs quite well for reasonable ranges of key problem parameters.
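
The partitioning idea can be pictured with the small sketch below: targets are grouped by the true-anomaly region containing their viewing window, and each region is then handed to a per-region solver (here just a sort, standing in for a small MDP/DP solve). The targets, window sizes, and region width are placeholders.

```python
# Hypothetical targets: (name, true-anomaly viewing window in degrees, expected science reward)
targets = [("T1", (10, 40), 5.0), ("T2", (30, 70), 3.0),
           ("T3", (200, 240), 8.0), ("T4", (250, 300), 4.0)]

def partition_by_anomaly(targets, region_deg=90):
    """Group targets by the true-anomaly region containing their window midpoint."""
    regions = {}
    for name, (lo, hi), reward in targets:
        key = int(((lo + hi) / 2.0 % 360) // region_deg)
        regions.setdefault(key, []).append((name, (lo, hi), reward))
    return regions

def solve_region(region_targets):
    """Placeholder per-region solve (orders targets by window start); in the paper
    this role is played by a small subproblem solved over that region's states only."""
    return sorted(region_targets, key=lambda t: t[1][0])

regions = partition_by_anomaly(targets)
plan = [t[0] for key in sorted(regions) for t in solve_region(regions[key])]
print(plan)   # per-region plans concatenated around the orbit
```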


International Conference on Intelligent Systems | 2016

Incorporating artificial intelligence in shopping assistance robot using Markov Decision Process

Rida Gillani; Ali Nasir

There are many challenges involved in the realization of a shopping assistance robot (SAR). The specific challenge addressed in this paper is that of incorporating artificial intelligence, or decision-making capability, in such a robot. A Markov Decision Process (MDP) based formulation of the problem is presented for this purpose. The major advantage of the MDP-based approach over simple search-based artificial intelligence techniques is that it can incorporate uncertainty. The proposed MDP model has been solved for the optimal policy using the value iteration algorithm. Furthermore, it is shown how the reward function influences the structure of the resulting policy. The results show encouraging potential in the use of the MDP-based formulation for a SAR.
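
A compact value-iteration sketch in the spirit of this formulation is shown below for a one-dimensional aisle with noisy motion; it also illustrates the paper's point that the reward function shapes the resulting policy, since increasing the reward at one end moves the policy's switching point. The grid size, slip probability, and reward values are invented for the example.

```python
import numpy as np

# Tiny 1-D aisle: the robot moves left/right with slip noise; rewards sit at the ends.
n, gamma, p_slip = 6, 0.9, 0.1

def value_iteration(reward, iters=200):
    V = np.zeros(n)
    idx = np.arange(n)
    for _ in range(iters):
        Q = {}
        for a, d in (("left", -1), ("right", +1)):
            intended = np.clip(idx + d, 0, n - 1)          # where the action tries to go
            slipped = np.clip(idx - d, 0, n - 1)           # where slip noise sends the robot
            Q[a] = reward + gamma * ((1 - p_slip) * V[intended] + p_slip * V[slipped])
        V = np.maximum(Q["left"], Q["right"])
    return ["left" if Q["left"][s] >= Q["right"][s] else "right" for s in range(n)]

small_dock = np.array([2.0, 0, 0, 0, 0, 10.0])   # weak reward at the dock (state 0), customer at state 5
big_dock = np.array([8.0, 0, 0, 0, 0, 10.0])     # stronger dock reward
print(value_iteration(small_dock))               # heads for the customer from every state
print(value_iteration(big_dock))                 # states near the dock now prefer the dock
```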

Collaboration


Dive into Ali Nasir's collaboration.


Top Co-Authors

Rida Gillani, University of Central Punjab
Arooj Mobasher Butt, University of Central Punjab
Haleema Asif, University of Central Punjab
Huma Rehman, University of Central Punjab