Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mark Crowley is active.

Publication


Featured research published by Mark Crowley.


International Journal of Wildland Fire | 2013

Allowing a wildfire to burn: estimating the effect on future fire suppression costs

Rachel Houtman; Claire A. Montgomery; Aaron R. Gagnon; David E. Calkin; Thomas G. Dietterich; Sean McGregor; Mark Crowley

Where a legacy of aggressive wildland fire suppression has left forests in need of fuel reduction, allowing wildland fire to burn may provide fuel treatment benefits, thereby reducing suppression costs from subsequent fires. The least-cost-plus-net-value-change model of wildland fire economics includes benefits of wildfire in a framework for evaluating suppression options. In this study, we estimated one component of that benefit: the expected present value of the reduction in suppression costs for subsequent fires arising from the fuel treatment effect of a current fire. To that end, we employed Monte Carlo methods to generate a set of scenarios for subsequent fire ignition and weather events, which are referred to as sample paths, for a study area in central Oregon. We simulated fire on the landscape over a 100-year time horizon using existing models of fire behaviour, vegetation and fuels development, and suppression effectiveness, and we estimated suppression costs using an existing suppression cost model. Our estimates suggest that the potential cost savings may be substantial. Further research is needed to estimate the full least-cost-plus-net-value-change model. This line of research will extend the set of tools available for developing wildfire management plans for forested landscapes.

Additional keywords: bio-economic modelling, forest economics, forest fire policy, wildland fire management.
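The core estimation idea, sampling many scenarios ("sample paths") of subsequent fire ignitions and discounting the resulting cost differences back to a present value, can be sketched as follows. The probabilities, costs, discount rate, and function names here are hypothetical stand-ins, not the study's actual fire behaviour, fuels, or suppression cost models:

```python
import random

def present_value(costs, rate=0.04):
    """Discount a stream of annual costs back to year 0."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

def sample_path(horizon, ignition_prob, cost_if_fire, reduction):
    """One simulated scenario (sample path) of subsequent fire ignitions.

    `reduction` is the assumed fraction by which the current fire's
    fuel-treatment effect lowers suppression costs for later fires.
    """
    baseline, treated = [], []
    for _ in range(horizon):
        burned = random.random() < ignition_prob
        cost = cost_if_fire if burned else 0.0
        baseline.append(cost)
        treated.append(cost * (1 - reduction))
    # Savings on this path = PV(costs without treatment) - PV(costs with it).
    return present_value(baseline) - present_value(treated)

def expected_savings(n_paths=10_000, **kw):
    """Monte Carlo estimate of the expected present value of savings."""
    return sum(sample_path(**kw) for _ in range(n_paths)) / n_paths

random.seed(0)
savings = expected_savings(horizon=100, ignition_prob=0.02,
                           cost_if_fire=1_000_000, reduction=0.3)
```

With these toy numbers the expected annual saving is about $6,000, so the 100-year present value comes out near $150,000; the real study replaces every piece of this with landscape-scale simulators.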


IEEE Transactions on Computers | 2014

Using Equilibrium Policy Gradients for Spatiotemporal Planning in Forest Ecosystem Management

Mark Crowley

Spatiotemporal planning involves making choices at multiple locations in space over some planning horizon to maximize utility and satisfy various constraints. In Forest Ecosystem Management, the problem is to choose actions for thousands of locations each year including harvesting, treating trees for fire or pests, or doing nothing. The utility models could place value on sale of lumber, ecosystem sustainability or employment levels and incorporate legal and logistical constraints on actions such as avoiding large contiguous areas of clearcutting. Simulators developed by forestry researchers provide detailed dynamics but are generally inaccessible black boxes. We model spatiotemporal planning as a factored Markov decision process and present a policy gradient planning algorithm to optimize a stochastic spatial policy using simulated dynamics. It is common in environmental and resource planning to have actions at different locations be spatially interrelated; this makes representation and planning challenging. We define a global spatial policy in terms of interacting local policies defining distributions over actions at each location conditioned on actions at nearby locations. Markov chain Monte Carlo simulation is used to sample landscape policies and estimate their gradients. Evaluation is carried out on a forestry planning problem with 1,880 locations using a variety of value models and constraints.
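A minimal sketch of the sampling step described above: each location's local policy is a distribution over actions conditioned on its neighbours' current actions, and a Gibbs-style Markov chain Monte Carlo sweep draws a joint action map from the equilibrium of those local policies. The grid size, logit function, and penalty weight are hypothetical illustrations, not the paper's actual policy parameterization:

```python
import math
import random

ACTIONS = ["harvest", "treat", "nothing"]

def local_logits(action, neighbour_actions, theta):
    """Hypothetical local policy: discourage harvesting next to harvests,
    a stand-in for constraints such as limiting contiguous clearcuts."""
    same = sum(1 for n in neighbour_actions if n == action)
    penalty = 1.5 * same if action == "harvest" else 0.0
    return theta.get(action, 0.0) - penalty

def gibbs_sample(width, height, theta, sweeps=50, seed=0):
    """Draw one joint action map from the interacting local policies."""
    rng = random.Random(seed)
    state = {(x, y): rng.choice(ACTIONS)
             for x in range(width) for y in range(height)}

    def neighbours(x, y):
        cells = ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
        return [state[c] for c in cells if c in state]

    for _ in range(sweeps):
        for (x, y) in state:
            # Resample this location's action given its neighbours (softmax).
            logits = [local_logits(a, neighbours(x, y), theta) for a in ACTIONS]
            mx = max(logits)
            weights = [math.exp(l - mx) for l in logits]
            r, acc = rng.random() * sum(weights), 0.0
            for action, w in zip(ACTIONS, weights):
                acc += w
                if r <= acc:
                    state[(x, y)] = action
                    break
    return state

plan = gibbs_sample(8, 8, theta={"harvest": 0.5})
```

In the paper's algorithm such samples feed a policy gradient estimate; here the sweep only illustrates how local conditional policies induce a joint landscape policy.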


Western Canadian Conference on Computing Education | 2010

Circuits and logic in the lab: toward a coherent picture of computation

Elizabeth Ann Patitsas; Kimberly D. Voll; Mark Crowley; Steven A. Wolfman

We describe extensive modifications made over time to a first year computer science course at the University of British Columbia covering logic and digital circuits (among other topics). Smoothly integrating the hardware-based labs with the more theory-based lectures into a cohesive picture of computation has always been a challenge in this course. The seeming disconnect between implementation and abstraction has historically led to frustration and dissatisfaction among students. We describe changes to the lab curriculum, equipment logistics, the style of in-lab activities and evaluation. We have also made logistical changes to the management and ongoing training of teaching assistants, allowing us to better anchor our larger course story into the lab curriculum. These changes have greatly improved student and TA opinions of the lab experience, as well as the overall course.


Archive | 2011

Equilibrium policy gradients for spatiotemporal planning

Mark Crowley

In spatiotemporal planning, agents choose actions at multiple locations in space over some planning horizon to maximize their utility and satisfy various constraints. In forestry planning, for example, the problem is to choose actions for thousands of locations in the forest each year. The actions at each location could include harvesting trees, treating trees against disease and pests, or doing nothing. A utility model could place value on sale of forest products, ecosystem sustainability or employment levels, and could incorporate legal and logistical constraints such as avoiding large contiguous areas of clearcutting and managing road access. Planning requires a model of the dynamics. Existing simulators developed by forestry researchers can provide detailed models of the dynamics of a forest over time, but these simulators are often not designed for use in automated planning. This thesis presents spatiotemporal planning in terms of factored Markov decision processes. A policy gradient planning algorithm optimizes a stochastic spatial policy using existing simulators for the dynamics. When a planning problem includes spatial interaction between locations, deciding on an action to carry out at one location requires considering the actions performed at other locations. This spatial interdependence is common in forestry and other environmental planning problems and makes policy representation and planning challenging. We define a spatial policy in terms of local policies defined as distributions over actions at one location conditioned upon actions at other locations. A policy gradient planning algorithm using this spatial policy is presented which uses Markov chain Monte Carlo simulation to sample the landscape policy and estimate its gradient.


Archive | 2005

Shielding against conditioning side effects in graphical models

Mark Crowley

When modelling uncertain beliefs with graphical models, we are often presented with “natural” distributions that are hard to specify. An example is the distribution over which instructor is teaching a course when we know that someone must teach it. Such distributions over a set of nodes can be described easily if we condition on a child of these nodes as part of the specification. This conditioning is not an observation of a variable in the real world; rather, by fixing the value of the node, existing inference algorithms automatically perform the calculations needed to achieve the desired distribution. Unfortunately, although this achieves the goal, it has side effects that we claim are undesirable: it creates dependencies between other variables in the model. This can lead to different beliefs throughout the model, including at the constrained variables, than would otherwise be expected if the constraint is meant to be local in its effect. We describe the use of conditioning for these types of distributions and illuminate the problem of side effects, which has received little attention in the literature. We then present a method that still allows these distributions to be specified easily using conditioning, but counterbalances the side effects by adding other nodes to the network.
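The dependency-creating side effect of conditioning can be seen in a toy three-node network: conditioning on a shared child makes two otherwise independent parents dependent (the classic explaining-away effect). The network and probabilities below are illustrative only, not the paper's instructor example:

```python
from itertools import product

# Two independent binary causes A and B with a deterministic child C = A OR B.
p_a, p_b = 0.3, 0.3

def joint(a, b, c):
    """Joint probability P(A=a, B=b, C=c) of the toy network."""
    p_c = 1.0 if c == int(a or b) else 0.0
    return (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b) * p_c

def posterior_a(given_b=None, given_c=None):
    """P(A=1 | evidence) by brute-force enumeration."""
    num = den = 0.0
    for a, b, c in product([0, 1], repeat=3):
        if given_b is not None and b != given_b:
            continue
        if given_c is not None and c != given_c:
            continue
        p = joint(a, b, c)
        den += p
        if a:
            num += p
    return num / den

no_cond = posterior_a()                            # A alone: 0.3
with_c = posterior_a(given_c=1)                    # condition on the child
with_c_and_b = posterior_a(given_b=1, given_c=1)   # then also observe B
```

Without conditioning, observing B tells us nothing about A. Once C is conditioned on, `with_c` rises to 0.3/0.51 ≈ 0.588, and additionally observing B = 1 "explains away" A, dropping the belief back to 0.3: the conditioning has induced exactly the kind of non-local dependency the abstract describes.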


Canadian Conference on Artificial Intelligence | 2018

Decision Assist for Self-driving Cars

Sriram Ganapathi Subramanian; Jaspreet Singh Sambee; Benyamin Ghojogh; Mark Crowley

Research into self-driving cars has grown enormously in the last decade primarily due to the advances in the fields of machine intelligence and image processing. An under-appreciated aspect of self-driving cars is actively avoiding high traffic zones, low visibility zones, and routes with rough weather conditions by learning different conditions and making decisions based on trained experiences. This paper addresses this challenge by introducing a novel hierarchical structure for dynamic path planning and experiential learning for vehicles. A multistage system is proposed for detecting and compensating for weather, lighting, and traffic conditions as well as a novel adaptive path planning algorithm named Checked State A3C. This algorithm improves upon the existing A3C Reinforcement Learning (RL) algorithm by adding state memory which provides the ability to learn an adaptive model of the best decisions to take from experience.


Canadian Conference on Artificial Intelligence | 2018

Combining MCTS and A3C for Prediction of Spatially Spreading Processes in Forest Wildfire Settings.

Sriram Ganapathi Subramanian; Mark Crowley

In recent years, Deep Reinforcement Learning (RL) algorithms have shown super-human performance in a variety of Atari games and classic board games such as chess and Go. Research into applications of RL in other domains with spatial considerations, such as environmental planning, is still in its nascent stages. In this paper, we introduce a novel combination of the Monte-Carlo Tree Search (MCTS) and A3C algorithms, evaluated on an online simulator of a wildfire, on a pair of forest fires in Northern Alberta (the Fort McMurray and Richardson fires) and on historical Saskatchewan fires previously compared by others to a physics-based simulator. We conduct several experiments to predict fire spread for several days before and after the given spatial information of fire spread and ignition points. Our results show that the advances in Deep RL applications in the gaming world offer advantages for spatially spreading real-world problems like forest fires.


Frontiers in ICT | 2018

Using Spatial Reinforcement Learning to Build Forest Wildfire Dynamics Models From Satellite Images

Sriram Ganapathi Subramanian; Mark Crowley

Machine learning algorithms have increased tremendously in power in recent years but have yet to be fully utilized in many ecology and sustainable resource management domains such as wildlife reserve design, forest fire management and invasive species spread. One thing these domains have in common is that they contain dynamics that can be characterized as a Spatially Spreading Process (SSP), which requires many parameters to be set precisely to model the dynamics, spread rates and directional biases of the elements which are spreading. We present related work in Artificial Intelligence and Machine Learning for SSP sustainability domains, including forest wildfire prediction. We then introduce a novel approach for learning in SSP domains using Reinforcement Learning (RL), where fire is the agent at any cell in the landscape and the set of actions the fire can take from a location at any point in time includes spreading North, South, East, West or not spreading. This approach inverts the usual RL setup, since the dynamics of the corresponding Markov Decision Process (MDP) are a known function for immediate wildfire spread. Meanwhile, we learn an agent policy that serves as a predictive model of the dynamics of a complex spatially spreading process. Rewards are provided for correctly classifying which cells are on fire or not, compared to satellite and other related data. We examine the behaviour of five RL algorithms on this problem: Value Iteration, Policy Iteration, Q-Learning, Monte Carlo Tree Search and Asynchronous Advantage Actor-Critic (A3C). We compare to a Gaussian process based supervised learning approach and discuss the relation of our approach to manually constructed, state-of-the-art methods from forest wildfire modelling.
We validate our approach with satellite image data of two massive wildfire events in Northern Alberta, Canada: the Fort McMurray fire of 2016 and the Richardson fire of 2011. The results show that we can learn predictive, agent-based policies as models of spatial dynamics using RL on readily available satellite images, with many additional advantages in terms of generalizability and interpretability.
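The inverted setup described above can be sketched with tabular Q-learning on a toy grid: the fire is the agent, its actions spread it North, South, East or West (or keep it in place), and reward comes from matching the cells that were observed to burn. The grid, burn map, ignition point and hyperparameters are hypothetical; the paper works from satellite data and also evaluates the other listed algorithms:

```python
import random

ACTIONS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0), "stay": (0, 0)}

# Hypothetical 5x5 burn map (1 = cell observed burned in the reference data).
BURNED = {(x, y): int(x >= y) for x in range(5) for y in range(5)}

def step(cell, action):
    """Move the fire-agent; reward it for spreading into cells that burned."""
    dx, dy = ACTIONS[action]
    nxt = (min(max(cell[0] + dx, 0), 4), min(max(cell[1] + dy, 0), 4))
    return nxt, (1.0 if BURNED[nxt] else -1.0)

def q_learn(episodes=2000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over (cell, action) pairs."""
    rng = random.Random(seed)
    Q = {(c, a): 0.0 for c in BURNED for a in ACTIONS}
    for _ in range(episodes):
        cell = (0, 0)  # hypothetical ignition point
        for _ in range(20):
            if rng.random() < eps:
                a = rng.choice(list(ACTIONS))       # explore
            else:
                a = max(ACTIONS, key=lambda x: Q[(cell, x)])  # exploit
            nxt, r = step(cell, a)
            best_next = max(Q[(nxt, b)] for b in ACTIONS)
            Q[(cell, a)] += alpha * (r + gamma * best_next - Q[(cell, a)])
            cell = nxt
    return Q

Q = q_learn()
greedy = max(ACTIONS, key=lambda a: Q[((0, 0), a)])
```

After training, the greedy action at the ignition point spreads the fire into a cell that actually burned, which is the sense in which the learned policy acts as a predictive model of spread.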


Proceedings of SPIE | 2017

Application of probabilistically weighted graphs to image-based diagnosis of Alzheimer's disease using diffusion MRI

Syeda Maryam; Laura McCrackin; Mark Crowley; Yogesh Rathi; Oleg V. Michailovich

The world’s aging population has given rise to increasing awareness of neurodegenerative disorders, including Alzheimer’s Disease (AD). Treatment options for AD are currently limited, but it is believed that future success depends on our ability to detect the onset of the disease in its early stages. The most frequently used tools for this include neuropsychological assessments, along with genetic, proteomic, and image-based diagnosis. Recently, the applicability of Diffusion Magnetic Resonance Imaging (dMRI) analysis for early diagnosis of AD has also been reported. The sensitivity of dMRI to the microstructural organization of cerebral tissue makes it particularly well suited to detecting changes which are known to occur in the early stages of AD. Existing dMRI approaches can be divided into two broad categories: region-based and tract-based. In this work, we propose a new approach, which extends region-based approaches to the simultaneous characterization of multiple brain regions. Given a predefined set of features derived from dMRI data, we compute the probabilistic distances between different brain regions and treat the resulting connectivity pattern as an undirected, fully connected graph. The characteristics of this graph are then used as markers to discriminate between AD subjects and normal controls (NC). Although in this preliminary work we omit subjects in the prodromal stage of AD, mild cognitive impairment (MCI), our method demonstrates perfect separability between the AD and NC subject groups with a substantial margin, and thus holds promise for fine-grained stratification of NC, MCI and AD populations.
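A rough sketch of the graph construction: compute a distance between every pair of regions' dMRI-derived features, treat the result as a fully connected undirected graph, and summarize that graph as features for classification. The region names, feature values, and the Euclidean stand-in distance are illustrative only; the paper uses probabilistic distances whose exact form is not reproduced here:

```python
import math

# Hypothetical dMRI-derived feature vectors for a few brain regions.
regions = {
    "hippocampus":         [0.62, 0.18],
    "entorhinal_cortex":   [0.58, 0.22],
    "precuneus":           [0.71, 0.12],
    "posterior_cingulate": [0.69, 0.15],
}

def pairwise_distance(u, v):
    """Euclidean stand-in for the paper's probabilistic distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def connectivity_graph(regs):
    """Fully connected, undirected graph: one weighted edge per region pair."""
    names = sorted(regs)
    return {(a, b): pairwise_distance(regs[a], regs[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

def graph_markers(graph):
    """Simple graph characteristics usable as classification features."""
    weights = list(graph.values())
    return {"mean_weight": sum(weights) / len(weights),
            "max_weight": max(weights)}

g = connectivity_graph(regions)
markers = graph_markers(g)
```

With n regions the graph has n(n-1)/2 edges (here 6 for 4 regions); the resulting marker vector is what a downstream classifier would separate into AD and NC groups.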


Euromicro Conference on Real-Time Systems | 2016

Anomaly Detection Using Inter-Arrival Curves for Real-Time Systems

Mahmoud Salem; Mark Crowley; Sebastian Fischmeister

Real-time embedded systems are a significant class of applications, poised to grow even further as automated vehicles and the Internet of Things become a reality. An important problem for these systems is to detect anomalies during operation. Anomaly detection is a form of classification, which can be driven by data collected from the system at execution time. We propose inter-arrival curves as a novel analytic modelling technique for discrete event traces. Our approach relates to the existing technique of arrival curves and expands the technique to anomaly detection. Inter-arrival curves analyze the behaviour of events within a trace by providing upper and lower bounds to their inter-arrival occurrence. We exploit inter-arrival curves in a classification framework that detects deviations within these bounds for anomaly detection. Also, we show how inter-arrival curves act as good features to extract recurrent behaviour that these systems often exhibit. We demonstrate the feasibility and viability of the fully implemented approach with an industrial automotive case study (CAN traces) as well as a deployed aerospace case study (RTOS kernel traces).
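The bounds idea can be sketched as follows: from a normal trace, learn the minimum and maximum time spanned by k consecutive occurrences of an event, then flag any trace whose spans fall outside those bounds. The traces and window sizes below are hypothetical examples, not the paper's exact curve construction:

```python
def interarrival_curves(timestamps, max_k):
    """Lower/upper bounds on the time spanned by k consecutive occurrences."""
    lower, upper = {}, {}
    for k in range(2, max_k + 1):
        spans = [timestamps[i + k - 1] - timestamps[i]
                 for i in range(len(timestamps) - k + 1)]
        lower[k], upper[k] = min(spans), max(spans)
    return lower, upper

def is_anomalous(timestamps, lower, upper):
    """Flag a trace with any k-occurrence span outside the learned bounds."""
    for k in lower:
        for i in range(len(timestamps) - k + 1):
            span = timestamps[i + k - 1] - timestamps[i]
            if not (lower[k] <= span <= upper[k]):
                return True
    return False

# Train on a strictly periodic trace (e.g. a 10 ms recurrent task)...
normal = [t * 10 for t in range(50)]
lo, hi = interarrival_curves(normal, max_k=5)

# ...then test a trace in which one occurrence arrives 35 ms late.
delayed = [0, 10, 20, 65, 75]
```

The periodic trace stays within its own bounds, while the delayed trace violates the k = 2 bound (a 45 ms gap against a learned maximum of 10 ms) and is flagged, which is the classification step the paper builds its detector around.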

Collaboration


Dive into Mark Crowley's collaborations.

Top Co-Authors

David Poole, University of British Columbia

Elizabeth Ann Patitsas, University of British Columbia

John Nelson, University of British Columbia