Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Daniel J. Lizotte is active.

Publication


Featured research published by Daniel J. Lizotte.


Machine Learning | 2011

Informing sequential clinical decision-making through reinforcement learning: an empirical study

Susan M. Shortreed; Eric B. Laber; Daniel J. Lizotte; T. Scott Stroup; Joelle Pineau; Susan A. Murphy

This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia.
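
As a concrete illustration of the backbone this kind of analysis builds on, here is a minimal sketch of two-stage Q-learning with linear function approximation, in Python. The simulated data, variable names, and model form are illustrative assumptions, not the paper's trial data or code:

```python
# A minimal sketch of two-stage Q-learning for sequential treatment data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
s1 = rng.normal(size=(n, 2))          # baseline patient features
a1 = rng.integers(0, 2, size=n)       # stage-1 treatment (binary)
s2 = s1 + rng.normal(size=(n, 2))     # follow-up features
a2 = rng.integers(0, 2, size=n)       # stage-2 treatment
y = s2[:, 0] + a2 * s2[:, 1] + rng.normal(size=n)  # final outcome

# Stage 2: regress the outcome on state, treatment, and their interaction.
X2 = np.column_stack([s2, a2, a2 * s2[:, 1]])
q2 = LinearRegression().fit(X2, y)

# Pseudo-outcome: the value of the *best* stage-2 action for each patient.
def q2_value(a):
    a = np.full(n, a)
    return q2.predict(np.column_stack([s2, a, a * s2[:, 1]]))
v2 = np.maximum(q2_value(0), q2_value(1))

# Stage 1: regress the pseudo-outcome on baseline state and treatment.
X1 = np.column_stack([s1, a1])
q1 = LinearRegression().fit(X1, v2)
print("Stage-1 treatment effect estimate:", q1.coef_[-1])
```

The max over stage-2 actions in the pseudo-outcome is exactly where the uncertainty-quantification challenges the abstract mentions arise.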


Electronic Journal of Statistics | 2014

Dynamic treatment regimes: Technical challenges and applications

Eric B. Laber; Daniel J. Lizotte; Min Qian; William E. Pelham; Susan A. Murphy

Dynamic treatment regimes are of growing interest across the clinical sciences because these regimes provide one way to operationalize and thus inform sequential personalized clinical decision making. Formally, a dynamic treatment regime is a sequence of decision rules, one per stage of clinical intervention. Each decision rule maps up-to-date patient information to a recommended treatment. We briefly review a variety of approaches for using data to construct the decision rules. We then review a critical inferential challenge that results from nonregularity, which often arises in this area. In particular, nonregularity arises in inference for parameters in the optimal dynamic treatment regime; the asymptotic (limiting) distributions of estimators are sensitive to local perturbations. We propose and evaluate a locally consistent Adaptive Confidence Interval (ACI) for the parameters of the optimal dynamic treatment regime. We use data from the Adaptive Pharmacological and Behavioral Treatments for Children with ADHD Trial as an illustrative example. We conclude by highlighting and discussing emerging theoretical problems in this area.
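
To see the nonregularity the abstract refers to, consider a toy simulation (ours, not the paper's): the value of a regime involves a max over treatment arms, and at a point where the arms are exactly tied, the plug-in estimator of that max is biased and non-normal even in the limit:

```python
# Toy illustration of nonregularity at a tied decision point.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 5000
est = np.empty(reps)
for r in range(reps):
    y0 = rng.normal(0.0, 1.0, n)   # outcomes under action 0
    y1 = rng.normal(0.0, 1.0, n)   # outcomes under action 1 (same mean: the nonregular point)
    est[r] = max(y0.mean(), y1.mean())   # plug-in estimate of max mean (truth is 0)

# sqrt(n) * (estimate - truth) is the max of two standard normals, not normal:
z = np.sqrt(n) * est
print("mean of sqrt(n)*estimate:", z.mean())   # positive bias, approx 1/sqrt(pi) ~ 0.56
```

Standard normal-theory or bootstrap intervals fail at and near such tied points, which is what motivates a locally consistent adaptive interval.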


Journal of Global Optimization | 2012

An experimental methodology for response surface optimization methods

Daniel J. Lizotte; Russell Greiner; Dale Schuurmans

Response surface methods, and global optimization techniques in general, are typically evaluated using a small number of standard synthetic test problems, in the hope that these are a good surrogate for real-world problems. We introduce a new, more rigorous methodology for evaluating global optimization techniques that is based on generating thousands of test functions and then evaluating algorithm performance on each one. The test functions are generated by sampling from a Gaussian process, which allows us to create a set of test functions that are interesting and diverse. They will have different numbers of modes, different maxima, etc., and yet they will be similar to each other in overall structure and level of difficulty. This approach allows for a much richer empirical evaluation of methods that is capable of revealing insights that would not be gained using a small set of test functions. To facilitate the development of large empirical studies for evaluating response surface methods, we introduce a dimension-independent measure of average test problem difficulty, and we introduce acquisition criteria that are invariant to vertical shifting and scaling of the objective function. We also use our experimental methodology to conduct a large empirical study of response surface methods. We investigate the influence of three properties—parameter estimation, exploration level, and gradient information—on the performance of response surface methods.
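
A minimal sketch of the evaluation protocol in Python: draw test functions from a Gaussian process prior on a grid, then score an optimizer's optimality gap on each draw. The RBF kernel, length scale, and the random-search baseline are illustrative choices, not the paper's settings:

```python
# Generate GP-sampled test functions and score an optimizer on each one.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.05**2)  # RBF kernel
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))          # jitter for stability

def sample_test_function():
    return L @ rng.normal(size=len(x))   # one draw from the GP prior

def random_search_gap(f, budget=20):
    idx = rng.choice(len(x), size=budget, replace=False)
    return f.max() - f[idx].max()        # optimality gap after spending the budget

gaps = [random_search_gap(sample_test_function()) for _ in range(1000)]
print("mean optimality gap of random search:", np.mean(gaps))
```

Because every draw comes from the same prior, the thousand test functions differ in their modes and maxima yet share an overall level of difficulty, which is the property the methodology exploits.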


Conference on Learning Theory | 2004

The Budgeted Multi-armed Bandit Problem

Omid Madani; Daniel J. Lizotte; Russell Greiner

The following coins problem is a version of a multi-armed bandit problem in which one has to select from among a set of objects, say classifiers, after an experimentation phase that is constrained by a time or cost budget. The question is how to spend the budget. The problem involves pure exploration only, differentiating it from typical multi-armed bandit problems involving an exploration/exploitation tradeoff [BF85]. It is an abstraction of the following scenarios: choosing from among a set of alternative treatments after a fixed number of clinical trials; determining the best parameter settings for a program given a deadline that allows only a fixed number of runs; or choosing a life partner in a bachelor/bachelorette TV show where time is limited. We are interested in the computational complexity of the coins problem and in efficient algorithms with approximation guarantees.
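
A minimal sketch of the problem setup in Python. The uniform-allocation strategy below is a simple baseline for illustration; the paper studies the complexity of the problem rather than prescribing this algorithm:

```python
# The budgeted "coins" setup: spend a fixed flip budget, then commit to one coin.
import numpy as np

rng = np.random.default_rng(3)
p = rng.uniform(size=10)        # unknown head probabilities of 10 coins
budget = 100                    # total number of flips allowed

# Uniform allocation: split the budget evenly, then pick the best empirical mean.
flips = budget // len(p)
means = [rng.binomial(flips, pi) / flips for pi in p]
chosen = int(np.argmax(means))
print("regret of chosen coin:", p.max() - p[chosen])
```

The interesting question is whether smarter, adaptive allocations of the same budget can reliably shrink that final regret.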


International Conference on Smart Grid Communications | 2012

On hourly home peak load prediction

Rayman Preet Singh; Peter Xiang Gao; Daniel J. Lizotte

The Ontario electrical grid is sized to meet peak electricity load. A reduction in peak load would allow deferring large infrastructural costs of additional power plants, thereby lowering generation cost and electricity prices. Proposed solutions for peak load reduction include demand response and storage. Both of these solutions require accurate prediction of a home's peak and mean load. Existing work has focused only on mean load prediction, and we find that these methods exhibit high error when predicting peak load. Moreover, a home's historic peak load and occupancy are better predictors of peak load than observable physical characteristics such as temperature and season. We explore the use of a Seasonal Autoregressive Moving Average (SARMA) model for peak load prediction and find that it has 30% lower root mean square error than the best known prior methods.
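
A minimal sketch of seasonal ARMA forecasting for hourly home load, using statsmodels' SARIMAX class as a stand-in for the paper's SARMA model. The model orders and the synthetic load series are illustrative assumptions:

```python
# Fit a seasonal ARMA-style model to hourly load and forecast the next day.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
hours = np.arange(24 * 14)                       # two weeks of hourly data
load = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24) + 0.1 * rng.normal(size=len(hours))

model = SARIMAX(load, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=24)                # next day, hour by hour

print("predicted peak hour:", int(np.argmax(forecast)))
print("predicted peak load:", float(np.max(forecast)))
```

The daily seasonal term (period 24) is what lets the model place the peak at the right hour rather than merely tracking the mean level.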


Machine Learning | 2014

Tracking people over time in 19th century Canada for longitudinal analysis

Luiza Antonie; Kris Inwood; Daniel J. Lizotte; J. Andrew Ross

Linking multiple databases to create longitudinal data is an important research problem with multiple applications. Longitudinal data allow analysts to perform studies that would otherwise be infeasible. We have linked historical census databases to create longitudinal data that allow tracking people over time. These longitudinal data have already been used by social scientists and historians to investigate historical trends and to address questions about society, history and economy; this comparative, systematic research would not be possible without the linked data. The goal of the linking is to identify the same person in multiple census collections. Data imprecision in historical census data and the lack of unique personal identifiers make this a challenging task. In this paper we design and employ a record linkage system that incorporates a supervised learning module for classifying pairs of records as matches and non-matches. We show that our system performs large-scale linkage, producing high-quality links and generating sufficient longitudinal data to allow meaningful social science studies. We demonstrate the impact of the longitudinal data through a study of economic changes in 19th century Canada.
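
A minimal sketch of the supervised classification step: turn a candidate pair of census records into comparison features and classify it as a match or non-match. The features, the tiny training set, and the 10-year census gap are illustrative stand-ins for the paper's system:

```python
# Classify candidate record pairs as matches or non-matches.
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def pair_features(rec_a, rec_b):
    name_sim = SequenceMatcher(None, rec_a["name"], rec_b["name"]).ratio()
    # Censuses assumed 10 years apart, so a true match should age ~10 years.
    age_gap = abs((rec_b["age"] - rec_a["age"]) - 10)
    return [name_sim, age_gap]

# Tiny hand-labelled training set (match=1, non-match=0) for illustration.
pairs = [
    ({"name": "John Smith", "age": 20}, {"name": "Jno. Smith", "age": 30}, 1),
    ({"name": "John Smith", "age": 20}, {"name": "James Brown", "age": 31}, 0),
    ({"name": "Mary Roy", "age": 41},  {"name": "Mary Roy", "age": 52}, 1),
    ({"name": "Mary Roy", "age": 41},  {"name": "Anne Roy", "age": 18}, 0),
]
X = [pair_features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = LogisticRegression().fit(X, y)

candidate = ({"name": "John Smith", "age": 20}, {"name": "John Smyth", "age": 29})
print("match probability:", clf.predict_proba([pair_features(*candidate)])[0, 1])
```

Fuzzy string similarity is what absorbs the data imprecision (abbreviations, transcription errors) that makes exact matching fail on historical records.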


Conference on Computational Natural Language Learning | 2006

Improved Large Margin Dependency Parsing via Local Constraints and Laplacian Regularization

Qin Iris Wang; Colin Cherry; Daniel J. Lizotte; Dale Schuurmans

We present an improved approach for learning dependency parsers from treebank data. Our technique is based on two ideas for improving large margin training in the context of dependency parsing. First, we incorporate local constraints that enforce the correctness of each individual link, rather than just scoring the global parse tree. Second, to cope with sparse data, we smooth the lexical parameters according to their underlying word similarities using Laplacian regularization. To demonstrate the benefits of our approach, we consider the problem of parsing Chinese treebank data using only lexical features, that is, without part-of-speech tags or grammatical categories. We achieve state-of-the-art performance, improving upon current large margin approaches.
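
To make the "local constraints" idea concrete, here is a small sketch in Python: rather than scoring only whole trees, require each correct head-modifier link to outscore every alternative head for the same modifier by a margin. The features, the toy sentence, and the perceptron-style update are illustrative; the paper solves a large margin (QP) formulation:

```python
# Margin training over individual dependency links (local constraints).
import numpy as np

def link_feats(head, mod, dim=16):
    # Deterministic stand-in for lexical features of a head-modifier pair.
    rng = np.random.default_rng(hash((head, mod)) % (2**32))
    return rng.normal(size=dim)

w = np.zeros(16)
sentence = ["<root>", "the", "dog", "barks"]
gold_heads = {1: 2, 2: 3, 3: 0}             # modifier index -> correct head index

for _ in range(10):                          # a few training epochs
    for mod, gold in gold_heads.items():
        for alt in range(len(sentence)):
            if alt == mod or alt == gold:
                continue
            margin = w @ link_feats(gold, mod) - w @ link_feats(alt, mod)
            if margin < 1.0:                 # local constraint violated
                w += link_feats(gold, mod) - link_feats(alt, mod)
```

Enforcing a margin per link yields many small, easily checked constraints instead of one constraint per exponentially large set of candidate trees.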


International Conference on Smart Grid Communications | 2013

Critiquing Time-of-Use pricing in Ontario

Adedamola Adepetu; Elnaz Rezaei; Daniel J. Lizotte; Srinivasan Keshav

Since 2006, with the progressive deployment of Advanced Metering Infrastructure, jurisdictions in the Canadian province of Ontario have been increasingly using Time-Of-Use (TOU) pricing with the objective of reducing the mean peak-to-average load ratio and thus excess generation capacity. We analyse the hourly aggregate load data to study whether the choice of TOU parameters (i.e., number of seasons, season start and end times, and choice of peak and off-peak times) adequately reflects the aggregate load, and whether TOU pricing has actually resulted in a decrease in the mean peak-to-average ratio. We find that since the introduction of TOU pricing, not only has the mean peak-to-average load ratio actually increased, but the currently implemented TOU parameters are also far from optimal. Based on our findings, we make concrete recommendations to improve the TOU pricing scheme in Ontario.
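
The paper's central quantity is easy to state in code. A minimal sketch of the peak-to-average ratio of an hourly load series, compared before and after a pricing change; the data here are made up for illustration:

```python
# Peak-to-average ratio (PAR) of an hourly aggregate load series.
import numpy as np

def peak_to_average(hourly_load):
    return float(np.max(hourly_load) / np.mean(hourly_load))

rng = np.random.default_rng(5)
pre_tou  = 10 + 3 * np.abs(rng.normal(size=24 * 365))   # one year, hourly
post_tou = 10 + 4 * np.abs(rng.normal(size=24 * 365))

print("pre-TOU PAR: ", peak_to_average(pre_tou))
print("post-TOU PAR:", peak_to_average(post_tou))
```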


Journal of the American Statistical Association | 2012

SMART Design Issues and the Consideration of Opposing Outcomes: Discussion of "Evaluation of Viable Dynamic Treatment Regimes in a Sequentially Randomized Trial of Advanced Prostate Cancer" by Wang, Rotnitzky, Lin, Millikan, and Thall.

Daniel Almirall; Daniel J. Lizotte; Susan A. Murphy

Sequential treatments, in which treatments are adapted over time based on the changing clinical status of the patient, are often necessary because treatment effects are heterogeneous across patients: not all patients will respond (similarly) to treatment, calling for changes in treatment to achieve an acute response or to place all patients on a positive health trajectory. Further, a treatment that is effective now for one patient may not work as well in the future for the same patient, again necessitating a sequence of treatments. Moreover, it is often necessary to balance benefits (e.g., symptom reduction) with burden (e.g., toxicity), a trade-off that may unfold over time. As a result, in clinical practice clinicians often find themselves implicitly or explicitly using a sequence of treatments with the goal of optimizing both short- and long-term outcomes, or, as may be the case in cancer treatment, to prevent death. Dynamic treatment regimes (DTRs) operationalize such sequential decision making. A DTR individualizes treatment over time via decision rules that specify whether, how, or when to alter the intensity, type, or delivery of treatment at critical clinical decision points. Sequential multiple assignment randomized trials (SMARTs), or equivalently, sequentially randomized trials, have been developed explicitly for the purpose of constructing proposals for high-quality DTRs.
In the article "Evaluation of Viable Dynamic Treatment Regimes in a Sequentially Randomized Trial of Advanced Prostate Cancer," Wang, Rotnitzky, Lin, Millikan, and Thall (2012, hereinafter WRLMT) provide an excellent and lucid re-analysis of data from a SMART study and both motivate and encourage a discussion about design and analysis issues around SMARTs. In our comment, we focus on two important ideas raised by WRLMT: (1) the design of SMARTs (as opposed to the analysis of SMARTs), and (2) the analysis of, and presentation of results based on, multiple outcomes.


International Conference on Machine Learning and Applications | 2012

Integrating Machine Learning Into a Medical Decision Support System to Address the Problem of Missing Patient Data

Atif Khan; John A. Doucette; Robin Cohen; Daniel J. Lizotte

In this paper, we present a framework that enables medical decision making in the presence of partial information. At its core is ontology-based automated reasoning; machine learning techniques are integrated to enhance existing patient datasets in order to address the issue of missing data. Our approach supports interoperability between different health information systems. We illustrate this with a sample implementation that combines three separate datasets (patient data, drug-drug interactions, and drug prescription rules) to demonstrate the effectiveness of our algorithms in producing effective medical decisions. In short, we demonstrate the potential for machine learning to support a task for which medical professionals have a critical need, by coping with missing or noisy patient data and enabling the use of multiple medical datasets.
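
A minimal sketch of the flow the abstract describes: a learned imputer fills in missing patient fields, and a rule then reasons over the completed record. The fields, the KNN imputer, and the toy rule are illustrative stand-ins, not the paper's ontology or datasets:

```python
# Impute missing patient data, then apply a prescription rule to the result.
import numpy as np
from sklearn.impute import KNNImputer

# Columns: age, systolic BP, creatinine. NaN marks a missing value.
patients = np.array([
    [70.0, 150.0, 1.9],
    [65.0, 145.0, np.nan],   # creatinine missing for the query patient
    [30.0, 118.0, 0.8],
    [72.0, 155.0, 2.1],
])
completed = KNNImputer(n_neighbors=2).fit_transform(patients)

# A toy prescription rule evaluated on the completed record.
age, bp, creat = completed[1]
if creat > 1.5:
    print("rule fired: avoid nephrotoxic drug (imputed creatinine %.2f)" % creat)
else:
    print("no contraindication found")
```

The point of the separation is that the reasoning layer never sees a hole in the record: the learning layer commits to a best estimate first.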

Collaboration

Top co-authors of Daniel J. Lizotte:

Tao Wang, University of Alberta
Rhiannon V. Rose, University of Western Ontario
William E. Pelham, Florida International University
Atif Khan, University of Waterloo