
Publication


Featured research published by Maytal Saar-Tsechansky.


Machine Learning | 2004

Active Sampling for Class Probability Estimation and Ranking

Maytal Saar-Tsechansky; Foster Provost

In many cost-sensitive environments, class probability estimates are used by decision makers to evaluate the expected utility from a set of alternatives. Supervised learning can be used to build class probability estimates; however, it is often very costly to obtain training data with class labels. Active learning acquires data incrementally, at each phase identifying especially useful additional data for labeling, and can be used to economize on the examples needed for learning. We outline the critical features of an active learner and present a sampling-based active learning method for estimating class probabilities and class-based rankings. BOOTSTRAP-LV identifies particularly informative new data for learning based on the variance in probability estimates, and uses weighted sampling to account for a potential example's informative value for the rest of the input space. We show empirically that the method reduces the number of data items that must be obtained and labeled, across a wide variety of domains. We investigate the contribution of the components of the algorithm and show that each provides valuable information to help identify informative examples. We also compare BOOTSTRAP-LV with UNCERTAINTY SAMPLING, an existing active learning method designed to maximize classification accuracy. The results show that BOOTSTRAP-LV uses fewer examples to reach a given estimation accuracy, and provide insights into the behavior of the algorithms. Finally, we experiment with another new active sampling algorithm drawing from both UNCERTAINTY SAMPLING and BOOTSTRAP-LV and show that it is significantly more competitive with BOOTSTRAP-LV than UNCERTAINTY SAMPLING is. The analysis suggests more general implications for improving existing active sampling algorithms for classification.
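The variance-based selection at the heart of BOOTSTRAP-LV can be sketched roughly as follows. This is a simplified illustration rather than the paper's exact algorithm; the dataset, ensemble size, and batch size are arbitrary choices for the sake of a runnable example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, random_state=0)

# Small labeled seed set (10 per class); the rest is the unlabeled pool.
labeled = np.concatenate([np.where(y == 0)[0][:10], np.where(y == 1)[0][:10]])
pool = np.setdiff1d(np.arange(len(y)), labeled)

def local_variance_scores(X, y, labeled, pool, n_boot=10):
    """Variance of bootstrap probability estimates for each pool example."""
    probs = []
    for b in range(n_boot):
        idx = resample(labeled, stratify=y[labeled], random_state=b)
        clf = DecisionTreeClassifier(random_state=b).fit(X[idx], y[idx])
        probs.append(clf.predict_proba(X[pool])[:, 1])
    return np.var(np.stack(probs), axis=0)

# Weighted sampling: pool examples whose probability estimates vary more
# across the bootstrap models are more likely to be chosen for labeling.
scores = local_variance_scores(X, y, labeled, pool)
weights = (scores + 1e-12) / (scores + 1e-12).sum()
chosen = rng.choice(pool, size=5, replace=False, p=weights)
```

Sampling in proportion to the score, rather than greedily taking the top-scoring examples, is what lets a method like this account for an example's value relative to the rest of the input space.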


Management Science | 2009

Active Feature-Value Acquisition

Maytal Saar-Tsechansky; Prem Melville; Foster Provost

Most induction algorithms for building predictive models take as input training data in the form of feature vectors. Acquiring the values of features may be costly, and simply acquiring all values may be wasteful or prohibitively expensive. Active feature-value acquisition (AFA) selects features incrementally in an attempt to improve the predictive model most cost-effectively. This paper presents a framework for AFA based on estimating information value. Although straightforward in principle, estimations and approximations must be made to apply the framework in practice. We present an acquisition policy, sampled expected utility (SEU), that employs particular estimations to enable effective ranking of potential acquisitions in settings where relatively little information is available about the underlying domain. We then present experimental results showing that, compared with the policy of using representative sampling for feature acquisition, SEU reduces the cost of producing a model of a desired accuracy and exhibits consistent performance across domains. We also extend the framework to a more general modeling setting in which feature values as well as class labels are missing and are costly to acquire.


international conference on data mining | 2004

Active feature-value acquisition for classifier induction

Prem Melville; Maytal Saar-Tsechansky; Foster Provost; Raymond J. Mooney

Many induction problems include missing data that can be acquired at a cost. For building accurate predictive models, acquiring complete information for all instances is often expensive or unnecessary, while acquiring information for a random subset of instances may not be most effective. Active feature-value acquisition tries to reduce the cost of achieving a desired model accuracy by identifying instances for which obtaining complete information is most informative. We present an approach in which instances are selected for acquisition based on the current model's accuracy and its confidence in the prediction. Experimental results demonstrate that our approach can induce accurate models using substantially fewer feature-value acquisitions than alternative policies.


Machine Learning | 2013

A reinforcement learning approach to autonomous decision-making in smart electricity markets

Markus Peters; Wolfgang Ketter; Maytal Saar-Tsechansky; John Collins

The vision of a Smart Electric Grid relies critically on substantial advances in intelligent decentralized control mechanisms. We propose a novel class of autonomous broker agents for retail electricity trading that can operate in a wide range of Smart Electricity Markets, and that are capable of deriving long-term, profit-maximizing policies. Our brokers use Reinforcement Learning with function approximation; they can accommodate arbitrary economic signals from their environments, and they learn efficiently over the large state spaces resulting from these signals. We show how feature selection and regularization can be leveraged to automatically optimize brokers for particular market conditions, and demonstrate the performance of our design in extensive experiments using real-world energy market data.
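As a rough illustration of reinforcement learning with linear function approximation in a pricing setting, consider the toy sketch below. The one-feature "market" (a demand signal, a linear demand curve, and a fixed unit cost) is invented for the example and is unrelated to the paper's actual Smart Electricity Market simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy broker problem: each step the broker picks a price level (action)
# given a demand signal (state); reward is profit under a hypothetical
# linear demand curve. Q-learning with a linear approximator per action.
PRICES = [1.0, 2.0, 3.0]
w = np.zeros((len(PRICES), 2))  # one weight vector per action

def features(demand):
    return np.array([1.0, demand])  # bias + demand signal

def profit(demand, price):
    sold = max(0.0, demand - 0.5 * price)  # hypothetical demand curve
    return sold * (price - 0.8)            # margin over unit cost 0.8

alpha, gamma, eps = 0.05, 0.9, 0.1
demand = rng.uniform(0.5, 2.0)
for _ in range(20000):
    phi = features(demand)
    q = w @ phi
    # Epsilon-greedy action selection, then a TD update on the chosen
    # action's weight vector.
    a = int(rng.integers(len(PRICES))) if rng.random() < eps else int(np.argmax(q))
    r = profit(demand, PRICES[a])
    next_demand = rng.uniform(0.5, 2.0)
    target = r + gamma * np.max(w @ features(next_demand))
    w[a] += alpha * (target - q[a]) * phi
    demand = next_demand
```

After training, the learned weights encode a demand-dependent pricing policy: the broker prefers the low price when demand is weak and the middle price when demand is strong.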


international conference on data mining | 2005

An expected utility approach to active feature-value acquisition

Prem Melville; Maytal Saar-Tsechansky; Foster Provost; Raymond J. Mooney

In many classification tasks, training data have missing feature values that can be acquired at a cost. For building accurate predictive models, acquiring all missing values is often prohibitively expensive or unnecessary, while acquiring a random subset of feature values may not be most effective. The goal of active feature-value acquisition is to incrementally select feature values that are most cost-effective for improving the model's accuracy. We present an approach that acquires feature values for inducing a classification model based on an estimation of the expected improvement in model accuracy per unit cost. Experimental results demonstrate that our approach consistently reduces the cost of producing a model of a desired accuracy compared to random feature acquisitions.
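The expected-utility idea, accuracy gain per unit cost averaged over plausible values of the missing entry, can be sketched as follows. This is a toy illustration with invented data and a simplified "train on complete rows" policy, not the paper's estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Invented toy data: feature 1 is informative but missing (NaN) for
# some rows; each missing value can be bought at unit cost.
n = 200
x0, x1 = rng.normal(size=n), rng.normal(size=n)
y = (x0 + 2 * x1 > 0).astype(int)
X_obs = np.column_stack([x0, x1])
missing = rng.choice(n, size=40, replace=False)
X_obs[missing, 1] = np.nan

def model_accuracy(X_obs, y):
    # Simplifying assumption: train only on fully observed rows.
    mask = ~np.isnan(X_obs[:, 1])
    return cross_val_score(LogisticRegression(), X_obs[mask], y[mask], cv=3).mean()

def expected_utility(i, X_obs, y, cost=1.0, n_draws=5):
    """Estimated accuracy gain per unit cost of acquiring X_obs[i, 1].

    The true value is unknown, so the gain is averaged over plausible
    values drawn from the observed marginal of the feature.
    """
    observed = X_obs[~np.isnan(X_obs[:, 1]), 1]
    base = model_accuracy(X_obs, y)
    gains = []
    for v in rng.choice(observed, size=n_draws):
        X_try = X_obs.copy()
        X_try[i, 1] = v
        gains.append(model_accuracy(X_try, y) - base)
    return np.mean(gains) / cost

# Rank a few candidate acquisitions and buy the most promising one.
scores = {int(i): expected_utility(i, X_obs, y) for i in missing[:10]}
best = max(scores, key=scores.get)
```

The expense of retraining for every candidate and every imputed value is exactly why practical policies in this line of work resort to sampling and other approximations.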


Information Systems Research | 2007

Decision-Centric Active Learning of Binary-Outcome Models

Maytal Saar-Tsechansky; Foster Provost

It can be expensive to acquire the data required for businesses to employ data-driven predictive modeling---for example, to model consumer preferences to optimize targeting. Prior research has introduced “active-learning” policies for identifying data that are particularly useful for model induction, with the goal of decreasing the statistical error for a given acquisition cost (error-centric approaches). However, predictive models are used as part of a decision-making process, and costly improvements in model accuracy do not always result in better decisions. This paper introduces a new approach for active data acquisition that specifically targets decision making. The new decision-centric approach departs from traditional active learning by placing emphasis on acquisitions that are more likely to affect decision making. We describe two different types of decision-centric techniques. Next, using direct-marketing data, we compare various data-acquisition techniques. We demonstrate that strategies for reducing statistical error can be wasteful in a decision-making context, and show that one decision-centric technique in particular can improve targeting decisions significantly. We also show that this method is robust in the face of decreasing quality of utility estimations, eventually converging to uniform random sampling, and that it can be extended to situations where different data acquisitions have different costs. The results suggest that businesses should consider modifying their strategies for acquiring information through normal business transactions. For example, a firm such as Amazon.com that models consumer preferences for customized marketing may accelerate learning by proactively offering recommendations---not merely to induce immediate sales, but for improving recommendations in the future.


european conference on machine learning | 2005

Active learning for probability estimation using jensen-shannon divergence

Prem Melville; Stewart M. Yang; Maytal Saar-Tsechansky; Raymond J. Mooney

Active selection of good training examples is an important approach to reducing data-collection costs in machine learning; however, most existing methods focus on maximizing classification accuracy. In many applications, such as those with unequal misclassification costs, producing good class probability estimates (CPEs) is more important than optimizing classification accuracy. We introduce novel approaches to active learning based on the algorithms Bootstrap-LV and ActiveDecorate, by using Jensen-Shannon divergence (a similarity measure for probability distributions) to improve sample selection for optimizing CPEs. Comprehensive experimental results demonstrate the benefits of our approaches.
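The Jensen-Shannon divergence used as a disagreement score over an ensemble's probability estimates can be computed as the entropy of the mean distribution minus the mean entropy of the members. A minimal sketch:

```python
import numpy as np

def jensen_shannon(ps):
    """Generalized JS divergence of distributions ps (n_members x
    n_classes): entropy of the mean minus the mean of the entropies."""
    ps = np.asarray(ps, dtype=float)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    return entropy(ps.mean(axis=0)) - np.mean([entropy(p) for p in ps])

# Members that agree give a score near 0; members that disagree give a
# high score, flagging the example as informative for refining CPEs.
low = jensen_shannon([[0.9, 0.1], [0.9, 0.1], [0.9, 0.1]])
high = jensen_shannon([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
assert low < high
```

Unlike simple vote entropy, this score distinguishes members that disagree about the probabilities (not just the predicted class), which is what matters when the goal is good class probability estimates.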


knowledge discovery and data mining | 2005

Economical active feature-value acquisition through Expected Utility estimation

Prem Melville; Foster Provost; Maytal Saar-Tsechansky; Raymond J. Mooney

In many classification tasks, training data have missing feature values that can be acquired at a cost. For building accurate predictive models, acquiring all missing values is often prohibitively expensive or unnecessary, while acquiring a random subset of feature values may not be most effective. The goal of active feature-value acquisition is to incrementally select feature values that are most cost-effective for improving the model's accuracy. We present two policies, Sampled Expected Utility and Expected Utility-ES, that acquire feature values for inducing a classification model based on an estimation of the expected improvement in model accuracy per unit cost. A comparison of the two policies to each other and to alternative policies demonstrates that Sampled Expected Utility is preferable, as it effectively reduces the cost of producing a model of a desired accuracy and exhibits consistent performance across domains.


Data Mining and Knowledge Discovery | 2008

Guest editorial: special issue on utility-based data mining

Gary M. Weiss; Bianca Zadrozny; Maytal Saar-Tsechansky

Data mining has increasingly been employed in a variety of data-rich domains. As is the case for many new fields of study, at its inception data mining focused on simple scenarios for which methods such as classification, clustering and association mining could provide satisfactory answers. However, the real-world scenarios in which data-driven analysis can provide valuable insights are almost always more complex and entail different objectives than those commonly assumed by these data mining techniques. These complexities include opportunities to acquire additional data to improve induction or inference and to recommend decisions that optimize a domain-appropriate utility metric, such as profitability or return on investment. Indeed, as an applied field we should be concerned with how these complexities—and the deficiencies of current methodologies in taking them into account—pose significant limitations to broadening the adoption of the field and undermine its impact in practice. Utility-Based Data Mining (UBDM) addresses this challenge by taking into account the complex economic environments in which data mining occurs. Our use of the term utility corresponds to its use in economics and in our specific context corresponds to the total measure of satisfaction, or expected satisfaction, associated with the entire


international conference on electronic commerce | 2006

Adaptive mechanism design: a metalearning approach

David Pardoe; Peter Stone; Maytal Saar-Tsechansky; Kerem Tomak

Auction mechanism design has traditionally been a largely analytic process, relying on assumptions such as fully rational bidders. In practice, however, bidders often exhibit unknown and variable behavior, making them difficult to model and complicating the design process. To address this challenge, we explore the use of an adaptive auction mechanism: one that learns to adjust its parameters in response to past empirical bidder behavior so as to maximize an objective function such as auctioneer revenue. In this paper, we give an overview of our general approach and then present an instantiation in a specific auction scenario. In addition, we show how predictions of possible bidder behavior can be incorporated into the adaptive mechanism through a metalearning process. The approach is fully implemented and tested. Results indicate that the adaptive mechanism is able to outperform any single fixed mechanism, and that the addition of metalearning improves performance substantially.

Collaboration


Top co-authors of Maytal Saar-Tsechansky:

Peter Stone (University of Texas at Austin)
Raymond J. Mooney (University of Texas at Austin)
Wolfgang Ketter (Erasmus University Rotterdam)
Markus Peters (Erasmus University Rotterdam)
David Pardoe (University of Texas at Austin)