Publication


Featured research published by Ashiqur R. KhudaBukhsh.


international joint conference on artificial intelligence | 2009

SATenstein: automatically building local search SAT solvers from components

Ashiqur R. KhudaBukhsh; Lin Xu; Holger H. Hoos; Kevin Leyton-Brown

Designing high-performance algorithms for computationally hard problems is a difficult and often time-consuming task. In this work, we demonstrate that this task can be automated in the context of stochastic local search (SLS) solvers for the propositional satisfiability problem (SAT). We first introduce a generalised, highly parameterised solver framework, dubbed SATenstein, that includes components gleaned from or inspired by existing high-performance SLS algorithms for SAT. The parameters of SATenstein control the selection of components used in any specific instantiation and the behaviour of these components. SATenstein can be configured to instantiate a broad range of existing high-performance SLS-based SAT solvers, and also billions of novel algorithms. We used an automated algorithm configuration procedure to find instantiations of SATenstein that perform well on several well-known, challenging distributions of SAT instances. Overall, we consistently obtained significant improvements over the previously best-performing SLS algorithms, despite expending minimal manual effort.
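The core idea above, a single solver framework whose parameters select and tune components, can be illustrated with a toy stochastic local search solver. This is a hypothetical sketch, not SATenstein's actual code: the `heuristic` and `noise` parameters stand in for its far larger configuration space.

```python
import random

def num_unsat(clauses, assignment):
    """Count clauses with no satisfied literal (lit > 0 means var is true)."""
    return sum(
        not any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def sls_solve(clauses, n_vars, heuristic="greedy", noise=0.2,
              max_flips=10000, seed=0):
    """Toy parameterized SLS SAT solver: `heuristic` and `noise` select
    and tune components, loosely in the spirit of a SATenstein-style
    framework. Clauses use DIMACS-like integer literals."""
    rng = random.Random(seed)
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any((lit > 0) == assignment[abs(lit)] for lit in c)]
        if not unsat:
            return assignment  # satisfying assignment found
        clause = rng.choice(unsat)
        if heuristic == "walk" or rng.random() < noise:
            var = abs(rng.choice(clause))  # random-walk component
        else:
            # greedy component: flip the variable minimizing unsat clauses
            var = min((abs(lit) for lit in clause),
                      key=lambda v: num_unsat(
                          clauses, {**assignment, v: not assignment[v]}))
        assignment[var] = not assignment[var]
    return None  # give up after max_flips
```

Swapping `heuristic` or `noise` yields different solver behaviors from one codebase, which is the configuration-space idea the abstract describes at vastly larger scale.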


Artificial Intelligence | 2016

SATenstein: Automatically building local search SAT solvers from components

Ashiqur R. KhudaBukhsh; Lin Xu; Holger H. Hoos; Kevin Leyton-Brown

Designing high-performance solvers for computationally hard problems is a difficult and often time-consuming task. Although such design problems are traditionally solved by the application of human expertise, we argue instead for the use of automatic methods. In this work, we consider the design of stochastic local search (SLS) solvers for the propositional satisfiability problem (SAT). We first introduce a generalized, highly parameterized solver framework, dubbed SATenstein, that includes components drawn from or inspired by existing high-performance SLS algorithms for SAT. The parameters of SATenstein determine which components are selected and how these components behave; they allow SATenstein to instantiate many high-performance solvers previously proposed in the literature, along with trillions of novel solver strategies. We used an automated algorithm configuration procedure to find instantiations of SATenstein that perform well on several well-known, challenging distributions of SAT instances. Our experiments show that SATenstein solvers achieved dramatic performance improvements as compared to the previous state of the art in SLS algorithms; for many benchmark distributions, our new solvers also significantly outperformed all automatically tuned variants of previous state-of-the-art algorithms.


australasian joint conference on artificial intelligence | 2016

Proactive Skill Posting in Referral Networks

Ashiqur R. KhudaBukhsh; Jaime G. Carbonell; Peter J. Jansen

Distributed learning in expert referral networks is an emerging challenge at the intersection of Active Learning and Multi-Agent Reinforcement Learning, where experts, humans or automated agents, can either solve problems themselves or refer them to others with more appropriate expertise. Recent work demonstrated methods that can substantially improve the overall performance of a network and proposed a distributed referral-learning algorithm, DIEL (Distributed Interval Estimation Learning), for learning appropriate referral choices. This paper augments the learning setting with a proactive skill posting step in which experts can report some of their top skills to their colleagues. We found that in this new learning setting with meaningful priors, a modified algorithm, proactive-DIEL, initially performed much better and reached its maximum performance sooner than DIEL on the same data set used previously. Empirical evaluations show that the learning algorithm is robust to random noise in an expert's estimation of her own expertise, and that there is little advantage in misreporting skills when the rest of the experts report truthfully, i.e., the algorithm is near Bayesian-Nash incentive-compatible.
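The interval-estimation referral rule that the abstract builds on can be sketched as follows, with the proactive posting step modeled as informed priors on each colleague's success rate. Class and parameter names (`DIELReferral`, `z`) are illustrative assumptions, not the paper's implementation.

```python
import math

class DIELReferral:
    """Toy interval-estimation referral learner (DIEL-style sketch):
    refer each task to the colleague with the highest upper bound of a
    confidence interval on their observed success rate."""

    def __init__(self, colleagues, priors=None, z=1.96):
        self.z = z
        # proactive skill posting: reported skills seed the statistics;
        # without a posting, fall back to an uninformed prior of 0.5
        self.stats = {c: {"n": 1,
                          "mean": priors.get(c, 0.5) if priors else 0.5,
                          "m2": 0.25}
                      for c in colleagues}

    def choose(self):
        """Pick the colleague with the largest optimistic upper bound."""
        def upper(c):
            s = self.stats[c]
            var = s["m2"] / s["n"]
            return s["mean"] + self.z * math.sqrt(var / s["n"])
        return max(self.stats, key=upper)

    def update(self, colleague, reward):
        """Welford-style running mean/variance update after a referral."""
        s = self.stats[colleague]
        s["n"] += 1
        delta = reward - s["mean"]
        s["mean"] += delta / s["n"]
        s["m2"] += delta * (reward - s["mean"])
```

The upper bound shrinks as a colleague is sampled more, so weak colleagues are abandoned once their interval falls below a strong colleague's mean; a prior seeded by a posted skill lets that separation happen sooner.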


conference on information and knowledge management | 2015

Building Effective Query Classifiers: A Case Study in Self-harm Intent Detection

Ashiqur R. KhudaBukhsh; Paul N. Bennett; Ryen W. White

Query-based triggers play a crucial role in modern search systems, e.g., in deciding when to display direct answers on result pages. We address a common scenario in designing such triggers for real-world settings where positives are rare and search providers possess only a small seed set of positive examples from which to learn query classification models. We choose the critical domain of self-harm intent detection to demonstrate how such small seed sets can be expanded to create meaningful training data with a sizable fraction of positive examples. Our results show that our method finds substantially more positive queries than plain random sampling. Additionally, we explored how traditional active learning approaches affect classification performance and found that maximum-uncertainty sampling performed best among the techniques we considered.
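The maximum-uncertainty selection rule reported as best above can be sketched generically: label next whichever unlabeled query the current classifier is least sure about. `toy_proba` and `SEED_TERMS` are invented stand-ins for a real query classifier, not the paper's model.

```python
def max_uncertainty_sample(unlabeled, predict_proba, k=1):
    """Maximum-uncertainty active learning: pick the k unlabeled items
    whose predicted positive-class probability is closest to 0.5."""
    return sorted(unlabeled, key=lambda x: abs(predict_proba(x) - 0.5))[:k]

# Hypothetical stand-in classifier: fraction of seed keywords in the query.
SEED_TERMS = {"harm", "hurt", "myself", "pain"}

def toy_proba(query):
    words = set(query.lower().split())
    return len(words & SEED_TERMS) / len(SEED_TERMS)

queries = ["how to hurt myself", "weather in pittsburgh", "knee pain stretches"]
next_to_label = max_uncertainty_sample(queries, toy_proba, k=1)
```

Queries scored near 0 or 1 are already confidently classified, so labeling effort goes to the boundary cases, which is where a classifier trained from a small seed set gains the most.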


Journal of Intelligent Information Systems | 2018

Robust learning in expert networks: a comparative analysis

Ashiqur R. KhudaBukhsh; Jaime G. Carbonell; Peter J. Jansen

Human experts as well as autonomous agents in a referral network must decide whether to accept a task or refer it to a more appropriate expert, and if so to whom. In order for the referral network to improve over time, the experts must learn to estimate the topical expertise of other experts. This article extends concepts from Multi-Agent Reinforcement Learning and Active Learning to distributed learning in referral networks. Among a wide array of algorithms evaluated, Distributed Interval Estimation Learning (DIEL), based on Interval Estimation Learning, was found to be superior for learning appropriate referral choices, compared to ε-Greedy, Q-learning, Thompson Sampling and Upper Confidence Bound (UCB) methods. In addition to a synthetic data set, we compare the performance of the stronger learning-to-refer algorithms on a referral network of high-performance Stochastic Local Search (SLS) SAT solvers where expertise does not obey any known parameterized distribution. An evaluation of overall network performance and a robustness analysis are conducted across the learning algorithms, with an emphasis on capacity constraints and evolving networks, where experts with known expertise drop off and new experts of unknown performance enter, situations that arise in real-world scenarios but were heretofore ignored.
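Two of the selection rules compared in this line of work, ε-Greedy and UCB, can be sketched against the same running statistics. This is an illustrative multi-armed-bandit simulation, not the paper's referral setup; the names `eps_greedy`, `ucb1`, and `run` are assumptions.

```python
import math
import random

def eps_greedy(means, counts, rng, t, eps=0.1):
    """ε-Greedy: explore a random arm with probability eps, else exploit
    the arm with the best empirical mean (counts and t are unused here)."""
    if rng.random() < eps:
        return rng.randrange(len(means))
    return max(range(len(means)), key=lambda a: means[a])

def ucb1(means, counts, rng, t):
    """UCB1: pick the arm with the largest optimistic upper bound."""
    return max(range(len(means)),
               key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

def run(select, probs, steps=2000, seed=0):
    """Average reward of a selection rule on Bernoulli arms `probs`."""
    rng = random.Random(seed)
    n = len(probs)
    counts, means, total = [1] * n, [0.5] * n, 0.0
    for t in range(1, steps + 1):
        a = select(means, counts, rng, t)
        r = 1.0 if rng.random() < probs[a] else 0.0
        total += r
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
    return total / steps
```

Both rules share the exploit/explore trade-off; they differ in how exploration is scheduled, which is exactly the axis along which the comparative analysis distinguishes the learning-to-refer algorithms.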


Archive | 2018

Market-Aware Proactive Skill Posting

Ashiqur R. KhudaBukhsh; Jong Woo Hong; Jaime G. Carbonell

A referral network consists of experts, human or automated agents, with differential expertise across topics, who can redirect tasks to appropriate colleagues based on their topic-conditioned skills. Proactive skill posting is a setting in referral networks where agents are allowed a one-time local-network advertisement of a subset of their skills. Heretofore, while advertising expertise, experts only considered their own skills and reported their strongest ones. In practice, however, tasks can have varying difficulty levels, and reporting skills that are uncommon or rare may give an expert a relative advantage over others, and the network as a whole a better ability to solve problems. This work introduces market-aware proactive skill posting, where experts report a subset of their skills that gives them competitive advantages over their peers. Our proposed algorithm in this new setting, proactive-DIEL_Δ, outperforms the previous state of the art, proactive-DIEL_t, during the early learning phase, while retaining important properties such as tolerance to noisy self-skill estimates and robustness to evolving networks and strategic lying.


international symposium on methodologies for intelligent systems | 2017

Robust Learning in Expert Networks: A Comparative Analysis

Ashiqur R. KhudaBukhsh; Jaime G. Carbonell; Peter J. Jansen

Learning how to refer effectively in an expert-referral network is an emerging challenge at the intersection of Active Learning and Multi-Agent Reinforcement Learning. Distributed interval estimation learning (DIEL) was previously found to be promising for learning appropriate referral choices, compared to greedy and Q-learning methods. This paper extends these results in several directions: First, learning methods with several multi-armed bandit (MAB) algorithms are compared along with greedy variants, each optimized individually. Second, DIEL’s rapid performance gain in the early phase of learning proved equally convincing in the case of multi-hop referral, a condition not heretofore explored. Third, a robustness analysis across the learning algorithms, with an emphasis on capacity constraints and evolving networks (experts dropping out and new experts of unknown performance entering) shows rapid recovery. Fourth, the referral paradigm is successfully extended to teams of Stochastic Local Search (SLS) SAT solvers with different capabilities.


learning and intelligent optimization | 2016

Quantifying the Similarity of Algorithm Configurations

Lin Xu; Ashiqur R. KhudaBukhsh; Holger H. Hoos; Kevin Leyton-Brown

A natural way of attacking a new, computationally challenging problem is to find a novel way of combining design elements introduced in existing algorithms. For example, this approach was made systematic in SATenstein [15], a highly parameterized stochastic local search (SLS) framework for SAT that unifies techniques across a wide range of well-known SLS solvers. The focus of such work so far has been on building frameworks and identifying high-performing configurations. Here, we focus on analyzing such frameworks, a problem that currently requires considerable manual effort and domain expertise. We propose a quantitative alternative: a new metric that measures the similarity between a new configuration and previously known algorithm designs. We first introduce concept DAGs, a data structure that preserves the hierarchical structure of configurations induced by conditional parameter dependencies. We then quantify the degree of similarity between two configurations as the transformation cost between the respective concept DAGs. In the context of analyzing SATenstein configurations, we demonstrate that visualizations based on transformation costs can provide useful insights into the similarities and differences between existing SLS-based SAT solvers and novel solver configurations.
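The transformation-cost idea can be approximated on nested parameter configurations: count the parameter insertions, deletions, and value substitutions needed to turn one configuration into the other. This `transform_cost` is a deliberately simplified, hypothetical stand-in; the paper's metric operates on concept DAGs and also accounts for conditional-dependency structure.

```python
def transform_cost(a, b, ins=1.0, dele=1.0, sub=1.0):
    """Simplified transformation cost between two nested parameter
    configurations (dicts whose values are leaves or sub-dicts):
    sum the costs of inserting, deleting, and substituting parameters."""
    cost = 0.0
    for key in set(a) | set(b):
        if key not in a:
            cost += ins   # parameter present only in b
        elif key not in b:
            cost += dele  # parameter present only in a
        else:
            va, vb = a[key], b[key]
            if isinstance(va, dict) and isinstance(vb, dict):
                cost += transform_cost(va, vb, ins, dele, sub)  # recurse
            elif va != vb:
                cost += sub  # same parameter, different value
    return cost
```

A low cost between a tuned configuration and a known solver's configuration would suggest the tuned design is a minor variant; a high cost suggests a genuinely novel region of the design space, which is the kind of insight the visualization in the paper targets.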


EUMAS/AT | 2016

Proactive-DIEL in Evolving Referral Networks

Ashiqur R. KhudaBukhsh; Jaime G. Carbonell; Peter J. Jansen

Distributed learning in expert referral networks is a new Active Learning paradigm where experts, humans or automated agents, solve problems if they can or refer said problems to others with more appropriate expertise. Recent work augmented the basic learning-to-refer method with proactive skill posting, where experts may report their top skills to their colleagues, and proposed a modified algorithm, proactive-DIEL (Distributed Interval Estimation Learning), that takes advantage of such one-time posting instead of using an uninformed prior. This work extends the method in three main directions: (1) Proactive-DIEL is shown to work on a referral network of automated agents, namely SAT solvers; (2) Proactive-DIEL's reward mechanism is extended to another referral-learning algorithm, ε-Greedy, with some appropriate modifications; and (3) the method is shown to be robust with respect to evolving networks where experts join or drop off, requiring the learning method to recover referral expertise. In all cases the proposed method exhibits superiority to the state of the art.


national conference on artificial intelligence | 2014

Detecting Non-Adversarial Collusion in Crowdsourcing

Ashiqur R. KhudaBukhsh; Jaime G. Carbonell; Peter J. Jansen

Collaboration

Top co-authors of Ashiqur R. KhudaBukhsh:

- Peter J. Jansen (Carnegie Mellon University)
- Holger H. Hoos (University of British Columbia)
- Kevin Leyton-Brown (University of British Columbia)
- Lin Xu (University of British Columbia)