Publication


Featured research published by Alexey Drutsa.


International World Wide Web Conference (WWW) | 2015

Future User Engagement Prediction and Its Application to Improve the Sensitivity of Online Experiments

Alexey Drutsa; Gleb Gusev; Pavel Serdyukov

Modern Internet companies improve their services by means of data-driven decisions based on online controlled experiments (also known as A/B tests). Running more online controlled experiments and obtaining statistically significant results faster are emerging needs for these companies, and the main way to meet them is to improve the sensitivity of A/B experiments. We propose a novel approach to improving the sensitivity of user engagement metrics (which are widely used in A/B tests) by utilizing predictions of the future behavior of an individual user. The problem of predicting the exact value of a user engagement metric is also novel and is studied in our work. We demonstrate the effectiveness of our sensitivity improvement approach on several real online experiments run at Yandex. In particular, we show how it can be used to detect the treatment effect of an A/B test faster at the same level of statistical significance.
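The core idea of the abstract above — replacing a noisy engagement metric with a per-user prediction of its future value so the comparison has less variance — can be illustrated with a minimal pure-Python simulation. All numbers and the oracle-style "predictor" below are illustrative assumptions, not the paper's model:

```python
import math
import random

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (mb - ma) / math.sqrt(va / na + vb / nb)

random.seed(0)
n, effect = 5000, 0.1  # users per arm, true treatment effect

def simulate(arm_shift):
    raw, predicted = [], []
    for _ in range(n):
        engagement = random.gauss(10.0, 0.5) + arm_shift    # user's true level
        raw.append(engagement + random.gauss(0, 3.0))       # noisy observed metric
        predicted.append(engagement + random.gauss(0, 0.3)) # less noisy prediction
    return raw, predicted

raw_a, pred_a = simulate(0.0)
raw_b, pred_b = simulate(effect)

t_raw = welch_t(raw_a, raw_b)
t_pred = welch_t(pred_a, pred_b)
print(f"t on raw metric:       {t_raw:.2f}")
print(f"t on predicted metric: {t_pred:.2f}")
```

The same small treatment effect produces a much larger t-statistic on the lower-variance predicted metric, which is exactly what "improved sensitivity" means here.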


Web Search and Data Mining (WSDM) | 2015

Engagement Periodicity in Search Engine Usage: Analysis and its Application to Search Quality Evaluation

Alexey Drutsa; Gleb Gusev; Pavel Serdyukov

Nowadays, billions of people use the Web in connection with their daily needs. A significant share of these needs consists of search tasks, which are usually addressed by search engines, so daily search needs result in regular user engagement with a search engine. User engagement with web sites and services has been studied in various aspects, but there appear to be no studies of its regularity and periodicity. In this paper, we study the periodicity of user engagement with a popular search engine by applying spectrum analysis to temporal sequences of different engagement metrics. We find periodicity patterns of user engagement and reveal classes of users whose periodicity patterns do not change over a long period of time. In addition, we use the spectrum series as metrics to evaluate search quality.
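The spectrum-analysis step can be sketched as a plain discrete Fourier transform over a user's daily activity series; the synthetic weekly pattern below is an illustrative assumption, not the paper's data:

```python
import math
import random

random.seed(1)
n_days = 98  # 14 weeks of synthetic daily session counts
series = [5.0 + 2.0 * math.cos(2 * math.pi * t / 7) + random.gauss(0, 0.3)
          for t in range(n_days)]

def dft_magnitudes(x):
    """Magnitude of the DFT at each frequency k = 1 .. n//2 (DC component excluded)."""
    n = len(x)
    mags = {}
    for k in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags[k] = math.hypot(re, im)
    return mags

mags = dft_magnitudes(series)
k_best = max(mags, key=mags.get)   # frequency bin with the largest amplitude
period = n_days / k_best
print(f"dominant period: {period:.1f} days")
```

The dominant spectral peak recovers the injected 7-day cycle; the paper's periodicity metrics are built from such amplitude (and phase) series per user.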


Conference on Information and Knowledge Management (CIKM) | 2015

Practical Aspects of Sensitivity in Online Experimentation with User Engagement Metrics

Alexey Drutsa; Anna Ufliand; Gleb Gusev

Online controlled experiments, e.g., A/B tests, are the state-of-the-art approach used by modern Internet companies to improve their services based on data-driven decisions. The most challenging problem is to define an appropriate online metric of user behavior, the so-called Overall Evaluation Criterion (OEC), which is both interpretable and sensitive. A typical OEC consists of a key metric and an evaluation statistic. The sensitivity of an OEC to the treatment effect of an A/B test is measured by a statistical significance test. We introduce the notion of an Overall Acceptance Criterion (OAC) that includes both components of an OEC and a statistical significance test. While existing studies on A/B tests mostly concentrate on the first component of an OAC, its key metric, we extensively study the latter two by comparing several statistics and several statistical tests with respect to user engagement metrics on hundreds of A/B experiments run on real users of Yandex. We discovered that applying the state-of-the-art Student's t-test to several main user engagement metrics may lead to an underestimation of the false-positive rate by an order of magnitude. We investigate both well-known and novel techniques to overcome this issue in practical settings. Finally, we propose the entropy and the quantiles as novel OECs that reflect the diversity and extreme cases of user engagement.
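One well-known way a t-test's nominal false-positive rate can be badly miscalibrated (a generic illustration, not the paper's specific metrics) is analyzing session-level observations when randomization happens at the user level, so sessions of the same user are correlated. An A/A simulation makes the miscalibration visible:

```python
import random

def t_stat(a, b):
    """Welch t-statistic, plain float arithmetic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((v - ma) ** 2 for v in a) / (na - 1)
    vb = sum((v - mb) ** 2 for v in b) / (nb - 1)
    return (mb - ma) / (va / na + vb / nb) ** 0.5

random.seed(2)
n_users, sessions_per_user, n_trials = 200, 20, 200
false_positives = 0
for _ in range(n_trials):
    a, b = [], []
    for _ in range(n_users):
        user_level = random.gauss(0, 1)  # per-user effect shared by all sessions
        sessions = [user_level + random.gauss(0, 1) for _ in range(sessions_per_user)]
        # A/A test: users split at random, no real treatment effect exists
        (a if random.random() < 0.5 else b).extend(sessions)
    if abs(t_stat(a, b)) > 1.96:  # nominal 5% two-sided threshold
        false_positives += 1

fpr = false_positives / n_trials
print(f"observed A/A false-positive rate: {fpr:.2f} (nominal 0.05)")
```

Because the session-level test ignores within-user correlation, it underestimates the variance of the difference and rejects far more often than the nominal 5%; this is the kind of calibration failure that A/A experiments are used to detect.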


Knowledge Discovery and Data Mining (KDD) | 2016

Boosted Decision Tree Regression Adjustment for Variance Reduction in Online Controlled Experiments

Alexey Poyarkov; Alexey Drutsa; Andrey Khalyavin; Gleb Gusev; Pavel Serdyukov

Nowadays, the development of most leading web services is controlled by online experiments that qualify and quantify the steady stream of their updates, reaching more than a thousand concurrent experiments per day. Despite the increasing need for running more experiments, these services are limited in their user traffic. This situation leads to the problem of finding a new, or improving an existing, key performance metric with higher sensitivity and lower variance. We focus on the problem of variance reduction for engagement metrics of user loyalty that are widely used in A/B testing of web services. We develop a general framework based on evaluating the mean difference between the actual and the approximated values of the key performance metric (instead of the mean of the metric itself). On the one hand, it allows us to incorporate the state-of-the-art techniques widely used in randomized experiments in clinical and social research, but rarely used in online evaluation. On the other hand, we propose a new class of methods based on advanced machine learning algorithms, including ensembles of decision trees, that, to the best of our knowledge, have not been applied earlier to the problem of variance reduction. We validate the variance reduction approaches on a very large set of real large-scale A/B experiments run at Yandex for different engagement metrics of user loyalty. Our best approach demonstrates 63% average variance reduction (which is equivalent to 63% saved user traffic) and detects the treatment effect in twice as many A/B experiments.
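The framework's "actual minus approximated" idea can be sketched with a simple linear adjustment on a pre-experiment covariate — a CUPED-style stand-in for the paper's boosted-tree predictor, on synthetic data:

```python
import random

def var(v):
    """Sample variance."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

random.seed(3)
n = 10_000
# pre-experiment activity (covariate) and in-experiment metric for each user
x = [random.gauss(10, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 1) for xi in x]  # metric correlated with covariate

# least-squares slope of y on x, defining the covariate-based approximation
mx = sum(x) / n
my = sum(y) / n
theta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)

# adjusted metric: actual value minus its predicted (approximated) part
y_adj = [yi - theta * (xi - mx) for xi, yi in zip(x, y)]

print(f"variance raw:      {var(y):.2f}")
print(f"variance adjusted: {var(y_adj):.2f}")
```

Centering the prediction at the covariate mean leaves the metric's mean unchanged, so the treatment-effect estimate is preserved while its variance shrinks; the paper replaces this linear predictor with gradient-boosted decision trees over many covariates.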


Knowledge Discovery and Data Mining (KDD) | 2015

Extreme States Distribution Decomposition Method for Search Engine Online Evaluation

Kirill Nikolaev; Alexey Drutsa; Ekaterina Gladkikh; Alexander Ulianov; Gleb Gusev; Pavel Serdyukov

Nowadays, the development of most leading web services is controlled by online experiments that qualify and quantify the steady stream of their updates. The challenging problem is to define an appropriate online metric of user behavior, the so-called Overall Evaluation Criterion (OEC), which is both interpretable and sensitive. The state-of-the-art approach is to choose a type of entities to observe in the behavior data, to define a key metric for these observations, and to estimate the average value of this metric over the observations in each of the system versions. A significant disadvantage of an OEC obtained in this way is that the average value of the key metric does not necessarily change even if its distribution changes significantly, because the difference between the mean values of the key metric over the two variants of the system does not necessarily reflect the character of the change in the distribution. We develop a novel method of quantifying the change in the distribution of the key metric that (1) is interpretable, (2) is based on the analysis of the two distributions as a whole, and, for this reason, (3) is sensitive to more of the ways the two distributions may actually differ. We provide a thorough theoretical analysis of our approach and show experimentally that, other things being equal, it produces a more sensitive OEC than the average.


International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) | 2015

Sign-Aware Periodicity Metrics of User Engagement for Online Search Quality Evaluation

Alexey Drutsa

Modern Internet companies improve the evaluation criteria of their data-driven decision-making, which is based on online controlled experiments (also known as A/B tests). The amplitude metrics of user engagement are known to be highly sensitive to service changes, but they cannot be used to determine whether the treatment effect is positive or negative. We propose to overcome this sign-agnostic issue by paying attention to the phase of the corresponding DFT sine wave. We refine the amplitude metrics of the first frequency with the phase ones and formalize our intuition in several novel overall evaluation criteria. These criteria are then verified over A/B experiments on real users of Yandex. We find that our approach retains the sensitivity level of the amplitudes and makes their changes sign-aware w.r.t. the treatment effect.


ACM Transactions on the Web | 2017

Periodicity in User Engagement with a Search Engine and Its Application to Online Controlled Experiments

Alexey Drutsa; Gleb Gusev; Pavel Serdyukov


Web Search and Data Mining (WSDM) | 2017

Learning Sensitive Combinations of A/B Test Metrics

Eugene Kharitonov; Alexey Drutsa; Pavel Serdyukov


International World Wide Web Conference (WWW) | 2017

Horizon-Independent Optimal Pricing in Repeated Auctions with Truthful and Strategic Buyers

Alexey Drutsa


Web Search and Data Mining (WSDM) | 2018

Consistent Transformation of Ratio Metrics for Efficient Online Controlled Experiments

Roman Budylin; Alexey Drutsa; Ilya Vladimirovich Katsev; Valeriya Tsoy
