Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Rad Niazadeh is active.

Publication


Featured research published by Rad Niazadeh.


IEEE Transactions on Signal Processing | 2012

On the Achievability of Cramér–Rao Bound in Noisy Compressed Sensing

Rad Niazadeh; Massoud Babaie-Zadeh; Christian Jutten

Recently, it has been proved in Babadi et al. [B. Babadi, N. Kalouptsidis, and V. Tarokh, “Asymptotic achievability of the Cramér-Rao bound for noisy compressive sampling,” IEEE Trans. Signal Process., vol. 57, no. 3, pp. 1233-1236, 2009] that in noisy compressed sensing, a joint typical estimator can asymptotically achieve the Cramér-Rao lower bound of the problem. To prove this result, Babadi et al. used a lemma, provided in Akçakaya and Tarokh [M. Akçakaya and V. Tarokh, “Shannon theoretic limits on noisy compressive sampling,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 492-504, 2010], that comprises the main building block of the proof. This lemma is based on the assumption of Gaussianity of the measurement matrix and its randomness in the domain of noise. In this correspondence, we generalize the results of Babadi et al. by dropping the Gaussianity assumption on the measurement matrix. In fact, by treating the measurement matrix as deterministic in our analysis, we obtain a theorem similar to the main theorem of Babadi et al. for a family of randomly generated (but deterministic in the noise domain) measurement matrices that satisfy a generalized condition known as “the concentration of measures inequality.” We finally show that under our generalized assumptions, the Cramér-Rao bound of the estimation is achievable by the typical estimator introduced in Babadi et al.
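The benchmark in this abstract, the oracle (known-support) Cramér-Rao bound sigma^2 * tr((A_S^T A_S)^{-1}), can be checked numerically. The sketch below is my own illustration with hypothetical parameter choices, not the joint typicality estimator from the paper: it draws a Gaussian measurement matrix (one family satisfying the concentration-of-measure condition) and confirms that the oracle least-squares estimator meets the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, sigma = 50, 200, 5, 0.1   # measurements, ambient dim, sparsity, noise std

# A hypothetical measurement matrix with i.i.d. N(0, 1/m) entries.
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(k)

# Oracle Cramér-Rao bound (support known): sigma^2 * tr((A_S^T A_S)^{-1}).
A_S = A[:, support]
crb = sigma**2 * np.trace(np.linalg.inv(A_S.T @ A_S))

# The oracle least-squares estimator is unbiased, and its MSE equals the bound.
trials, sq_err = 2000, 0.0
for _ in range(trials):
    y = A @ x + sigma * rng.standard_normal(m)
    x_hat, *_ = np.linalg.lstsq(A_S, y, rcond=None)
    sq_err += float(np.sum((x_hat - x[support]) ** 2))
mse = sq_err / trials
print(crb, mse)  # the empirical MSE matches the oracle CRB
```

The harder question the paper addresses is achieving this bound without knowing the support; the snippet only computes the bound itself as a reference point.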


Foundations of Computer Science | 2015

Optimal Auctions vs. Anonymous Pricing

Saeed Alaei; Jason D. Hartline; Rad Niazadeh; Emmanouil Pountourakis; Yang Yuan

For selling a single item to agents with independent but non-identically distributed values, the revenue-optimal auction is complex. Relative to this benchmark, Hartline and Roughgarden showed that the approximation factor of the second-price auction with an anonymous reserve is between two and four. We consider the more demanding problem of approximating the revenue of the ex ante relaxation of the auction problem by posting an anonymous price (while supplies last) and prove that their worst-case ratio is e. As a corollary, the upper bound of anonymous pricing or anonymous reserves versus the optimal auction improves from four to e. We conclude that, up to an e factor, discrimination and simultaneity are unimportant for driving revenue in single-item auctions.
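The two quantities compared in this abstract can be computed on a small example. The sketch below is a hypothetical instance of my own (uniform value distributions; the greedy water-filling is a simplification that is valid here because the revenue curves are concave), not the paper's analysis: it evaluates the best anonymous posted price against the ex ante relaxation.

```python
import numpy as np

# Hypothetical instance: buyer i draws an independent value ~ Uniform[0, h_i].
h = np.array([1.0, 2.0, 4.0, 8.0])

def cdf(i, p):
    """CDF of buyer i's value distribution, Uniform[0, h_i]."""
    return np.clip(p / h[i], 0.0, 1.0)

# Anonymous pricing: revenue(p) = p * Pr[at least one buyer's value >= p].
grid = np.linspace(0.0, h.max(), 4001)
anon_rev = max(p * (1.0 - np.prod([cdf(i, p) for i in range(len(h))]))
               for p in grid)

# Ex ante relaxation: maximize sum_i q_i * v_i(q_i) subject to sum_i q_i <= 1,
# where v_i(q) = F_i^{-1}(1 - q) = h_i * (1 - q) for uniform values.  The
# revenue curves q * h_i * (1 - q) are concave, so greedily assigning small
# increments of probability mass to the highest marginal revenue
# h_i * (1 - 2 q_i) is approximately optimal.
steps = 10000
dq = 1.0 / steps
q = np.zeros(len(h))
for _ in range(steps):
    marginal = h * (1.0 - 2.0 * q)
    i = int(np.argmax(marginal))
    if marginal[i] <= 0.0:
        break
    q[i] += dq
ex_ante = float(np.sum(q * h * (1.0 - q)))

print(anon_rev, ex_ante, ex_ante / anon_rev)
```

On this instance the ratio comes out well below the worst-case bound of e established in the paper.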


Symposium on the Theory of Computing | 2015

Secretary Problems with Non-Uniform Arrival Order

Thomas Kesselheim; Robert Kleinberg; Rad Niazadeh

For a number of problems in the theory of online algorithms, it is known that the assumption that elements arrive in uniformly random order enables the design of algorithms with much better performance guarantees than under worst-case assumptions. The quintessential example of this phenomenon is the secretary problem, in which an algorithm attempts to stop a sequence at the moment it observes the maximum value in the sequence. As is well known, if the sequence is presented in uniformly random order there is an algorithm that succeeds with probability 1/e, whereas no non-trivial performance guarantee is possible if the elements arrive in worst-case order. In many of the applications of online algorithms, it is reasonable to assume there is some randomness in the input sequence, but unreasonable to assume that the arrival ordering is uniformly random. This work initiates an investigation into relaxations of the random-ordering hypothesis in online algorithms, by focusing on the secretary problem and asking what performance guarantees one can prove under relaxed assumptions. Toward this end, we present two sets of properties of distributions over permutations as sufficient conditions, called the (p,q,δ)-block-independence property and the (k,δ)-uniform-induced-ordering property. We show these two are asymptotically equivalent by borrowing some techniques from approximation theory. Moreover, we show they both imply the existence of secretary algorithms with constant probability of correct selection, approaching the optimal constant 1/e as the related parameters of the property tend towards their extreme values. Both of these properties are significantly weaker than the usual assumption of uniform randomness; we substantiate this by providing several constructions of distributions that satisfy (p,q,δ)-block-independence. As one application of our investigation, we prove that Θ(log log n) is the minimum entropy of any permutation distribution that permits constant probability of correct selection in the secretary problem with n elements. While our block-independence condition is sufficient for constant probability of correct selection, it is not necessary; however, we present complexity-theoretic evidence that no simple necessary and sufficient criterion exists. Finally, we explore the extent to which the performance guarantees of other algorithms are preserved when one relaxes the uniform random ordering assumption to (p,q,δ)-block-independence, obtaining a negative result for the weighted bipartite matching algorithm of Korula and Pal.


International Conference on Latent Variable Analysis and Signal Separation | 2010

An alternating minimization method for sparse channel estimation

Rad Niazadeh; Massoud Babaie-Zadeh; Christian Jutten

The problem of estimating a sparse channel, i.e., a channel with a few non-zero taps, appears in many fields of communication, including acoustic underwater and wireless transmissions. In this paper, we develop an algorithm based on an iterative alternating minimization technique that iteratively detects the locations and values of the channel taps. At each iteration, an approximate Maximum A Posteriori (MAP) scheme detects the taps, while a least-squares method estimates their values. For approximate MAP detection, we propose three different methods, leading to three variants of our algorithm. Finally, we experimentally compare the new algorithms to the Cramér-Rao lower bound of estimation based on knowing the locations of the taps. We show that with appropriate initialization, one of the variants almost reaches the Cramér-Rao bound at high SNR, while the others always achieve good performance.


Workshop on Internet and Network Economics | 2014

Simple and Near-Optimal Mechanisms for Market Intermediation

Rad Niazadeh; Yang Yuan; Robert Kleinberg

A prevalent market structure in the Internet economy consists of buyers and sellers connected by a platform (such as Amazon or eBay) that acts as an intermediary and keeps a share of the revenue of each transaction. While the optimal mechanism that maximizes the intermediary's profit in such a setting may be quite complicated, the mechanisms observed in reality are generally much simpler, e.g., applying an affine function to the price of the transaction as the intermediary's fee. [7, 8] initiated the study of such fee-setting mechanisms in two-sided markets, and we continue this investigation by addressing the question of when an affine fee schedule is approximately optimal for worst-case seller distributions. On the one hand, our work supplies non-trivial sufficient conditions on the buyer side (i.e., linearity of the marginal revenue function, or the MHR property of the value and value-minus-cost distributions) under which an affine fee schedule can obtain a constant fraction of the intermediary's optimal profit for all seller distributions. On the other hand, we complement this result by showing that proper affine fee-setting mechanisms (e.g., those used in eBay and Amazon selling plans) are unable to extract a constant fraction of the optimal profit for worst-case seller distributions. As subsidiary results, we also show there exists a constant gap between maximum surplus and maximum revenue under the aforementioned conditions. Most of the mechanisms we propose are also prior-independent with respect to the seller, which underscores the practical implications of our results.


Economics and Computation | 2017

Online Auctions and Multi-scale Online Learning

Sébastien Bubeck; Nikhil R. Devanur; Zhiyi Huang; Rad Niazadeh

We consider revenue maximization in online auctions and pricing. A seller sells an identical item in each period to a new buyer or a new set of buyers. For the online posted pricing problem, we show regret bounds that scale with the best fixed price rather than the range of the values. We also show regret bounds that are almost scale-free and match the offline sample complexity when comparing to a benchmark that requires a lower bound on the market share. These results are obtained by generalizing the classical learning-from-experts and multi-armed bandit problems to their multi-scale versions, in which the reward of each action lies in a different range and the regret with respect to a given action scales with its own range rather than the maximum range.


International Conference on Latent Variable Analysis and Signal Separation | 2010

Adaptive and non-adaptive ISI sparse channel estimation based on SL0 and its application in ML sequence-by-sequence equalization

Rad Niazadeh; Massoud Babaie-Zadeh; Christian Jutten

In this paper, we first propose an adaptive method, based on the idea of the Least Mean Square (LMS) algorithm and the concept of the smoothed l0 (SL0) norm presented in [1], for the estimation of sparse Inter-Symbol Interference (ISI) channels, which appear in wireless and acoustic underwater transmissions. We then propose a new non-adaptive fast channel estimation method based on SL0 sparse signal representation. ISI channel estimation has a direct effect on the performance of the ISI equalizer at the receiver, so we investigate this effect in the case of the optimal Maximum Likelihood Sequence-by-sequence Equalizer (MLSE) [2]. To implement this equalizer, we propose a new method called the pre-filtered Parallel Viterbi Algorithm (pre-filtered PVA) for general sparse ISI channels, which has much lower complexity than the ordinary Viterbi Algorithm (VA) with no considerable loss of optimality, as we verify experimentally. Simulation results clearly show that the proposed concatenated estimation-equalization methods perform much better than usual equalization methods such as Linear Mean Square Equalization (LMSE) for sparse ISI channels, while preserving simplicity at the receiver through the use of PVA.


Economics and Computation | 2018

Fast Core Pricing for Rich Advertising Auctions

Jason D. Hartline; Nicole Immorlica; Mohammad Reza Khani; Brendan Lucier; Rad Niazadeh

As online ad offerings become increasingly complex, with multiple size configurations and layouts available to advertisers, the sale of web advertising space increasingly resembles a combinatorial auction with complementarities. Standard ad auction formats do not immediately extend to these settings, and truthful combinatorial auctions, such as the Vickrey-Clarke-Groves auction, can yield unacceptably low revenue. Core-selecting auctions, which apply to combinatorial markets, boost revenue by setting prices so that no group of agents, including the auctioneer, can jointly improve their utilities by switching to a different allocation and payments. Among outcomes in the core, bidder-optimal core points have been the most widely studied due to their incentive properties, such as being implementable at natural equilibria. Prior work in economics has studied heuristics and algorithms for computing approximate bidder-optimal core points, given oracle access to the welfare optimization problem. Relative to prior work, our goal is to develop an algorithm with asymptotically fewer oracle calls while maintaining theoretical performance guarantees. Our main result is a combinatorial algorithm that finds an approximate bidder-optimal core point with an almost linear number of calls to the welfare maximization oracle. Our algorithm is faster than previously proposed heuristics, it has theoretical guarantees, and it reveals some useful structural properties of the core polytope. We take a two-pronged approach to evaluating our core-pricing method: a theoretical treatment of the problem of finding a bidder-optimal core point in a general combinatorial auction setting, and an experimental treatment of deploying our algorithm in a highly time-sensitive advertising auction platform.


Economics and Computation | 2017

Truth and Regret in Online Scheduling

Shuchi Chawla; Nikhil R. Devanur; Janardhan Kulkarni; Rad Niazadeh


ACM Crossroads Student Magazine | 2017

Algorithms versus mechanisms: how to cope with strategic input?

Rad Niazadeh
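The classical 1/e guarantee invoked in "Secretary Problems with Non-Uniform Arrival Order" above can be checked with a short simulation. This is an illustrative sketch of the textbook wait-then-choose rule under uniformly random order (the baseline the paper relaxes), not the paper's algorithms; the function name and parameters are my own.

```python
import math
import random

def classical_secretary(values):
    """Observe the first ~n/e values, then accept the first one beating them."""
    n = len(values)
    cutoff = max(1, round(n / math.e))
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]          # forced to accept the last candidate

random.seed(1)
n, trials = 100, 20000
wins = 0
for _ in range(trials):
    vals = random.sample(range(10 * n), n)   # distinct values, random order
    if classical_secretary(vals) == max(vals):
        wins += 1
success_rate = wins / trials
print(success_rate)  # close to 1/e ~ 0.37 under uniformly random order
```

Under adversarial (worst-case) orderings the same rule gives no non-trivial guarantee, which is what motivates the relaxed-ordering assumptions studied in the paper.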

Collaboration


Dive into Rad Niazadeh's collaborations.

Top Co-Authors

Christian Jutten

Institut Universitaire de France


Zhiyi Huang

University of Hong Kong
