
Publication


Featured research published by Albert No.


Information Theory Workshop | 2012

Reference based genome compression

Bobbie Chern; Idoia Ochoa; Alexandros Manolakos; Albert No; Kartik Venkat; Tsachy Weissman

DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we are able to reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it to 834.8 MB.
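
The following is a minimal sketch of that two-stage idea, not the authors' algorithm: difflib stands in for the reference-to-target mapping step and zlib stands in for the entropy coder, both chosen here only for illustration.

```python
# Toy two-stage reference-based compressor (illustration only):
# (1) describe the target as an edit script against the reference,
# (2) hand that description to an entropy coder (zlib as a stand-in).
import difflib
import json
import zlib

def compress_against_reference(reference: str, target: str) -> bytes:
    matcher = difflib.SequenceMatcher(a=reference, b=target, autojunk=False)
    edits = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            # Runs shared with the reference cost only their coordinates.
            edits.append(("copy", i1, i2 - i1))
        else:
            # Novel or substituted target bases are carried verbatim.
            edits.append(("lit", target[j1:j2]))
    return zlib.compress(json.dumps(edits).encode(), level=9)

def decompress_with_reference(reference: str, blob: bytes) -> str:
    out = []
    for op in json.loads(zlib.decompress(blob)):
        if op[0] == "copy":
            start, length = op[1], op[2]
            out.append(reference[start:start + length])
        else:
            out.append(op[1])
    return "".join(out)

if __name__ == "__main__":
    ref = "ACGTACGTTTACGGACGT" * 200
    tgt = ref[:500] + "GGGG" + ref[504:]   # target differs slightly from reference
    blob = compress_against_reference(ref, tgt)
    assert decompress_with_reference(ref, blob) == tgt
    print(f"target: {len(tgt)} bytes, compressed mapping: {len(blob)} bytes")
```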


IEEE Transactions on Information Theory | 2014

Information Measures: The Curious Case of the Binary Alphabet

Jiantao Jiao; Thomas A. Courtade; Albert No; Kartik Venkat; Tsachy Weissman

Four problems related to information divergence measures defined on finite alphabets are considered. In three of the cases we consider, we illustrate a contrast that arises between the binary-alphabet and larger-alphabet settings. This is surprising in some instances, since characterizations for the larger-alphabet settings do not generalize their binary-alphabet counterparts. In particular, we show that f-divergences are not the unique decomposable divergences on binary alphabets that satisfy the data processing inequality, thereby clarifying claims that have previously appeared in the literature. We also show that Kullback-Leibler (KL) divergence is the unique Bregman divergence that is also an f-divergence, for any alphabet size. Furthermore, we show that KL divergence is the unique Bregman divergence that is invariant to statistically sufficient transformations of the data, even when nondecomposable divergences are considered. Like some of the other problems we consider, this last result holds only when the alphabet size is at least three.
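
For concreteness, this is the textbook sense (not a result of the paper) in which KL divergence belongs to both families on the probability simplex: it is the f-divergence generated by f(t) = t log t and the Bregman divergence generated by negative entropy.

```latex
% KL divergence as an f-divergence and as a Bregman divergence.
\[
  D_{\mathrm{KL}}(P \,\|\, Q)
  = \sum_i q_i \, f\!\left(\frac{p_i}{q_i}\right), \qquad f(t) = t \log t,
\]
\[
  D_{\mathrm{KL}}(P \,\|\, Q)
  = \phi(p) - \phi(q) - \langle \nabla \phi(q),\, p - q \rangle, \qquad
  \phi(p) = \sum_i p_i \log p_i,
\]
where the second identity uses \(\sum_i p_i = \sum_i q_i = 1\).
```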


IEEE Transactions on Information Theory | 2016

Strong Successive Refinability and Rate-Distortion-Complexity Tradeoff

Albert No; Amir Ingber; Tsachy Weissman

We investigate the second-order asymptotics (source dispersion) of the successive refinement problem. Similar to the classical definition of a successively refinable source, we say that a source is strongly successively refinable if successive refinement coding can achieve the second-order optimum rate (including the dispersion terms) at both decoders. We establish a sufficient condition for strong successive refinability. We show that any discrete source under Hamming distortion and the Gaussian source under quadratic distortion are strongly successively refinable. We also demonstrate how successive refinement ideas can be used in point-to-point lossy compression problems in order to reduce complexity. We give two examples, the binary-Hamming and Gaussian-quadratic cases, in which a layered code construction results in a low complexity scheme that attains optimal performance. For example, when the number of layers grows with the block length n, we show how to design an O(n log n) algorithm that asymptotically achieves the rate-distortion bound.
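
To pin down what "second-order optimum rate" means here, the following is the generic dispersion expansion from the finite-blocklength rate-distortion literature (a standard formula, quoted for context rather than taken from this paper); strong successive refinability asks that both decoders attain their respective expansions simultaneously.

```latex
% Second-order (dispersion) expansion of the minimum achievable rate at
% blocklength n, distortion level D, and excess-distortion probability eps.
\[
  R(n, D, \varepsilon)
  = R(D) + \sqrt{\frac{V(D)}{n}} \, Q^{-1}(\varepsilon)
    + O\!\left(\frac{\log n}{n}\right),
\]
where \(R(D)\) is the rate-distortion function, \(V(D)\) is the source
dispersion, and \(Q^{-1}\) is the inverse Gaussian complementary CDF.
```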


Allerton Conference on Communication, Control, and Computing | 2014

Rateless lossy compression via the extremes

Albert No; Tsachy Weissman

We begin by presenting a simple lossy compressor operating at near-zero rate: The encoder merely describes the indices of the few maximal source components, while the decoder's reconstruction is a natural estimate of the source components based on this information. This scheme turns out to be near-optimal for the memoryless Gaussian source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme consisting of iterating the above lossy compressor on an appropriately transformed version of the difference between the source and its reconstruction from the previous iteration. The proposed scheme achieves the rate-distortion function of the Gaussian memoryless source (under squared error distortion) when employed on any finite-variance ergodic source. It further possesses desirable properties we respectively refer to as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than n^2 / log^β n per source symbol for β > 0 at both the encoder and decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced SPARC of Venkataramanan et al.
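
A minimal numerical sketch of the near-zero-rate step for an i.i.d. N(0, 1) source: the encoder sends only the indices of the k largest components, and the decoder fills the flagged positions with a fixed estimate and everything else with zero. Using the conditional mean of a standard Gaussian above the (1 - k/n) quantile as that estimate is an assumption made here for illustration, not necessarily the paper's exact choice.

```python
# Near-zero-rate "extremes" compressor sketch for an i.i.d. N(0,1) source:
# transmit only the indices of the k largest components; reconstruct them with
# a fixed per-symbol estimate and the remaining positions with 0.
import numpy as np
from scipy.stats import norm

def encode_extremes(x: np.ndarray, k: int) -> np.ndarray:
    """The entire message: indices of the k largest components."""
    return np.argsort(x)[-k:]

def decode_extremes(indices: np.ndarray, n: int, k: int) -> np.ndarray:
    """Flagged positions get E[Z | Z >= t] for the (1 - k/n) quantile t."""
    t = norm.ppf(1.0 - k / n)
    tail_mean = norm.pdf(t) / (1.0 - norm.cdf(t))
    x_hat = np.zeros(n)
    x_hat[indices] = tail_mean
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 100_000, 100            # rate is roughly k * log2(n) / n bits/symbol
    x = rng.standard_normal(n)
    x_hat = decode_extremes(encode_extremes(x, k), n, k)
    print("distortion:", np.mean((x - x_hat) ** 2), "source variance:", np.var(x))
```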


International Symposium on Information Theory | 2013

Minimax filtering regret via relations between information and estimation

Albert No; Tsachy Weissman

We investigate the problem of continuous-time causal estimation under a minimax criterion. Let X^T = {X_t, 0 ≤ t ≤ T} be governed by a probability law P_θ from some class of possible laws indexed by θ ∈ S, and let Y^T be the noise-corrupted observations of X^T available to the estimator. We characterize the estimator minimizing the worst-case regret, where regret is the difference between the expected loss of the estimator and that of the estimator optimized for the true law of X^T. We then relate this minimax regret to the channel capacity when the channel is either Gaussian or Poisson. In this case, we characterize the minimax regret and the minimax estimator more explicitly. If we assume that the uncertainty set consists of deterministic signals, the worst-case regret is exactly equal to the corresponding channel capacity, namely the maximal mutual information attainable across the channel among all possible distributions on the uncertainty set of signals. Also, the optimum minimax estimator is the Bayesian estimator assuming the capacity-achieving prior. Moreover, we show that this minimax estimator not only minimizes the worst-case regret but also essentially minimizes the regret for “most” of the other sources in the uncertainty set. We present a couple of examples of the construction of an approximately minimax filter via an approximation of the associated capacity-achieving distribution.
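
For context, the kind of information-estimation relation the title alludes to is, in the Gaussian case, the classical Duncan identity relating mutual information to time-integrated causal MMSE (a standard result, stated here only to anchor the terminology; the notation is chosen for illustration).

```latex
% Duncan's relation for the continuous-time Gaussian channel
% dY_t = sqrt(snr) X_t dt + dW_t.
\[
  I\big(X^T; Y^T\big)
  = \frac{\mathrm{snr}}{2}
    \int_0^T \mathbb{E}\!\left[\big(X_t - \mathbb{E}[X_t \mid Y^t]\big)^2\right] dt .
\]
```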


IEEE Transactions on Information Theory | 2014

Minimax Filtering Regret via Relations Between Information and Estimation

Albert No; Tsachy Weissman

We investigate the problem of continuous-time causal estimation under a minimax criterion. Let X^T = (X_t, 0 ≤ t ≤ T) be governed by the probability law P_θ from a class of possible laws indexed by θ ∈ A, and let Y^T be the noise-corrupted observations of X^T available to the estimator. We characterize the estimator minimizing the worst-case regret, where regret is the difference between the causal estimation loss of the estimator and that of the optimum estimator. One of the main contributions of this paper is the characterization of the minimax estimator, showing that it is in fact a Bayesian estimator. We then relate minimax regret to the channel capacity when the channel is either Gaussian or Poisson. In this case, we characterize the minimax regret and the minimax estimator more explicitly. If we further assume that the uncertainty set consists of deterministic signals, the worst-case regret is exactly equal to the corresponding channel capacity, namely the maximal mutual information attainable across the channel among all possible distributions on the uncertainty set of signals. The corresponding minimax estimator is the Bayesian estimator assuming the capacity-achieving prior. Using this relation, we also show that the capacity-achieving prior coincides with the least favorable input. In addition, we show that this minimax estimator not only minimizes the worst-case regret but also essentially minimizes the regret for most of the other sources in the uncertainty set. We present a couple of examples of the construction of a minimax filter via an approximation of the associated capacity-achieving distribution.
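
In symbols, the regret notion and the capacity identity described above read roughly as follows; this is a restatement of the abstract with notation invented here for illustration, not a quotation from the paper.

```latex
% Regret of a causal filter F under the law P_theta, and the minimax identity
% for a deterministic uncertainty set S, with loss function l.
\[
  \mathrm{regret}(F, \theta)
  = \mathbb{E}_{\theta}\!\left[\ell\big(X^T, F(Y^T)\big)\right]
    - \inf_{G \,\text{causal}} \mathbb{E}_{\theta}\!\left[\ell\big(X^T, G(Y^T)\big)\right],
\]
\[
  \min_{F} \max_{\theta \in S} \mathrm{regret}(F, \theta)
  = \max_{\pi \text{ on } S} I\big(X^T; Y^T\big) = C,
\]
with the minimax filter given by the Bayes filter under the capacity-achieving
prior \(\pi^{\star}\).
```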


Allerton Conference on Communication, Control, and Computing | 2013

Complexity and rate-distortion tradeoff via successive refinement

Albert No; Amir Ingber; Tsachy Weissman

We demonstrate how successive refinement ideas can be used in point-to-point lossy compression problems in order to reduce complexity. We show two examples, the binary-Hamming and quadratic-Gaussian cases, in which a layered code construction results in a low complexity scheme that attains optimal performance. For example, when the number of layers grows with the block length n, we show how to design an O(n log n) algorithm that asymptotically achieves the rate-distortion bound. We then show that with the same scheme, used with a fixed number of layers, successive refinement is achieved in the classical sense, and at the same time the second-order performance (i.e., dispersion) is also tight.


IEEE Transactions on Information Theory | 2016

Rateless Lossy Compression via the Extremes

Albert No; Tsachy Weissman

We begin by presenting a simple lossy compressor operating at near-zero rate: The encoder merely describes the indices of the few maximal source components, while the decoder's reconstruction is a natural estimate of the source components based on this information. This scheme turns out to be near-optimal for the memoryless Gaussian source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme consisting of iterating the above lossy compressor on an appropriately transformed version of the difference between the source and its reconstruction from the previous iteration. The proposed scheme achieves the rate-distortion function of the Gaussian memoryless source (under squared error distortion) when employed on any finite-variance ergodic source. It further possesses desirable properties we respectively refer to as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than n^2 / log^β n per source symbol for β > 0 at both the encoder and decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced SPARC of Venkataramanan et al.
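
For reference, the "zero-rate slope" being matched is that of the Gaussian distortion-rate function under squared error, a standard closed-form expression rather than a result of this paper.

```latex
% Gaussian distortion-rate function (squared error) and its slope at R = 0.
\[
  D(R) = \sigma^2 \, 2^{-2R}, \qquad
  \left.\frac{dD}{dR}\right|_{R=0} = -(2\ln 2)\,\sigma^2 .
\]
```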


International Symposium on Information Theory | 2015

Universality of logarithmic loss in lossy compression

Albert No; Tsachy Weissman

We establish two strong senses of universality of logarithmic loss as a distortion criterion in lossy compression: For any fixed-length lossy compression problem under an arbitrary distortion criterion, we show that there is an equivalent lossy compression problem under logarithmic loss. In the successive refinement problem, if the first decoder operates under logarithmic loss, we show that any discrete memoryless source is successively refinable under an arbitrary distortion criterion for the second decoder.
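
Logarithmic loss here is the usual "soft" reconstruction criterion: the decoder outputs a probability distribution over the source alphabet and pays the log of the reciprocal probability it assigned to the realized symbol.

```latex
% Logarithmic loss: the reconstruction q is a distribution on the source alphabet.
\[
  d(x, q) = \log \frac{1}{q(x)} .
\]
```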


Entropy | 2018

Information Geometric Approach on Most Informative Boolean Function Conjecture

Albert No

Let X^n be a memoryless uniform Bernoulli source and Y^n be its output through a binary symmetric channel. Courtade and Kumar conjectured that the Boolean function f: {0,1}^n → {0,1} that maximizes the mutual information I(f(X^n); Y^n) is a dictator function, i.e., f(x^n) = x_i for some i. We propose a clustering problem that is equivalent to the above problem, and we emphasize an information-geometric aspect of this equivalent formulation. Moreover, we define a normalized geometric mean of measures and derive interesting properties of it. We also show that the conjecture is true when the arithmetic and geometric means coincide in a specific set of measures.
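
The conjecture is usually stated as an upper bound on the mutual information that is tight for dictator functions; the following is the standard formulation for a binary symmetric channel with crossover probability α (quoted for context, not derived in this abstract).

```latex
% Courtade--Kumar conjecture for a BSC with crossover probability alpha.
\[
  I\big(f(X^n); Y^n\big) \le 1 - H(\alpha)
  \quad \text{for all } f : \{0,1\}^n \to \{0,1\},
\]
\[
  H(\alpha) = -\alpha \log_2 \alpha - (1-\alpha)\log_2(1-\alpha),
\]
with equality when \(f\) is a dictator function, \(f(x^n) = x_i\).
```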

Collaboration


Dive into Albert No's collaborations.

Top Co-Authors

Hyuk-Jae Lee

Seoul National University
