John Z. Sun
Massachusetts Institute of Technology
Publications
Featured research published by John Z. Sun.
International Symposium on Information Theory | 2009
John Z. Sun; Vivek K Goyal
Quantization is an important but often ignored consideration in discussions about compressed sensing. This paper studies the design of quantizers for random measurements of sparse signals that are optimal with respect to mean-squared error of the lasso reconstruction. We utilize recent results in high-resolution functional scalar quantization and homotopy continuation to approximate the optimal quantizer. Experimental results compare this quantizer to other practical designs and show a noticeable improvement in the operational distortion-rate performance.
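The design space in this line of work is the classical compressor–uniform quantizer–expander (companding) structure of high-resolution theory. The Python sketch below shows only that structure; the Gaussian-CDF compander and the rate are placeholders for illustration, not the compressor the paper derives from the lasso reconstruction sensitivity.

```python
import numpy as np
from scipy.stats import norm

def companding_quantizer(y, compressor, expander, rate):
    """Quantize samples y by companding: compress into [0, 1], apply a
    uniform quantizer with 2**rate cells, then expand back."""
    levels = 2 ** rate
    c = compressor(y)                            # map samples into [0, 1]
    idx = np.clip(np.floor(c * levels), 0, levels - 1)
    c_hat = (idx + 0.5) / levels                 # midpoints of the uniform cells
    return expander(c_hat)                       # map back to the measurement domain

# Gaussian-CDF compander as a stand-in; the paper instead optimizes the
# compressor for the lasso reconstruction error of the sparse signal.
y = np.random.randn(1000)                        # stand-in for random measurements
y_hat = companding_quantizer(y, norm.cdf, norm.ppf, rate=4)
print("empirical MSE:", np.mean((y - y_hat) ** 2))
```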
Mobile Computing and Communications Review | 2009
Stefan Geirhofer; John Z. Sun; Lang Tong; Brian M. Sadler
Wireless services in the unlicensed bands are proliferating but frequently face high interference from other devices due to a lack of coordination among heterogeneous technologies. In this paper we study how cognitive radio concepts enable systems to sense and predict interference patterns and adapt their spectrum access accordingly. This leads to a new cognitive coexistence paradigm, in which cognitive radio implicitly coordinates the spectrum access of heterogeneous systems. Within this framework, we investigate coexistence with a set of parallel WLAN bands: based on predicting WLAN activity, the cognitive radio dynamically hops between the bands to avoid collisions and reduce interference. The development of a real-time test bed is presented, and used to corroborate theoretical results and model assumptions. Numerical results show a good fit between theory and experiment and demonstrate that sensing and prediction can mitigate interference effectively.
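As a rough illustration of the sense-predict-hop loop (not the traffic model or predictor used in the test bed), one might score each parallel band by a recency-weighted idle estimate and hop to the most promising one:

```python
import numpy as np

def choose_band(idle_history, alpha=0.9):
    """Pick the band most likely to be idle next slot, using an exponentially
    weighted estimate of each band's recent idle fraction (hypothetical predictor)."""
    idle_prob = []
    for h in idle_history:                    # h: list of 0/1 idle indicators per slot
        w = alpha ** np.arange(len(h))[::-1]  # weight recent slots more heavily
        idle_prob.append(np.dot(w, h) / w.sum())
    return int(np.argmax(idle_prob))

# Example: three parallel WLAN bands with different recent activity.
history = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 0]]
print("hop to band", choose_band(history))
```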
International Conference on Acoustics, Speech, and Signal Processing | 2012
John Z. Sun; Kush R. Varshney; Karthik Subbian
Matrix factorization from a small number of observed entries has recently garnered much attention as the key ingredient of successful recommendation systems. One unresolved problem in this area is how to adapt current methods to handle changing user preferences over time. Recent proposals to address this issue are heuristic in nature and do not fully exploit the time-dependent structure of the problem. As a principled and general temporal formulation, we propose a dynamical state space model of matrix factorization. Our proposal builds upon probabilistic matrix factorization, a Bayesian model with Gaussian priors. We utilize results in state tracking, i.e. the Kalman filter, to provide accurate recommendations in the presence of both process and measurement noise. We show how system parameters can be learned via expectation-maximization and provide comparisons to current published techniques.
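In state-space form, a model of this kind can be written as below; the symbols (u_t for a user's latent factor vector, V_t for the factors of the items rated at time t, and A, Q, R for the transition matrix and noise covariances) are my notation for the abstract's description, with the parameters learned via expectation-maximization.

```latex
\begin{aligned}
\mathbf{u}_t &= \mathbf{A}\,\mathbf{u}_{t-1} + \mathbf{w}_t, & \mathbf{w}_t &\sim \mathcal{N}(\mathbf{0},\mathbf{Q}) && \text{(process noise: drifting preferences)}\\
\mathbf{r}_t &= \mathbf{V}_t^{\mathsf T}\,\mathbf{u}_t + \mathbf{v}_t, & \mathbf{v}_t &\sim \mathcal{N}(\mathbf{0},\mathbf{R}) && \text{(measurement noise: observed ratings)}
\end{aligned}
```

The Kalman filter then gives the minimum mean-squared-error estimate of u_t from the ratings observed so far, which is what replaces the static Gaussian prior of probabilistic matrix factorization.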
IEEE Transactions on Signal Processing | 2014
John Z. Sun; Dhruv Parthasarathy; Kush R. Varshney
We propose a new algorithm for estimation, prediction, and recommendation named the collaborative Kalman filter. Suited for use in collaborative filtering settings encountered in recommendation systems with significant temporal dynamics in user preferences, the approach extends probabilistic matrix factorization in time through a state-space model. This leads to an estimation procedure with parallel Kalman filters and smoothers coupled through item factors. Learning of global parameters uses the expectation-maximization algorithm. The method is compared to existing techniques and performs favorably on both generated data and real-world movie recommendation data.
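A minimal sketch of one predict/update step of such a per-user filter, assuming an identity transition and hand-picked noise covariances purely for illustration (the paper learns these globally via expectation-maximization and couples the per-user filters through shared item factors):

```python
import numpy as np

def kalman_step(u, P, A, Q, V_t, r_t, R):
    """One predict/update step tracking a user's latent factor vector u.
    V_t holds the factors of the items rated at this step (one column per item)."""
    # Predict: propagate the state estimate and its covariance.
    u_pred = A @ u
    P_pred = A @ P @ A.T + Q
    # Update: ratings r_t are noisy inner products of user and item factors.
    H = V_t.T                                   # measurement matrix
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    u_new = u_pred + K @ (r_t - H @ u_pred)
    P_new = (np.eye(len(u)) - K @ H) @ P_pred
    return u_new, P_new

# Toy example: 5-dimensional factors, two items rated at this time step.
d = 5
u, P = np.zeros(d), np.eye(d)
A, Q = np.eye(d), 0.01 * np.eye(d)              # assumed slowly drifting preferences
V_t = np.random.randn(d, 2)                     # stand-in item factors
r_t = np.array([3.5, 4.0])                      # observed ratings
R = 0.25 * np.eye(2)                            # measurement noise covariance
u, P = kalman_step(u, P, A, Q, V_t, r_t, R)
print("updated user factors:", u)
```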
Data Compression Conference | 2011
John Z. Sun; Vivek K Goyal
Quantizers for probabilistic sources are usually optimized for mean-squared error. In many applications, maintaining low relative error is a more suitable objective. This measure has previously been heuristically connected with the use of logarithmic companding in perceptual coding. We derive optimal companding quantizers for fixed rate and variable rate under high-resolution assumptions. The analysis shows logarithmic companding is optimal for variable-rate quantization but generally not for fixed-rate quantization. Naturally, the improvement in relative error from using a correctly optimized quantizer can be arbitrarily large. We extend this framework to a large class of nondifference distortions.
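Under the usual high-resolution model, relative error amounts to a distortion weight w(x) = 1/x², and the standard weighted point-density results (stated here in my notation, consistent with the abstract's conclusion) are:

```latex
\lambda_{\text{fixed}}(x) \;\propto\; \bigl(w(x)\,f_X(x)\bigr)^{1/3} \;=\; \left(\frac{f_X(x)}{x^{2}}\right)^{1/3},
\qquad
\lambda_{\text{variable}}(x) \;\propto\; \sqrt{w(x)} \;=\; \frac{1}{|x|}.
```

A point density proportional to 1/|x| integrates to a logarithmic compressor, which is why logarithmic companding is optimal at variable rate; the fixed-rate density also depends on the source density f_X, so logarithmic companding is generally suboptimal there.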
Conference on Information Sciences and Systems | 2011
Szymon Jakubczak; John Z. Sun; Dina Katabi; Vivek K Goyal
Streaming degradable content (such as video or audio) over a wireless channel presents new challenges to modern digital design, which was founded on the separation theorem of Shannon theory. Joint source-channel coding (JSCC) has recently received increasing interest to address the varying nature of wireless channel conditions (under mobility or multicast). The conventional approach to JSCC, which combines successive refinement with superposition coding, is still digital and separable. However, for a white Gaussian source on a white Gaussian channel, it is outperformed by uncoded, linear scaling, which achieves Shannon's distortion limit. Practical degradable content sources are not white, but are better approximated by multivariate/non-white Gaussian models. We investigate the performance of a linear, uncoded communication scheme for such sources. We find that there exist regimes where the uncoded scheme is near-optimal for point-to-point communication and provides significant gains over the conventional digital design for broadcast.
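The benchmark behind that claim, stated for a white Gaussian source X with variance σ_X² sent over an AWGN channel with noise power N₀ and transmit power P, one channel use per source sample (notation mine):

```latex
C = \tfrac{1}{2}\log_2(1+\mathrm{SNR}), \qquad
D_{\text{Shannon}} = \sigma_X^{2}\,2^{-2C} = \frac{\sigma_X^{2}}{1+\mathrm{SNR}}, \qquad
\mathrm{SNR} = \frac{P}{N_0};
\qquad
Y = \sqrt{P/\sigma_X^{2}}\;X + N, \quad
\hat{X} = \frac{\sqrt{P\,\sigma_X^{2}}}{P+N_0}\,Y
\;\Longrightarrow\;
\mathbb{E}\bigl[(X-\hat{X})^{2}\bigr] = \frac{\sigma_X^{2}}{1+\mathrm{SNR}}.
```

Uncoded linear scaling with an MMSE estimate at the receiver therefore meets the Shannon limit exactly in the white case, which is the baseline the paper extends to non-white (multivariate) Gaussian sources.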
IEEE Transactions on Signal Processing | 2013
John Z. Sun; Vinith Misra; Vivek K Goyal
Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts performance of data acquisition systems in which a computation on acquired data is desired. We address two limitations of previous works: prohibitively expensive decoder design and a restriction to source distributions with bounded support. We show that a much simpler decoder has equivalent asymptotic performance to the conditional expectation estimator studied previously, thus reducing decoder design complexity. The simpler decoder features decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to source distributions with unbounded support. Finally, through simulation results, we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight on the rate of convergence.
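In rough notation (mine, paraphrasing the abstract), with Q_i the scalar quantizer at sensor i and x̂_i its reconstructed value, the two decoders being compared are:

```latex
\hat{g}_{\text{cond}} = \mathbb{E}\!\left[g(X_1,\dots,X_N)\,\middle|\,Q_1(X_1),\dots,Q_N(X_N)\right]
\qquad\text{vs.}\qquad
\hat{g}_{\text{simple}} = g(\hat{x}_1,\dots,\hat{x}_N).
```

The simpler plug-in decoder first reconstructs each measurement and only then applies the computation g, which is what decouples the communication and computation blocks; the result quoted above is that it matches the conditional-expectation decoder asymptotically.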
Allerton Conference on Communication, Control, and Computing | 2012
John Z. Sun; Vivek K Goyal
Several key results in source coding offer the intuition that distributed encoding via vector-quantize-and-bin is only slightly suboptimal to joint encoding and oftentimes is just as good. However, when source acquisition requires the block length to be small, collaboration between sensors can greatly reduce distortion. For a distributed acquisition network where sensors are allowed to “chat” using a side channel, we provide exact characterization of distortion performance and quantizer design in the high-resolution (low-distortion) regime using a framework called distributed functional scalar quantization (DFSQ). The key result is that chatting can dramatically improve performance even when the intersensor communication is at very low rate. We also solve the rate allocation problem when communication links have heterogeneous costs and provide examples to demonstrate that this theory predicts performance at practical communication rates.
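For the heterogeneous-cost rate allocation, a generic sketch of the optimization (not the paper's exact expressions): if the high-resolution distortion decomposes as D = Σ_i a_i 2^{-2R_i} and link i costs c_i per bit, minimizing the total cost Σ_i c_i R_i at a target distortion gives, by a Lagrangian argument,

```latex
a_i\,2^{-2R_i} \;\propto\; c_i
\qquad\Longleftrightarrow\qquad
R_i = \tfrac{1}{2}\log_2\!\frac{a_i}{\mu\,c_i},
```

so each link's distortion contribution is proportional to its communication cost, with μ chosen to meet the distortion target (and rates clipped at zero where the formula goes negative).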
International Symposium on Information Theory | 2013
John Z. Sun; Vivek K Goyal
For point-to-point and distributed communication of continuous sources, high-resolution quantization theory provides an achievable rate-distortion trade-off that is simple to compute and motivates practical compression architectures. Moreover, high-resolution analysis gives good inner bounds for the Shannon rate-distortion region when a more general characterization is difficult. In this paper, we analyze the sum-rate gap between coded nonuniform scalar quantization and the Shannon rate-distortion region for a system that requires fidelity in a computation applied to the source variables. We find that the loss can be as low as 0.255 bits/sample, which has previously been observed in the point-to-point setting, and it is achieved using a simple architecture of nonuniform quantization followed by Slepian-Wolf coding.
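The 0.255 bits/sample figure matches the classic high-resolution gap between entropy-coded scalar quantization and the Shannon rate-distortion bound:

```latex
\Delta R \;=\; \tfrac{1}{2}\log_2\!\frac{2\pi e}{12} \;\approx\; 0.2546\ \text{bits per sample},
```

i.e., the per-variable sum-rate penalty of scalar quantization followed by (here, Slepian-Wolf) entropy coding, relative to the rate-distortion region, in the high-resolution limit.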
Journal of Mathematical Psychology | 2012
John Z. Sun; Grace I. Wang; Vivek K Goyal; Lav R. Varshney