Shai Dekel
Tel Aviv University
Publications
Featured research published by Shai Dekel.
SIAM Journal on Imaging Sciences | 2012
Amir Averbuch; Shai Dekel; Shay Deutsch
In recent years, the theory of compressed sensing has emerged as an alternative to the Shannon sampling theorem, suggesting that compressible signals can be reconstructed from far fewer samples than the Shannon theorem requires. In fact, the theory advocates that nonadaptive, “random” functionals are in some sense optimal for this task. However, in practice, compressed sensing is very difficult to implement for large data sets, particularly because the recovery algorithms require significant computational resources. In this work, we present a new alternative method for simultaneous image acquisition and compression called adaptive compressed sampling. We exploit wavelet tree structures found in natural images to replace the “universal” acquisition of incoherent measurements with a direct and fast method for adaptive wavelet tree acquisition. The main advantages of this direct approach are that no complex recovery algorithm is needed and that it allows more control over the compressed image quality, in particular the sharpness of edges. Our experimental results show, by way of software simulations, that our adaptive algorithms outperform existing nonadaptive methods in terms of both image quality and speed.
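The tree-based acquisition idea can be sketched in a toy 1-D Haar setting (the greedy rule and all names below are illustrative, not the paper's implementation): a detail coefficient is measured only when its parent in the wavelet tree was significant, so smooth regions are never fully sampled.

```python
import numpy as np

def haar_detail(signal, level, k):
    # Unnormalized Haar detail at tree node (level, k): difference of the
    # means of the two halves of the corresponding dyadic interval.
    n = len(signal) >> level
    block = signal[k * n:(k + 1) * n]
    half = n // 2
    return block[:half].mean() - block[half:].mean()

def adaptive_tree_sample(signal, threshold):
    # Greedy descent of the Haar coefficient tree: children are acquired
    # only below significant parents (the wavelet-tree assumption).
    J = int(np.log2(len(signal)))
    kept, stack = {}, [(0, 0)]
    while stack:
        level, k = stack.pop()
        c = haar_detail(signal, level, k)
        kept[(level, k)] = c
        if abs(c) > threshold and level + 1 < J:
            stack += [(level + 1, 2 * k), (level + 1, 2 * k + 1)]
    return kept

# A signal with a single edge: only the branch crossing the edge is probed,
# so far fewer measurements are taken than there are samples.
sig = np.r_[np.zeros(32), np.ones(32)]
coeffs = adaptive_tree_sample(sig, threshold=0.1)
```

Here the edge coincides with the root split, so the descent stops after three measurements instead of 64 samples.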
SIAM Journal on Numerical Analysis | 2005
Shai Dekel; D. Leviatan
The binary space partition (BSP) technique is a simple and efficient method for adaptively partitioning a given initial domain to match the geometry of a given input function. As such, the BSP technique has been widely used by practitioners, but until now no rigorous mathematical justification for it has been offered. Here we attempt to put the technique on sound mathematical foundations, and we offer an enhancement of the BSP algorithm in the spirit of what we call geometric wavelets. This new approach to sparse geometric representation is based on recent developments in the theory of multivariate nonlinear piecewise polynomial approximation. We provide numerical examples of n-term geometric wavelet approximations of known test images and compare them with dyadic wavelet approximation. We also discuss applications to image denoising and compression.
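A minimal 1-D analogue of the BSP refinement can be sketched as follows (helper names are illustrative; the paper works with multivariate domains and higher-order polynomial pieces): greedily pick the cut minimizing the squared error of constant fits on the two sides, then recurse until a tolerance is met.

```python
import numpy as np

def best_split(y):
    # Cut index minimizing total squared error of constant fits on both sides.
    errs = [((y[:i] - y[:i].mean()) ** 2).sum() +
            ((y[i:] - y[i:].mean()) ** 2).sum() for i in range(1, len(y))]
    return int(np.argmin(errs)) + 1

def bsp_partition(x, y, tol):
    # Recursive binary partition adapted to the geometry of the data.
    if ((y - y.mean()) ** 2).sum() <= tol or len(y) < 2:
        return [(x[0], x[-1], y.mean())]
    i = best_split(y)
    return bsp_partition(x[:i], y[:i], tol) + bsp_partition(x[i:], y[i:], tol)

x = np.arange(20.0)
y = np.r_[np.zeros(10), np.ones(10)]       # one "geometric" edge
pieces = bsp_partition(x, y, tol=1e-12)    # cut lands exactly on the edge
```

The partition matches the geometry of the input: two pieces, with the single cut placed at the jump.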
IEEE Transactions on Signal Processing | 2015
Tamir Bendory; Shai Dekel; Arie Feuer
This paper considers the problem of recovering an ensemble of Diracs on a sphere from its low-resolution measurements. The Diracs can be located anywhere on the sphere, not necessarily on a grid. We show that under a separation condition, one can recover the ensemble with high precision by a three-stage algorithm, which consists of solving a semidefinite program, root finding, and least-squares fitting. The algorithm's computation time depends solely on the number of measurements, and not on the required solution accuracy. We also show that in the special case of non-negative ensembles, a sparsity condition is sufficient for recovery. Furthermore, in the discrete setting, we estimate the recovery error in the presence of noise as a function of the noise level and the super-resolution factor.
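The final least-squares stage can be illustrated in a simplified planar (rather than spherical) setting, with low-frequency Fourier coefficients standing in for the spherical-harmonic measurements; all names here are illustrative.

```python
import numpy as np

def fit_amplitudes(locations, measurements, max_freq):
    # Stage three of the pipeline: with the Dirac locations fixed, the
    # amplitudes follow from a linear least-squares fit against the
    # low-frequency measurement operator.
    freqs = np.arange(-max_freq, max_freq + 1)
    A = np.exp(-2j * np.pi * np.outer(freqs, locations))
    amps, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return amps.real

# Two off-grid Diracs on [0, 1) observed through 11 Fourier coefficients.
true_t = np.array([0.2, 0.7])
true_a = np.array([2.0, -1.0])
freqs = np.arange(-5, 6)
meas = np.exp(-2j * np.pi * np.outer(freqs, true_t)) @ true_a
est = fit_amplitudes(true_t, meas, max_freq=5)
```

With exact locations and noiseless data the overdetermined system is consistent, so the amplitudes are recovered exactly.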
IEEE Transactions on Image Processing | 2007
Roman Kazinnik; Shai Dekel; Nira Dyn
We present a new image coding algorithm, the geometric piecewise polynomials (GPP) method, that draws on recent developments in the theory of adaptive multivariate piecewise polynomial approximation. The algorithm relies on a segmentation stage whose goal is to minimize a functional that is conceptually similar to the Mumford-Shah functional, except that it measures the smoothness of the segmentation instead of its length. The initial segmentation is “pruned” and the remaining curve portions are lossy encoded. The image is then further partitioned and approximated by low-order polynomials on the subdomains. We show examples where our algorithm outperforms state-of-the-art wavelet coding in the low bit-rate range. The GPP algorithm significantly outperforms wavelet-based coding methods on graphic and cartoon images. Also, at a bit rate of 0.05 bits per pixel, the GPP algorithm achieves a PSNR of 21.5 dB on the test image Cameraman, which has a geometric structure, while the JPEG2000 Kakadu software obtains a PSNR of 20 dB. For the test image Lena, the GPP algorithm obtains the same PSNR as JPEG2000, but with better visual quality, at 0.03 bpp.
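The per-subdomain approximation step can be sketched as a least-squares fit of a degree-1 polynomial over an image block, together with the PSNR measure quoted above (a toy stand-in for the GPP subdomains, with illustrative names):

```python
import numpy as np

def fit_block_poly(block):
    # Least-squares fit of p(x, y) = a + b*x + c*y over one block,
    # mirroring the low-order polynomial approximation on subdomains.
    h, w = block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.c_[np.ones(h * w), xx.ravel(), yy.ravel()]
    coef, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return (A @ coef).reshape(h, w)

def psnr(orig, approx, peak=255.0):
    # Peak signal-to-noise ratio in dB, the quality measure quoted above.
    mse = np.mean((orig - approx) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A planar (cartoon-like) block is reproduced exactly by the degree-1 fit,
# which is why such schemes do well on graphic and cartoon images.
yy, xx = np.mgrid[0:8, 0:8]
block = 3.0 + 2.0 * xx + 0.5 * yy
approx = fit_block_poly(block)
```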
Journal of Approximation Theory | 2014
Tamir Bendory; Shai Dekel; Arie Feuer
In this work we consider the problem of recovering non-uniform splines from their projection onto spaces of algebraic polynomials. We show that under a certain Chebyshev-type separation condition on its knots, a spline whose inner products with a polynomial basis and boundary conditions are known can be recovered using Total Variation norm minimization. The proof of the uniqueness of the solution uses the method of ‘dual’ interpolating polynomials and is based on Candès and Fernandez-Granda (2014), where the theory was developed for trigonometric polynomials. We also show results for the multivariate case.
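Schematically, and with notation chosen here for illustration rather than quoted from the paper, the recovery program is a total-variation minimization over measures whose polynomial moments match the known inner products:

```latex
\min_{\mu \in \mathcal{M}([a,b])} \; \|\mu\|_{TV}
\quad \text{subject to} \quad
\int_a^b P_k(t)\, d\mu(t) = c_k , \qquad k = 0, \dots, N,
```

where the $P_k$ form the given algebraic polynomial basis and the $c_k$ are the measured inner products; under the Chebyshev-type knot separation, a dual interpolating polynomial certifies uniqueness of the solution.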
Numerische Mathematik | 2007
Wolfgang Dahmen; Shai Dekel; Pencho Petrushev
This paper is concerned with the construction and analysis of multilevel Schwarz preconditioners for partition of unity methods applied to elliptic problems. We show under which conditions on a given multilevel partition of unity hierarchy (MPUM) one even obtains uniformly bounded condition numbers, and how to realize such requirements. The main analytical tools are certain norm equivalences based on two-level splits providing frames that are stable under taking subsets.
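The effect such preconditioners are designed to achieve can be illustrated on the simplest model problem: a generic two-level additive preconditioner (Jacobi smoother plus coarse-grid correction, a stand-in here for the MPUM construction, not taken from the paper) keeps the condition number of the 1-D Laplacian bounded while the unpreconditioned condition number grows like $h^{-2}$.

```python
import numpy as np

def laplacian(n):
    # 1-D Dirichlet Laplacian: the model elliptic operator.
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def two_level_preconditioner(A):
    # Additive two-level split: fine-grid Jacobi smoother plus a
    # coarse-grid correction through linear interpolation P.
    n = A.shape[0]
    nc = (n - 1) // 2
    P = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1                  # fine node under coarse node j
        P[i, j] = 1.0
        P[i - 1, j] = 0.5
        P[i + 1, j] = 0.5
    Ac = P.T @ A @ P                   # Galerkin coarse operator
    return np.diag(1.0 / np.diag(A)) + P @ np.linalg.inv(Ac) @ P.T

def spectral_cond(M):
    ev = np.sort(np.linalg.eigvals(M).real)
    return ev[-1] / ev[0]

A = laplacian(63)
kappa_plain = spectral_cond(A)                                # grows with n
kappa_prec = spectral_cond(two_level_preconditioner(A) @ A)   # stays O(1)
```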
Foundations of Computational Mathematics | 2004
Shai Dekel; D. Leviatan
We prove the following Whitney estimate. Given $0 < p \le \infty$, $r \in \mathbb{N}$, and $d \ge 1$, there exists a constant $C(d,r,p)$, depending only on these three parameters, such that for every bounded convex domain $\Omega \subset \mathbb{R}^d$ and each function $f \in L_p(\Omega)$,
$$E_{r-1}(f,\Omega)_p \le C(d,r,p)\,\omega_r(f,\operatorname{diam}(\Omega))_p,$$
where $E_{r-1}(f,\Omega)_p$ is the degree of approximation by polynomials of total degree $r-1$, and $\omega_r(f,\cdot)_p$ is the modulus of smoothness of order $r$. Estimates like this can be found in the literature, but with constants that depend in an essential way on the geometry of the domain; in particular, the domain is assumed to be a Lipschitz domain and the constant $C$ depends on the minimal head-angle of the cones associated with the boundary. The estimates we obtain allow us to extend to the multivariate case the results of Karaivanov and Petrushev on bivariate skinny B-spaces, characterizing nonlinear approximation from nested triangulations. In a sense, our results were anticipated by Karaivanov and Petrushev.
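A quick numerical sanity check of the estimate in its simplest instance (d = 1, r = 2, p = 2, $\Omega = [0,1]$ so $\operatorname{diam}(\Omega) = 1$; the discretization and test function are illustrative):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 801)
dx = x[1] - x[0]
f = np.sin(3.0 * x)

# E_1(f, Omega)_2: L2 error of the best approximation by polynomials of
# degree <= r - 1 = 1 (discrete least squares as a stand-in).
coef = np.polyfit(x, f, 1)
E1 = np.sqrt(np.sum((f - np.polyval(coef, x)) ** 2) * dx)

# omega_2(f, diam(Omega))_2: sup over steps t <= 1 of the L2 norm of the
# second difference f(.+2t) - 2 f(.+t) + f(.), on its natural domain.
omega2 = 0.0
n = len(x)
for k in range(1, n // 2):
    d2 = f[2 * k:] - 2.0 * f[k:n - k] + f[:n - 2 * k]
    omega2 = max(omega2, np.sqrt(np.sum(d2 ** 2) * dx))
```

For this $f$ the estimate holds comfortably with constant 1, i.e. `E1 < omega2`.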
IEEE Transactions on Signal Processing | 2016
Tamir Bendory; Avinoam Bar-Zion; Dan Adam; Shai Dekel; Arie Feuer
This paper considers the problem of estimating the delays of a weighted superposition of pulses, called a stream of pulses, in a noisy environment. We show that the delays can be estimated using a tractable convex optimization problem, with a localization error proportional to the square root of the noise level. Furthermore, all false detections produced by the algorithm have small amplitudes. Numerical and in-vitro ultrasound experiments corroborate the theoretical results and demonstrate their applicability to ultrasound imaging signal processing.
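Schematically, and with notation chosen here for illustration rather than quoted from the paper, the estimator solves a convex program of the form

```latex
\min_{\mu} \; \|\mu\|_{TV}
\quad \text{subject to} \quad
\| y - g * \mu \|_{2} \le \delta ,
```

where $g$ is the known pulse shape, $y$ the noisy stream-of-pulses data, and $\delta$ the noise level; the support of the recovered measure $\mu$ estimates the delays, with the $O(\sqrt{\delta})$ localization error stated above.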
Computer Vision and Pattern Recognition | 2013
Oren Barkan; Jonathan Weill; Amir Averbuch; Shai Dekel
One of the main challenges in Computed Tomography (CT) is how to balance the amount of radiation the patient is exposed to during scan time against the quality of the CT image. We propose a mathematical model for adaptive CT acquisition whose goal is to reduce dosage levels while maintaining high image quality. The adaptive algorithm iterates between selective limited acquisition and improved reconstruction, with the goal of applying only the dose level required for sufficient image quality. The theoretical foundation of the algorithm is nonlinear Ridgelet approximation, and a discrete form of Ridgelet analysis is used to compute the selective acquisition steps that best capture the image edges. We show experimental results where, for the same number of line projections, the adaptive model produces higher image quality when compared with standard limited-angle, non-adaptive acquisition algorithms.
Advances in Computational Mathematics | 2004
Shai Dekel; D. Leviatan
It is well known that it is possible to enhance the approximation properties of a kernel operator by increasing its support size. There is an obvious tradeoff between the higher approximation order of a kernel and the complexity of algorithms that employ it. A natural question then arises: how do we compare the efficiency of kernels with comparable support size? Following Blu and Unser, we choose as a measure of a kernel's efficiency the first leading constant in a certain error expansion. We use time-domain methods to treat the case of globally supported kernels in $L_p(\mathbb{R}^d)$, $1 \le p \le \infty$.
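The "leading constant" notion can be illustrated empirically (the test function and grids are illustrative, and this is the classical interpolation setting rather than the paper's general kernel operators): for the linear B-spline kernel, the sup-norm interpolation error of a smooth $f$ behaves like $C h^2$, and the measured ratio $\mathrm{err}/h^2$ stabilizes at the constant one would compare across kernels of equal support.

```python
import numpy as np

def interp_error(f, h):
    # Sup-norm error of linear B-spline (piecewise linear) interpolation
    # of f on a grid of spacing h over [0, 1].
    grid = np.arange(0.0, 1.0 + h / 2, h)   # h is dyadic, so 1.0 is hit
    dense = np.linspace(0.0, 1.0, 4001)
    return np.max(np.abs(f(dense) - np.interp(dense, grid, f(grid))))

f = np.sin
c1 = interp_error(f, 1.0 / 32) / (1.0 / 32) ** 2
c2 = interp_error(f, 1.0 / 64) / (1.0 / 64) ** 2
# Both ratios approach max|f''|/8 = sin(1)/8, the leading constant of
# the linear B-spline kernel for this f.
```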