Boaz Ophir
Technion – Israel Institute of Technology
Publications
Featured research published by Boaz Ophir.
IEEE Journal of Selected Topics in Signal Processing | 2011
Boaz Ophir; Michael Lustig; Michael Elad
In this paper, we present a multi-scale dictionary learning paradigm for sparse and redundant signal representations. The appeal of such a dictionary is clear: in many cases, data naturally comes at different scales. A multi-scale dictionary should be able to combine the advantages of generic multi-scale representations (such as Wavelets) with the power of learned dictionaries in capturing the intrinsic characteristics of a family of signals. Using such a dictionary would allow representing the data in a more efficient, i.e., sparse, manner, allowing applications to take a more global look at the signal. We aim to achieve this goal without incurring the costs of an explicit dictionary with large atoms. The K-SVD using Wavelets approach presented here applies dictionary learning in the analysis domain of a fixed multi-scale operator. This way, sub-dictionaries at different data scales, consisting of small atoms, are trained. These dictionaries can then be used efficiently in sparse coding for various image processing applications, potentially outperforming both single-scale trained dictionaries and multi-scale analytic ones. We demonstrate this construction and discuss its potential through several experiments performed on fingerprint and coastal scenery images.
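A minimal sketch of the core idea follows: decompose an image with a fixed wavelet operator, then learn a small dictionary per sub-band. Here scikit-learn's MiniBatchDictionaryLearning stands in for the paper's K-SVD, and the wavelet, patch size, and atom count are illustrative choices, not the paper's settings.

```python
# Sketch: dictionary learning in the analysis domain of a fixed
# multi-scale (wavelet) operator. MiniBatchDictionaryLearning is a
# stand-in for K-SVD; all parameter values are illustrative.
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def train_subband_dictionaries(image, wavelet="db4", levels=2,
                               patch_size=(8, 8), n_atoms=64):
    """Learn one small dictionary per wavelet sub-band of `image`."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # flatten: approximation band plus the detail bands of each level
    subbands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    dictionaries = []
    for band in subbands:
        patches = extract_patches_2d(band, patch_size)
        patches = patches.reshape(len(patches), -1)
        patches -= patches.mean(axis=1, keepdims=True)  # remove patch DC
        learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                              transform_algorithm="omp")
        learner.fit(patches)
        dictionaries.append(learner)
    return dictionaries
```

Because each sub-band is a fraction of the image size, every learned atom stays small while the effective dictionary still spans multiple scales.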
international conference on image processing | 2007
Boaz Ophir; David Malah
Show-through is a common occurrence when scanning duplex-printed documents: the back-side printing shows through the paper, contaminating the front-side image. Previous work modeled the problem as a non-linear convolutive mixture of images and offered solutions based on decorrelation. In this work, we propose a cleaning process based on a Blind Source Separation approach. We define a cost function incorporating the non-linear mixing model in a mean-squared-error term, along with a regularization term based on Total Variation. We propose a location-dependent regularization tradeoff, preserving image edges while removing show-through edges. The images and mixing parameters are estimated using an alternating minimization process, with each stage using only convex optimization methods. The resulting images exhibit significantly lower show-through, both visually and in objective measures.
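A toy sketch of the alternating-minimization idea is below. It assumes a simplified *linear* mixing model, front ≈ clean + alpha · blur(flipped back side), whereas the paper uses a non-linear convolutive model with location-dependent TV weights; skimage's TV denoiser plays the role of the regularized image update.

```python
# Toy sketch of alternating minimization for show-through removal
# under a simplified linear mixing assumption (the paper's model is
# non-linear convolutive; its spatially varying TV weight is omitted).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import denoise_tv_chambolle

def remove_showthrough(front, back, n_iters=10, tv_weight=0.05, blur=1.5):
    """Estimate the clean front-side image from duplex scans in [0, 1]."""
    back_flipped = gaussian_filter(np.fliplr(back), blur)  # show-through is mirrored
    clean = front.copy()
    for _ in range(n_iters):
        # 1) mixing-parameter update: closed-form least-squares fit of
        #    alpha in  front ~ clean + alpha * back_flipped
        residual = front - clean
        alpha = (residual * back_flipped).sum() / (back_flipped ** 2).sum()
        # 2) image update: TV-regularized estimate of the clean image
        clean = denoise_tv_chambolle(front - alpha * back_flipped,
                                     weight=tv_weight)
    return np.clip(clean, 0.0, 1.0)
```

Each sub-problem here is convex on its own, which mirrors the paper's observation that alternating between the images and the mixing parameters keeps every stage tractable.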
IEEE Transactions on Signal Processing | 2016
Jeremias Sulam; Boaz Ophir; Michael Zibulevsky; Michael Elad
Sparse representation has been shown to be a very powerful model for real-world signals, and has enabled the development of applications with notable performance. Combined with the ability to learn a dictionary from signal examples, sparsity-inspired algorithms often achieve state-of-the-art results in a wide variety of tasks. These methods have traditionally been restricted to small dimensions, mainly due to the computational constraints that the dictionary learning problem entails. In the context of image processing, this implies handling small image patches. In this work we show how to efficiently handle bigger dimensions and go beyond small patches in sparsity-based signal and image processing methods. We build our approach on a new cropped Wavelet decomposition, which enables a multi-scale analysis with virtually no border effects. We then employ this as the base dictionary within a double-sparsity model to enable the training of adaptive dictionaries. To cope with the increase in training data while improving the training performance, we present an Online Sparse Dictionary Learning (OSDL) algorithm to train this model effectively, enabling it to handle millions of examples. This work shows that dictionary learning can be scaled up to tackle a new level of signal dimensions, obtaining large adaptable atoms that we call Trainlets.
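The double-sparsity structure is the key to the scalability: the learned dictionary is D = Phi @ A, where Phi is a fixed multi-scale base dictionary and A is itself sparse. A minimal sketch follows, using a plain orthogonal Haar basis as a rough stand-in for the paper's cropped wavelets and scikit-learn's OMP for sparse coding; all sizes and sparsity levels are illustrative.

```python
# Sketch of the double-sparsity model D = Phi @ A: a sparse coefficient
# matrix A over a fixed wavelet base dictionary Phi. Haar stands in for
# the paper's cropped wavelets; sizes are illustrative.
import numpy as np
import pywt
from sklearn.linear_model import orthogonal_mp

def wavelet_basis(n, wavelet="haar"):
    """Columns are inverse wavelet transforms of unit coefficient vectors."""
    template = pywt.wavedec(np.zeros(n), wavelet)   # coefficient layout
    slices = pywt.coeffs_to_array(template)[1]
    basis = np.zeros((n, n))
    for i in range(n):
        flat = np.zeros(n)
        flat[i] = 1.0
        coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
        basis[:, i] = pywt.waverec(coeffs, wavelet)
    return basis

n, m = 64, 128                       # signal dimension, number of atoms
rng = np.random.default_rng(0)
Phi = wavelet_basis(n)
A = rng.standard_normal((n, m))
A[np.abs(A) < 1.5] = 0.0             # enforce sparsity of A, as the model requires
D = Phi @ A                          # effective large-atom dictionary
D /= np.linalg.norm(D, axis=0) + 1e-12

x = rng.standard_normal(n)
gamma = orthogonal_mp(D, x, n_nonzero_coefs=5)   # sparse code of x
x_hat = D @ gamma
```

Since applying Phi costs only a fast wavelet transform and A is sparse, multiplying by D stays cheap even when the atoms are large; this is what lets OSDL-style training reach high signal dimensions.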
international conference on image processing | 2014
Jeremias Sulam; Boaz Ophir; Michael Elad
Over the last decade, a number of algorithms have shown promising results in removing additive white Gaussian noise from natural images. Though different, they all share a common patch-based strategy of locally denoising overlapping patches. While this lowers the complexity of the problem, it also causes noticeable artifacts when dealing with large smooth areas. In this paper we present a patch-based denoising algorithm relying on a sparsity-inspired model (K-SVD) that uses a multi-scale analysis framework, allowing us to overcome some of the disadvantages of the popular algorithms. We look for a sparse representation under an already sparsifying wavelet transform by adaptively training a dictionary on the different decomposition bands of the noisy image itself, leading to a multi-scale version of the K-SVD algorithm. We then combine the single-scale and multi-scale approaches by merging both outputs through weighted joint sparse coding of the images. Our experiments on natural images indicate that our method is competitive with state-of-the-art algorithms in terms of PSNR while giving superior results with respect to visual quality.
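A compact sketch of this pipeline follows: wavelet-decompose the noisy image, denoise each detail band with a dictionary learned from that band's own patches, and invert the transform. scikit-learn's dictionary learner again stands in for K-SVD, and a fixed convex blend replaces the paper's weighted joint sparse-coding merge; all parameters are illustrative.

```python
# Sketch of multi-scale dictionary denoising: learn a dictionary on
# each wavelet band of the *noisy* image itself, reconstruct the band
# from sparse approximations, and invert the transform. A fixed blend
# replaces the paper's weighted joint sparse-coding merge.
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def denoise_band(band, patch_size=(6, 6), n_atoms=48, sparsity=3):
    patches = extract_patches_2d(band, patch_size)
    flat = patches.reshape(len(patches), -1)
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=sparsity)
    codes = learner.fit_transform(flat)              # sparse codes per patch
    approx = (codes @ learner.components_).reshape(patches.shape)
    return reconstruct_from_patches_2d(approx, band.shape)

def multiscale_denoise(noisy, single_scale_result, wavelet="db2",
                       levels=2, blend=0.5):
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    denoised = [coeffs[0]] + [tuple(denoise_band(b) for b in detail)
                              for detail in coeffs[1:]]
    multi = pywt.waverec2(denoised, wavelet)[:noisy.shape[0], :noisy.shape[1]]
    return blend * multi + (1 - blend) * single_scale_result
```

Denoising in the wavelet domain lets each atom influence a region far larger than its nominal patch size, which is what suppresses the smooth-area artifacts of purely single-scale patch methods.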
international conference on document analysis and recognition | 2011
Sivan Gleichman; Boaz Ophir; Amir Geva; Mattias Marder; Ella Barkan; Eli Packer
Various software applications deal with analyzing the textual content of screen captures. Interpreting these images as text poses several challenges relative to images traditionally handled by optical character recognition (OCR) engines. One such challenge is caused by text antialiasing, a technique that blurs the edges of characters to reduce their jagged appearance. This blurring changes the character images according to context and can sometimes fuse them together. In this paper, we offer a low-cost method that can be used as a preprocessing stage prior to OCR. Our method locates antialiased text in a screen image and segments it into separate character images. Our proposed algorithm significantly improves OCR results, particularly in images with colored text of small font size, such as in graphical user interface (GUI) screens.
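The sketch below illustrates the spirit of such a preprocessing stage: binarize a screen-text region and split over-wide (fused) components at a vertical-projection minimum. The threshold and cut heuristic are assumptions for illustration, not the paper's actual segmentation rules.

```python
# Illustrative pre-OCR segmentation: binarize dark-on-light screen
# text, then split fused glyphs at the column with the least ink.
# Thresholds and the cut heuristic are assumptions, not the paper's.
import numpy as np
from scipy.ndimage import label, find_objects

def segment_characters(gray_region, fg_thresh=0.5, max_char_width=14):
    """Return per-character crops from a grayscale region in [0, 1]."""
    mask = gray_region < fg_thresh       # antialiased edge pixels land mid-range
    labels, _ = label(mask)
    chars = []
    for slc in find_objects(labels):
        comp = mask[slc]
        if comp.shape[1] <= max_char_width:   # plausibly a single glyph
            chars.append(gray_region[slc])
            continue
        # fused glyphs: cut at the column with the fewest foreground pixels
        profile = comp.sum(axis=0)
        cut = int(np.argmin(profile[2:-2])) + 2   # avoid trivial edge cuts
        rows, cols = slc
        chars.append(gray_region[rows, cols.start:cols.start + cut])
        chars.append(gray_region[rows, cols.start + cut:cols.stop])
    return chars
```

A real pass over antialiased text would apply the cut recursively and use context-aware criteria, but the projection-minimum idea is the standard low-cost starting point.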
international conference on document analysis and recognition | 2011
Yaakov Navon; Vladimir Kluzner; Boaz Ophir
Document analysis of images photographed by camera-equipped mobile phones is a growing challenge. These photos are often poor-quality compound images composed of various objects and text; this makes automatic analysis complicated, thereby limiting the usefulness of the images. Existing image processing techniques cannot reliably decipher the text in such pictures. We developed a method for precisely locating text in complex scene images so that it can then be further processed by OCR systems. A text kernel operator roughly locates the text in an image; this information then serves as the initialization for the active contour method. This technique enhances the convergence of the active contour and significantly speeds up the overall process. Moreover, our initialization settings enable systems to easily distinguish between the inside and outside parts of contours. Our experimental results show a significant improvement in the ability to locate and preprocess text.
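A rough sketch of this coarse-to-fine scheme follows. A simple morphological gradient, which responds strongly to the dense edges of text strokes, stands in for the paper's text kernel operator, and its thresholded response initializes skimage's morphological Chan-Vese active contour; the parameters are illustrative.

```python
# Coarse-to-fine text localization sketch: a morphological-gradient
# "text kernel" stand-in produces a rough mask, which initializes the
# active contour (morphological Chan-Vese). Parameters are illustrative.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, binary_dilation
from skimage.segmentation import morphological_chan_vese

def locate_text(gray, grad_thresh=0.15, n_iter=35):
    """Rough text mask refinement for a grayscale image in [0, 1]."""
    # 1) coarse pass: text strokes produce dense, high-gradient regions
    gradient = grey_dilation(gray, size=(3, 3)) - grey_erosion(gray, size=(3, 3))
    rough = binary_dilation(gradient > grad_thresh, iterations=3)
    # 2) fine pass: the rough mask initializes the active contour,
    #    which converges far faster than a blind initialization
    return morphological_chan_vese(gray, n_iter, init_level_set=rough)
```

Seeding the level set from the coarse mask is exactly what makes the inside/outside distinction unambiguous from the first iteration, which is the speed-up the abstract describes.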
european signal processing conference | 2011
Boaz Ophir; Michael Elad; Nancy Bertin; Mark D. Plumbley
Archive | 2007
Netta Aizenbud-Reshef; Ella Barkan; Eran Belinsky; Jonathan Joseph Mamou; Yaakov Navon; Boaz Ophir
IBM Journal of Research and Development | 2015
Pavel Kisilev; Eugene Walach; Ella Barkan; Boaz Ophir; Sharon Alpert; Sharbell Y. Hashoul
Archive | 2014
Dan Shmuel Chevion; Pavel Kisilev; Boaz Ophir; Eugene Walach