Bo Martins
Technical University of Denmark
Publication
Featured research published by Bo Martins.
IEEE Transactions on Circuits and Systems for Video Technology | 1998
Paul G. Howard; Faouzi Kossentini; Bo Martins; Søren Forchhammer; William J. Rucklidge
The Joint Bi-Level Image Experts Group (JBIG), an international study group affiliated with ISO/IEC and ITU-T, is in the process of drafting a new standard for lossy and lossless compression of bilevel images. The new standard, informally referred to as JBIG2, will support model-based coding for text and halftones to permit compression ratios up to three times those of existing standards for lossless compression. JBIG2 will also permit lossy preprocessing without specifying how it is to be done. In this case, compression ratios up to eight times those of existing standards may be obtained with imperceptible loss of quality. It is expected that JBIG2 will become an international standard by 2000.
IEEE Transactions on Image Processing | 1998
Bo Martins; Søren Forchhammer
Presently, sequential tree coders are the best general-purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional probabilities to an arithmetic coder. The conditional probabilities are estimated from co-occurrence statistics of past pixels; the statistics are stored in a tree. By organizing the code length calculations properly, a vast number of possible models (trees) reflecting different pixel orderings can be investigated within reasonable time prior to generating the code. A number of general-purpose coders are constructed according to this principle. Rissanen's one-pass algorithm, Context, is presented in two modified versions. The baseline is proven to be a universal coder. The faster version, which is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding.
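As a rough illustration of the principle described above, the sketch below (Python, assuming NumPy) estimates the ideal code length of a bilevel image from adaptively estimated conditional probabilities under a fixed causal template; the template offsets and the Laplace-style count initialization are illustrative assumptions, and the tree-based model selection described in the abstract is omitted:

```python
import numpy as np

def template_context(img, y, x, template):
    """Build a context index from previously coded pixels given a causal template."""
    ctx = 0
    for dy, dx in template:
        yy, xx = y + dy, x + dx
        bit = img[yy, xx] if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1] else 0
        ctx = (ctx << 1) | int(bit)
    return ctx

def ideal_code_length(img, template):
    """Adaptive, Laplace-smoothed estimate of the code length (in bits) an arithmetic
    coder driven by this context model would produce for a bilevel image."""
    counts = np.ones((1 << len(template), 2))  # one (count0, count1) pair per context
    bits = 0.0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            ctx = template_context(img, y, x, template)
            c0, c1 = counts[ctx]
            p = (c1 if img[y, x] else c0) / (c0 + c1)
            bits -= np.log2(p)
            counts[ctx, int(img[y, x])] += 1
    return bits

# Illustrative 10-pixel causal template (offsets are an assumption, not JBIG's exact template).
TEMPLATE = [(-2, -1), (-2, 0), (-2, 1), (-1, -2), (-1, -1),
            (-1, 0), (-1, 1), (-1, 2), (0, -2), (0, -1)]
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.1).astype(np.uint8)
print(f"estimated code length: {ideal_code_length(img, TEMPLATE):.0f} bits")
```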
data compression conference | 1998
Bo Martins; Søren Forchhammer
Summary form only given. We investigate lossless coding of video using predictive coding and motion compensation. The new coding methods combine state-of-the-art lossless techniques such as JPEG-LS (context-based prediction and bias cancellation, Golomb coding) with high-resolution motion field estimation, 3D predictors, prediction using one or multiple (k) previous images, predictor-dependent error modelling, and selection of the motion field by code length. We treat the problem of precision of the motion field as one of choosing among a number of predictors. This way, we can incorporate 3D predictors and intra-frame predictors as well. As proposed by Ribas-Corbera (see PhD thesis, University of Michigan, 1996), we use bilinear interpolation in order to achieve sub-pixel precision of the motion field. Using more reference images is another way of achieving higher accuracy of the match. The motion information is coded with the same algorithm as is used for the data. For slow pan or slow zoom sequences, coding methods that use multiple previous images perform up to 20% better than motion compensation using a single previous image and up to 40% better than coding that does not utilize motion compensation.
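The selection-by-code-length idea can be sketched as follows: each candidate predictor is scored by an estimate of the bits needed to code its residual, and the cheapest one is chosen. The zero-order entropy cost and the toy candidate predictors below are assumptions for illustration, not the paper's actual error model:

```python
import numpy as np

def residual_cost_bits(residual):
    """Code-length proxy: empirical zero-order entropy of the residual block, in bits."""
    _, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(counts * np.log2(p)))

def choose_predictor(block, candidates):
    """Select the candidate prediction whose residual has the smallest estimated cost."""
    costs = {name: residual_cost_bits(block.astype(int) - pred.astype(int))
             for name, pred in candidates.items()}
    best = min(costs, key=costs.get)
    return best, costs[best]

# Toy data: a current 16x16 block, the co-located block of the previous frame, and a
# stand-in "motion-compensated" block (here simply the previous block shifted one pixel).
rng = np.random.default_rng(1)
cur = rng.integers(0, 256, (16, 16)).astype(np.int32)
prev = np.clip(cur + rng.integers(-3, 4, cur.shape), 0, 255)
candidates = {
    "intra_mean": np.full_like(cur, int(cur.mean())),
    "previous_frame": prev,
    "motion_compensated": np.roll(prev, 1, axis=1),
}
print(choose_predictor(cur, candidates))
```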
IEEE Transactions on Image Processing | 1999
Bo Martins; Søren Forchhammer
We present general and unified algorithms for lossy/lossless coding of bilevel images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general free tree may be used. Loss may be introduced in a preprocess on the encoding side to increase compression. The primary algorithm is a rate-distortion controlled greedy flipping of pixels. Though general, the algorithms are primarily aimed at material containing halftoned images, as a supplement to the specialized soft pattern matching techniques that work better for text. Template-based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bilevel images.
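A heavily simplified sketch of greedy pixel flipping is shown below. The "rate" proxy (disagreement with the 3x3 neighbourhood) and the one-shot gain computation are illustrative assumptions; the paper's algorithm is controlled by an actual rate-distortion criterion under the coding model:

```python
import numpy as np

def flip_gain(img, y, x):
    """Crude rate proxy: how strongly pixel (y, x) disagrees with its 3x3 neighbourhood.
    A pixel in the local minority is assumed to be expensive to code."""
    y0, y1 = max(0, y - 1), min(img.shape[0], y + 2)
    x0, x1 = max(0, x - 1), min(img.shape[1], x + 2)
    neigh = img[y0:y1, x0:x1]
    same = int(np.sum(neigh == img[y, x])) - 1
    total = neigh.size - 1
    return (total - same) - same  # positive when flipping is likely to save bits

def greedy_flip(img, max_flips, gain_threshold=2):
    """Greedy lossy preprocessing: flip the 'most expensive' pixels first, subject to a
    distortion budget (number of flipped pixels). Gains are computed once up front,
    which is a simplification of a true rate-distortion controlled greedy loop."""
    out = img.copy()
    gains = sorted(((flip_gain(out, y, x), y, x)
                    for y in range(out.shape[0]) for x in range(out.shape[1])),
                   reverse=True)
    flips = 0
    for g, y, x in gains:
        if flips >= max_flips or g < gain_threshold:
            break
        out[y, x] ^= 1
        flips += 1
    return out, flips

rng = np.random.default_rng(2)
halftone = (rng.random((32, 32)) < 0.5).astype(np.uint8)
smoothed, n = greedy_flip(halftone, max_flips=50)
print(f"flipped {n} of {halftone.size} pixels")
```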
IEEE Transactions on Circuits and Systems for Video Technology | 2002
Bo Martins; Søren Forchhammer
The quality and spatial resolution of video can be improved by combining multiple pictures to form a single superresolution picture. We address the special problems associated with pictures of variable but somehow parameterized quality such as MPEG-decoded video. Our algorithm provides a unified approach to restoration, chrominance upsampling, deinterlacing, and resolution enhancement. A decoded MPEG-2 sequence for interlaced standard definition television (SDTV) in 4:2:0 is converted to: (1) improved quality interlaced SDTV in 4:2:0; (2) interlaced SDTV in 4:4:4; (3) progressive SDTV in 4:4:4; (4) interlaced high-definition TV (HDTV) in 4:2:0; (5) progressive HDTV in 4:2:0. These conversions also provide features such as freeze frame and zoom. The algorithm is mainly targeted at bit rates of 4-8 Mb/s. The algorithm is based on motion-compensated spatial upsampling from multiple images and decimation to the desired format. The processing involves an estimated quality of individual pixels based on MPEG image type and local quantization value. The mean-squared error (MSE) is reduced, compared to the directly decoded sequence, and annoying ringing artifacts, including mosquito noise, are effectively suppressed. The superresolution pictures obtained by the algorithm are of much higher visual quality and have lower MSE than superresolution pictures obtained by simple spatial interpolation.
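A minimal sketch of the fusion step is given below: several motion-aligned observations of the same frame are combined with per-pixel quality weights. The inverse-variance weights and toy images are assumptions for illustration; the paper derives pixel quality from the MPEG picture type and local quantization value:

```python
import numpy as np

def fuse_observations(obs, weights):
    """Weighted per-pixel fusion of multiple motion-aligned observations of one frame.
    obs, weights: arrays of shape (k, H, W); larger weight = higher estimated pixel quality."""
    obs = np.asarray(obs, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w * obs).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)

# Toy example: three noisy, already motion-aligned versions of a ramp image, with the
# noisier observations given lower weights (inverse variance here, purely illustrative).
rng = np.random.default_rng(3)
truth = np.tile(np.linspace(0, 255, 64), (64, 1))
sigmas = [2.0, 5.0, 10.0]
obs = [truth + rng.normal(0, s, truth.shape) for s in sigmas]
weights = [np.full(truth.shape, 1.0 / s ** 2) for s in sigmas]
fused = fuse_observations(obs, weights)
print("MSE best single:", round(float(((obs[0] - truth) ** 2).mean()), 2),
      "MSE fused:", round(float(((fused - truth) ** 2).mean()), 2))
```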
Journal of Electronic Imaging | 2000
Bo Martins; Søren Forchhammer
The emerging international standard for compression of bilevel images and bilevel documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bilevel image into gray scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image. Then for each gray-scale pixel the decoder looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method for halftones created by periodic ordered dithering, by clustered-dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion. Besides descreening and construction of the dictionary, we address graceful degradation and artifact removal.
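The decoder side of the halftone mode described above can be sketched as follows; the pattern dictionary here is a hypothetical ordered-dither dictionary built for illustration, not a dictionary produced by the paper's descreening procedure:

```python
import numpy as np

def build_pattern_dictionary(cell=4):
    """Hypothetical pattern dictionary: gray level g (0..cell*cell) maps to a cell x cell
    pattern with g black pixels, filled in a fixed order (real screens use other orders)."""
    fill_order = np.arange(cell * cell)
    patterns = []
    for g in range(cell * cell + 1):
        pat = np.zeros(cell * cell, dtype=np.uint8)
        pat[fill_order[:g]] = 1
        patterns.append(pat.reshape(cell, cell))
    return patterns

def reconstruct_halftone(gray, patterns, cell=4):
    """Decoder-side reconstruction: for each gray-scale pixel, look up its pattern in the
    dictionary and place it at the corresponding position in the output bitmap."""
    h, w = gray.shape
    out = np.zeros((h * cell, w * cell), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            out[y * cell:(y + 1) * cell, x * cell:(x + 1) * cell] = patterns[gray[y, x]]
    return out

patterns = build_pattern_dictionary(cell=4)
gray = np.random.default_rng(4).integers(0, 17, (8, 8))  # gray levels 0..16
print(reconstruct_halftone(gray, patterns).shape)  # (32, 32)
```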
international ultrasonics symposium | 2016
Ramin Moshavegh; Bo Martins; Kristoffer Lindskov Hansen; Thor Bechsgaard; Michael Bachmann Nielsen; Jørgen Arendt Jensen
Vector Flow Imaging (VFI) has received increasing attention in the scientific field of ultrasound, as it enables angle-independent visualization of blood flow. VFI can be used in volume flow estimation, but a vessel segmentation is needed to make it fully automatic. A novel vessel segmentation procedure is crucial for wall-to-wall visualization, automation of adjustments, and quantification of flow in state-of-the-art ultrasound scanners. We propose and discuss a method for accurate vessel segmentation that fuses VFI data and B-mode for robustly detecting and delineating vessels. The proposed method implements automated VFI flow measures such as peak systolic velocity (PSV) and volume flow. An evaluation of the performance of the segmentation algorithm relative to expert manual segmentation of 60 frames randomly chosen from 6 ultrasound sequences (10 frames randomly chosen from each sequence) is also presented. The Dice coefficient, denoting the similarity between segmentations, is used for the evaluation. The coefficient ranges between 0 and 1, where 1 indicates perfect agreement and 0 indicates no agreement. The Dice coefficient was 0.91, indicating very good agreement between automated and manual expert segmentations. The flow-rig results also demonstrated that the PSVs measured from VFI had a mean relative error of 14.5% in comparison with the actual PSVs. The error for the PSVs measured from spectral Doppler was 29.5%, indicating that VFI is 15% more precise than spectral Doppler in PSV measurement.
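The Dice coefficient used for the evaluation is straightforward to compute; a minimal sketch with toy masks (not the study's data) is shown below:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy masks standing in for an automatic vessel segmentation and a manual expert one.
auto = np.zeros((100, 100), dtype=bool)
auto[30:70, 20:80] = True
manual = np.zeros((100, 100), dtype=bool)
manual[32:72, 22:82] = True
print(f"Dice: {dice_coefficient(auto, manual):.3f}")
```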
international conference on image processing | 2000
Bo Martins; Søren Forchhammer
The quality and the spatial resolution of video can be improved by combining multiple pictures to form a single superresolution picture. We address the special problems associated with pictures of variable but somehow parameterized quality, such as MPEG-decoded video. Our algorithm provides a unified approach to restoration, chrominance upsampling, deinterlacing, and superresolution to, e.g., HDTV. The algorithm is mainly targeted at improving MPEG-2 decoding at high bit rates (4-8 Mbit/s). The mean squared error is reduced, compared to the directly decoded sequence, and annoying ringing artifacts, including mosquito noise, are effectively suppressed. The superresolution pictures obtained by the algorithm are of much higher visual quality and have lower mean squared error than superresolution pictures obtained by simple spatial interpolation.
international ultrasonics symposium | 2015
Ramin Moshavegh; Martin Christian Hemmsen; Bo Martins; Kristoffer Lindskov Hansen; Caroline Ewertsen; Andreas Hjelm Brandt; Thor Bechsgaard; Michael Bachmann Nielsen; Jørgen Arendt Jensen
Automatic gain adjustment is necessary on state-of-the-art ultrasound scanners to obtain optimal scan quality while reducing unnecessary user interaction with the scanner. However, when large anechoic regions exist in the scan plane, the sudden and drastic variation of attenuation in the scanned media complicates the gain compensation. This paper presents an advanced and automated gain adjustment method that precisely compensates the gain of scans and dynamically adapts to the drastic attenuation variations between different media. The proposed algorithm makes use of several ultrasonic physical estimates, such as scattering strength, focus gain, acoustic attenuation, and noise level, to gain a more quantitative understanding of the scanned media and to provide an intuitive adjustment of gains on the scan. The proposed algorithm was applied to a set of 45 in vivo movie sequences, each containing 50 frames. The scans were acquired using a recently commercialized BK3000 ultrasound scanner (BK Ultrasound, Denmark). Matching pairs of in vivo sequences, unprocessed and processed with the proposed method, were visualized side by side and evaluated by 4 radiologists for image quality. A Wilcoxon signed-rank test was then applied to the ratings provided by the radiologists. The average visual analogue scale (VAS) score was highly positive, 12.16 (p-value: 2.09 × 10⁻²³), favoring the scans gain-adjusted with the proposed algorithm.
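As an illustration of the statistical evaluation described above, a minimal sketch of a Wilcoxon signed-rank test on paired VAS differences using SciPy could look as follows; the numbers are synthetic stand-ins, not the study's ratings:

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired differences: each value stands in for a radiologist's VAS difference
# (processed minus unprocessed) for one sequence; positive values favour the processed scan.
rng = np.random.default_rng(5)
vas_diff = rng.normal(loc=12.0, scale=20.0, size=45 * 4)  # 45 sequences x 4 readers (toy data)

stat, p_value = wilcoxon(vas_diff)  # tests whether the median paired difference is zero
print(f"mean VAS difference: {vas_diff.mean():.2f}, Wilcoxon p-value: {p_value:.3g}")
```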
Proceedings of SPIE | 2015
Ramin Moshavegh; Martin Christian Hemmsen; Bo Martins; Andreas Hjelm Brandt; Kristoffer Lindskov Hansen; Michael Bachmann Nielsen; Jørgen Arendt Jensen
Time gain compensation (TGC) is essential to ensure optimal image quality of clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution changes drastically and TGC compensation becomes challenging. This paper presents an automated hierarchical TGC (AHTGC) algorithm that accurately adapts to the large attenuation variation between different types of tissues and structures. The algorithm relies on estimates of tissue attenuation, scattering strength, and noise level to gain a more quantitative understanding of the underlying tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences, each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC, were visualized side by side and evaluated by two radiologists in terms of image quality. A Wilcoxon signed-rank test was used to evaluate whether the radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) score is positive (p-value: 2.34 × 10⁻¹³) and estimated to be 1.01 (95% CI: 0.85; 1.16), favoring the data processed with the proposed AHTGC algorithm.
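For context on what such an algorithm adapts, the sketch below shows only the classical fixed-attenuation TGC curve: a depth-dependent gain compensating round-trip attenuation under an assumed uniform attenuation coefficient (0.5 dB/cm/MHz) and centre frequency (5 MHz), both illustrative values rather than the paper's estimates:

```python
import numpy as np

def tgc_gain(depth_m, alpha_db_cm_mhz=0.5, f_mhz=5.0):
    """Basic depth-dependent TGC curve compensating round-trip attenuation, assuming a
    uniform attenuation coefficient and a single centre frequency.
    Returns the amplitude gain to apply at each depth."""
    depth_cm = np.asarray(depth_m, dtype=float) * 100.0
    atten_db = 2.0 * alpha_db_cm_mhz * f_mhz * depth_cm  # two-way attenuation in dB
    return 10.0 ** (atten_db / 20.0)

depths = np.linspace(0.0, 0.12, 7)  # 0 to 12 cm
for d, g in zip(depths, tgc_gain(depths)):
    print(f"{d * 100:5.1f} cm -> gain {20.0 * np.log10(g):5.1f} dB")
```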