Peter E. Tischer
Monash University
Publications
Featured research published by Peter E. Tischer.
The Computer Journal | 1993
Peter E. Tischer; Roderick T. Worley; Anthony J. Maeder; M. Goodwin
The two-dimensional method of Langdon and Rissanen for the compression of black-and-white images is extended to handle exact lossless compression of grey-scale images. Neighbouring pixel values are used to define contexts, and the probabilities associated with these contexts are used to compress the image. The problem of restricting the number of contexts, both to limit storage requirements and to obtain sufficient data to generate meaningful probabilities, is addressed. Investigations on a variety of images are carried out using the JPEG lossless mode predictors.
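The context-modelling idea above can be sketched in Python as follows. The gradient-based context definition, the bucket count, and the choice of JPEG predictor 7 are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def jpeg_predictor7(img, y, x):
    # JPEG lossless predictor 7: (W + N) / 2, with zero fallback at edges
    w = int(img[y, x - 1]) if x > 0 else 0
    n = int(img[y - 1, x]) if y > 0 else 0
    return (w + n) // 2

def context_model(img, n_contexts=8):
    """Quantize the local gradient |W - N| into a small number of
    buckets and collect per-context prediction-error counts, which an
    arithmetic coder could turn into probabilities."""
    counts = {c: {} for c in range(n_contexts)}
    h, w_ = img.shape
    for y in range(h):
        for x in range(w_):
            wv = int(img[y, x - 1]) if x > 0 else 0
            nv = int(img[y - 1, x]) if y > 0 else 0
            ctx = min(abs(wv - nv) * n_contexts // 256, n_contexts - 1)
            err = int(img[y, x]) - jpeg_predictor7(img, y, x)
            counts[ctx][err] = counts[ctx].get(err, 0) + 1
    return counts
```

Keeping the number of contexts small, as the abstract notes, ensures each context accumulates enough samples to yield meaningful probabilities.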
Data Compression Conference | 1997
Torsten Seemann; Peter E. Tischer
Summary form only given. In differential pulse code modulation (DPCM) we make a prediction f̂ = Σ a(i)f(i) of the next pixel using a linear combination of neighbouring pixels f(i). It is possible to keep the coefficients a(i) constant over a whole image, but better results can be obtained by adapting the a(i) to the local image behaviour as the image is encoded. One difficulty with present schemes is that they can only produce predictors with positive a(i). This is desirable in the presence of noise, but in regions where the intensity varies smoothly, we require at least one negative coefficient to properly estimate a gradient. However, if we consider the four neighbouring pixels as four local sub-predictors W, N, NW and NE, and take as the gradient measure the sum of absolute prediction errors of those sub-predictors within the local neighbourhood, then we can use any sub-predictors we choose, even nonlinear ones. In our experiments, we chose to use three additional linear predictors suited to smooth regions, each having one negative coefficient. Results were computed for three versions of the standard JPEG test set and some 12 bpp medical images.
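The sub-predictor idea above can be sketched as follows. The planar predictor with one negative coefficient and the select-by-minimum-error rule are illustrative assumptions; the paper's three additional smooth-region predictors are not specified here:

```python
import numpy as np

# Four neighbouring pixels as sub-predictors (W, N, NW, NE), plus one
# illustrative planar predictor with a negative coefficient for smooth
# regions (an assumption, not one of the paper's actual predictors).
SUBPREDICTORS = {
    "W":  lambda im, y, x: int(im[y, x - 1]),
    "N":  lambda im, y, x: int(im[y - 1, x]),
    "NW": lambda im, y, x: int(im[y - 1, x - 1]),
    "NE": lambda im, y, x: int(im[y - 1, x + 1]),
    "planar": lambda im, y, x: int(im[y, x - 1]) + int(im[y - 1, x]) - int(im[y - 1, x - 1]),
}

def predict(im, y, x):
    """Select the sub-predictor with the smallest sum of absolute
    prediction errors over a small causal neighbourhood, then apply it
    at (y, x). Valid for interior pixels only (y >= 2 and
    2 <= x <= width - 3), so every lookup stays in bounds."""
    causal = [(y, x - 1), (y - 1, x), (y - 1, x - 1), (y - 1, x + 1)]
    best_name, best_err = None, None
    for name, p in SUBPREDICTORS.items():
        err = sum(abs(int(im[cy, cx]) - p(im, cy, cx)) for cy, cx in causal)
        if best_err is None or err < best_err:
            best_name, best_err = name, err
    return SUBPREDICTORS[best_name](im, y, x)
```

On a smoothly varying region the planar predictor wins the error comparison and contributes its negative coefficient, which a positive-weights-only scheme could not express.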
Data Compression Conference | 1998
Bernd Meyer; Peter E. Tischer
We present a general purpose lossless greyscale image compression method, TMW, that is based on the use of linear predictors and implicit segmentation. We then proceed to extend the presented methods to cover near-lossless image compression. In order to achieve competitive compression, the compression process is split into an analysis step and a coding step. In the first step, a set of linear predictors and other parameters suitable for the image is calculated, which is included in the compressed file and subsequently used for the coding step. This adaptation allows TMW to perform well over a very wide range of image types. Other significant features of TMW are the use of a one-parameter probability distribution, probability calculations based on unquantized prediction values, blending of multiple probability distributions instead of prediction values, and implicit image segmentation. For lossless image compression, the method has been compared to CALIC on a selection of test images, and typically outperforms it by between 2 and 10 percent. For near-lossless image compression, the method has been compared to LOCO (Weinberger et al. 1996). Especially for larger allowed deviations from the original image, the proposed method can significantly outperform LOCO. In both cases the improvement in compression is achieved at the cost of considerably higher computational complexity.
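Blending probability distributions rather than prediction values can be sketched as follows. The Laplacian components and the common scale b are assumptions standing in for TMW's one-parameter distribution; the weights would come from the analysis step:

```python
import numpy as np

def blended_pixel_probability(x, predictions, weights, b=2.0):
    """Each predictor contributes a probability distribution centred on
    its (unquantized) prediction; the coder uses the weighted mixture.
    This sketch uses Laplacian densities with a shared scale b."""
    preds = np.asarray(predictions, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalise mixture weights
    dens = np.exp(-np.abs(x - preds) / b) / (2.0 * b)  # Laplacian density per predictor
    return float(w @ dens)
```

Mixing densities instead of averaging predictions lets the model remain multi-modal near edges, where competing predictors disagree.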
International Conference on Pattern Recognition | 1998
Torsten Seemann; Peter E. Tischer
The trend in modern image noise filtering algorithms has been toward structure preservation by using only those neighbouring pixels which are similar to the current pixel in some way. In this paper we introduce a technique called FUELS (filtering using explicit local segmentation), which explicitly segments the m × m region encompassing the current pixel and filters using only those pixels from the same segment. By exploiting mask overlap, an effective mask size of (2m−1) × (2m−1) is obtained, as well as robustness in regions which do not fit the image model. The algorithm can be iterated, and our results show FUELS to outperform existing algorithms both quantitatively and qualitatively.
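The explicit-local-segmentation idea can be sketched as follows. This is not the authors' exact FUELS algorithm: the two-segment split by thresholding at the window mean is an illustrative assumption:

```python
import numpy as np

def fuels_like_filter(img, m=3):
    """Filter each interior pixel using only pixels from its own local
    segment: the m x m window is split into two segments by thresholding
    at the window mean, and the centre pixel is replaced by the mean of
    the segment it belongs to. Border pixels are left unchanged."""
    pad = m // 2
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(pad, h - pad):
        for x in range(pad, w - pad):
            win = img[y - pad:y + pad + 1, x - pad:x + pad + 1].astype(float)
            thresh = win.mean()
            # pick the segment containing the centre pixel
            same = win >= thresh if img[y, x] >= thresh else win < thresh
            out[y, x] = win[same].mean()
    return out
```

Because pixels on the other side of an edge land in the other segment, the edge is not blurred, which is the structure-preservation property the abstract describes.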
International Symposium on Intelligent Signal Processing and Communication Systems | 2004
Mieng Quoc Phu; Peter E. Tischer; Hong Wu
In this paper, state-of-the-art interpolation algorithms for deinterlacing within a single frame are investigated. Based on Chen's directional measurements and on the median filter, a novel interpolation algorithm for deinterlacing is proposed. By efficiently estimating the diagonal and vertical directional correlations of the neighbouring pixels, the proposed method performs better than existing techniques on different images and video sequences, for both subjective and objective measurements. Additionally, the proposed method has a simple structure with low computational complexity, which makes it simple to implement in hardware.
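The directional-correlation idea underlying this family of deinterlacers can be sketched with classic edge-based line averaging (ELA); this is a baseline the paper builds on, not the proposed method itself:

```python
def ela_deinterlace_pixel(above, below, x):
    """Interpolate a pixel of a missing line from the lines above and
    below: try the left diagonal, vertical, and right diagonal
    directions, and average along the direction with the smallest
    absolute difference (i.e. the strongest correlation)."""
    candidates = []
    for d in (-1, 0, 1):  # left diagonal, vertical, right diagonal
        if 0 <= x - d < len(above) and 0 <= x + d < len(below):
            a, b = int(above[x - d]), int(below[x + d])
            candidates.append((abs(a - b), (a + b) // 2))
    return min(candidates)[1]
```

The proposed method refines this kind of direction estimate and combines it with a median filter for robustness; only the common directional skeleton is shown here.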
European Conference on Machine Learning | 2008
Rocío Alaiz-Rodríguez; Nathalie Japkowicz; Peter E. Tischer
Classifier performance evaluation typically gives rise to a multitude of results that are difficult to interpret. On the one hand, a variety of different performance metrics can be applied, each adding a little more information about the classifiers than the others; on the other hand, evaluation must be conducted on multiple domains to get a clear view of the classifiers' general behaviour.
Australasian Joint Conference on Artificial Intelligence | 2009
Ben Goodrich; David W. Albrecht; Peter E. Tischer
Geometric interpretations of Support Vector Machines (SVMs) have introduced the concept of a reduced convex hull. A reduced convex hull is the set of all convex combinations of a set of points where the weight any single point can be assigned is bounded from above by a constant. This paper decouples reduced convex hulls from their origins in SVMs and allows them to be constructed independently. Two algorithms for the computation of reduced convex hulls are presented: a simple recursive algorithm for points in the plane, and an algorithm for points in an arbitrary-dimensional space. Upper bounds on the number of vertices and facets in a reduced convex hull are used to analyze the worst-case complexity of the algorithms.
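A basic operation on reduced convex hulls can be sketched directly from the definition: the extreme point in a given direction is found by a fractional-knapsack greedy that gives the weight cap μ to the points with the largest projections. This is a standard construction consistent with the definition above, not necessarily one of the paper's two algorithms:

```python
import numpy as np

def rch_support_point(points, u, mu):
    """Extreme point of the reduced convex hull {sum w_i p_i :
    sum w_i = 1, 0 <= w_i <= mu} in direction u: give weight mu to the
    floor(1/mu) points with the largest projection onto u, and the
    leftover mass to the next-best point."""
    points = np.asarray(points, dtype=float)
    assert 0 < mu <= 1 and mu * len(points) >= 1
    order = np.argsort(-(points @ u))   # best projections first
    k = int(np.floor(1.0 / mu))         # points that receive the full cap
    w = np.zeros(len(points))
    w[order[:k]] = mu
    if k < len(points):
        w[order[k]] = 1.0 - k * mu      # remaining weight, possibly zero
    return w @ points
```

With μ = 1 this reduces to the ordinary convex hull (the farthest single point); as μ shrinks toward 1/n the hull contracts toward the centroid, which is the geometric effect exploited in soft-margin SVMs.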
International Conference on Tools with Artificial Intelligence | 2008
Rocío Alaiz-Rodríguez; Nathalie Japkowicz; Peter E. Tischer
Classifier performance evaluation typically gives rise to vast numbers of results that are difficult to interpret. On the one hand, a variety of different performance metrics can be applied; and on the other hand, evaluation must be conducted on multiple domains to get a clear view of the classifiers' general behaviour. In this paper, we present a visualization technique that allows a user to study the results from a domain point of view and from a classifier point of view. We argue that classifier evaluation should be done on an exploratory basis. In particular, we suggest that, rather than pre-selecting a few metrics and domains to conduct our evaluation on, we should use as many metrics and domains as possible and mine the results of this study to draw valid and relevant knowledge about the behaviour of our algorithms. The technique presented in this paper will enable such a process.
European Conference on Machine Learning | 2008
Nathalie Japkowicz; Pritika Sanghi; Peter E. Tischer
In this paper, we propose approaching the problem of classifier evaluation in terms of a projection from a high-dimensional space to a visualizable two-dimensional one. Rather than collapsing confusion matrices into a single measure the way traditional evaluation methods do, we consider the vector composed of the entries of the confusion matrix (or the confusion matrices in case several domains are considered simultaneously) as the performance evaluation vector, and project it into a two-dimensional space using a recently proposed distance-preserving projection method. This approach is shown to be particularly useful in the case of comparison of several classifiers on many domains as well as in the case of multiclass classification. Furthermore, by providing simultaneous multiple views of the same evaluation data, it allows for a quick and accurate assessment of classifier performance.
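The projection step can be sketched with classical multidimensional scaling, used here purely as a stand-in for the distance-preserving method the paper actually employs:

```python
import numpy as np

def mds_2d(vectors):
    """Project performance-evaluation vectors (flattened confusion
    matrices) to 2-D while preserving pairwise Euclidean distances as
    well as possible. For Euclidean distances, classical MDS reduces to
    projecting the centred data onto its top two singular directions."""
    X = np.asarray(vectors, dtype=float)
    X = X - X.mean(axis=0)                       # centre the data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :2] * S[:2]                      # 2-D coordinates
```

Classifiers whose confusion matrices are similar land close together in the plane, so clusters and outliers among classifiers (or domains) become visible at a glance.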
Annual ACIS International Conference on Computer and Information Science | 2007
Mieng Quoc Phu; Peter E. Tischer; Hong Ren Wu
In the area of color image restoration, many state-of-the-art filters consist of two main processes, classification and reconstruction. Classification is used to separate clean pixels from corrupted pixels. Reconstruction involves using values from clean pixels to interpolate values for pixels believed to have been corrupted. In this paper, two statistical analyses are carried out to determine how salt-and-pepper and random impulse noise behave in images. By computing the cluster count and probability of occurrence in a database of 1000 single-color noisy images for each noise model, we obtain results that can benefit the classification and reconstruction processes in color image filters.
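The classification/reconstruction split described above can be sketched for the simplest noise model, salt-and-pepper, on a single channel. The extreme-value detector and 3×3 median reconstruction are generic textbook choices, not the specific filters the paper analyses:

```python
import numpy as np

def restore_salt_and_pepper(img, lo=0, hi=255):
    """Stage 1 (classification): flag pixels at the extreme values lo/hi
    as corrupted. Stage 2 (reconstruction): replace each flagged pixel
    with the median of the clean pixels in its 3x3 neighbourhood."""
    corrupted = (img == lo) | (img == hi)
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if corrupted[y, x]:
                ys = slice(max(y - 1, 0), y + 2)
                xs = slice(max(x - 1, 0), x + 2)
                clean = img[ys, xs][~corrupted[ys, xs]]
                if clean.size:                    # leave pixel if no clean neighbour
                    out[y, x] = np.median(clean)
    return out
```

The paper's statistics on cluster counts matter precisely here: when corrupted pixels arrive in clusters, a 3×3 window may contain too few clean neighbours, and the reconstruction stage must be adapted accordingly.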