Jason E. Fritts
University of Washington
Publications
Featured research published by Jason E. Fritts.
Computer Vision and Image Understanding | 2008
Hui Zhang; Jason E. Fritts; Sally A. Goldman
Image segmentation is an important processing step in many image, video and computer vision applications. Extensive research has been done in creating many different approaches and algorithms for image segmentation, but it is still difficult to assess whether one algorithm produces more accurate segmentations than another, whether it be for a particular image or set of images, or more generally, for a whole class of images. To date, the most common method for evaluating the effectiveness of a segmentation method is subjective evaluation, in which a human visually compares the image segmentation results for separate segmentation algorithms. This is a tedious process that inherently limits the depth of evaluation to a relatively small number of segmentation comparisons over a predetermined set of images. Another common evaluation alternative is supervised evaluation, in which a segmented image is compared against a manually-segmented or pre-processed reference image. Evaluation methods that require user assistance, such as subjective evaluation and supervised evaluation, are infeasible in many vision applications, so unsupervised methods are necessary. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and images whose characteristics may not be known until evaluation time. Unsupervised methods are crucial to real-time segmentation evaluation, and can furthermore enable self-tuning of algorithm parameters based on evaluation results. In this paper, we examine the unsupervised objective evaluation methods that have been proposed in the literature. An extensive evaluation of these methods is presented. The advantages and shortcomings of the underlying design mechanisms in these methods are discussed and analyzed through both analytical and empirical evaluation. Finally, possible future directions for research in unsupervised evaluation are proposed.
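As a concrete illustration of the kind of unsupervised criterion discussed above, the sketch below scores a segmentation by the size-weighted luminance variance inside each region, with no reference image required. It is a generic example for illustration only, not one of the specific measures surveyed in the paper, and the function and array names are assumptions.

```python
import numpy as np

def intra_region_variance(gray, labels):
    """Unsupervised segmentation score: size-weighted variance of the
    luminance values inside each region (lower = more uniform regions).
    No ground-truth reference image is needed, so the score can be
    computed at evaluation time for any image."""
    score = 0.0
    for region in np.unique(labels):
        values = gray[labels == region]        # pixels of one region
        score += values.size * values.var()    # weight by region size
    return score / gray.size
```

Because such a score needs no human input, it can be recomputed as segmentation parameters change, which is what makes the self-tuning use case mentioned above possible.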
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008
Rouhollah Rahmani; Sally A. Goldman; Hui Zhang; Sharath R. Cholleti; Jason E. Fritts
We define localized content-based image retrieval as a CBIR task where the user is only interested in a portion of the image, and the rest of the image is irrelevant. In this paper we present a localized CBIR system, Accio, that uses labeled images in conjunction with a multiple-instance learning algorithm to first identify the desired object and weight the features accordingly, and then to rank images in the database using a similarity measure that is based upon only the relevant portions of the image. A challenge for localized CBIR is how to represent the image to capture the content. We present and compare two novel image representations, which extend traditional segmentation-based and salient point-based techniques respectively, to capture content in a localized CBIR setting.
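A minimal sketch of the ranking step described above, assuming the multiple-instance learner has already produced a target feature vector and per-feature weights. The names database, target, and weights are illustrative, and this is not the actual Accio implementation.

```python
import numpy as np

def rank_images(database, target, weights):
    """Rank images by how close their most relevant region is to the
    learned target concept. `database` maps image ids to arrays of
    per-region feature vectors; `target` and `weights` would come from
    the multiple-instance learner."""
    def score(regions):
        # weighted distance of the best-matching region only, so that
        # irrelevant parts of the image do not dilute the score
        d = np.sqrt(((regions - target) ** 2 * weights).sum(axis=1))
        return d.min()
    return sorted(database, key=lambda img_id: score(database[img_id]))
```

Scoring only the best-matching region is what distinguishes localized CBIR from global similarity measures, which would average over the entire image.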
Electronic Imaging | 2003
Hui Zhang; Jason E. Fritts; Sally A. Goldman
Accurate image segmentation is important for many image, video and computer vision applications. Over the last few decades, many image segmentation methods have been proposed. However, the results of these segmentation methods are usually evaluated only visually, qualitatively, or indirectly by the effectiveness of the segmentation on the subsequent processing steps. Such methods are either subjective or tied to particular applications. They do not judge the performance of a segmentation method objectively, and cannot be used as a means to compare the performance of different segmentation techniques. A few quantitative evaluation methods have been proposed, but these early methods have been based entirely on empirical analysis and have no theoretical grounding. In this paper, we propose a novel objective segmentation evaluation method based on information theory. The new method uses entropy as the basis for measuring the uniformity of pixel characteristics (luminance is used in this paper) within a segmentation region. The evaluation method provides a relative quality score that can be used to compare different segmentations of the same image. This method can be used to compare both various parameterizations of one particular segmentation method as well as fundamentally different segmentation techniques. The results from this preliminary study indicate that the proposed evaluation method is superior to the prior quantitative segmentation evaluation techniques, and identify areas for future research in objective segmentation evaluation.
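To make the entropy idea concrete, here is a minimal sketch that computes the size-weighted average luminance entropy over the regions of a segmentation; lower values indicate more uniform regions. It illustrates the within-region uniformity idea only and is not the exact score proposed in the paper; the function names are assumptions.

```python
import numpy as np

def region_entropy(values):
    """Shannon entropy (bits) of the luminance histogram within one region."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def expected_region_entropy(gray, labels):
    """Size-weighted average of per-region luminance entropy: a measure of
    how uniform the pixel characteristics are within each region."""
    total = 0.0
    for region in np.unique(labels):
        values = gray[labels == region]
        total += (values.size / gray.size) * region_entropy(values)
    return total
```

A score of this form can compare different segmentations of the same image, including different parameterizations of a single segmentation method, without any reference segmentation.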
Multimedia Information Retrieval | 2005
Rouhollah Rahmani; Sally A. Goldman; Hui Zhang; John Krettek; Jason E. Fritts
We define localized content-based image retrieval as a CBIR task where the user is only interested in a portion of the image, and the rest of the image is irrelevant. In this paper we present a localized CBIR system, ACCIO, that uses labeled images in conjunction with a multiple-instance learning algorithm to first identify the desired object and weight the features accordingly, and then to rank images in the database using a similarity measure that is based upon only the relevant portions of the image. A challenge for localized CBIR is how to represent the image to capture the content. We present and compare two novel image representations, which extend traditional segmentation-based and salient point-based techniques respectively, to capture content in a localized CBIR setting.
International Parallel Processing Symposium | 1997
Angelos Bilas; Jason E. Fritts; Jaswinder Pal Singh
The growing demand for high-quality compressed video has led to an increasing need for real-time MPEG decoding at greater resolutions and picture sizes. With the widespread availability of small-scale multiprocessors, a parallel software implementation may provide an effective solution to the decoding problem. We present a parallel decoder for the MPEG standard, implemented on a shared-memory multiprocessor. The goal of this work is to provide an all-software solution for real-time, high-quality video decoding and to investigate the important properties of this application as they pertain to multiprocessor systems. Both coarse-grained and fine-grained implementations are considered for parallelizing the decoder. The coarse-grained approach exploits parallelism at the group-of-pictures level, while the fine-grained approach parallelizes within pictures, at the slice level. A comparative evaluation of these methods is made, with results presented in terms of speedup, memory requirements, load balance, synchronization time, and temporal and spatial locality. Both methods demonstrate very good speedups and locality properties.
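The coarse-grained approach can be sketched at a high level as follows: because each group of pictures (GOP) is independently decodable, GOPs can simply be farmed out to parallel workers. This is an illustrative Python sketch of the idea only; the decoder in the paper is a native shared-memory implementation, and decode_gop is a hypothetical placeholder.

```python
from concurrent.futures import ProcessPoolExecutor

def decode_gop(gop_bytes):
    """Hypothetical placeholder for decoding one self-contained group of
    pictures; a real decoder would return the decoded frames."""
    return len(gop_bytes)

def parallel_decode(gops, workers=4):
    """Coarse-grained parallelism: each GOP is handed to a separate worker,
    so no synchronization is needed within a picture."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_gop, gops))
```

Fine-grained (slice-level) parallelism would instead distribute the slices of a single picture across workers, typically at the cost of more frequent synchronization between them.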
International Conference on Computer Design | 1998
Kemal Ebcioglu; Jason E. Fritts; Stephen V. Kosonocky; Michael Karl Gschwind; Erik R. Altman; Krishnan K. Kailas; Terry Bright
We present an 8-issue tree-VLIW processor designed to efficiently support dynamic binary translation. This processor addresses two primary problems faced by VLIW architectures: binary compatibility and branch performance. Binary compatibility with existing architectures is achieved through dynamic binary translation, which translates and schedules PowerPC instructions to take advantage of the available instruction-level parallelism. Efficient branch performance is achieved through tree instructions that support multi-way path and branch selection within a single VLIW instruction. The processor architecture is described, along with design details of the branch unit, pipeline, register file, and memory hierarchy for a 0.25-micron standard-cell design. Performance simulations show that the simplicity of a VLIW architecture allows a wide-issue processor to operate at high frequencies.
Electronic Imaging | 2005
Jason E. Fritts; Frederick W. Steiling; Joseph Tucek
The first step towards the design of video processors and video systems is to achieve an accurate understanding of the major video applications, including not only the fundamentals of the many video compression standards, but also the workload characteristics of those applications. Introduced in 1997, the MediaBench benchmark suite provided the first set of full application-level benchmarks for studying video processing characteristics, and has consequently enabled significant computer architecture and compiler research for multimedia systems. To expedite the next generation of systems research, the MediaBench Consortium is developing the MediaBench II benchmark suite, incorporating benchmarks from the latest multimedia technologies and providing both a single composite benchmark suite and separate benchmark suites for each area of multimedia. In the area of video, MediaBench II Video includes both the popular mainstream video compression standards, such as Motion-JPEG, H.263, and MPEG-2, and the more recent next-generation standards, including MPEG-4, Motion-JPEG2000, and H.264. This paper introduces MediaBench II Video and provides a comprehensive workload evaluation of its major processing characteristics.
Computer Analysis of Images and Patterns | 2007
David Letscher; Jason E. Fritts
This paper presents a new hybrid split-and-merge image segmentation method based on computational geometry and topology, using persistent homology. The algorithm uses edge-directed topology to initially split the image into a set of regions based on the Delaunay triangulation of the points in the edge map. Persistent homology is used to generate three types of regions: p-persistent regions, p-transient regions, and d-triangles. The p-persistent regions correspond to core objects in the image, while p-transient regions and d-triangles are smaller regions that may be combined in the merge phase, either with p-persistent regions to refine the core objects or with other p-transient regions and d-triangles to potentially form new core objects. Performing image segmentation based on topology and persistent homology guarantees several nice properties, and initial results demonstrate high-quality image segmentation.
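The split phase starts from a Delaunay triangulation of the edge-map points, which is straightforward to sketch; the persistent-homology classification into p-persistent regions, p-transient regions, and d-triangles is the substance of the paper and is not reproduced here. Names and the use of SciPy are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def split_phase(edge_points):
    """Split phase only: triangulate the edge-map points so that each
    Delaunay triangle becomes a candidate region for the later merge phase.
    `edge_points` is an (n, 2) array of (x, y) edge coordinates."""
    tri = Delaunay(np.asarray(edge_points, dtype=float))
    # tri.simplices has shape (n_triangles, 3): indices into edge_points
    return tri.simplices
```

In the full method, persistence values computed over these triangles would then decide which candidate regions survive as core objects and which are merged away.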
International Conference on Multimedia and Expo | 2002
Jason E. Fritts
This paper presents a multi-level memory prefetch hierarchy for media and stream processing applications. Two major bottlenecks in the performance of multimedia and network applications are long memory latencies and limited off-chip processor bandwidth. Aggressive prefetching can be used to mitigate the memory latency problem, but overly aggressive prefetching may overload the limited external processor bandwidth. To address both problems, we propose multi-level memory prefetching. The multi-level organization enables conservative prefetching on-chip and more aggressive prefetching off-chip. The combination provides aggressive prefetching while minimally impacting off-chip bandwidth, enabling more efficient memory performance for media and stream processing. This paper presents preliminary results for multi-level memory prefetching, which show that combining prefetching at the L1 and DRAM memory levels provides the most effective prefetching with minimal extra bandwidth.
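A toy model of the two-level idea is sketched below: a conservative next-block prefetch on the on-chip side and a more aggressive multi-block prefetch on the DRAM side, so that the aggressive traffic stays off the limited processor bus. All structures and parameters here are assumptions for illustration, not the organization evaluated in the paper.

```python
def simulate(accesses, l1_degree=1, dram_degree=4, block=64):
    """Toy two-level prefetch model (no eviction modeled): on an L1 miss,
    prefetch a small number of following blocks on-chip; on a DRAM-side
    miss, prefetch a longer run of blocks into a DRAM-side buffer."""
    l1, dram_buf = set(), set()
    l1_hits = buf_hits = misses = 0
    for addr in accesses:
        blk = addr // block
        if blk in l1:
            l1_hits += 1
        elif blk in dram_buf:
            buf_hits += 1                                   # served off-chip buffer
            l1.update(blk + i for i in range(l1_degree + 1))
        else:
            misses += 1                                     # full memory latency
            l1.update(blk + i for i in range(l1_degree + 1))
            dram_buf.update(blk + i for i in range(dram_degree + 1))
    return l1_hits, buf_hits, misses
```

The point of the split is visible even in this toy: the large prefetch runs live on the DRAM side, so they do not consume the off-chip bandwidth that the processor needs for demand fetches.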
IEEE Transactions on Circuits and Systems for Video Technology | 2006
Wei Yu; Fangting Sun; Jason E. Fritts
In this paper, we present two new methods for efficient rate control and entropy coding in lossy image compression using JPEG-2000. These two methods enable significant improvements in computation complexity and power consumption over the traditional JPEG-2000 algorithms. First, we propose a greedy heap-based rate-control algorithm (GHRaC), which achieves efficient post-compression rate control by implementing a greedy marginal analysis method using the heap sort algorithm. Second, we propose an integrated rate-control and entropy-coding (IREC) algorithm that reduces the computation complexity of entropy coding by selectively entropy coding only the image data that is likely to be included in the final bitstream, as opposed to entropy coding all image data. Together, these two methods enable significant savings in computation time and power consumption. For example, the GHRaC method demonstrates a 16× speedup for rate control when encoding the Lena color image using a target compression ratio of 128:1, one quality layer, and code blocks of 32 × 32 pixels. The IREC method expands upon GHRaC to perform entropy coding in conjunction with rate control. Using an enhanced version of IREC, these two methods jointly achieve a speedup in execution time of 14× over traditional rate control and entropy coding, which first entropy codes all image coefficients and then separately performs post-compression rate control using the generalized Lagrange multiplier method to select which data are included in the final bitstream. Both theoretical analysis and empirical results are presented to validate the advantages of the proposed methods.
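The greedy marginal analysis with a heap can be sketched as follows: repeatedly take the coding pass with the best distortion-reduction-per-byte ratio until the byte budget is spent, keeping the passes of each code block in order. This is a simplified illustration in that spirit, not the published GHRaC algorithm; the data layout and names are assumptions.

```python
import heapq

def greedy_rate_control(blocks, budget):
    """Greedy heap-based selection of coding passes under a byte budget.
    `blocks` is a list of per-code-block pass lists, each pass given as
    (size_in_bytes, distortion_reduction) with positive sizes; passes of a
    block can only be included in order."""
    heap, spent, chosen = [], 0, []
    for b, passes in enumerate(blocks):
        if passes:
            size, gain = passes[0]
            heapq.heappush(heap, (-gain / size, b, 0))  # max-heap via negation
    while heap:
        _, b, i = heapq.heappop(heap)
        size, gain = blocks[b][i]
        if spent + size > budget:
            continue          # this pass does not fit; later passes of this
                              # block cannot be taken either, so drop them
        spent += size
        chosen.append((b, i))
        if i + 1 < len(blocks[b]):
            nsize, ngain = blocks[b][i + 1]
            heapq.heappush(heap, (-ngain / nsize, b, i + 1))
    return chosen, spent
```

Selecting passes greedily by marginal ratio lets the rate-control step stop as soon as the budget is reached, which is also what makes it possible to skip entropy coding of data that will never be included, as the IREC method described above does.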