
Publications


Featured research published by Carl Staelin.


Symposium on Operating Systems Principles | 1995

The HP AutoRAID hierarchical storage system

John Wilkes; Richard A. Golding; Carl Staelin; Tim Sullivan

Configuring redundant disk arrays is a black art. To configure an array properly, a system administrator must understand the details of both the array and the workload it will support. Incorrect understanding of either, or changes in the workload over time, can lead to poor performance. We present a solution to this problem: a two-level storage hierarchy implemented inside a single disk-array controller. In the upper level of this hierarchy, two copies of active data are stored to provide full redundancy and excellent performance. In the lower level, RAID 5 parity protection is used to provide excellent storage cost for inactive data, at somewhat lower performance. The technology we describe in this article, known as HP AutoRAID, automatically and transparently manages migration of data blocks between these two levels as access patterns change. The result is a fully redundant storage system that is extremely easy to use, is suitable for a wide variety of workloads, is largely insensitive to dynamic workload changes, and performs much better than disk arrays with comparable numbers of spindles and much larger amounts of front-end RAM cache. Because the implementation of the HP AutoRAID technology is almost entirely in software, the additional hardware cost for these benefits is very small. We describe the HP AutoRAID technology in detail, provide performance data for an embodiment of it in a storage array, and summarize the results of simulation studies used to choose algorithms implemented in the array.
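
For illustration only: the abstract describes automatic migration of data blocks between a mirrored upper level and a RAID 5 lower level as access patterns change. The C sketch below shows one plausible promote-on-write / demote-coldest policy under that assumption; the names (rb_t, on_write, demote_coldest) and the capacities are hypothetical and do not reflect the actual AutoRAID controller algorithms.

/* Illustrative sketch of a two-level migration policy in the spirit of
 * the HP AutoRAID description above: recently written relocation blocks
 * (RBs) live in a mirrored upper level, and cold RBs are demoted to the
 * RAID 5 lower level.  Policy details, names and capacities here are
 * hypothetical, not the controller's actual algorithms.
 */
#include <stddef.h>
#include <stdint.h>

enum level { MIRRORED, RAID5 };

typedef struct {
    uint64_t   rb_id;       /* relocation-block identifier                */
    enum level level;       /* storage level the RB currently lives in    */
    uint64_t   last_write;  /* logical timestamp of the most recent write */
} rb_t;

#define NRB         1024    /* toy number of RBs tracked                */
#define UPPER_QUOTA  256    /* toy capacity of the mirrored upper level */

static rb_t     table[NRB];
static size_t   upper_used;
static uint64_t now_stamp;

/* Demote the least recently written mirrored RB to RAID 5. */
static void demote_coldest(void)
{
    rb_t *victim = NULL;
    for (size_t i = 0; i < NRB; i++)
        if (table[i].level == MIRRORED &&
            (victim == NULL || table[i].last_write < victim->last_write))
            victim = &table[i];
    if (victim != NULL) {
        victim->level = RAID5;  /* a real array writes parity here */
        upper_used--;
    }
}

/* On a write, keep the RB in (or promote it to) the mirrored level,
 * making room by demoting cold data when the upper level is full.    */
void on_write(rb_t *rb)
{
    rb->last_write = ++now_stamp;
    if (rb->level == MIRRORED)
        return;
    if (upper_used >= UPPER_QUOTA)
        demote_coldest();
    rb->level = MIRRORED;
    upper_used++;
}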


Journal of Systems Architecture | 2008

Memory hierarchy performance measurement of commercial dual-core desktop processors

Lu Peng; Jih-Kwon Peir; Tribuvan K. Prakash; Carl Staelin; Yen-Kuang Chen; David M. Koppelman

As chip multiprocessors (CMPs) have become the mainstream in processor architectures, Intel and AMD have introduced their dual-core processors. In this paper, performance measurements on an Intel Core 2 Duo, an Intel Pentium D and an AMD Athlon 64 X2 processor are reported. According to the design specifications, key differences exist in the critical memory hierarchy architecture among these dual-core processors. In addition to the overall execution time and throughput measurement using both multi-programmed and multi-threaded workloads, this paper provides detailed analysis on the memory hierarchy performance and on the performance scalability between single and dual cores. Our results indicate that for better performance and scalability, it is important to have (1) fast cache-to-cache communication, (2) large L2 or shared capacity, (3) fast L2 to core latency, and (4) fair cache resource sharing. The three dual-core processors that we studied have shown benefits of some of these factors, but not all of them. Core 2 Duo has the best performance for most of the workloads because of its microarchitecture features such as the shared L2 cache. Pentium D shows the worst performance in many aspects because it is a technology remap of the Pentium 4 that does not take advantage of on-chip communication.
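
Memory hierarchy latencies of the kind measured in this paper are commonly probed with a pointer-chasing loop, in which every load depends on the previous one so that prefetching and overlap cannot hide the latency. The following self-contained C program is a generic sketch of that technique, not the instrumentation used in the study; the working-set size and iteration count are arbitrary.

/* Generic pointer-chasing latency probe: walk a randomly permuted ring
 * of pointers so that each load depends on the previous one, exposing
 * the average load-to-use latency for a given working-set size.
 * Illustrative only; not the harness used in the paper.
 */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t n     = 1 << 20;   /* ~8 MiB of pointers on a 64-bit system */
    const size_t iters = 1 << 24;   /* number of dependent loads to time     */

    void  **ring = malloc(n * sizeof *ring);
    size_t *perm = malloc(n * sizeof *perm);
    for (size_t i = 0; i < n; i++) perm[i] = i;

    /* Fisher-Yates shuffle; linking the elements in shuffled order below
     * yields a single random cycle through the whole working set.        */
    srand(1);
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        ring[perm[i]] = &ring[perm[(i + 1) % n]];

    struct timespec t0, t1;
    void **p = &ring[perm[0]];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = *p;                      /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("average load latency: %.2f ns (%p)\n", ns / iters, (void *)p);
    free(ring);
    free(perm);
    return 0;
}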


Software - Practice and Experience | 2005

lmbench: an extensible micro-benchmark suite

Carl Staelin

lmbench is a powerful and extensible suite of micro-benchmarks that measures a variety of important aspects of system performance. It has a powerful timing harness that manages most of the 'housekeeping' chores associated with benchmarking, making it easy to create new benchmarks that analyze systems or components of specific interest to the user. In many ways lmbench is a Swiss army knife for performance analysis. It includes an extensive suite of micro-benchmarks that give powerful insights into system performance. For those aspects of system or application performance not covered by the suite, it is generally a simple task to create new benchmarks using the timing harness. lmbench is written in ANSI-C and uses POSIX interfaces, so it is portable across a wide variety of systems and architectures. It also includes powerful new tools that measure performance under scalable loads to analyze SMP and clustered system performance.
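
The core idea of such a timing harness is to repeat the operation under test enough times that timer resolution and loop overhead become negligible, then report the amortized per-operation cost. The C sketch below illustrates only that pattern; it is not lmbench's actual interface (whose harness also handles warm-up, repeated runs and result statistics), and the helper names (now_ns, time_op) are invented for this example.

/* Minimal timing-harness sketch in the spirit of the description above:
 * run the operation under test long enough to swamp timer overhead,
 * then report nanoseconds per operation.  This illustrates the pattern
 * only; it is not lmbench's actual interface.
 */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

/* Average cost of one call to op(), amortized over `iters` repetitions. */
static double time_op(void (*op)(void), long iters)
{
    double start = now_ns();
    for (long i = 0; i < iters; i++)
        op();
    return (now_ns() - start) / iters;
}

static void op_getpid(void) { (void)getpid(); }

int main(void)
{
    /* Grow the repetition count until one run lasts at least ~100 ms,
     * so clock resolution and loop startup cost become negligible.    */
    long iters = 1000;
    while (time_op(op_getpid, iters) * iters < 1e8)
        iters *= 2;
    printf("getpid: %.1f ns per call over %ld calls\n",
           time_op(op_getpid, iters), iters);
    return 0;
}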


IEEE Transactions on Image Processing | 2013

Clustered-Dot Halftoning With Direct Binary Search

Puneet Goyal; Madhur Gupta; Carl Staelin; Mani Fischer; Omri Shacham; Jan P. Allebach

In this paper, we present a new algorithm for aperiodic clustered-dot halftoning based on direct binary search (DBS). The DBS optimization framework has been modified for designing clustered-dot texture, by using filters with different sizes in the initialization and update steps of the algorithm. Following an intuitive explanation of how the clustered-dot texture results from this modified framework, we derive a closed-form cost metric which, when minimized, equivalently generates stochastic clustered-dot texture. An analysis of the cost metric and its influence on the texture quality is presented, which is followed by a modification to the cost metric to reduce computational cost and to make it more suitable for screen design.
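
For background, conventional DBS minimizes the energy of the perceptually filtered error between the binary halftone and the continuous-tone original. In commonly used notation (assumed here, not quoted from the paper), with f the continuous-tone image, g the halftone, and \tilde{p} the point spread function of the human visual system model:

\[
e[\mathbf{m}] = g[\mathbf{m}] - f[\mathbf{m}], \qquad
\tilde{e}(\mathbf{x}) = \sum_{\mathbf{m}} e[\mathbf{m}]\,\tilde{p}(\mathbf{x} - \mathbf{m}), \qquad
E = \int \bigl|\tilde{e}(\mathbf{x})\bigr|^{2}\, d\mathbf{x}.
\]

The modification described in the abstract replaces the single filter with filters of different sizes in the initialization and update steps, which is what yields the clustered-dot texture.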


Journal of Electronic Imaging | 2011

Automatic visual inspection and defect detection on variable data prints

Marie Vans; Sagi Schein; Carl Staelin; Pavel Kisilev; Steven J. Simske; Ram Dagan; Shlomo Harush

We present a system for automatic, on-line visual inspection and print defect detection for variable data printing (VDP). This system can be used to automatically stop the printing process and alert the operator to problems. We lay out the components required for constructing a vision-based inspection system and show that our approach is novel for the high-speed detection of defects on variable data. When implemented in a high-speed digital printing press, the system allows a single skilled operator to monitor and maintain several presses, reducing the number of operators required to run a shop floor of presses as well as reducing the consumables wasted when a defect goes undetected.
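
At its core, inspection of this kind compares each scanned page against its digital reference after registration and flags regions whose difference exceeds a tolerance. The C function below is a toy sketch of only that final comparison step, with hypothetical block size and threshold; registration, color handling, and defect classification, which a real system needs, are omitted.

/* Toy block-wise comparison between a registered scan and its digital
 * reference (both 8-bit grayscale, same dimensions): a block whose mean
 * absolute difference exceeds a threshold is flagged as a defect
 * candidate.  Block size and threshold are arbitrary placeholders.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK   16       /* comparison block size in pixels (illustrative) */
#define THRESH  20.0     /* mean absolute difference flagged as a defect   */

int count_defect_blocks(const uint8_t *ref, const uint8_t *scan,
                        int width, int height)
{
    int defects = 0;
    for (int by = 0; by + BLOCK <= height; by += BLOCK)
        for (int bx = 0; bx + BLOCK <= width; bx += BLOCK) {
            double sum = 0.0;
            for (int y = 0; y < BLOCK; y++)
                for (int x = 0; x < BLOCK; x++) {
                    int i = (by + y) * width + (bx + x);
                    sum += abs((int)ref[i] - (int)scan[i]);
                }
            if (sum / (BLOCK * BLOCK) > THRESH) {
                defects++;
                printf("defect candidate in block at (%d, %d)\n", bx, by);
            }
        }
    return defects;
}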


International Journal on Document Analysis and Recognition | 2007

Biblio: automatic meta-data extraction

Carl Staelin; Michael Elad; Darryl Greig; Oded Shmueli; Marie Vans

Biblio is an adaptive system that automatically extracts meta-data from semi-structured and structured scanned documents. Instead of using hand-coded templates or other methods manually customized for each given document format, it uses example-based machine learning to adapt to customer-defined document and meta-data types. We provide results from experiments on the recognition of document information in two document corpora: a set of scanned journal articles and a set of scanned legal documents. The first set is semi-structured, as the different journals use a variety of flexible layouts. The second set is largely free-form text based on poor-quality scans of FAX-quality legal documents. We demonstrate accuracy on the semi-structured document set roughly comparable to hand-coded systems, and much worse performance on the legal documents.


Proceedings of SPIE | 2011

Cost function analysis for stochastic clustered-dot halftoning based on direct binary search

Puneet Goyal; Madhur Gupta; Carl Staelin; Mani Fischer; Omri Shacham; Jan P. Allebach

Most electrophotographic printers use periodic, clustered-dot screening for rendering smooth and stable prints. However, periodic, clustered-dot screening suffers from the problem of periodic moiré resulting from interference between the component periodic screens superposed for color printing. An approach called CLU-DBS has been proposed for stochastic, clustered-dot halftoning and screen design based on direct binary search. This method deviates from conventional DBS in its use of different filters in different phases of the algorithm. In this paper, we derive a closed-form expression for the cost metric which is minimized in CLU-DBS. The closed-form expression provides us with a clearer insight into the relationship between input parameters and processes, and the output texture, thus enabling us to generate better quality texture. One of the limitations of the CLU-DBS algorithm proposed earlier is the inversion in the distribution of clusters and voids in the final halftone with respect to the initial halftone. In this paper, we also present a technique for avoiding the inversion by negating the sign of one of the error terms in the newly derived cost metric, which is responsible for clustering. This not only simplifies the CLU-DBS screen design process, but also significantly reduces the number of iterations required for optimization.
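
The exact closed-form metric is derived in the paper and is not reproduced here. Purely schematically, and in our own notation rather than the paper's, the situation described above can be pictured as

\[
E(\sigma) = E_{1} + \sigma\,E_{2}, \qquad \sigma = \pm 1,
\]

where E_1 and E_2 are error-energy terms associated with the two different filters used by CLU-DBS, E_2 is the term responsible for clustering, and the inversion-avoidance technique mentioned above corresponds to flipping the sign \sigma of that term.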


International Conference on Image Processing | 2011

Electro-photographic model based stochastic clustered-dot halftoning with direct binary search

Puneet Goyal; Madhur Gupta; Carl Staelin; Mani Fischer; Omri Shacham; Tamar Kashti; Jan P. Allebach

Most electrophotographic printers use periodic, clustered-dot screening for rendering smooth and stable prints. However, when used for color printing, this approach suffers from the problem of periodic moiré resulting from interference between the periodic halftones of individual color planes. An approach called CLU-DBS has been proposed for stochastic, clustered-dot halftoning and screen design based on direct binary search. We propose a methodology to embed a printer model within this halftoning algorithm to account for dot-gain and dot-loss effects. Without accounting for these effects, the printed image will not have the appearance predicted by the halftoning algorithm. We incorporate a measurement-based stochastic model for dot interactions of an electro-photographic printer within the iterative CLU-DBS binary halftoning algorithm. The stochastic model developed is based on microscopic absorptance and variance measurements. The experimental results show that electrophotography-model-based stochastic clustered-dot halftoning improves the homogeneity and reduces the graininess of printed halftone images.


Proceedings of SPIE | 2011

Design of color screen tile vector sets

Jin-Young Kim; Yung-Yao Chen; Mani Fischer; Omri Shacham; Carl Staelin; Kurt R. Bengtson; Jan P. Allebach

For electrophotographic printers, periodic clustered screens are preferable due to their homogeneous halftone texture and their robustness to dot gain. In traditional periodic clustered-dot color halftoning, each color plane is independently rendered with a different screen at a different angle. However, depending on the screen angle and screen frequency, the final halftone may have strong visible moiré due to the interaction of the periodic structures associated with the different color planes. This paper addresses the problem of finding optimal color screen sets which produce minimal visible moiré and homogeneous halftone texture. To achieve these goals, we propose new features including halftone microtexture spectrum analysis, common periodicity, and twist factor. The halftone microtexture spectrum is shown to predict the visible moiré more accurately than the conventional moiré-free conditions. Common periodicity and twist factor are used to determine whether the halftone texture is homogeneous. Our results demonstrate significant improvements to clustered-dot screens in minimizing visible moiré and producing smooth halftone texture.
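
For background on the "conventional moiré-free conditions" referred to above (standard halftoning theory, not a result of this paper): two superposed periodic screens with fundamental frequency vectors \mathbf{f}_i and \mathbf{f}_j produce beat (moiré) components at \mathbf{f}_i - \mathbf{f}_j, and a three-screen set additionally produces low-order combinations of the form \pm\mathbf{f}_1 \pm \mathbf{f}_2 \pm \mathbf{f}_3. The conventional conditions require these low-order combinations either to vanish or to fall at spatial frequencies too high to be visible; the microtexture-spectrum analysis proposed here is reported to predict visible moiré more accurately than those conditions.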


Proceedings of SPIE | 2010

Electro-photographic-model-based halftoning

Puneet Goyal; Madhur Gupta; Doron Shaked; Carl Staelin; Mani Fischer; Omri Shacham; Rodolfo Jodra; Jan P. Allebach

Most halftoning algorithms assume there is no interaction between neighboring dots or, if there is, that it is additive. Without accounting for the dot-gain effect, the printed image will not have the appearance predicted by the halftoning algorithm. Thus, there is a need to embed a printer model in the halftoning algorithm which can predict such deviations and develop a halftone accordingly. The direct binary search (DBS) algorithm employs a search heuristic to minimize the mean squared perceptually filtered error between the halftone and continuous-tone original images. We incorporate a measurement-based stochastic model for dot interactions of an electro-photographic printer within the iterative DBS binary halftoning algorithm. The stochastic model developed is based on microscopic absorptance and variance measurements. We present an efficient strategy to estimate the impact of the 5×5 neighborhood pixels on the central pixel's absorptance. By including the impact of the 5×5 neighborhood pixels, the average relative error between the predicted and observed tone is reduced from around 21% to 4%. Also, the experimental results show that electrophotography-model-based halftoning reduces mottle and banding artifacts.
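
The printer model described above predicts the printed absorptance of each pixel from the pattern of its 5×5 neighborhood. The C sketch below shows the general shape of such a tabulated model under a simplifying assumption (indexing by the count of 'on' neighbors rather than the full pattern, and by the mean only rather than mean and variance); all names and table contents are hypothetical placeholders, not the paper's measured model.

/* Simplified sketch of a neighborhood-based printed-dot model: the
 * expected printed absorptance of a pixel is looked up from a table
 * indexed by the pixel's own state and the number of 'on' pixels among
 * its 24 neighbors in a 5x5 window.  A measurement-based model of the
 * kind described above is keyed on the full pattern and also carries a
 * variance; the names and the (empty) table here are placeholders.
 */
#include <stdint.h>

#define W 512
#define H 512

/* mean_abs[s][k]: expected absorptance when the center pixel state is s
 * (0 = off, 1 = on) and k of its 24 neighbors are on; a real system
 * would fill this from microscopic measurements.                       */
static double mean_abs[2][25];

double predicted_absorptance(const uint8_t halftone[H][W], int x, int y)
{
    int on_neighbors = 0;
    for (int dy = -2; dy <= 2; dy++)
        for (int dx = -2; dx <= 2; dx++) {
            if (dx == 0 && dy == 0)
                continue;
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < W && ny >= 0 && ny < H)
                on_neighbors += halftone[ny][nx];
        }
    return mean_abs[halftone[y][x]][on_neighbors];
}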
