Norman Wittels
Worcester Polytechnic Institute
Publication
Featured research published by Norman Wittels.
IEEE Visualization | 1990
Jeffrey LeBlanc; Matthew O. Ward; Norman Wittels
The authors present a tool for the display and analysis of N-dimensional data based on a technique called dimensional stacking. The primary goal is to create a tool that enables the user to project data of arbitrary dimensions onto a two-dimensional image. Of equal importance is the ability to control the viewing parameters, so that one can interactively adjust what ranges of values each dimension takes and the form in which the dimensions are displayed. This allows an intuitive feel for the data to develop as the database is explored. The system uses dimensional stacking to collapse an N-dimensional space into a 2-D space and then render the values contained therein. Each value can then be represented as a pixel or rectangular region on a 2-D screen whose intensity corresponds to the data value at that point.
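The core mapping can be sketched in a few lines: dimensions are grouped into nested pairs, with the outermost pair forming a coarse grid and each inner pair tiled inside its cells. This is a minimal illustration; the array shapes and the nesting order used here are my assumptions, not details from the paper.

```python
import numpy as np

def dimensional_stack(data):
    """Collapse an N-D array (N even) into a 2-D image by dimensional
    stacking: the first two dimensions form the coarsest grid, and the
    remaining dimensions are stacked recursively inside each cell."""
    if data.ndim == 2:
        return data
    d0, d1 = data.shape[0], data.shape[1]
    # Stack the inner dimensions first, then tile the resulting
    # sub-images into a d0 x d1 grid.
    inner = [[dimensional_stack(data[i, j]) for j in range(d1)]
             for i in range(d0)]
    return np.block(inner)

# Example: a 4-D data set becomes a (2*4) x (3*5) image; pixel
# (i0*4 + i2, i1*5 + i3) holds data[i0, i1, i2, i3].
data = np.arange(2 * 3 * 4 * 5, dtype=float).reshape(2, 3, 4, 5)
image = dimensional_stack(data)
print(image.shape)  # (8, 15)
```

Each resulting pixel value could then be mapped to an intensity for display, as the abstract describes.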
Optical Engineering | 1993
Michael A. Gennert; Norman Wittels; Gary L. Leatherman
A method is presented for placing line, point, and ring light sources to produce uniform illumination of planar surfaces. Optimization consists of setting to zero as many terms as possible in a two-dimensional Taylor series expansion of the surface illumination. We analyze several practical lighting arrangements and discuss the significance of symmetry in lighting design. Four conditions are sufficient to produce optimal illumination: (1) place all lamps in a single plane parallel to the illuminated surface; (2) arrange linear sources in parallel pairs; (3) arrange point-source lamps to achieve fourfold symmetry with respect to any two orthogonal axes lying in the surface (this requires a minimum of four lamps); and (4) select lamp heights so that a line connecting each lamp to the center of the illuminated surface forms specific angles with the surface normal: 30 deg for line sources and approximately 39 deg for point and ring sources. Any lamp arrangement meeting these conditions produces an illumination function in which at least three orders of terms of the Taylor series expansion are zero. We also discuss circumstances under which some of these conditions can be relaxed.
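The two angle conditions can be checked numerically. The sketch below is my own verification, not the paper's derivation: it models a point source with the inverse-square cosine law (E = h/d³ on the surface), an infinite line source with E = h/ρ², and searches for the lamp offset that flattens the illumination curvature at the center of the surface.

```python
import math

def illuminance_points(x, r, h):
    """Illuminance at (x, 0) from four unit point sources at height h,
    arranged with fourfold symmetry at radius r (E = h / d**3)."""
    lamps = [(r, 0.0), (-r, 0.0), (0.0, r), (0.0, -r)]
    return sum(h / ((x - lx) ** 2 + ly ** 2 + h ** 2) ** 1.5
               for lx, ly in lamps)

def illuminance_lines(x, a, h):
    """Illuminance at x from a parallel pair of infinite line sources
    at height h and lateral offsets +/-a (E = h / rho**2)."""
    return sum(h / ((x - s * a) ** 2 + h ** 2) for s in (1.0, -1.0))

def flattest_angle(illum, h=1.0, eps=1e-4):
    """Scan the lamp offset and return the angle from the surface
    normal that minimizes |E''(0)|, estimated by central differences."""
    best = min((abs(illum(eps, p, h) - 2 * illum(0.0, p, h)
                    + illum(-eps, p, h)), p)
               for p in [i / 10000.0 for i in range(1, 30000)])
    return math.degrees(math.atan(best[1] / h))

print(round(flattest_angle(illuminance_lines), 1))   # ~30.0
print(round(flattest_angle(illuminance_points), 1))  # ~39.2
```

The recovered angles match the paper's 30 deg (line sources) and approximately 39 deg (point sources) conditions.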
Optics, Illumination, and Image Sensing for Machine Vision III | 1989
Norman Wittels; James R. McClellan; Katherine Cushing; Willard Howard III; Ann Palmer
Solid state (CCD, CID, or multiplexed photosensor) television cameras are the most widely used input devices in machine vision because they are relatively inexpensive, rugged, and reliable. However, the design, specification, and testing of these cameras typically are geared to their primary use in producing images that will ultimately be observed by humans; the intended applications for these cameras are as diverse as parking lot security and home entertainment. Because the video information produced by the camera is not used in the same ways by people and machine vision systems, there is no a priori reason to expect that a camera designed for one use will be optimal for another. In our work we have examined what makes a camera suitable for machine vision use. This paper describes which characteristics are important to a camera's performance in machine vision applications and why. We show how these characteristics can be measured and standardized using simple tests suitable for production screening or more extensive tests suitable for use in the laboratory. Tests for important camera characteristics, including transfer function, noise, and resolution, are described, and test results for representative solid state cameras are presented. Finally, we discuss how such measurements can be useful in designing or selecting the components of a machine vision system: the video capture systems, the cameras, and the image processing algorithms.
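One of the characteristics mentioned, the camera's transfer function, is commonly modeled as a power law V = k·E^γ and estimated by linear regression in log-log space. The sketch below illustrates that standard procedure; the test-target values and the γ = 0.45 camera are hypothetical, not measurements from the paper.

```python
import math

def fit_transfer_function(exposures, outputs):
    """Estimate a power-law camera transfer function V = k * E**gamma
    by least-squares linear regression in log-log space."""
    xs = [math.log(e) for e in exposures]
    ys = [math.log(v) for v in outputs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log fit is gamma; the intercept gives k.
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    gamma = sxy / sxx
    k = math.exp(my - gamma * mx)
    return k, gamma

# Synthetic readings from a hypothetical camera with gamma = 0.45.
E = [0.1, 0.2, 0.4, 0.8, 1.6]
V = [2.0 * e ** 0.45 for e in E]
k, gamma = fit_transfer_function(E, V)
print(round(k, 2), round(gamma, 2))  # 2.0 0.45
```

With noisy readings from a real camera, the same fit yields the effective gamma that a vision system would need to compensate for.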
Optics, Illumination, and Image Sensing for Machine Vision II | 1988
Stanley H. Zisk; Norman Wittels
Edge location is an important machine vision task. Machine vision systems perform mathematical operations on rectangular arrays of numbers that are intended to faithfully represent the spatial distribution of scene luminance. The numbers are produced by periodic sampling and quantization of the camera's video output. This sequence can cause artifacts to appear in the data with a noise spectrum that is high in power at high spatial frequencies. This is a problem because most edge detection algorithms are preferentially sensitive to the high-frequency content in an image. Solid state cameras can introduce errors because of the spatial periodicity of their sensor elements. This can result in problems when image edges are aligned with camera pixel boundaries: (a) some cameras introduce transients into the video signal while switching between sensor elements; (b) most cameras use analog low-pass filters to minimize sampling artifacts, and these introduce video phase delays that shift the locations of edges. The problems compound when the vision system samples asynchronously with the camera's pixel rate. Moiré patterns (analogous to beat frequencies) can result. In this paper, we examine and model quantization effects in a machine vision system with particular emphasis on edge detection performance. We also compare our models with experimental measurements.
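The effect of gray-level quantization on subpixel edge location can be demonstrated with a toy one-dimensional model. This is my own illustration, not the paper's experimental setup: each pixel integrates an ideal step edge over its width, the value is quantized to 8 bits, and the edge position is recovered from the total integrated brightness (which is exact for this model in the absence of quantization).

```python
def sample_edge(true_pos, n_pixels=16, levels=256):
    """Sample an ideal dark-to-bright step edge at subpixel position
    true_pos: each pixel integrates the scene over its width, then
    the value is quantized to `levels` gray levels."""
    img = []
    for i in range(n_pixels):
        # Fraction of pixel [i, i+1) lying on the bright side.
        bright = min(1.0, max(0.0, (i + 1) - true_pos))
        img.append(round(bright * (levels - 1)) / (levels - 1))
    return img

def locate_edge(img):
    """Recover the edge position from total integrated brightness;
    exact for this model when no quantization occurs."""
    return len(img) - sum(img)

# The error introduced purely by 8-bit quantization of the single
# pixel straddling the edge is bounded by 1/(2*255) pixel.
errs = [abs(locate_edge(sample_edge(5.0 + f / 100.0)) - (5.0 + f / 100.0))
        for f in range(100)]
print(max(errs) < 0.5 / 255 + 1e-12)  # True
```

Even this idealized model shows a quantization-limited floor on edge-location accuracy; the camera transients, filter phase delays, and asynchronous sampling discussed in the paper add further, larger errors.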
Optics, Illumination, and Image Sensing for Machine Vision | 1987
Norman Wittels; Stanley H. Zisk
The purpose of the lighting in an industrial machine vision application is to produce an image that is well matched to the camera and vision system. The brightest areas of the scene should cause a sensor illuminance that is just below the camera's saturation level, and the video signal from the darkest significant areas of the scene should lie just above the vision system's noise level. This paper describes the fundamental principles and the techniques used to design lighting that meets these requirements. The camera's transfer function, the vision system's noise level, and the relative lens aperture are used to calculate optimal luminances for the brightest and darkest areas in the scene. The necessary reflectivity coefficients for the objects in the scene are measured, and lighting is designed which produces the correct object luminances in both the specular highlights and the diffuse background areas of the scene. We show how to specify the light sources required to produce the lighting and present an example of printed circuit board inspection. The burden that this design method places on the vision algorithms is also discussed.
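The luminance calculation can be sketched with the standard thin-lens camera equation, E = πLT/(4N²), which relates scene luminance L to sensor illuminance E through the f-number N and lens transmission T. This is my illustration of the kind of calculation described; the saturation level, noise floor, f-number, and transmission values are hypothetical.

```python
import math

def scene_luminance(e_sensor, f_number, transmission=0.9):
    """Invert the camera equation E = pi * L * T / (4 * N**2)
    (thin-lens, distant-scene approximation) to find the scene
    luminance L producing sensor illuminance e_sensor."""
    return e_sensor * 4 * f_number ** 2 / (math.pi * transmission)

# Hypothetical camera: saturates at 1.0 lx sensor illuminance,
# noise floor at 0.002 lx, viewed through an f/4 lens.
L_max = scene_luminance(1.0, 4.0)    # brightest usable scene luminance
L_min = scene_luminance(0.002, 4.0)  # darkest usable scene luminance
print(round(L_max, 1), round(L_min, 3))  # 22.6 0.045
```

The ratio L_max/L_min (here 500:1) is the scene dynamic range the lighting design must fit between the specular highlights and the diffuse background.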
Imaging and Illumination for Metrology and Inspection | 1994
Norman Wittels; Michael A. Gennert
Machine vision applications frequently require uniform or near-uniform illumination of the object being imaged. Optimizing the illumination uniformity requires a precise definition of the criterion to be optimized. Previous work has considered the smoothest possible illumination at the center of the field of illumination. In this paper, we find the lighting placement that maximizes the surface area within which the illumination is uniform to within some tolerance. This definition of optimality comes closer to what is desired in practice, since one generally wishes to illuminate objects having non-zero extents. Several different types of lighting are considered: area, line, point, and ring sources. Symmetry arguments are used to reduce the complexity of the analysis in many cases. Of particular interest is how to select a lighting design to optimize the illumination uniformity across regions that can be described by simple geometric models, such as circles, squares, rectangles, regular polygons, etc. Keywords: illumination, machine vision
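The distinction between this optimality criterion and the earlier center-smoothness criterion can be illustrated numerically. The sketch below is my own one-dimensional simplification, not the paper's analysis: for the four-point-source arrangement it measures, along one axis, how far from the center the illumination stays within a ±1% tolerance, then scans lamp radii for the largest such extent.

```python
def illum(x, y, r, h=1.0):
    """Illuminance at (x, y) from four unit point sources at height h,
    placed with fourfold symmetry at radius r (E = h / d**3)."""
    lamps = [(r, 0.0), (-r, 0.0), (0.0, r), (0.0, -r)]
    return sum(h / ((x - lx) ** 2 + (y - ly) ** 2 + h ** 2) ** 1.5
               for lx, ly in lamps)

def uniform_radius(r, tol=0.01, step=0.005):
    """Largest distance from center along the x axis over which the
    illumination stays within +/-tol of its central value."""
    e0 = illum(0.0, 0.0, r)
    x = 0.0
    while abs(illum(x + step, 0.0, r) - e0) <= tol * e0:
        x += step
    return x

# Scan lamp radii; the radius maximizing the in-tolerance extent is
# generally not the one that merely flattens the central curvature,
# since the illumination may rise up to +tol before falling off.
best = max((uniform_radius(r / 100.0), r / 100.0) for r in range(40, 140))
print(best)
```

A full treatment would maximize the in-tolerance *area* over a 2-D region (circle, square, etc.), as the paper does; the 1-D scan above only conveys the idea.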
Human Vision, Visual Processing, and Digital Display II | 1991
Buming Bian; Norman Wittels
Radiosity methods produce very striking interreflection effects for an enclosed diffuse environment. The quality of the images synthesized by radiosity methods varies depending on the calculation accuracy of the form factor, which is a geometric factor depending only on the relative orientation and position between two surfaces. Hemicube projection has been proposed to check the visibility between surfaces and calculate the form factor efficiently, but there is no analytical solution due to the double area integration. This paper presents a new form of the form factor and gives a theoretical derivation from the point of view of illumination engineering. The new form factor contains only one area integration, so it can be computed analytically. An energy conservation condition is applied to verify the correctness of the new model. The error introduced by using reciprocity as part of the form factor calculation has been eliminated, but the principle of reciprocity is maintained. The hemisphere projection method is used as an analytical solution of the new form factor. Finally, images generated by the new radiosity method are presented. Interreflection effects can be clearly seen around the intersection of the surfaces.
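For reference, the classical form factor between diffuse patches i and j, with the double area integration the abstract refers to, is (in standard radiosity notation, not the paper's single-integral reformulation):

```latex
F_{ij} \;=\; \frac{1}{A_i}\int_{A_i}\int_{A_j}
      \frac{\cos\theta_i \,\cos\theta_j}{\pi r^2}\, V_{ij}\; dA_j\, dA_i,
\qquad
A_i F_{ij} \;=\; A_j F_{ji},
```

where $r$ is the distance between the differential areas $dA_i$ and $dA_j$, $\theta_i$ and $\theta_j$ are the angles between the connecting line and the surface normals, $V_{ij}$ is the visibility term, and the second relation is the reciprocity principle the paper preserves while eliminating its use as a computational shortcut.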
Imaging and Illumination for Metrology and Inspection | 1994
Norman Wittels; Tahar El-Korchi; Yinhong Li; Michael A. Gennert
The design of a high-speed range camera capable of providing on-the-fly range map assessment of printed circuit boards is described. The system uses a HeNe laser and time-space coding to achieve 180,000 range measurements per second. System spatial resolution is 0.001 × 0.001 in., and the range resolution is 0.00066 in., with a working depth range of 3/8 in. We discuss the electro-optic design and present results from various imaging experiments using the instrument.
Optics, Illumination, and Image Sensing for Machine Vision VIII | 1994
Nabil I. Hachem; Michael A. Gennert; Norman Wittels
In this paper we discuss how to design a practical height gauging system based on the structured lighting approaches described. We show how some of the system limitations caused by the illumination and observation geometry affect component specifications. An example of a machine vision inspection application designed using these design principles is presented and analyzed. Finally, we discuss ways to extend the method.
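The underlying triangulation geometry common to structured-lighting height gauges can be sketched as follows. This is a generic sketch, not the paper's exact configuration: with the camera viewing along the surface normal and a laser sheet tilted by a known angle from that normal, a surface raised by h shifts the imaged stripe laterally by h·tan(angle).

```python
import math

def height_from_shift(stripe_shift, laser_angle_deg):
    """Structured-light triangulation: recover surface height from
    the lateral stripe shift, assuming a normal-viewing camera and a
    laser sheet at laser_angle_deg from the surface normal."""
    return stripe_shift / math.tan(math.radians(laser_angle_deg))

# A 0.5 mm stripe shift with a 45-degree laser implies 0.5 mm height.
print(round(height_from_shift(0.5, 45.0), 6))  # 0.5
```

The illumination-and-observation-geometry limitations the abstract mentions follow directly from this relation: a shallower laser angle gives finer height resolution per unit stripe shift but worsens occlusion and shadowing.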
Visual Communications and Image Processing | 1990
Jay McClellan; Norman Wittels; Allison Gotkin; Cathy Hepp; Patrick King
A reflectometer was designed to measure reflectivities of sample areas ranging from 10 microns to 1 millimeter. It is capable of illuminating the sample from any angle between 0 and 45 degrees relative to the surface normal, with the observation angle always normal to the surface. The instrument was calibrated and tested using reflectivity standards. Plots of reflectivity versus illumination angle are presented for some common materials.