Network


Latest external collaborations at the country level. Click a dot to dive into the details.

Hotspot


Dive into the research topics where Edward M. Riseman is active.

Publication


Featured research published by Edward M. Riseman.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1986

Extracting Straight Lines

J. Brian Burns; Allen R. Hanson; Edward M. Riseman

This paper presents a new approach to the extraction of straight lines in intensity images. Pixels are grouped into line-support regions of similar gradient orientation, and then the structure of the associated intensity surface is used to determine the location and properties of the edge. The resulting regions and extracted edge parameters form a low-level representation of the intensity variations in the image that can be used for a variety of purposes. The algorithm appears to be more effective than previous techniques for two key reasons: 1) the gradient orientation (rather than gradient magnitude) is used as the initial organizing criterion prior to the extraction of straight lines, and 2) the global context of the intensity variations associated with a straight line is determined prior to any local decisions about participating edge elements.
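The grouping step described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' algorithm: it quantizes gradient orientation into a fixed number of buckets, treats connected components within each bucket as line-support regions, and fits a line direction per region by PCA (Burns et al. instead fit a planar model to the intensity surface). The function names and the bucket/threshold parameters are hypothetical.

```python
import numpy as np

def _label(mask):
    """4-connected component labeling via flood fill (small helper)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        labels[sy, sx] = current
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    return labels, current

def extract_line_support_regions(image, n_buckets=8, mag_thresh=5.0, min_size=4):
    """Group pixels into line-support regions by quantized gradient
    orientation, then fit a line direction to each region."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)                      # orientation in (-pi, pi]
    bucket = np.floor((theta + np.pi) / (2 * np.pi) * n_buckets).astype(int) % n_buckets
    regions = []
    for b in range(n_buckets):
        # Only pixels with significant gradient participate in this bucket.
        labels, n = _label((bucket == b) & (mag > mag_thresh))
        for lab in range(1, n + 1):
            ys, xs = np.nonzero(labels == lab)
            if len(xs) < min_size:
                continue                            # too small to support a line
            pts = np.stack([xs, ys], axis=1).astype(float)
            centroid = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - centroid)  # PCA line fit
            regions.append({"centroid": centroid, "direction": vt[0]})
    return regions
```

On a synthetic image with a single vertical step edge, this yields one region whose fitted direction is vertical, matching the intuition that orientation (not magnitude) drives the grouping.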


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

Textfinder: an automatic system to detect and recognize text in images

Victor Wu; R. Manmatha; Edward M. Riseman

A robust system is proposed to automatically detect and extract text in images from different sources, including video, newspapers, advertisements, stock certificates, photographs, and checks. Text is first detected using multiscale texture segmentation and spatial cohesion constraints, then cleaned up and extracted using a histogram-based binarization algorithm. An automatic performance evaluation scheme is also proposed.
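The histogram-based binarization step can be illustrated with a generic global threshold chosen from the intensity histogram. This sketch uses Otsu's between-class-variance criterion as a stand-in; the paper's actual binarization algorithm differs, and the function names are hypothetical.

```python
import numpy as np

def histogram_threshold(gray):
    """Pick a global threshold from the 8-bit intensity histogram by
    maximizing between-class variance (Otsu's criterion)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = 0.0   # weight of the "background" class
    sum0 = 0.0  # intensity mass of the "background" class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray):
    """Split the image into text/background using the histogram threshold."""
    return (gray > histogram_threshold(gray)).astype(np.uint8)
```
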


acm international conference on digital libraries | 1997

Finding text in images

Victor Wu; R. Manmatha; Edward M. Riseman

There are many applications in which the automatic detection and recognition of text embedded in images is useful. These applications include digital libraries, multimedia systems, and Geographical Information Systems. When machine-generated text is printed against clean backgrounds, it can be converted to a computer-readable form (ASCII) using current Optical Character Recognition (OCR) technology. However, text is often printed against shaded or textured backgrounds or is embedded in images. Examples include maps, advertisements, photographs, videos, and stock certificates. Current document segmentation and recognition technologies cannot handle these situations well. In this paper, a four-step system which automatically detects and extracts text in images is proposed. First, a texture segmentation scheme is used to focus attention on regions where text may occur. Second, strokes are extracted from the segmented text regions. Using reasonable heuristics on text strings such as height similarity, spacing, and alignment, the extracted strokes are then processed to form rectangular boxes surrounding the corresponding text strings. To detect text over a wide range of font sizes, the above steps are first applied to a pyramid of images generated from the input image, and then the boxes formed at each resolution level of the pyramid are fused at the original image resolution. Third, text is extracted by cleaning up the background and binarizing the detected text strings. Finally, better text bounding boxes are generated by using the binarized text as strokes. Text is then cleaned and binarized from these new boxes, and can then be passed through a commercial OCR engine for recognition if the text is of an OCR-recognizable font.
The system is stable, robust, and works well on images (with or without structured layouts) from a wide variety of sources, including digitized video frames, photographs, newspapers, advertisements, stock certificates, and personal checks. All parameters remain the same for all the experiments.
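The multi-resolution strategy in the abstract (run detection on an image pyramid, then map boxes from every level back to the original resolution for fusion) can be sketched as follows. This is a simplified illustration with hypothetical helper names: it uses 2x2 block averaging rather than a proper Gaussian pyramid, and omits the detection step itself.

```python
import numpy as np

def block_average_pyramid(image, levels=3):
    """Build a coarse image pyramid by repeated 2x2 block averaging,
    a simple stand-in for the pyramid used to catch large font sizes."""
    pyr = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        down = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyr.append(down)
    return pyr

def scale_boxes_to_original(boxes, level):
    """Map (x0, y0, x1, y1) boxes found at a pyramid level back to the
    original resolution so detections from all levels can be fused."""
    f = 2 ** level
    return [(x0 * f, y0 * f, x1 * f, y1 * f) for (x0, y0, x1, y1) in boxes]
```

A detector run on `pyr[2]` sees text at one quarter of its original size; its boxes are multiplied by 4 before fusion with boxes from finer levels.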


International Journal of Computer Vision | 1988

The Schema System

Bruce A. Draper; Robert T. Collins; John Brolio; Allen R. Hanson; Edward M. Riseman

The Schema System embodies a knowledge-based approach to scene interpretation. Low-level routines are applied to extract image descriptors called tokens, and these tokens are further organized by intermediate-level routines into more abstract structures that can be associated with object instances. The thousands of tokens that are extracted from an image can be grouped in a combinatorially explosive manner. Therefore, knowledge in the Schema System is not limited to the descriptions of objects; it includes information about how each object can be recognized. Object schemas control the invocation and execution of the low-level and intermediate-level routines with the goal of forming hypotheses about objects in the scene. The system described produces image interpretations based on two-dimensional reasoning, although nothing in the system organization and control strategies precludes the inclusion of three-dimensional information. The schema framework exploits coarse-grained parallelism in a cooperative interpretation process. Schema instances run concurrently, and an object schema often has available a variety of strategies for identification, each one invoking knowledge sources to gather support for the presence of a hypothesized object. Inter-schema communication is carried out asynchronously through a global blackboard. In this way, schema instances cooperate to identify and locate the significant objects present in the scene.


IEEE Transactions on Computers | 1972

The Inhibition of Potential Parallelism by Conditional Jumps

Edward M. Riseman; Caxton C. Foster

This note reports the results of an examination of seven programs originally written for execution on a conventional computer (CDC-3600). We postulate an infinite machine: one with infinite memory, an infinite instruction stack, infinite registers, and an infinite number of functional units. This machine will execute a program in parallel at maximum speed by executing each instruction at the earliest possible moment.


IEEE Control Systems Magazine | 1992

Image-based homing

Jiawei Hong; Xiaonan Tan; Brian Pinette; Richard S. Weiss; Edward M. Riseman

A system that allows a robot to use a model of its environment to navigate is reported. The system maps the environment as a set of snapshots of the world taken at target locations. The robot uses an image-based local homing algorithm to navigate between neighboring target locations. The approach has an imaging system that acquires a compact, 360-degree representation of the environment as an intensity waveform, and an image-based, qualitative homing algorithm that allows the robot to navigate without explicitly inferring three-dimensional structure from the image. The results of an experiment in a typical indoor environment are reported. They show that image-based navigation is a feasible alternative to approaches using 3-D models and more complex model-based vision algorithms.


International Journal of Computer Vision | 1987

The Image Understanding Architecture

Chip Weems; Steven P. Levitan; Allen R. Hanson; Edward M. Riseman; J.G. Nash; David B. Shu

This paper provides an overview of the Image Understanding Architecture (IUA), a massively parallel, multilevel system for supporting real-time image understanding applications and research in knowledge-based computer vision. The design of the IUA is motivated by considering the architectural requirements for integrated real-time vision in terms of the type of processing element, control of processing, and communication between processing elements. The IUA integrates parallel processors operating simultaneously at three levels of computational granularity in a tightly coupled architecture. Each level of the IUA is a parallel processor that is distinctly different from the other two levels, designed to best meet the processing needs at each of the corresponding levels of abstraction in the interpretation process. Communication between levels takes place via parallel data and control paths. The processing elements within each level can also communicate with each other in parallel, via a different mechanism at each level that is designed to meet the specific communication needs of each level of abstraction. An associative processing paradigm has been utilized as the principal control mechanism at the low and intermediate levels. It provides a simple yet general means of managing massive parallelism, through rapid responses to queries involving partial matches of processor memory to broadcast values. This has been enhanced with hardware operations that provide for global broadcast, local compare, Some/None response, responder count, and single responder select. To demonstrate how the IUA may be used for vision processing, several sample algorithms and a typical interpretation scenario on the IUA are presented. We believe that the IUA represents a major step toward the development of a proper combination of integrated processing power, communication, and control required for real-time computer vision. A proof-of-concept prototype of 1/64th of the IUA is currently being constructed by the University of Massachusetts and Hughes Research Laboratories.


computer vision and pattern recognition | 1996

Word spotting: a new approach to indexing handwriting

R. Manmatha; Chengfeng Han; Edward M. Riseman

There are many historical manuscripts written in a single hand which it would be useful to index. Examples include the W.E.B. Du Bois collection at the University of Massachusetts and the early Presidential libraries at the Library of Congress. Since Optical Character Recognition (OCR) does not work well on handwriting, an alternative scheme based on matching the images of the words is proposed for indexing such texts. The current paper deals with the matching aspects of this process. Two different techniques for matching words are discussed. The first method matches words assuming that the transformation between the words may be modelled by a translation (shift). The second method matches words assuming that the transformation between the words may be modelled by an affine transform. Experiments are shown demonstrating the feasibility of the approach for indexing handwriting. The method should also be applicable to retrieving previously stored material from personal digital assistants (PDAs).


International Journal of Computer Vision | 1987

Segmenting Images Using Localized Histograms and Region Merging

J. R. Beveridge; J. S. Griffith; Ralf R. Kohler; Allen R. Hanson; Edward M. Riseman

A working system for segmenting images of complex scenes is presented. The system integrates techniques that have evolved out of many years of research in low-level image segmentation at the University of Massachusetts and elsewhere. This paper documents the result of this historical evolution. Segmentations produced by the system are used extensively in related image interpretation research. The system first produces segmentations based upon an analysis of spatially localized feature histograms. These initial segmentations are then simplified using a region merging algorithm. Parameter selection for the local histogram segmentation algorithm is facilitated by mapping the multidimensional parameter space to a one-dimensional parameter which regulates region fragmentation. An extension of this algorithm to multiple features is also presented. Experience with roughly 100 images from different domains has shown the system to be robust and effective. Samples of these results are included.


systems man and cybernetics | 1989

Token-based extraction of straight lines

Michael Boldt; Richard S. Weiss; Edward M. Riseman

Collaboration


Dive into Edward M. Riseman's collaborations.

Top Co-Authors

Howard Schultz (University of Massachusetts Amherst)

Bruce A. Draper (Colorado State University)

Zhigang Zhu (City College of New York)

Robert T. Collins (Pennsylvania State University)

R. Manmatha (University of Massachusetts Amherst)

Frank Stolle (University of Massachusetts Amherst)

Charles C. Weems (University of Massachusetts Amherst)

Richard S. Weiss (University of Massachusetts Amherst)