
Publication


Featured research published by Siegfried Martens.


Remote Sensing of Environment | 1999

A Neural Network Method for Mixture Estimation for Vegetation Mapping

Gail A. Carpenter; Sucharita Gopal; Scott A. Macomber; Siegfried Martens; Curtis E. Woodcock

While most forest maps identify only the dominant vegetation class in delineated stands, individual stands are often better characterized by a mix of vegetation types. Many land management applications, including wildlife habitat studies, can benefit from knowledge of mixes. This article examines various algorithms that use data from the Landsat Thematic Mapper (TM) satellite to estimate mixtures of vegetation types within forest stands. Included in the study are maximum likelihood classification and linear mixture models as well as a new methodology based on the ARTMAP neural network. Two paradigms are considered: classification methods, which describe stand-level vegetation mixtures as mosaics of pixels, each identified with its primary vegetation class; and mixture methods, which treat samples as blends of vegetation, even at the pixel level. Comparative analysis of these mixture estimation methods, tested on data from the Plumas National Forest, yields the following conclusions: 1) Accurate estimates of proportions of hardwood and conifer cover within stands can be obtained, particularly when brush is not present in the understory; 2) ARTMAP outperforms statistical methods and linear mixture models in both the classification and the mixture paradigms; 3) topographic correction fails to improve mapping accuracy; and 4) the new ARTMAP mixture system produces the most accurate overall results. The Plumas data set has been made available to other researchers for further development of new mapping methods and comparison with the quantitative studies presented here, which establish initial benchmark standards.
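The linear mixture models compared in this study treat a pixel's observed reflectance as a weighted blend of pure-class spectra. A minimal sketch of that idea, with hypothetical two-class endmember spectra (not the paper's actual Landsat TM data or ARTMAP method):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate per-class fractions for one pixel by least squares.

    pixel:      (bands,) observed reflectance vector
    endmembers: (bands, classes) pure-class spectra, one per column
    Returns fractions clipped to be non-negative and renormalized
    to sum to one.
    """
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0.0, None)   # enforce non-negativity
    return f / f.sum()          # enforce sum-to-one

# Hypothetical conifer and hardwood spectra over three bands
E = np.array([[0.10, 0.30],
              [0.20, 0.40],
              [0.50, 0.10]])
x = 0.7 * E[:, 0] + 0.3 * E[:, 1]  # pixel that is 70% conifer
print(unmix(x, E))                 # recovers roughly [0.7, 0.3]
```

The classification paradigm described in the abstract would instead assign this pixel wholly to its dominant class; the mixture paradigm keeps the fractional estimate.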


Neural Networks | 2005

Self-organizing information fusion and hierarchical knowledge discovery: a new framework using ARTMAP neural networks

Gail A. Carpenter; Siegfried Martens; Ogi Ogas

Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain.
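The core idea of discovering a class hierarchy from reliable-but-inconsistent labels can be illustrated with a much simpler stand-in than the ARTMAP network itself: if every object labeled "car" is also labeled "vehicle", the system can infer that car is a subclass of vehicle. This toy sketch uses plain co-occurrence counting, not the paper's distributed-code method, and the example labels are illustrative:

```python
def infer_hierarchy(samples):
    """Infer empirical 'a is-a b' relations from multi-label data.

    samples: list of label sets, each holding the (consistent)
             labels one object received from different sources.
    Returns pairs (a, b) such that every object labeled a was
    also labeled b, i.e. a is empirically a subclass of b.
    """
    labels = set().union(*samples)
    relations = []
    for a in labels:
        for b in labels:
            if a == b:
                continue
            with_a = [s for s in samples if a in s]
            if with_a and all(b in s for s in with_a):
                relations.append((a, b))
    return sorted(relations)

obs = [{"car", "vehicle", "man-made"},
       {"truck", "vehicle", "man-made"},
       {"building", "man-made"}]
print(infer_hierarchy(obs))  # includes (car, vehicle), (vehicle, man-made)
```

Like the ARTMAP fusion system, this infers multi-level relationships without any supervised labeling of the relationships themselves; unlike ARTMAP, it has no tolerance for labeling noise.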


International Symposium on Neural Networks | 2005

Self-organizing hierarchical knowledge discovery by an ARTMAP information fusion system

Gail A. Carpenter; Siegfried Martens

Classifying terrain or objects may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from users with different goals and situations. Current fusion methods can help resolve such inconsistencies, as when evidence variously suggests that an object is a car, a truck, or an airplane. The methods described here define a complementary approach to the information fusion problem, considering the case where sensors and sources are both nominally inconsistent and reliable, as when evidence suggests that an object is a car, a vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP self-organizing rule discovery procedure is illustrated with an image example, but is not limited to the image domain.


Computational Intelligence in Robotics and Automation | 1998

Mobile robot sensor integration with fuzzy ARTMAP

Siegfried Martens; Paolo Gaudiano; Gail A. Carpenter

The raw sensory input available to a mobile robot suffers from a variety of shortcomings. Sensor fusion can yield a percept more veridical than is available from any single sensor input. In this project, the fuzzy ARTMAP neural network is used to fuse sonar and visual sonar on a B14 mobile robot. The neural network learns to associate specific sensory inputs with a corresponding distance metric. Once trained, the network yields more accurate predictions of range to obstacles than those provided by either sensor type alone. This improvement holds across all distances and impact angles tested.
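The fusion step described here learns an association from paired sensor readings to a true range. A toy stand-in for that mapping, using nearest-exemplar lookup rather than the actual fuzzy ARTMAP network, with made-up readings:

```python
import numpy as np

class FusionMap:
    """Toy stand-in for the learned sensor-fusion mapping:
    stores (sonar, visual) -> true-range training pairs and
    predicts by nearest stored exemplar. This is NOT the fuzzy
    ARTMAP algorithm, only an illustration of the association.
    """
    def __init__(self):
        self.inputs, self.targets = [], []

    def train(self, sonar, visual, true_range):
        self.inputs.append((sonar, visual))
        self.targets.append(true_range)

    def predict(self, sonar, visual):
        X = np.array(self.inputs)
        d = np.linalg.norm(X - np.array([sonar, visual]), axis=1)
        return self.targets[int(np.argmin(d))]

m = FusionMap()
m.train(0.9, 1.1, 1.0)      # sensors roughly agree: true range 1.0 m
m.train(2.2, 1.8, 2.0)      # true range 2.0 m
print(m.predict(1.0, 1.0))  # nearest exemplar gives 1.0
```

The point of fusing both readings, as in the paper, is that the joint input disambiguates cases where either sensor alone would be misleading.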


applied imagery pattern recognition workshop | 2004

Biologically inspired approaches to automated feature extraction and target recognition

Gail A. Carpenter; Siegfried Martens; Ennio Mingolla; Ogi Ogas; Chaitanya Sai

Ongoing research at Boston University has produced computational models of biological vision and learning that embody a growing corpus of scientific data and predictions. Vision models perform long-range grouping and figure/ground segmentation, and memory models create attentionally controlled recognition codes that intrinsically combine bottom-up activation and top-down learned expectations. These two streams of research form the foundation of novel dynamically integrated systems for image understanding. Simulations using multispectral images illustrate road completion across occlusions in a cluttered scene and information fusion from input labels that are simultaneously inconsistent and correct. The CNS Vision and Technology Labs (cns.bu.edu/visionlab and cns.bu.edu/iechlab) are further integrating science and technology through analysis, testing, and development of cognitive and neural models for large-scale applications, complemented by software specification and code distribution.


Archive | 2003

Information Fusion and Hierarchical Knowledge Discovery by ARTMAP Neural Networks

Gail A. Carpenter; Siegfried Martens; Ogi Ogas; Bradley J. Rhodes


Archive | 1998

Mobile Robot Sensor Fusion with Fuzzy ARTMAP

Siegfried Martens; Paolo Gaudiano; Gail A. Carpenter


Archive | 1999

Neurobotics Lab Research: Learning, Vision and Sonar Recognition with Mobile Robots

Paolo Gaudiano; Carolina Chang; Ihsan Ecemis; Siegfried Martens; Erol Sahin; William W. Streilein; Robert A. Wagner


Archive | 1998

Neural network sensor fusion for spatial visualization

Siegfried Martens; Gail A. Carpenter; Paolo Gaudiano


Archive | 1999

Neural networks for satellite remote sensing and robotic sensor interpretation

Siegfried Martens; Gail A. Carpenter
