Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James Vradenburg Miller is active.

Publication


Featured research published by James Vradenburg Miller.


Computer Vision and Pattern Recognition | 1996

MUSE: robust surface fitting using unbiased scale estimates

James Vradenburg Miller; Charles V. Stewart

Despite many successful applications, robust statistics have yet to be completely adapted to many computer vision problems. Range reconstruction, particularly in unstructured environments, requires a robust estimator that not only tolerates a large outlier percentage but also tolerates several discontinuities, extracting multiple surfaces in an image region. Observing that random outliers and/or points from across discontinuities increase a hypothesized fit's scale estimate (the standard deviation of the noise), our new operator, called MUSE (Minimum Unbiased Scale Estimator), evaluates a hypothesized fit over potential inlier sets via an objective function of unbiased scale estimates. MUSE extracts the single best fit from the data by minimizing its objective function over a set of hypothesized fits, and can sequentially extract multiple surfaces from an image region. We show MUSE to be effective on synthetic data modelling small-scale discontinuities and in preliminary experiments on complicated range data.
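
The core idea is straightforward to sketch: hypothesize many candidate fits, convert each fit's sorted absolute residuals into unbiased scale estimates over possible inlier-set sizes, and keep the fit whose smallest such estimate is minimal. The Python below is only a minimal illustration of that idea for line fitting; the normal-quantile correction, the random two-point hypotheses, and names such as `fit_line_muse` and `muse_objective` are simplifying assumptions on our part, not the paper's exact formulation.

```python
# Illustrative sketch of a MUSE-like objective for robust line fitting.
# Simplified; not the estimator exactly as defined in the paper.
import numpy as np
from scipy.stats import norm

def unbiased_scale_estimates(abs_residuals):
    """For each k, estimate the noise scale from the k smallest absolute residuals,
    dividing by an approximation of the expected k-th order statistic of |N(0,1)|."""
    r = np.sort(abs_residuals)
    n = len(r)
    k = np.arange(1, n + 1)
    expected_order_stat = norm.ppf(0.5 * (1.0 + k / (n + 1.0)))  # quantile approximation
    return r / np.maximum(expected_order_stat, 1e-12)

def muse_objective(abs_residuals, min_inliers=10):
    """MUSE-style objective: the smallest unbiased scale estimate over
    hypothesized inlier-set sizes k >= min_inliers (assumes enough points)."""
    s = unbiased_scale_estimates(abs_residuals)
    return s[min_inliers - 1:].min()

def fit_line_muse(x, y, n_hypotheses=200, rng=None):
    """Hypothesize line fits from random point pairs and keep the hypothesis
    that minimizes the MUSE-style objective."""
    rng = np.random.default_rng(0) if rng is None else rng
    best = None
    for _ in range(n_hypotheses):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        score = muse_objective(np.abs(y - (slope * x + intercept)))
        if best is None or score < best[0]:
            best = (score, slope, intercept)
    return best  # (objective value, slope, intercept) of the best hypothesis
```

Because the objective is evaluated per hypothesis, the same scoring can be reapplied after removing the inliers of an accepted fit, which is how sequential extraction of multiple surfaces could proceed in this simplified setting.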


IEEE Visualization | 1990

Extracting geometric models through constraint minimization

James Vradenburg Miller; David E. Breen; Michael J. Wozny

The authors propose a methodology that extracts a topologically closed geometric model from a two-dimensional image. This is accomplished by starting with a simple model that is already topologically closed and deforming the model, based on a set of constraints, so that it grows (or shrinks) to fit the feature within the image while maintaining its closed and locally simple nature. The initial model is a non-self-intersecting polygon that is either embedded in the feature or surrounds it. A cost function associated with every vertex quantifies its deformation, the properties of simple polygons, and the relationship between noise and the feature. The constraints embody local properties of simple polygons and the nature of the relationship between noise and the features in the image.
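
As a rough illustration of this constraint-minimization idea (a closed polygon whose vertices move to reduce a local cost combining an image term and a smoothness term), here is a hedged Python sketch. The specific cost terms, the greedy neighborhood search, and names such as `deform_polygon` are our own simplifications, not the authors' formulation.

```python
# Illustrative sketch of a greedy, contour-deformation loop in the spirit of
# the constraint-minimization approach described above. Simplified assumptions.
import numpy as np

def local_cost(p, prev_p, next_p, image, image_weight=1.0, smooth_weight=0.5):
    """Cost of placing a vertex at p = (x, y): an image term (low where the
    feature's intensity is high) plus a smoothness term that keeps the
    polygon locally simple."""
    yi = int(np.clip(round(p[1]), 0, image.shape[0] - 1))
    xi = int(np.clip(round(p[0]), 0, image.shape[1] - 1))
    image_term = -float(image[yi, xi])
    smooth_term = float(np.linalg.norm(p - 0.5 * (prev_p + next_p)))
    return image_weight * image_term + smooth_weight * smooth_term

def deform_polygon(vertices, image, n_iters=100, step=1.0):
    """Greedily move each vertex of a closed polygon to the neighboring
    position with the lowest local cost, so the model grows or shrinks
    toward the image feature."""
    verts = vertices.astype(float).copy()
    offsets = step * np.array([[dx, dy] for dx in (-1, 0, 1) for dy in (-1, 0, 1)])
    for _ in range(n_iters):
        for i in range(len(verts)):
            prev_p, next_p = verts[i - 1], verts[(i + 1) % len(verts)]
            candidates = verts[i] + offsets
            costs = [local_cost(c, prev_p, next_p, image) for c in candidates]
            verts[i] = candidates[int(np.argmin(costs))]
    return verts
```

A closed vertex ordering is assumed throughout, so the wrap-around indexing (`verts[i - 1]`, `verts[(i + 1) % len(verts)]`) preserves the topologically closed nature of the model while individual vertices move.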


Computer Vision and Pattern Recognition | 1997

Prediction intervals for surface growing range segmentation

James Vradenburg Miller; Charles V. Stewart

The surface growing framework presented by P. Besl and R. Jain (1988) has served as the basis for many range segmentation techniques. It has been augmented with alternative fitting techniques, model selection criteria, and solid modelling components. All of these approaches, however, require global thresholds and large isolated seed regions. Range scenes typically do not satisfy the global threshold assumption, since it requires the data noise characteristics to be constant throughout the scene. Furthermore, as scene complexity increases, the number of surfaces, discontinuities, and outliers increases, hindering the identification of large seed regions. We present statistical criteria based on multivariate regression to replace the traditional decision criteria used in surface growing. We use local estimates and their uncertainties to construct criteria that capture the uncertainty in extrapolating estimated fits. We restrict surface expansion to very localized extrapolations, increasing the sensitivity to discontinuities and allowing regions to refine their estimates and uncertainties. Our approach uses a small number of parameters which are either statistical thresholds or cardinality measures, i.e. we do not use thresholds defined by specific range distances or orientation angles.
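
The prediction-interval idea can be sketched with standard regression formulas: fit a local surface to the current region, compute the prediction variance for a candidate point (noise variance plus the uncertainty of extrapolating the fit), and accept the point only if it falls within the resulting interval. The Python below illustrates this for a plane model; the plane-only model, the significance level, and names like `in_prediction_interval` are assumptions for illustration, not the paper's exact criteria.

```python
# Illustrative sketch: a prediction-interval test for growing a planar patch.
# Simplified assumptions; not the paper's exact decision criteria.
import numpy as np
from scipy import stats

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c over the current region's points
    (assumes at least four points)."""
    X = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    z = points[:, 2]
    coeffs, *_ = np.linalg.lstsq(X, z, rcond=None)
    residuals = z - X @ coeffs
    dof = len(points) - 3
    s2 = residuals @ residuals / dof          # local noise variance estimate
    XtX_inv = np.linalg.inv(X.T @ X)
    return coeffs, s2, XtX_inv, dof

def in_prediction_interval(candidate, coeffs, s2, XtX_inv, dof, alpha=0.05):
    """Accept the candidate (x, y, z) point only if its height lies inside the
    (1 - alpha) prediction interval of the locally fitted plane."""
    x0 = np.array([candidate[0], candidate[1], 1.0])
    pred = x0 @ coeffs
    # Prediction variance: residual noise plus the uncertainty of the extrapolated fit.
    pred_var = s2 * (1.0 + x0 @ XtX_inv @ x0)
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, dof)
    return abs(candidate[2] - pred) <= t_crit * np.sqrt(pred_var)
```

Because the interval widens as the candidate moves away from the fitted data (through the `x0 @ XtX_inv @ x0` term), restricting growth to nearby points keeps the test sensitive to discontinuities, in the spirit of the localized extrapolations described above.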


International Conference on Computer Graphics and Interactive Techniques | 2001

Visualization Toolkit extreme testing

Bill Hibbard; Bill Lorensen; James Vradenburg Miller

Each evening, at 8 p.m. Eastern Time, a computer process wakes up in a research lab in upstate New York and initiates a night of compilation and testing on 11 different configurations of operating systems and hardware. The subject of this automated build and test process is the Visualization Toolkit, vtk (http://visualizationtoolkit.org/vtk.html). Vtk is an open source C++ class library of visualization and imaging algorithms for UNIX, Linux and Windows. The software began in 1993 as a sample implementation to illustrate the algorithms and architectures described in the textbook The Visualization Toolkit: An Object-Oriented Approach to Computer Graphics. Since then, vtk has become a powerful, high quality project, supported by a large global user community and software developers from the U.S., Canada, England and France. Today, vtk has more than 600 C++ classes and about 100,000 executable lines of code.

From the start, the vtk developers recognized the need to support regression testing. Visualization continues to be an active field of research, with new techniques being introduced yearly, and even well-designed software like vtk requires enhancements to its underlying architecture. Regression testing compares the results of software after changes have been made; its purpose is to identify changes that affect the output of the software. The vtk regression testing compares the images generated by the visualization algorithms with baseline images that the developers of the algorithms have deemed valid. The image comparison allows control over how well the images must match: images created using OpenGL need not match pixel for pixel, while those produced by imaging algorithms must match strictly.

Until 1998, the vtk regression tests were run manually, on an ad hoc basis and prior to a major software release. The time between releases was about six months, and hundreds of changes were made to the software in the interim, either to correct bugs or to add new capabilities. Before the release, many of the regression tests would fail, and it was an onerous task to determine which of the hundreds of changes had caused the differences between the generated and baseline images. In January 1998, a General Electric Company quality initiative motivated the vtk development team to increase the number of regression tests and to perform the tests more frequently and automatically. By the summer of 1998, more than 14 quality procedures had been added to the vtk test suite. The procedures were placed under the control of a master build and test script that is run nightly at the GE Corporate Research and Development Center. At the end of the tests, the master script generates an HTML Dashboard that summarizes the results of the quality procedures.

The automated nightly build and test suite proceeds as follows:

1. The Master queries the source code repository to see which files have changed since the previous evening. Summaries of the changes are kept in a file that is accessible from the Dashboard (see Figure 1).
2. The Master initiates a build on each hardware/software configuration, and each build stores its compiler logs in a separate file. The combinations of operating systems and compilers ensure that vtk remains portable every day.
3. The Master runs the regression suite on each configuration. The suite includes image-based C++ and tcl/tk tests as well as C++ and tcl/tk text tests. The results are accumulated in logs for each platform, and each test reports whether it passed or failed. One of the test platforms includes a dynamic memory analysis, using a commercial tool applied to each regression test; the dynamic analysis detects memory leaks and illegal accesses of memory, and the memory analysis log is saved in a file for later processing and reporting.
4. After all the builds and tests complete, the Master scans the build log files for defects.

Vtk is probably the most widely used visualization system, so I asked Bill Lorensen to contribute a VisFiles column about the reasons for that success. His article describes their automated testing.
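
At the heart of the nightly process is the comparison of generated images against baselines with a controllable tolerance. The short Python sketch below illustrates that kind of check in a generic way; it is not vtk's actual test harness, and the mean-difference tolerance and the `images_match` name are our own assumptions.

```python
# Illustrative sketch: a minimal image-regression check with a tolerance,
# in the spirit of the nightly testing described above. Not vtk's harness.
import numpy as np

def images_match(test_image, baseline_image, mean_tolerance=0.0):
    """Compare a generated image against its baseline. A tolerance of 0 demands
    an exact match (as for imaging-algorithm tests); a small positive tolerance
    allows the pixel-level variation expected from OpenGL rendering."""
    if test_image.shape != baseline_image.shape:
        return False
    diff = np.abs(test_image.astype(float) - baseline_image.astype(float))
    return diff.mean() <= mean_tolerance
```

A nightly driver would run each test, compare its output against the stored baseline with an appropriate tolerance, and record pass/fail results in per-platform logs that feed the Dashboard summary.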


International Conference on Computer Graphics and Interactive Techniques | 1991

Geometrically deformed models: a method for extracting closed geometric models from volume data

James Vradenburg Miller; David E. Breen; William Edward Lorensen; Robert M. O'Bara; Michael J. Wozny


Archive | 2003

Methods and apparatus for processing image data to aid in detecting disease

Matthew William Turek; Joseph L. Mundy; Tony Chishao Pan; Peter Henry Tu; James Vradenburg Miller; Robert August Kaucic; Xiaoye Wu; Paulo Ricardo Mendonca


Archive | 2004

Method and apparatus for efficient calculation and use of reconstructed pixel variance in tomography images

Samit Kumar Basu; Bruno Kristiaan Bernard De Man; Peter Michael Edic; Ricardo Scott Avila; James Vradenburg Miller; Colin Craig McCulloch; Deborah Walter; Paulo Ricardo Mendonca; William Macomber Leue; Thomas B. Sebastian


Archive | 2003

Method and apparatus for registration of lung image data

Matthew William Turek; William Edward Lorensen; James Vradenburg Miller


Archive | 2009

Method for computed tomography motion estimation and compensation

Jed Douglas Pack; Peter Michael Edic; Bernhard Erich Hermann Claus; Maria Iatrou; James Vradenburg Miller


Archive | 2009

System, program product, and related methods for registering three-dimensional models to point data representing the pose of a part

Wesley David Turner; James Vradenburg Miller; William Edward Lorensen

Collaboration


Dive into James Vradenburg Miller's collaborations.

Top Co-Authors

William Edward Lorensen

Rensselaer Polytechnic Institute

James C. Ross

Brigham and Women's Hospital