Publication


Featured research published by Philip Amburn.


IEEE Engineering in Medicine and Biology Magazine | 1993

Towards statistically optimal interpolation for 3D medical imaging

Rob W. Parrott; Martin R. Stytz; Philip Amburn; David Robinson

The use of a statistical estimation technique called kriging, which produces estimation error measurements and analyzes the volumetric grid to determine sample value variability, is described. The use of interpolation in 3D medical imaging is first reviewed. Several different interpolation techniques, including linear, trilinear, and tricubic interpolation, are described and assessed. The kriging statistical estimation process is presented, and the results of applying it to slice interpolation and surface visualization are reported. The results indicate the potential of kriging for interpolation in 3D medical imaging and point out the need for further work.
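Kriging itself is standard geostatistics, so the idea can be illustrated without the authors' implementation. The sketch below solves the ordinary kriging system for a single interslice voxel and returns both the estimate and the estimation error variance the abstract refers to; the spherical variogram, its parameters, and the toy sample values are assumptions made for the example.

```python
# Minimal ordinary-kriging sketch for interslice value interpolation.
# Illustrative only: the variogram model, its parameters, and the sample
# values are assumptions, not the paper's implementation.
import numpy as np

def spherical_variogram(h, sill=1.0, rng=4.0, nugget=0.0):
    """Spherical semivariogram model (a common, assumed choice)."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, nugget + sill)

def ordinary_kriging(coords, values, target, variogram=spherical_variogram):
    """Estimate the value at `target` from neighboring voxel samples.

    Returns the estimate and the kriging (estimation error) variance,
    the byproduct that lets interpolation error be quantified.
    """
    n = len(values)
    # Kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A[:n, :n] = variogram(d)
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=-1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = float(w @ values)
    error_variance = float(w @ b[:n] + mu)
    return estimate, error_variance

# Example: interpolate a voxel value halfway between two slices (toy data).
coords = np.array([[0, 0, 0], [0, 0, 1], [1, 0, 0], [1, 0, 1]], dtype=float)
values = np.array([100.0, 120.0, 110.0, 130.0])
est, var = ordinary_kriging(coords, values, np.array([0.5, 0.0, 0.5]))
print(f"estimate={est:.1f}, kriging variance={var:.3f}")
```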


Critical Care Medicine | 2004

Use of artificial intelligence to identify cardiovascular compromise in a model of hemorrhagic shock

Todd F. Glass; Jason Knapp; Philip Amburn; Bruce A. Clay; Steven K. Rogers; Victor F. Garcia

Objective: To determine whether a prototype artificial intelligence system can identify volume of hemorrhage in a porcine model of controlled hemorrhagic shock. Design: Prospective in vivo animal model of hemorrhagic shock. Setting: Research foundation animal surgical suite; computer laboratories of collaborating industry partner. Subjects: Nineteen juvenile, 25- to 35-kg, male and female swine. Interventions: Anesthetized animals were instrumented for arterial and systemic venous pressure monitoring and blood sampling, and a splenectomy was performed. Following a 1-hr stabilization period, animals were hemorrhaged in aliquots to 10, 20, 30, 35, 40, 45, and 50% of total blood volume with a 10-min recovery between each aliquot. Data were downloaded directly from a commercial monitoring system into a proprietary PC-based software package for analysis. Measurements and Main Results: Arterial and venous blood gas values, glucose, and cardiac output were collected at specified intervals. Electrocardiogram, electroencephalogram, mixed venous oxygen saturation, temperature (core and blood), mean arterial pressure, pulmonary artery pressure, central venous pressure, pulse oximetry, and end-tidal CO2 were continuously monitored and downloaded. Seventeen of 19 animals (89%) died as a direct result of hemorrhage. Stored data streams were analyzed by the prototype artificial intelligence system. For this project, the artificial intelligence system identified and compared three electrocardiographic features (R-R interval, QRS amplitude, and R-S interval) from each of nine unknown samples of the QRS complex. We found that the artificial intelligence system, trained on only three electrocardiographic features, identified hemorrhage volume with an average accuracy of 91% (95% confidence interval, 84–96%). Conclusions: These experiments demonstrate that an artificial intelligence system, based solely on the analysis of QRS amplitude, R-R interval, and R-S interval of an electrocardiogram, is able to accurately identify hemorrhage volume in a porcine model of lethal hemorrhagic shock. We suggest that this technology may represent a noninvasive means of assessing the physiologic state during and immediately following hemorrhage. Point-of-care application of this technology may improve outcomes with earlier diagnosis and better titration of therapy of shock.
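The paper's artificial intelligence system is proprietary, so only the shape of the approach can be sketched: a small supervised classifier trained on the three named electrocardiographic features to predict a hemorrhage-volume class. Everything below, the synthetic feature distributions, the random-forest model choice, and the class encoding, is an assumption made for illustration, not the authors' system or data.

```python
# Hedged sketch: classify hemorrhage-volume class from three ECG features
# (R-R interval, QRS amplitude, R-S interval). All data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synth_beats(volume_class, n=200):
    """Fabricate feature vectors whose statistics drift with blood loss.
    Purely illustrative: hemorrhage shortens the R-R interval (tachycardia);
    the other trends here are assumptions."""
    rr = rng.normal(0.60 - 0.04 * volume_class, 0.03, n)       # seconds
    qrs_amp = rng.normal(1.2 - 0.05 * volume_class, 0.1, n)    # mV
    rs = rng.normal(0.04 + 0.002 * volume_class, 0.004, n)     # seconds
    return np.column_stack([rr, qrs_amp, rs]), np.full(n, volume_class)

# Classes 0..7 stand in for 0, 10, 20, 30, 35, 40, 45, 50% blood volume lost.
Xs, ys = zip(*(synth_beats(c) for c in range(8)))
X, y = np.vstack(Xs), np.concatenate(ys)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```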


Presence: Teleoperators & Virtual Environments | 1992

A prototype visual and audio display

Eric Scarborough; J. Brandt; Steven K. Rogers; Philip Amburn; Dennis W. Ruck; M. Ericson

A display is described that provides a three-dimensional perspective view with spatially correlated audio. The system is developed around an optics device that projects a three-dimensional perspective view from a CRT to a concave mirror that focuses the energy at an image plane above the mirror. The result is that the objects displayed on the CRT appear to be floating in space. The directional audio is provided from an audio localization cue synthesizer that encodes pinna filtering and an interaural time delay onto an input audio signal. A magnetic head tracker is used to keep the audio images stable. The optical system is presented along with the graphics methods that were used to generate the visual cues. Then the audio localization cue synthesizer is described with emphasis on using this device to provide spatialized audio for use in synthetic environments.
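A minimal sketch of one of the cues mentioned above, the interaural time delay: delay the far ear's copy of a mono signal according to source azimuth. The Woodworth ITD approximation, head radius, and sample rate below are textbook values assumed for the example, not the paper's localization cue synthesizer, and the pinna (HRTF) filtering stage is omitted.

```python
# Hedged sketch of the interaural-time-delay (ITD) cue only.
import numpy as np

FS = 44_100           # sample rate, Hz (assumed)
HEAD_RADIUS = 0.0875  # m, average head radius (assumed)
SPEED_OF_SOUND = 343  # m/s

def itd_seconds(azimuth_rad):
    """Woodworth approximation of the ITD for a distant source."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))

def spatialize(mono, azimuth_rad):
    """Return an (n, 2) stereo buffer with the far ear delayed by the ITD."""
    delay = int(round(itd_seconds(abs(azimuth_rad)) * FS))
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    # Positive azimuth = source to the right, so the left ear hears it later.
    left, right = (delayed, mono) if azimuth_rad > 0 else (mono, delayed)
    return np.column_stack([left, right])

# 0.5 s noise burst placed 45 degrees to the right.
burst = np.random.default_rng(1).normal(size=FS // 2)
stereo = spatialize(burst, np.deg2rad(45))
print(stereo.shape, "ITD (us):", itd_seconds(np.deg2rad(45)) * 1e6)
```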


Computer-Based Medical Systems | 1992

Statistically optimal interslice value interpolation in 3D medical imaging: theory and implementation

Rob W. Parrott; Martin R. Stytz; Philip Amburn; David Robinson

Describes a technique for statistically optimal interslice interpolation of scalar values for use in three-dimensional medical image rendering. The interpolation technique is based upon kriging, which is known to be the best linear unbiased estimation technique for spatially distributed data. The authors present the results obtained using kriging in the object space preprocessing operation of slice interpolation by slice-value interpolation. As a byproduct of the technique, kriging calculates the estimation error for the interslice values. This makes it possible to quantify the interpolation error in slices computed by the estimation technique.
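For reference, the ordinary kriging system in standard geostatistical notation (the paper's exact formulation may differ): the interslice estimate is a weighted sum of neighboring samples whose weights solve a linear system constrained to sum to one, and the same solve yields the estimation error variance noted above as a byproduct.

```latex
% Ordinary kriging in standard notation (assumed; may differ from the paper).
\[
\hat{z}(x_0) = \sum_{i=1}^{n} w_i\, z(x_i),
\qquad
\begin{cases}
\sum_{j=1}^{n} w_j\, \gamma(x_i, x_j) + \mu = \gamma(x_i, x_0), & i = 1,\dots,n,\\
\sum_{j=1}^{n} w_j = 1,
\end{cases}
\]
\[
\sigma_{\mathrm{K}}^{2}(x_0) = \sum_{i=1}^{n} w_i\, \gamma(x_i, x_0) + \mu,
\]
% where gamma is the semivariogram and mu the Lagrange multiplier.
```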


National Aerospace and Electronics Conference | 1993

Measurement of distance perception using virtual audio

W. D'Angelo; M. Ericson; E. Scarborough; Steven K. Rogers; Philip Amburn; Dennis W. Ruck

Relative auditory distance perception of a direct-path signal, presented over headphones both by itself and with synthetic reflection(s) and reverberation, was measured using a two-alternative forced-choice (2AFC) task. The stimulus, either a 500-millisecond pink noise burst or a two-second phrase of male speech, was presented twice, first at a reference distance and second at an incremental distance from the reference. The reference distances were five, fourteen, and twenty-two feet, and the incremental distances were multiples of 0.25, 0.5, and 1 foot, respectively. Stimulus pairs at each of the three distances were presented from four directions: front, back, left, and right. For each stimulus pair, the task of the three subjects was to indicate which of the two sounds appeared closer. From histograms for each condition, the just noticeable difference (JND) was calculated by determining the minimum interval at which the subjects responded correctly seventy-five percent of the time. Virtual audio may help a pilot perform a variety of functions in the military cockpit; for example, visual workload may be reduced by providing a spatial auditory beacon for navigation waypoints.
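The 75%-correct threshold computation lends itself to a short sketch: pool the 2AFC responses for each distance increment and take the smallest increment that meets the criterion. The response counts below are placeholders, not the study's data, and the function name is hypothetical.

```python
# Hedged sketch of deriving a JND from pooled 2AFC responses (toy counts).
import numpy as np

def jnd_from_2afc(increments_ft, n_correct, n_trials, criterion=0.75):
    """Smallest distance increment whose proportion correct meets the criterion."""
    increments_ft = np.asarray(increments_ft, dtype=float)
    p_correct = np.asarray(n_correct) / np.asarray(n_trials)
    meets = increments_ft[p_correct >= criterion]
    return float(meets.min()) if meets.size else None

# Reference distance 5 ft, increments in multiples of 0.25 ft (placeholder counts).
increments = [0.25, 0.50, 0.75, 1.00, 1.25]
correct    = [21,   26,   31,   35,   38]
trials     = [40,   40,   40,   40,   40]
print("JND (ft):", jnd_from_2afc(increments, correct, trials))
```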


Presence: Teleoperators & Virtual Environments | 1995

Virtual environments research in the Air Force Institute of Technology Virtual Environments, 3-D Medical Imaging, and Computer Graphics Laboratory

Martin R. Stytz; Philip Amburn; Patricia K. Lawlis; Keith Shomper

The Air Force Institute of Technology Virtual Environments, 3-D Medical Imaging, and Computer Graphics Laboratory is investigating the 3-D computer graphics, user-interface design, networking protocol, and software architecture aspects of distributed virtual environments. In this paper we describe the research projects that are underway in the laboratory. These projects include the development of an aircraft simulator for a distributed virtual environment, projects for observing, analyzing, and understanding virtual environments, a space virtual environment, a project that incorporates live aircraft range data into a distributed virtual environment, a virtual environment application framework, and a project for use in a hospital emergency department. We also discuss the research equipment infrastructure in the laboratory, recent publications, and the educational services we provide.


National Aerospace and Electronics Conference | 1993

Applications of virtual audio

M. Ericson; W. D'Angelo; E. Scarborough; Steven K. Rogers; Philip Amburn; Dennis W. Ruck

Technology for electronically simulating spatial sound over loudspeakers and headphones has matured in the past few decades, enabling many new applications of virtual audio. Electronic simulation of directional and distance auditory cues has greatly expanded the areas of application of virtual audio. Some potential aerospace applications include monitoring spatially separated speech communication signals to increase understanding, navigating by an auditory beacon, and acquiring visual targets with the aid of directional audio signals. Potential non-aerospace applications include navigational aids for the blind and enhanced virtual reality simulation for entertainment and education. In the future, more applications may use virtual audio technology to display spatial auditory information. Past, present, and future applications are discussed for the aerospace, research, and entertainment industries. Techniques for creating virtual audio over loudspeakers and headphones are described. It is concluded that virtual audio systems are more flexible than previous physical simulations and can improve upon their limited fidelity.


Archive | 2001

Method for combining automated detections from medical images with observed detections of a human interpreter

Steven K. Rogers; Philip Amburn; Telford S. Berkey; Randy P. Broussard; Martin P. DeSimio; Jeffrey W. Hoffmeister; Edward M. Ochoa; Thomas P. Rathbun; John E. Rosenstengel


Archive | 1998

Method and system for automated detection of clustered microcalcifications from digital mammograms

Steven K. Rogers; Philip Amburn; Telford S. Berkey; Randy P. Broussard; Martin P. DeSimio; Jeffrey W. Hoffmeister; Edward M. Ochoa; Thomas P. Rathbun; John E. Rosenstengel


Archive | 1999

Method and system for segmenting desired regions in digital mammograms

Steven K. Rogers; Philip Amburn; Telford S. Berkey; Randy P. Broussard; Martin P. DeSimio; Jeffrey W. Hoffmeister; Edward M. Ochoa; Thomas F. Rathbun; John E. Rosenstengel

Collaboration


Dive into Philip Amburn's collaborations.

Top Co-Authors

Steven K. Rogers | Air Force Research Laboratory

Martin P. DeSimio | University of Dayton Research Institute

Jeffrey W. Hoffmeister | Air Force Institute of Technology

Randy P. Broussard | Air Force Institute of Technology

Dennis W. Ruck | Air Force Institute of Technology

Martin R. Stytz | Air Force Institute of Technology

David Robinson | Air Force Institute of Technology

E. Scarborough | Air Force Institute of Technology

Rob W. Parrott | Air Force Institute of Technology