Nathan E. Brener
Louisiana State University
Publications
Featured research published by Nathan E. Brener.
IEEE International Conference on High Performance Computing, Data, and Analytics | 2000
Weian Deng; S. Sitharama Iyengar; Nathan E. Brener
This paper investigates the skeletonization problem using parallel thinning techniques and proposes a new one-pass parallel asymmetric thinning algorithm (OPATA8). Wu and Tsai presented a one-pass parallel asymmetric thinning algorithm (OPATA4) that implemented 4-distance, or city block distance, skeletonization. However, city block distance is not a good approximation of Euclidean distance. By applying 8-distance, or chessboard distance, the new algorithm improves both the quality of the resulting skeletons and the efficiency of the computation. The algorithm uses 18 patterns. It has been implemented and compared to both OPATA4 and Zhang and Suen's two-pass parallel thinning algorithm. The results show that the proposed OPATA8 offers good noise resistance, perfectly 8-connected skeleton output, and faster execution without serious erosion.
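For readers unfamiliar with the two metrics the abstract contrasts, here is a minimal sketch of 4-distance (city block) and 8-distance (chessboard) alongside the Euclidean distance they approximate; the function names are illustrative and not taken from the paper.

```python
import math

def d4(p, q):
    # 4-distance (city block): only horizontal/vertical unit steps count
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # 8-distance (chessboard): a diagonal step also counts as one move
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (0, 0), (5, 5)
print(d4(p, q), d8(p, q), euclidean(p, q))  # 10, 5, ~7.07
```

Along a diagonal, both metrics deviate from the Euclidean distance by a factor of sqrt(2) at worst, but 8-distance reflects the diagonal moves a thinning mask actually uses, which is the intuition behind OPATA8's improved skeleton quality.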
International Journal on Artificial Intelligence Tools | 1996
John R. Benton; S. Sitharama Iyengar; Weian Deng; Nathan E. Brener; V. S. Subrahmanian
This paper defines a new approach and investigates a fundamental problem in route planners. This capability is important for robotic vehicles (Martian Rovers, etc.) and for planning off-road military maneuvers. The emphasis throughout this paper is on the design, analysis, and hierarchical implementation of our route planner. This work was motivated by anticipation of the need to search a grid of a trillion points for optimum routes. This cannot be done simply by scaling upward from the algorithms used to search a grid of 10,000 points; algorithms sufficient for the small grid are totally inadequate for the large grid. Soon, the challenge will be to compute off-road routes more than 100 km long on a one- or two-meter grid. Previous efforts are reviewed; the data structures, decomposition methods, and search algorithms are analyzed; and their limitations are discussed. A detailed discussion of a hierarchical implementation is provided and the experimental results are analyzed.
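The trillion-point scale is what makes a hierarchical, coarse-to-fine strategy attractive. The sketch below is a hedged illustration of that general idea, not the paper's algorithm: it searches a downsampled cost grid first, then restricts the fine-grid search to a corridor around the coarse route. The function names, the 4-connected neighborhood, and the corridor radius are all assumptions for illustration.

```python
import heapq

def dijkstra(cost, start, goal, allowed=None):
    """Shortest path on a 2D cost grid (4-connected). `allowed`, if given,
    restricts the search to a set of cells (the corridor)."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= v[0] < rows and 0 <= v[1] < cols):
                continue
            if allowed is not None and v not in allowed:
                continue
            nd = d + cost[v[0]][v[1]]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, u = [], goal
    while u in prev:
        path.append(u)
        u = prev[u]
    return [start] + path[::-1]

def coarsen(cost, k):
    """Downsample the cost grid by averaging k-by-k blocks."""
    rows, cols = len(cost) // k, len(cost[0]) // k
    return [[sum(cost[i * k + a][j * k + b]
                 for a in range(k) for b in range(k)) / (k * k)
             for j in range(cols)] for i in range(rows)]

def corridor(coarse_path, k, radius=1):
    """Fine-grid cells covered by the coarse route, padded by `radius` coarse cells."""
    cells = set()
    for ci, cj in coarse_path:
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                base_r, base_c = (ci + di) * k, (cj + dj) * k
                cells.update((base_r + a, base_c + b)
                             for a in range(k) for b in range(k))
    return cells
```

The fine-grid search then touches only the corridor cells rather than the full grid, which is the source of the hierarchical speedup.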
Journal of Applied Physics | 1988
Nathan E. Brener; G. Fuster; J. Callaway; J. L. Fry; Zhao Yz
Self‐consistent band‐structure calculations are used to obtain the ferromagnetic moment as a function of lattice constant for bcc and fcc Mn. The ferromagnetic moment of bcc Mn is found to change discontinuously from a small to a large value as the lattice constant increases, while the fcc moment is found to change discontinuously from zero to a large value with increasing atomic volume. In the bcc case, the transition occurs inside a narrow double moment region in which the low spin and high spin states coexist. Information from magnetic susceptibility calculations on bcc and fcc Mn is used to predict whether the ground state is ferromagnetic or antiferromagnetic in certain ranges of lattice constant.
Signal Processing Systems | 2011
Hua Cao; Nathan E. Brener; Bahram Khoobehi; S. Sitharama Iyengar
A high-performance adaptive fidelity approach for multi-modality Optic Nerve Head (ONH) image fusion is presented. The new image fusion method, which consists of the Adaptive Fidelity Exploratory Algorithm (AFEA) and the Heuristic Optimization Algorithm (HOA), is reliable and time efficient. It achieves an optimal fusion result, visualizing the fundus image with a maximum angiogram overlay. Control points are detected at the vessel bifurcations using the AFEA. Shape similarity criteria are used to match the control points that represent the same salient features in different images. The HOA adjusts the initial good guess of the control points at the sub-pixel level in order to maximize the objective function, the Mutual-Pixel-Count (MPC). In addition, the performance of the AFEA and HOA was compared to the Centerline Control Point Detection Algorithm, the Root Mean Square Error (RMSE) minimization objective function employed by the traditional Iterative Closest Point (ICP) algorithm, a Genetic Algorithm, and other existing image fusion approaches. The evaluation results support the AFEA and HOA in terms of novelty, automation, accuracy, and efficiency.
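The MPC objective lends itself to a compact illustration. The sketch below is a simplified stand-in for the HOA, assuming binary vessel masks and searching integer shifts exhaustively rather than heuristically at the sub-pixel level; the function names and window size are illustrative, not the authors' implementation.

```python
import numpy as np

def mutual_pixel_count(fundus_mask, angio_mask, dy, dx):
    """MPC for an integer shift (dy, dx) of the angiogram mask.
    np.roll wraps at the borders; that edge effect is ignored for brevity."""
    shifted = np.roll(np.roll(angio_mask, dy, axis=0), dx, axis=1)
    return int(np.logical_and(fundus_mask, shifted).sum())

def best_shift(fundus_mask, angio_mask, window=3):
    """Exhaustive search over small integer shifts (a stand-in for the HOA)."""
    return max((mutual_pixel_count(fundus_mask, angio_mask, dy, dx), (dy, dx))
               for dy in range(-window, window + 1)
               for dx in range(-window, window + 1))  # (mpc_value, (dy, dx))
```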
International Conference on Tools with Artificial Intelligence | 1995
John R. Benton; S. Sitharama Iyengar; Weian Deng; Nathan E. Brener; V. S. Subrahmanian
The paper defines a new approach and investigates a fundamental problem in route planners. This capability is important for robotic vehicles (Martian Rovers, etc.) and for planning off-road military manoeuvres. The emphasis throughout the paper is on the design, analysis, and hierarchical implementation of our route planner. The work was motivated by anticipation of the need to search a grid of a trillion points for optimum routes. This cannot be done simply by scaling upward from the algorithms used to search a grid of 10,000 points; algorithms sufficient for the small grid are totally inadequate for the large grid. Soon, the challenge will be to compute off-road routes more than 100 km long on a one- or two-meter grid. Previous efforts are reviewed; the data structures, decomposition methods, and search algorithms are analyzed; and their limitations are discussed. A detailed discussion of a hierarchical implementation is provided and the experimental results are analyzed. The principal contributions of the paper are: new algorithms for decomposing the map and new search methods (a decomposition sketch follows below); analysis of new approaches; and the use of expert systems, deductive databases, and mediators. Experimental results of a detailed implementation are included.
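One classical way to decompose a map for route planning is a quadtree that keeps uniform-cost regions as single leaves and only subdivides where terrain costs vary. This is a hedged sketch of that general technique, not the authors' decomposition; the uniformity test, tolerance, and power-of-two square grid are illustrative assumptions.

```python
def build_quadtree(cost, r0, c0, size, tol=0.1):
    """Return a leaf tuple (r0, c0, size, mean_cost) when the block is
    nearly uniform, otherwise a list of four child subtrees."""
    block = [cost[r][c] for r in range(r0, r0 + size)
                        for c in range(c0, c0 + size)]
    mean = sum(block) / len(block)
    if size == 1 or max(abs(v - mean) for v in block) <= tol:
        return (r0, c0, size, mean)  # uniform leaf: one node instead of size**2
    half = size // 2
    return [build_quadtree(cost, r, c, half, tol)
            for r in (r0, r0 + half) for c in (c0, c0 + half)]

grid = [[1.0] * 8 for _ in range(8)]
grid[0][0] = 5.0  # one expensive cell forces subdivision near that corner
tree = build_quadtree(grid, 0, 0, 8)
```

On mostly uniform terrain, the node count grows with the length of the cost boundaries rather than the raw number of grid points, which is what makes very large grids tractable.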
High Performance Distributed Computing | 2010
Harsha Bhagawaty; Lei Jiang; Sreekanth Pothanis; Gabrielle Allen; Nathan E. Brener; Tevfik Kosar
With many researchers now having easy access to supercomputers, coastal scientists are able to develop and run simulations that model the physical and ecological processes in ocean or nearshore areas in a distributed and collaborative environment. However, the increase in the capacity of computational resources does not directly lead to better simulations. Instead, it brings a new challenge: fully utilizing the huge amount of simulation data created on supercomputers in order to foster advanced scientific research. Driven by the urgent need for in-depth investigations of Louisiana coastal areas, especially during hurricane seasons, a data center that provides research communities with scientific data resources on demand is imperative. In this paper, we present the design, implementation, and use of such a simulation data archive for coastal science. The archive provides interfaces based on the requirements of its user groups, and its application incorporates multiple use cases. The enabling technology, as well as the challenges in the development of this simulation data archive, are also described.
ACM Symposium on Applied Computing | 1995
Kosmas Karadimitriou; John M. Tyler; Nathan E. Brener
This is a case study that documents the reverse engineering and reengineering of a twenty-year-old system, BNDPKG. BNDPKG is a large software system written in FORTRAN that is still used in physics for electronic structure calculations. BNDPKG was changed from a sequential system that required IBM mainframes into a scalable parallel version that can use workstations and message passing. The message passing for the distributed computing was implemented using PVM, the message-passing tool from Oak Ridge National Laboratory. This parallel version of BNDPKG can run on any homogeneous or heterogeneous set of machines, requiring only UNIX and a PVM daemon on each machine in the set. Because of inadequate documentation of the original BNDPKG, it was necessary to go through a reverse engineering process to determine its design and how its various components interacted. Using the information obtained, BNDPKG was reengineered into a modern parallel version. Test runs of the parallel BNDPKG were performed on a cluster of IBM RISC/6000 workstations to estimate the speedup obtained from parallelism. The problems, the solutions, and the methods used for the reverse engineering and reengineering of BNDPKG are presented.
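PVM exposes a C and Fortran message-passing API (e.g. pvm_spawn, pvm_send, pvm_recv), so the FORTRAN details of the parallel BNDPKG are not reproduced here. As a language-neutral illustration of the master-worker decomposition such a port typically follows, here is a minimal Python multiprocessing sketch; the per-k-point task granularity and the placeholder workload are assumptions, not details from the paper.

```python
from multiprocessing import Pool

def compute_band_energies(k_point):
    """Stand-in for the per-k-point electronic structure computation
    that each worker would perform independently."""
    return sum(coord * coord for coord in k_point)  # placeholder workload

if __name__ == "__main__":
    # The master distributes independent tasks and gathers results,
    # much as a PVM master spawns tasks on the machines in the virtual machine.
    k_points = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.5, 0.5, 0.0)]
    with Pool() as pool:
        results = pool.map(compute_band_energies, k_points)
    print(results)
```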
International Journal of Intelligent Computing and Cybernetics | 2009
Hua Cao; Nathan E. Brener; S. Sitharama Iyengar
Purpose – The purpose of this paper is to develop a 3D route planner, called 3DPLAN, which employs the Fast‐Pass A* algorithm to find optimum paths in the large grid.
Design/methodology/approach – The Fast‐Pass A* algorithm, an improved best-first A* search, has a major advantage over other search methods: it is guaranteed to give the optimum path.
Findings – Despite this significant advantage, no one has previously used A* in 3D searches; most researchers think that the computational cost of using A* for 3D route planning would be prohibitive. This paper shows that it is quite feasible to use A* for 3D searches if one employs the new mobility and threat heuristics that have been developed.
Practical implications – This paper reviews the modification of the previous 3DPLAN for the ocean dynamical environment. The test mobility map is replaced with a more realistic mobility map that consists of travel times from each grid point to each of its 26 neighbors using the actual current veloci...
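As a hedged illustration of best-first A* on a 3D grid with 26-neighbor moves, the sketch below uses one simple admissible heuristic: straight-line distance divided by the best speed found anywhere on the map. The paper's Fast‐Pass variant and its mobility and threat heuristics are not reproduced; the speed-map representation and edge-cost rule are assumptions.

```python
import heapq, math

NEIGHBORS = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]  # 26 moves

def a_star_3d(speed, start, goal):
    """speed[z][y][x] > 0 is the attainable speed at each cell; cells with
    speed <= 0 are impassable. Edge cost is step length divided by the
    slower endpoint's speed, so travel time is minimized."""
    v_max = max(max(max(row) for row in plane) for plane in speed)
    dims = (len(speed[0][0]), len(speed[0]), len(speed))  # x, y, z extents

    def h(p):
        # Admissible: straight-line distance at the best speed on the map
        return math.dist(p, goal) / v_max

    g = {start: 0.0}
    heap = [(h(start), start)]
    while heap:
        _, u = heapq.heappop(heap)
        if u == goal:
            return g[u]
        for d in NEIGHBORS:
            v = (u[0] + d[0], u[1] + d[1], u[2] + d[2])
            if not all(0 <= v[i] < dims[i] for i in range(3)):
                continue
            slower = min(speed[u[2]][u[1]][u[0]], speed[v[2]][v[1]][v[0]])
            if slower <= 0:
                continue  # impassable cell
            step = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
            cand = g[u] + step / slower
            if cand < g.get(v, float("inf")):
                g[v] = cand
                heapq.heappush(heap, (cand + h(v), v))
    return float("inf")  # goal unreachable
```

Because the heuristic never overestimates the remaining travel time, the search keeps A*'s optimality guarantee mentioned in the abstract while pruning far more of the 3D grid than uninformed search would.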
Southeastern Symposium on System Theory | 2008
Hua Cao; Nathan E. Brener; Hilary W. Thompson; S. Sitharama Iyengar; Zhengmao Ye
Biomedical image fusion is generally scene dependent and requires intensive computational effort. A novel approach combining feature-based registration and area-based heuristic optimization fusion of multi-modality retinal images is proposed in this paper. The new algorithm, which is reliable, robust, and time efficient, adapts automatically from frame to frame with few tunable threshold parameters. The registration approach is based on retinal vasculature extraction using the Canny edge detector and control point identification at the vessel bifurcations using the adaptive exploratory algorithm. Shape similarity criteria are employed to match the control points. The new fusion approach implements a heuristic optimization procedure that maximizes the Mutual-Pixel-Count (MPC), adjusting the control points at the sub-pixel level. MPC is the new measurement criterion for fusion accuracy proposed here. This study achieved a result equivalent to the global maximum by calculating MPC local maxima at an efficient computational cost. The new method is a promising step toward useful clinical tools for retinopathy diagnosis and thus forms a good foundation for further development.
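As a hedged sketch of the control-point idea, the snippet below flags candidate bifurcations on a one-pixel-wide vessel skeleton (e.g. after edge detection and thinning) as pixels with three or more skeleton neighbors. The authors' adaptive exploratory algorithm is more involved; this neighbor-count rule is an illustrative stand-in.

```python
import numpy as np

def bifurcation_points(skeleton):
    """skeleton: 2D boolean array, True on vessel centerline pixels.
    Returns (row, col) coordinates of candidate bifurcations."""
    s = skeleton.astype(np.uint8)
    # Count each pixel's 8-neighbors by summing shifted copies of the mask;
    # np.roll wraps at the image borders, which is ignored for brevity.
    neighbors = sum(np.roll(np.roll(s, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A centerline pixel with >= 3 skeleton neighbors is a branching point
    return np.argwhere(skeleton & (neighbors >= 3))
```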
Southeastern Symposium on System Theory | 2008
Hua Cao; Nathan E. Brener; Hilary W. Thompson; S. Sitharama Iyengar; Zhengmao Ye
A novel automated approach to multi-modality retinal image registration and fusion has been developed. The new algorithm, which is reliable, robust, and time efficient, adapts automatically from frame to frame with few tunable threshold parameters. The registration is based on retinal vasculature extraction using the Canny edge detector and control point identification at the vessel bifurcations using an adaptive exploratory algorithm. Shape similarity criteria are employed to match the control points. An MPC-maximization-based optimization has been developed to adjust the control points at the sub-pixel level. MPC, first introduced into the biomedical image fusion area by this study, is a new measurement criterion for fusion accuracy. A result equivalent to the global maximum is achieved by calculating MPC local maxima at an efficient computational cost. A comparative study has shown the advantage of the new approach in terms of novelty, efficiency, and accuracy.