Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jacob Barhen is active.

Publication


Featured research published by Jacob Barhen.


IEEE Transactions on Knowledge and Data Engineering | 2004

On computing mobile agent routes for data fusion in distributed sensor networks

Qishi Wu; Nageswara S. V. Rao; Jacob Barhen; S. S. Iyengar; Vijay K. Vaishnavi; Hairong Qi; Krishnendu Chakrabarty

The problem of computing a route for a mobile agent that incrementally fuses the data as it visits the nodes in a distributed sensor network is considered. The order of nodes visited along the route has a significant impact on the quality and cost of fused data, which, in turn, impacts the main objective of the sensor network, such as target classification or tracking. We present a simplified analytical model for a distributed sensor network and formulate the route computation problem in terms of maximizing an objective function, which is directly proportional to the received signal strength and inversely proportional to the path loss and energy consumption. We show this problem to be NP-complete and propose a genetic algorithm to compute an approximate solution by suitably employing a two-level encoding scheme and genetic operators tailored to the objective function. We present simulation results for networks with different node sizes and sensor distributions, which demonstrate the superior performance of our algorithm over two existing heuristics, namely, local closest first and global closest first methods.
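
As an illustration only, the sketch below evolves node visit orders with a permutation-style genetic algorithm against a toy objective, with random node positions and per-node signal strengths standing in for the paper's analytical model of received signal, path loss, and energy; the two-level encoding scheme and the tailored genetic operators of the actual algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensor field: random node positions and per-node signal strengths
# (illustrative stand-ins for the paper's analytical sensor-network model).
nodes = rng.random((12, 2))
signal = rng.random(12)

def fitness(route):
    """Higher is better: accumulated signal divided by tour length, a crude
    proxy for trading received signal against path loss and energy."""
    hops = np.linalg.norm(np.diff(nodes[route], axis=0), axis=1).sum()
    return signal[route].sum() / (1.0 + hops)

def order_crossover(p1, p2):
    """Classic OX crossover for permutations: copy a slice of p1, then fill
    the remaining positions in the order they appear in p2."""
    n = len(p1)
    a, b = sorted(rng.choice(n, size=2, replace=False))
    child = -np.ones(n, dtype=int)
    child[a:b] = p1[a:b]
    child[child == -1] = [g for g in p2 if g not in p1[a:b]]
    return child

def mutate(route, p=0.3):
    """With probability p, swap two visit positions in the route."""
    route = route.copy()
    if rng.random() < p:
        i, j = rng.choice(len(route), size=2, replace=False)
        route[i], route[j] = route[j], route[i]
    return route

pop = [rng.permutation(12) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                         # truncation selection
    children = []
    while len(children) < 30:
        i, j = rng.choice(len(parents), size=2, replace=False)
        children.append(mutate(order_crossover(parents[i], parents[j])))
    pop = parents + children

best = max(pop, key=fitness)
print("best route:", best, " fitness:", round(float(fitness(best)), 4))
```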


Journal of Optimization Theory and Applications | 1993

Terminal Repeller Unconstrained Subenergy Tunneling (TRUST) for fast global optimization

B.C. Cetin; Jacob Barhen; Joel W. Burdick

A new method for unconstrained global function optimization, acronymed TRUST, is introduced. This method formulates optimization as the solution of a deterministic dynamical system incorporating terminal repellers and a novel subenergy tunneling function. Benchmark tests comparing this method to other global optimization procedures are presented, and the TRUST algorithm is shown to be substantially faster. The TRUST formulation leads to a simple stopping criterion. In addition, the structure of the equations enables an implementation of the algorithm in analog VLSI hardware, in the vein of artificial neural networks, for further substantial speed enhancement.
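
A rough one-dimensional sketch of the two-phase idea is given below, assuming a logistic "subenergy" factor, a cube-root terminal repeller, ad hoc constants, and tunneling in the +x direction only; it conveys the flavor of the method rather than the paper's exact formulation.

```python
import numpy as np

def f(x):
    """Toy 1-D multimodal objective: a local minimum at x = 0 (f = 2.5)
    and the global minimum near x = 4.76 (f ~ -0.99)."""
    return np.sin(x) + 0.1 * (x - 5.0) ** 2

def grad(x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def trust_like(x0, cycles=4, dt=0.01, rho=3.0, eps=0.05, hi=10.0):
    x = x0
    x_star, f_star = x0, f(x0)
    for _ in range(cycles):
        # Descent phase: ordinary gradient flow until it stalls in some minimum.
        while abs(grad(x)) > 1e-4:
            x -= dt * grad(x)
        if f(x) < f_star:
            x_star, f_star = x, f(x)
        # Tunneling phase: step off the current best point and follow a flow in
        # which a logistic "subenergy" factor flattens the landscape wherever
        # f(x) >= f_star, while a cube-root terminal repeller drives the state
        # away from x_star until a lower basin is found (+x direction only here).
        x = x_star + eps
        while f(x) >= f_star and x < hi:
            gate = 1.0 / (1.0 + np.exp(f(x) - f_star))
            x += dt * (-gate * grad(x) + rho * np.cbrt(x - x_star))
    return x_star, f_star

print(trust_like(-8.0))   # expect roughly (4.76, -0.99), past the trap at x = 0
```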


IEEE Computer | 1989

Neural learning of constrained nonlinear transformations

Jacob Barhen; Sandeep Gulati; Michail Zak

Two issues that are fundamental to developing autonomous intelligent robots, namely rudimentary learning capability and dexterous manipulation, are examined. A powerful neural learning formalism is introduced for addressing a large class of nonlinear mapping problems, including redundant manipulator inverse kinematics, commonly encountered during the design of real-time adaptive control mechanisms. Artificial neural networks with terminal attractor dynamics are used. The rapid network convergence resulting from the infinite local stability of these attractors allows the development of fast neural learning algorithms. Approaches to manipulator inverse kinematics are reviewed, the neurodynamics model is discussed, and the neural learning algorithm is presented.
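
The defining property of a terminal attractor is convergence in finite time rather than asymptotically, because the vector field is non-Lipschitz at the equilibrium. A minimal numerical comparison of an ordinary linear attractor against a cube-root terminal attractor (not the paper's learning algorithm, just the underlying dynamical property):

```python
import numpy as np

def settle_time(rhs, x0=1.0, dt=1e-4, tol=1e-5, t_max=50.0):
    """Integrate dx/dt = rhs(x) with forward Euler and report the time at
    which |x| first drops below tol."""
    x, t = x0, 0.0
    while abs(x) > tol and t < t_max:
        x += dt * rhs(x)
        t += dt
    return round(t, 3)

# Ordinary attractor: exponential decay, approaches zero only asymptotically
# (reaching the tolerance takes t = ln(1/tol), about 11.5 here).
print("dx/dt = -x       ->", settle_time(lambda x: -x))

# Terminal attractor: the cube-root field is non-Lipschitz at the origin, so
# the trajectory hits zero in finite time (analytically t* = 1.5 for x0 = 1).
print("dx/dt = -cbrt(x) ->", settle_time(lambda x: -np.cbrt(x)))
```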


Neural Networks | 1992

Learning a trajectory using adjoint functions and teacher forcing

Nikzad Toomarian; Jacob Barhen

A new methodology for faster supervised temporal learning in nonlinear neural networks is presented. It builds upon the concept of adjoint operators, to enable a fast computation of the gradients of an error functional with respect to all parameters of the neural architecture, and exploits the concept of teacher forcing to incorporate information regarding the desired output into the activation dynamics. The importance of the initial or final time conditions for the adjoint equations (i.e., the error propagation equations) is discussed. A new algorithm is presented, in which the adjoint equations are solved simultaneously (i.e., forward in time) with the activation dynamics of the neural network. We also indicate how teacher forcing can be modulated in time as learning proceeds. The algorithm is illustrated by examples. The results show that the learning time is reduced by one to two orders of magnitude with respect to previously published results, while trajectory tracking is significantly improved. The proposed methodology makes hardware implementation of temporal learning attractive for real-time applications.
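
The sketch below shows only how a teacher-forcing term can enter additive-neuron activation dynamics and how its gain might be modulated over training; the dynamics, constants, and annealing schedule are illustrative assumptions, and the adjoint-based gradient computation with the actual weight updates is omitted.

```python
import numpy as np

# Desired trajectory the network output should learn to track.
T, dt = 2.0, 0.01
t = np.arange(0.0, T, dt)
desired = np.sin(2 * np.pi * t)

def run_dynamics(w, b, lam, desired):
    """Additive-neuron activation dynamics with a teacher-forcing term:
        dx/dt = -x + tanh(w * x + b) + lam * (desired(t) - x)
    lam > 0 nudges the state toward the target during training;
    lam = 0 recovers the free-running network."""
    x = 0.0
    traj = np.empty_like(desired)
    for k, d in enumerate(desired):
        x += dt * (-x + np.tanh(w * x + b) + lam * (d - x))
        traj[k] = x
    return traj

# Teacher-forcing gain modulated over "epochs": strong guidance early, annealed
# toward zero so the free-running dynamics must eventually track on their own.
# (With the weights frozen, as here, the error grows back as the forcing is
# removed, which is exactly why the schedule is paired with weight updates.)
for epoch, lam in enumerate(np.linspace(2.0, 0.0, 5)):
    traj = run_dynamics(w=0.5, b=0.1, lam=lam, desired=desired)
    print(f"epoch {epoch}: forcing gain {lam:.2f}, "
          f"tracking MSE {np.mean((traj - desired) ** 2):.3f}")
```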


SPIE International Conference, Orlando, FL (United States), 21-25 Apr 1997 | 1997

Methodology for hyperspectral image classification using novel neural network

Suresh Subramanian; Nahum Gat; Michael Sheffield; Jacob Barhen; Nikzad Toomarian

A novel feed-forward neural network is used to classify hyperspectral data from the AVIRIS sensor. The network applies an alternating direction singular value decomposition technique to achieve rapid training times, requires very few training samples, and obtains 100 percent accurate classification on the test data sets. The methodology combines this rapid-training neural network with data reduction and maximal feature separation techniques, such as principal component analysis and simultaneous diagonalization of covariance matrices, for rapid and accurate classification of large hyperspectral images. The results are compared to those of standard statistical classifiers.
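
A schematic of such a pipeline on synthetic data is sketched below: PCA via SVD for data reduction, followed by a one-shot least-squares output layer as a generic fast-training stand-in for the paper's alternating-direction SVD network (the simultaneous-diagonalization step and the AVIRIS data are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for hyperspectral pixels: 3 classes, 50 spectral bands,
# each class a distinct mean spectrum plus noise.
bands, per_class = 50, 60
means = rng.normal(size=(3, bands))
X = np.vstack([m + 0.3 * rng.normal(size=(per_class, bands)) for m in means])
y = np.repeat(np.arange(3), per_class)

# Data reduction via PCA (SVD of the centered data matrix).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                      # keep 5 principal components

# Fast-training classifier stand-in: fixed random hidden layer plus
# least-squares output weights, solved in one shot (no iterative descent).
W_h = rng.normal(size=(Z.shape[1], 30))
H = np.tanh(Z @ W_h)
Y = np.eye(3)[y]                       # one-hot targets
W_out = np.linalg.lstsq(H, Y, rcond=None)[0]

pred = np.argmax(H @ W_out, axis=1)
print("training accuracy:", np.mean(pred == y))
```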


Computers & Geosciences | 2000

Reservoir parameter estimation using a hybrid neural network

Fred Aminzadeh; Jacob Barhen; Charles W. Glover; Nikzad Toomarian

The accuracy of an artificial neural network (ANN) algorithm is a crucial issue in the estimation of an oil field's reservoir properties from the log and seismic data. This paper demonstrates the use of the k-fold cross validation technique to obtain confidence bounds on an ANN's accuracy statistic from a finite sample set. In addition, we show that an ANN's classification accuracy is dramatically improved by transforming the ANN's input feature space to a dimensionally smaller, new input space. The new input space represents a feature space that maximizes the linear separation between classes. Thus, the ANN's convergence time and accuracy are improved because the ANN must merely find nonlinear perturbations to the starting linear decision boundaries. These techniques for estimating ANN accuracy bounds and feature space transformations are demonstrated on the problem of estimating the sand thickness in an oil field reservoir based only on remotely sensed seismic data.
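
A minimal sketch of the k-fold cross-validation bookkeeping that attaches a confidence bound to an accuracy estimate, using synthetic data and a simple regularized least-squares classifier as stand-ins for the paper's ANN and seismic features:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for (seismic attributes -> sand-thickness class) data.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(int)

def fit_predict(X_tr, y_tr, X_te):
    """Tiny stand-in classifier: regularized least squares on a bias-augmented
    design matrix. The paper uses an ANN, but the CV bookkeeping is identical."""
    A = np.c_[np.ones(len(X_tr)), X_tr]
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ (2 * y_tr - 1))
    return (np.c_[np.ones(len(X_te)), X_te] @ w > 0).astype(int)

# k-fold cross validation: each sample is held out exactly once, yielding k
# accuracy estimates whose mean and spread give a rough confidence bound.
k = 5
idx = rng.permutation(len(X))
folds = np.array_split(idx, k)
acc = []
for i in range(k):
    te = folds[i]
    tr = np.concatenate([folds[j] for j in range(k) if j != i])
    acc.append(np.mean(fit_predict(X[tr], y[tr], X[te]) == y[te]))
acc = np.array(acc)
half_width = 1.96 * acc.std(ddof=1) / np.sqrt(k)     # normal-theory bound
print(f"accuracy = {acc.mean():.3f} +/- {half_width:.3f} (k = {k} folds)")
```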


International Symposium on Neural Networks | 1993

Global descent replaces gradient descent to avoid local minima problem in learning with artificial neural networks

B.C. Cetin; Joel W. Burdick; Jacob Barhen

One of the fundamental limitations of artificial neural network learning by gradient descent is the susceptibility to local minima during training. A new approach to learning is presented in which the gradient descent rule in the backpropagation learning algorithm is replaced with a novel global descent formalism. This methodology is based on a global optimization scheme, acronymed TRUST (terminal repeller unconstrained subenergy tunneling), which formulates optimization in terms of the flow of a special deterministic dynamical system. The ability of the new dynamical system to overcome local minima is tested on common benchmark examples and a pattern recognition example. The results demonstrate that the new method does indeed escape encountered local minima, and thus finds the global minimum solution to the specific problems.


International Conference on Robotics and Automation | 1993

A neural network based identification of environment models for compliant control of space robots

Subramanian Venkataraman; Sandeep Gulati; Jacob Barhen; Nikzad Toomarian

Many space robotic systems would be required to operate in uncertain or even unknown environments. The problem of identifying such environments for compliant control is considered. In particular, neural networks are used to identify the environments that a robot establishes contact with. Both function approximation and parameter identification (with fixed nonlinear structure and unknown parameters) results are presented. The environment model structure considered is relevant to two space applications: cooperative execution of tasks by robots and astronauts, and sample acquisition during planetary exploration. Compliant motion experiments have been performed with a robotic arm placed in contact with a single-degree-of-freedom electromechanical environment. In the experiments, desired contact forces are computed using a neural network, given a desired motion trajectory. Results of the control experiments performed on robot hardware are described and discussed.
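
A toy sketch of the parameter-identification setting (fixed model structure, unknown parameters): an assumed mass-spring-damper contact model is identified from simulated motion and force data by least squares, standing in for the paper's neural identification, and the identified model is then used to compute the contact force for a desired trajectory.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated contact data from a single-DOF electromechanical environment,
# modeled (as an assumed structure) as mass-spring-damper:
#   f = m * x_dd + b * x_d + k * x
dt = 0.001
t = np.arange(0.0, 2.0, dt)
x = 0.01 * np.sin(2 * np.pi * 1.5 * t)                # imposed motion
x_d = np.gradient(x, dt)
x_dd = np.gradient(x_d, dt)
m_true, b_true, k_true = 2.0, 15.0, 800.0
f = m_true * x_dd + b_true * x_d + k_true * x + 0.02 * rng.normal(size=t.size)

# Parameter identification with the structure fixed and parameters unknown:
# a linear least-squares fit here, standing in for the neural identification.
Phi = np.column_stack([x_dd, x_d, x])
m_hat, b_hat, k_hat = np.linalg.lstsq(Phi, f, rcond=None)[0]
print(f"identified m={m_hat:.2f}, b={b_hat:.2f}, k={k_hat:.2f}")

# Once identified, the model predicts the contact force needed to follow a
# desired trajectory, which a compliant controller could then track.
x_des = 0.005 * np.sin(2 * np.pi * 3.0 * t)
x_des_d = np.gradient(x_des, dt)
x_des_dd = np.gradient(x_des_d, dt)
f_des = m_hat * x_des_dd + b_hat * x_des_d + k_hat * x_des
print("peak desired contact force:", round(float(np.abs(f_des).max()), 3))
```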


IEEE Transactions on Antennas and Propagation | 1995

A massively parallel computation strategy for FDTD: time and space parallelism applied to electromagnetics problems

Amir Fijany; Michael A. Jensen; Yahya Rahmat-Samii; Jacob Barhen

We present a novel strategy for incorporating massive parallelism into the solution of Maxwell's equations using finite-difference time-domain methods. In a departure from previous techniques wherein spatial parallelism is used, our approach exploits massive temporal parallelism by computing all of the time steps in parallel. Furthermore, in contrast to other methods which appear to concentrate on explicit schemes such as Yee's (1966) algorithm, our strategy uses the implicit Crank-Nicolson technique which provides superior numerical properties. We show that the use of temporal parallelism results in algorithms which offer a massive degree of coarse grain parallelism with minimum communication and synchronization requirements. Due to these features, the time-parallel algorithms are particularly suitable for implementation on emerging massively parallel multiple instruction-multiple data (MIMD) architectures. The methodology is applied to a circular cylindrical configuration, which serves as a testbed problem for the approach, to demonstrate the massive parallelism that can be exploited. We also discuss the generalization of the methodology for more complex problems.
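
A small stand-in below illustrates the time-parallel formulation on 1-D diffusion rather than the full Maxwell system: Crank-Nicolson steps are either marched sequentially or stacked into a single block-bidiagonal system over all time levels at once. The dense solve here only verifies that the two give the same answer; the block structure is what a massively parallel solver would exploit.

```python
import numpy as np

# 1-D diffusion as a stand-in for the Maxwell curl equations: u_t = alpha * u_xx,
# discretized with the implicit Crank-Nicolson scheme
#   (I - r/2 * L) u^{n+1} = (I + r/2 * L) u^n,   r = alpha * dt / dx^2.
nx, nt = 40, 30
dx, dt, alpha = 1.0 / nx, 1e-3, 1.0
r = alpha * dt / dx**2

L = -2.0 * np.eye(nx) + np.eye(nx, k=1) + np.eye(nx, k=-1)   # Dirichlet Laplacian
A = np.eye(nx) - 0.5 * r * L
B = np.eye(nx) + 0.5 * r * L
u0 = np.exp(-100.0 * (np.linspace(0, 1, nx) - 0.5) ** 2)     # initial pulse

# Conventional approach: march the time steps one after another (sequential).
u_seq = u0.copy()
for _ in range(nt):
    u_seq = np.linalg.solve(A, B @ u_seq)

# Time-parallel approach: stack the unknowns of *all* nt time steps into one
# vector and solve the single block-bidiagonal system
#   [ A            ] [u^1 ]   [B u^0]
#   [-B  A         ] [u^2 ] = [  0  ]
#   [    ... -B  A ] [u^nt]   [  0  ]
# Each block row couples only neighboring time levels, the coarse-grain
# structure a massively parallel (MIMD) solver can exploit.
M = np.kron(np.eye(nt), A) + np.kron(np.eye(nt, k=-1), -B)
rhs = np.zeros(nx * nt)
rhs[:nx] = B @ u0
U = np.linalg.solve(M, rhs)
u_par = U[-nx:]                                              # last time level

print("max |sequential - time-parallel| =", np.abs(u_seq - u_par).max())
```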


Physics Letters A | 2002

Solving a class of continuous global optimization problems using quantum algorithms

Vladimir Protopopescu; Jacob Barhen

We investigate the entwined roles that additional information and quantum algorithms play in reducing the complexity of a class of global optimization problems (GOP). We show that: (i) a modest amount of additional information is sufficient to map the continuous GOP into the (discrete) Grover problem; (ii) while this additional information is actually available in some GOPs, it cannot be taken advantage of within classical optimization algorithms; and (iii) quantum algorithms offer a natural framework for the efficient use of this information resulting in a speed-up of the solution of the GOP.
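
A purely classical statevector toy of the reduction idea: the domain is discretized, an assumed known threshold on f (playing the role of the additional information) defines the Grover marking condition, and amplitude amplification concentrates probability on the marked grid points. The paper's actual mapping and complexity analysis are not reproduced.

```python
import numpy as np

# Discretize the feasible interval and treat "f(x) below a known threshold"
# as the Grover marking condition.
def f(x):
    return np.sin(3.0 * x) + 0.1 * x**2

n_qubits = 8
N = 2 ** n_qubits
xs = np.linspace(-5.0, 5.0, N)
threshold = -0.9                          # assumed additional information
marked = f(xs) < threshold
M = int(marked.sum())

# Statevector simulation of Grover's algorithm (no quantum hardware involved).
psi = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N / M)))
for _ in range(iterations):
    psi[marked] *= -1.0                   # oracle: phase-flip marked states
    psi = 2.0 * psi.mean() - psi          # diffusion: inversion about the mean
p_success = np.sum(psi[marked] ** 2)
best = xs[np.argmax(np.abs(psi))]

print(f"{M} of {N} grid points lie below the threshold")
print(f"after {iterations} Grover iterations, P(marked outcome) = {p_success:.3f}")
print(f"most likely measured point: x = {best:.3f}, f(x) = {f(best):.3f}")
```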

Collaboration


Dive into Jacob Barhen's collaboration.

Top Co-Authors

Nikzad Toomarian (California Institute of Technology)
Vladimir Protopopescu (Oak Ridge National Laboratory)
Sandeep Gulati (California Institute of Technology)
Neena Imam (Oak Ridge National Laboratory)
Travis S. Humble (Oak Ridge National Laboratory)
Yehuda Braiman (Oak Ridge National Laboratory)
David B. Reister (Oak Ridge National Laboratory)
Amir Fijany (California Institute of Technology)
Michail Zak (California Institute of Technology)