Edwin A. Heredia
University of Delaware
Publications
Featured research published by Edwin A. Heredia.
IEEE Transactions on Biomedical Engineering | 2000
Juan G. Gonzalez; Edwin A. Heredia; Tariq Rahman; Kenneth E. Barner; Gonzalo R. Arce
Remote manually operated tasks, such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal that is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor on a computer screen). When human movements are distorted, for instance by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination, and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com.
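As a rough illustration of the design step described above, the sketch below fits an FIR equalizer from a pursuit tracking record using ordinary ridge-regularized least squares. The function names, the ridge term, and the plain least-squares criterion are assumptions for this example; the paper's tremor-oriented criterion and pulled-optimization method are not reproduced here (the authors' MATLAB code is at the URL above).

    import numpy as np

    def design_equalizer(x, d, taps=32, ridge=1e-3):
        """Fit an FIR equalizer from a pursuit-tracking record.

        x : tremor-contaminated joystick signal
        d : target (intended) trajectory from the tracking task
        Ridge-regularized least squares stands in for the paper's
        criterion and for its pulled-optimization step.
        """
        x = np.asarray(x, float)
        d = np.asarray(d, float)
        X = np.column_stack([np.roll(x, k) for k in range(taps)])
        X[:taps, :] = 0.0                        # discard wrap-around samples
        R = X.T @ X + ridge * np.eye(taps)       # regularize the ill-conditioned autocorrelation
        p = X.T @ d
        return np.linalg.solve(R, p)

    def apply_equalizer(w, x):
        """Filter a live joystick stream with the designed equalizer."""
        return np.convolve(np.asarray(x, float), w)[: len(x)]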
IEEE Transactions on Signal Processing | 1996
Edwin A. Heredia; Gonzalo R. Arce
The continuous threshold decomposition is a segmentation operator used to split a signal into a set of multilevel components. This decomposition method can be used to represent continuous multivariate piecewise linear (PWL) functions and, therefore, can be employed to describe PWL systems defined over a rectangular lattice. The resulting filters are canonical and have a multichannel structure that can be exploited for the development of rapidly convergent algorithms. The optimum design of the class of PWL filters introduced in this paper can be postulated as a least squares problem whose variables separate into a linear and a nonlinear part. Based on this feature, parameter estimation algorithms are developed. First, a block data processing algorithm that combines linear least-squares with grid localization through recursive partitioning is introduced. Second, a time-adaptive method based on the combination of an RLS algorithm for coefficient updating and a signed gradient descent module for threshold adaptation is proposed and analyzed. A system identification problem for wave propagation through a nonlinear multilayer channel serves as a comparative example where the concepts introduced are tested against the linear, Volterra, and neural network alternatives.
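A minimal sketch of the linear half of the separable least-squares problem described above, assuming the thresholds are held fixed: the input is split by a continuous threshold decomposition and the per-channel FIR coefficients are fitted by ordinary least squares. The function names and the specific decomposition form are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def threshold_decompose(x, thresholds):
        """Continuous threshold decomposition: one clipped component per
        threshold interval; the components sum back to x - thresholds[0]."""
        x = np.asarray(x, float)
        t = np.asarray(thresholds, float)
        return np.stack([np.clip(x, lo, hi) - lo for lo, hi in zip(t[:-1], t[1:])])

    def fit_pwl_filter(x, d, thresholds, taps=8):
        """Least-squares fit of the linear coefficients of a PWL filter
        with fixed thresholds (the linear part of the separable problem)."""
        comps = threshold_decompose(x, thresholds)
        cols = [np.roll(c, k) for c in comps for k in range(taps)]
        Phi = np.column_stack(cols)
        Phi[:taps, :] = 0.0                      # discard wrap-around samples
        w, *_ = np.linalg.lstsq(Phi, np.asarray(d, float), rcond=None)
        return w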
Proceedings of SPIE | 1996
Edwin A. Heredia; Vijay Kumar; Tariq Rahman
Transparency is a method proposed to quantify the telepresence performance of bilateral teleoperation systems. It is practically impossible to achieve transparency at all frequencies; however, previous research has shown that, by proper manipulation of the individual transfer functions, systems that are transparent over limited frequency bands can be designed. In this paper we introduce a different approach. We first study the problem of designing systems that are transparent only for a given value of the output impedance; then, by combining this concept with time-adaptive impedance estimation, we postulate a new strategy for the design of transparent systems. In the proposed method, the output impedance estimate is updated at each time instant using adaptive ARMA modeling based on either the LMS or RLS algorithms. The current estimate of the output impedance is used to update some free-design system parameters in such a way that the system tries to achieve transparency. We refer to this strategy as asymptotic transparency. An example of how to use this strategy in the design of a system with position-forward and force-feedback paths is included.
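The sketch below illustrates the kind of sample-by-sample ARMA impedance update mentioned in the abstract, using a normalized LMS rule. The class name, model orders, and normalization constant are assumptions; the paper also considers an RLS variant and couples the estimate to the free design parameters, which is omitted here.

    import numpy as np

    class LMSImpedanceEstimator:
        """Adaptive ARMA model of the output impedance seen at the slave:
            f[n] ~ sum_k a_k f[n-k] + sum_k b_k v[n-k]
        updated sample-by-sample with normalized LMS."""
        def __init__(self, na=2, nb=3, mu=0.05):
            self.na, self.nb, self.mu = na, nb, mu
            self.theta = np.zeros(na + nb)
            self.f_hist = np.zeros(na)
            self.v_hist = np.zeros(nb)

        def update(self, f, v):
            phi = np.concatenate([self.f_hist, self.v_hist])
            err = f - phi @ self.theta
            self.theta += self.mu * err * phi / (phi @ phi + 1e-8)
            self.f_hist = np.concatenate([[f], self.f_hist[:-1]])
            self.v_hist = np.concatenate([[v], self.v_hist[:-1]])
            return self.theta        # current ARMA parameters of the impedance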
Proceedings of SPIE | 1995
Juan G. Gonzalez; Edwin A. Heredia; Tariq Rahman; Kenneth E. Barner; Gonzalo R. Arce
Remote manually operated tasks, such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or cursor). When man-machine movements are distorted by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel filtering framework in which digital equalizers are optimally designed after pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: (1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination, and (2) movement signals show highly ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. A new performance indicator, the F-MSEd, is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with both a person with a tremor disability and a vibration-inducing device show significant results.
International Conference on Acoustics, Speech, and Signal Processing | 1996
Edwin A. Heredia; Gonzalo R. Arce
We report our results concerning the study of multivariate functions of threshold-decomposed signals. In particular we show that multilinear tensor forms of the decomposed signal yield a class of filters that we propose to call piecewise Volterra filters (PWV). A filter can be viewed as a transformation from ℝ^N to ℝ, where N is the number of filter taps. PWV filters partition ℝ^N using a hyper-rectangular lattice, and assign a Volterra filter to each of the partition regions. At the partition boundaries continuity between the multivariate polynomials is preserved, resulting in class C^0 piecewise polynomials. PWV filters constitute an efficient alternative for describing some systems rich in hard nonlinear structures, especially since parameter estimation remains a linear problem for PWVs.
International Conference on Acoustics, Speech, and Signal Processing | 1993
Edwin A. Heredia; Gonzalo R. Arce
In practical applications, signals very often come from nonlinear systems and exhibit features such as limit cycles, bifurcations, time irreversibility, and others that cannot be reproduced by linear models. The authors introduce a method to develop piecewise linear approximations to nonlinear autoregressive processes. The method is based on the threshold decomposition algorithm which can provide multivariate linear-spline characterization of functions. Signal models obtained through piecewise linear autoregressions (PARs) constitute better and more robust representations of unknown nonlinear operators, as is shown using examples from time series prediction.
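As a simplified illustration of a piecewise linear autoregression, the sketch below fits a continuous PWL AR model by least squares using a hinge (linear-spline) expansion of the lagged samples. The knot placement, model order, and additive-spline parametrization are assumptions; the paper's construction is based on the threshold decomposition of the full lag vector.

    import numpy as np

    def _features(lags, knots):
        """Constant, linear, and hinge terms for each lagged sample."""
        feats = [1.0]
        for xl in lags:
            feats.append(xl)
            feats.extend(max(xl - t, 0.0) for t in knots)
        return feats

    def fit_par(x, order=2, knots=(-1.0, 0.0, 1.0)):
        """Least-squares fit of a continuous piecewise linear AR model."""
        x = np.asarray(x, float)
        rows = [_features(x[i - order:i][::-1], knots) for i in range(order, len(x))]
        A, b = np.array(rows), x[order:]
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coef

    def predict_next(coef, history, order=2, knots=(-1.0, 0.0, 1.0)):
        lags = np.asarray(history, float)[-order:][::-1]
        return float(np.dot(coef, _features(lags, knots)))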
International Conference on Acoustics, Speech, and Signal Processing | 1998
Edwin A. Heredia
With the arrival of terrestrial digital TV, a distribution network able to deliver up to 19 Mbit/s in each of the physical transmission channels will become available. Using the adopted data broadcast protocols, simultaneous transmission of multimedia documents to large population segments can be achieved. While these protocols describe methods for recognizing files in data streams, no method has yet been described for distributing large collections of files across one or more data streams. This paper addresses this problem. The method proposed in the paper allocates objects to multiple streams according to their sizes and access probabilities, in such a way that average access latency is minimized. We show that the minimization problem can be described as a particular form of the NP-hard quadratic allocation model, for which an algorithmic solution for finding local minima exists.
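A minimal sketch of the kind of allocation heuristic discussed above, assuming each stream cyclically repeats its files at a fixed rate, so the expected wait for a file is half its stream's cycle time. The greedy rule, the default rate, and the function names are assumptions for illustration; the paper's local-minimum algorithm for the quadratic allocation model is not reproduced.

    import numpy as np

    def allocate_streams(sizes, probs, n_streams, rate=19e6 / 8):
        """Greedy placement of files into cyclic broadcast streams.

        Objective (minimized locally): sum_s P_s * B_s / (2 * rate),
        where P_s and B_s are the total access probability and total
        size (bytes) of stream s.  A greedy pass in the spirit of,
        but not identical to, the paper's algorithm.
        """
        sizes = np.asarray(sizes, float)
        probs = np.asarray(probs, float)
        order = np.argsort(-probs / sizes)       # popular, small files first
        P = np.zeros(n_streams)                  # probability mass per stream
        B = np.zeros(n_streams)                  # bytes per stream
        assignment = {}
        for i in order:
            # increase in P_s * B_s when file i joins stream s
            delta = probs[i] * (B + sizes[i]) + P * sizes[i]
            s = int(np.argmin(delta))
            assignment[int(i)] = s
            P[s] += probs[i]
            B[s] += sizes[i]
        latency = float(np.sum(P * B) / (2.0 * rate))
        return assignment, latency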
Robotica | 1998
Shoupu Chen; Tariq Rahman; Richard Foulds; Edwin A. Heredia; William S. Harwin
This paper presents a virtual headstick system as an alternative to the conventional passive headstick for persons with limited upper extremity function. The system is composed of a pair of kinematically dissimilar master-slave robots, with the master robot being operated by the user's head. At the remote site, the end-effector of the slave robot moves as if it were at the tip of an imaginary headstick attached to the user's head. A unique feature of this system is that, through force-reflection, the virtual headstick provides the user with proprioceptive information as in a conventional headstick, but with an augmented workspace volume and additional mechanical power. This paper describes the test-bed development, system identification, bilateral control implementation, and system performance evaluation.
International Symposium on Neural Networks | 1996
Edwin A. Heredia; Gonzalo R. Arce
We introduce a network structure for function approximation based on a combination of kernels with compact support. The kernel functions that we use are continuous piecewise polynomials (CPP) defined over a rectangular lattice. For the representation of CPPs, the method of B-splines is used when the smoothest kernels are required. In the paper, we also introduce the localized threshold decomposition operator as a method to use when the least smooth kernels are required for the approximation. Both methods can be combined for improved results. CPP kernel networks are shape adaptive, since the kernel shapes can be fitted to match the local characteristics of the function being approximated. Moreover, the shape parameters are linear and can therefore be identified using fast linear estimation procedures. For the estimation of kernel locations, we use a successive approximation learning algorithm. A channel equalization example is used to validate the concepts introduced. We show that the shape adaptation property, in this case, allows the networks to improve on the results of other standard equalization methods by a large margin.
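The sketch below illustrates the linear part of such a CPP kernel network for channel equalization, using degree-1 B-spline (hat) kernels on a fixed uniform grid and ordinary least squares for the kernel weights. The grid choice, tap count, and function names are assumptions; the paper's successive-approximation learning of kernel locations is omitted.

    import numpy as np

    def hat_basis(u, centers, width):
        """Degree-1 B-spline (hat) kernels on a uniform grid --
        the least-smooth CPP kernels mentioned in the abstract."""
        return np.maximum(0.0, 1.0 - np.abs(u[:, None] - centers[None, :]) / width)

    def fit_cpp_equalizer(received, transmitted, taps=3, n_centers=9):
        """Least-squares fit of the linear kernel weights of an additive
        CPP network, with kernel locations kept on a fixed grid."""
        received = np.asarray(received, float)
        transmitted = np.asarray(transmitted, float)
        centers = np.linspace(received.min(), received.max(), n_centers)
        width = centers[1] - centers[0]
        cols = []
        for k in range(taps):
            u = np.roll(received, k)
            u[:taps] = 0.0                       # discard wrap-around samples
            cols.append(hat_basis(u, centers, width))
        Phi = np.hstack(cols)
        w, *_ = np.linalg.lstsq(Phi, transmitted, rcond=None)
        return w, centers, width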
Archive | 1995
Javier Gonzalez; Edwin A. Heredia; Tariq Rahman; Kenneth E. Barner; Sumit Kumar Basu; Gonzalo R. Arce