Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jonas Sjöberg is active.

Publication


Featured research published by Jonas Sjöberg.


Automatica | 1995

Nonlinear black-box modeling in system identification: a unified overview

Jonas Sjöberg; Qinghua Zhang; Lennart Ljung; Albert Benveniste; Bernard Delyon; Pierre-Yves Glorennec; Håkan Hjalmarsson; Anatoli Juditsky

A nonlinear black-box structure for a dynamical system is a model structure that is prepared to describe virtually any nonlinear dynamics. There has been considerable recent interest in this area, with structures based on neural networks, radial basis networks, wavelet networks and hinging hyperplanes, as well as wavelet-transform-based methods and models based on fuzzy sets and fuzzy rules. This paper describes all these approaches in a common framework, from a user's perspective. It focuses on the common features of the different approaches, the choices that have to be made, and the considerations relevant for a successful system-identification application of these techniques. It is pointed out that the nonlinear structures can be seen as a concatenation of a mapping from observed data to a regression vector and a nonlinear mapping from the regressor space to the output space. These mappings are discussed separately. The latter mapping is usually formed as a basis function expansion. The basis functions are typically formed from one simple scalar function, which is modified in terms of scale and location. The expansion from the scalar argument to the regressor space is achieved by a radial- or a ridge-type approach. Basic techniques for estimating the parameters in the structures are criterion minimization, as well as two-step procedures, where first the relevant basis functions are determined using data, and then a linear least-squares step determines the coordinates of the function approximation. A particular problem is dealing with the large number of potentially necessary parameters. This is handled by making the number of ‘used’ parameters considerably smaller than the number of ‘offered’ parameters, through regularization, shrinking, pruning or regressor selection.
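
The two-step procedure mentioned above (first fix a set of basis functions, then determine the linear coordinates by a least-squares step) can be sketched minimally in NumPy. This is an illustrative sketch only: the random sigmoid ridge basis and all names (`ridge_features`, `fit_expansion`) are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_features(Phi, B, c):
    """Evaluate scalar sigmoid basis functions along ridge directions B."""
    return np.tanh(Phi @ B + c)          # (N, K) matrix of basis values

def fit_expansion(Phi, y, K=50):
    """Fix a random ridge basis, then solve for the linear coordinates."""
    B = rng.normal(size=(Phi.shape[1], K))   # ridge directions (scale)
    c = rng.normal(size=K)                   # offsets (location)
    H = ridge_features(Phi, B, c)
    a, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear least-squares step
    return B, c, a

# Toy nonlinear "system": y = sin(phi1) + 0.5 * phi2^2
Phi = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(Phi[:, 0]) + 0.5 * Phi[:, 1] ** 2

B, c, a = fit_expansion(Phi, y)
residual = y - ridge_features(Phi, B, c) @ a
rms_error = float(np.sqrt(np.mean(residual ** 2)))
```

Because the coordinates enter linearly once the basis is fixed, the second step is a single `lstsq` call; the hard part in practice is choosing (or pruning) the basis itself.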


Automatica | 1995

Nonlinear black-box models in system identification: mathematical foundations

Anatoli Juditsky; Håkan Hjalmarsson; Albert Benveniste; Bernard Delyon; Lennart Ljung; Jonas Sjöberg; Qinghua Zhang

We discuss several aspects of the mathematical foundations of the nonlinear black-box identification problem. We shall see that the quality of the identification procedure is always a result of a certain trade-off between the expressive power of the model we try to identify (the larger the number of parameters used to describe the model, the more flexible is the approximation), and the stochastic error (which is proportional to the number of parameters). A consequence of this trade-off is the simple fact that a good approximation technique can be the basis of a good identification algorithm. From this point of view, we consider different approximation methods, and pay special attention to spatially adaptive approximants. We introduce wavelet and ‘neuron’ approximations, and show that they are spatially adaptive. Then we apply the acquired approximation experience to estimation problems. Finally, we consider some implications of these theoretical developments for the practically implemented versions of the ‘spatially adaptive’ algorithms.


IEEE Transactions on Signal Processing | 2000

Efficient training of neural nets for nonlinear adaptive filtering using a recursive Levenberg-Marquardt algorithm

Lester S. H. Ngia; Jonas Sjöberg

The Levenberg-Marquardt algorithm is often superior to other training algorithms in off-line applications. This motivates the proposal of using a recursive version of the algorithm for on-line training of neural nets for nonlinear adaptive filtering. The performance of the suggested algorithm is compared with other alternative recursive algorithms, such as the recursive version of the off-line steepest-descent and Gauss-Newton algorithms. The advantages and disadvantages of the different algorithms are pointed out. The algorithms are tested on some examples, and it is shown that generally the recursive Levenberg-Marquardt algorithm has better convergence properties than the other algorithms.
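
A minimal sketch of the recursive, damped Gauss-Newton idea behind such an on-line update, on a scalar toy model y = tanh(w * x): each new sample updates the Hessian approximation R, and a damping term keeps the step well conditioned. The model, damping constant, and forgetting factor are illustrative assumptions, not the algorithm as published.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = 0.8
w = 0.0                      # parameter estimate
R = 1e-3                     # recursive Gauss-Newton Hessian approximation
lam = 1e-2                   # Levenberg-Marquardt style damping
forget = 0.99                # forgetting factor for the recursive update

for _ in range(2000):
    x = rng.uniform(-2, 2)
    y = np.tanh(w_true * x) + 0.01 * rng.normal()   # noisy measurement
    pred = np.tanh(w * x)
    e = y - pred
    psi = (1 - pred ** 2) * x            # gradient d pred / d w
    R = forget * R + psi * psi           # recursive Hessian update
    w = w + psi * e / (R + lam)          # damped recursive parameter update

error = abs(w - w_true)
```

The damping term `lam` plays the same role as in off-line Levenberg-Marquardt: for small R the update behaves like a cautious gradient step, for large R it approaches a recursive Gauss-Newton step.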


IFAC Proceedings Volumes | 1994

Neural Networks in System Identification

Jonas Sjöberg; Håkan Hjalmarsson; Lennart Ljung

Neural networks are non-linear black-box model structures to be used with conventional parameter estimation methods. They have good general approximation capabilities for reasonable non-linear systems. When the parameters in these structures are estimated, there is also good adaptability: the estimation concentrates on those parameters that matter most for the particular data set.


International Journal of Control | 1995

Overtraining, regularization and searching for a minimum, with application to neural networks

Jonas Sjöberg; Lennart Ljung

In this paper we discuss the role of criterion minimization as a means for parameter estimation. Most traditional methods, such as maximum likelihood and prediction-error identification, are based on these principles. However, somewhat surprisingly, it turns out that it is not always ‘optimal’ to try to find the absolute minimum point of the criterion. The reason is that ‘stopped minimization’ (where the iterations have been terminated before the absolute minimum has been reached) has more or less the same properties as using regularization (adding a parametric penalty term). Regularization is known to have beneficial effects on the variance of the parameter estimates, and it reduces the ‘variance contribution’ of the misfit. This also explains the concept of ‘overtraining’ in neural nets. How, then, does one know when to terminate the iterations? A useful criterion would be to stop the iterations when the criterion function applied to a validation data set no longer decreases. However, in this paper, we show th...


IFAC Proceedings Volumes | 1992

Overtraining, Regularization and Searching for Minimum in Neural Networks

Jonas Sjöberg; Lennart Ljung

Neural network models for dynamical systems have been the subject of considerable interest lately. They are often characterized by the fact that they use a fairly large number of parameters. Here we ad ...


IEEE Transactions on Information Theory | 1998

On the hinge-finding algorithm for hingeing hyperplanes

Predrag Pucar; Jonas Sjöberg

This correspondence concerns the estimation algorithm for hinging hyperplane (HH) models, a piecewise-linear model for approximating functions of several variables, suggested in Breiman (1993). The estimation algorithm is analyzed, and it is shown to be a special case of a Newton algorithm applied to a sum-of-squared-errors criterion. This insight is then used to suggest possible improvements of the algorithm so that convergence to a local minimum can be guaranteed. In addition, the way of updating the parameters in the HH model is discussed. In Breiman, a stepwise updating procedure is proposed where only a subset of the parameters is changed in each step. This connects closely to some previously suggested greedy algorithms, and these greedy algorithms are discussed and compared to a simultaneous updating of all parameters.
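
The hinge-finding iteration can be sketched for a single hinge in one variable: split the data according to the sign of the current hinge argument, refit a linear model on each side by least squares, and repeat. This follows the stepwise idea attributed to Breiman in the abstract; the data and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, size=300)
# True hinge: max of two planes, plus noise
y = np.maximum(0.5 + 1.0 * x, -0.5 - 1.0 * x) + 0.05 * rng.normal(size=300)

def fit_line(xs, ys):
    """Least-squares fit of intercept and slope."""
    A = np.column_stack([np.ones_like(xs), xs])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Initial split at x = 0, then iterate the hinge-finding step
left = fit_line(x[x < 0], y[x < 0])
right = fit_line(x[x >= 0], y[x >= 0])
for _ in range(20):
    # Hinge argument: difference of the two current planes
    delta = (right[0] - left[0]) + (right[1] - left[1]) * x
    use_right = delta >= 0
    if use_right.all() or (~use_right).all():
        break                               # degenerate split; stop
    right = fit_line(x[use_right], y[use_right])
    left = fit_line(x[~use_right], y[~use_right])

pred = np.maximum(left[0] + left[1] * x, right[0] + right[1] * x)
rms = float(np.sqrt(np.mean((y - pred) ** 2)))
```

Each pass is a pair of linear least-squares problems, which is what makes the Newton-algorithm interpretation in the correspondence possible.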


ieee workshop on neural networks for signal processing | 1997

Separable non-linear least-squares minimization - possible improvements for neural net fitting

Jonas Sjöberg; Mats Viberg

Neural network minimization problems are often ill-conditioned, and in this contribution two ways to handle this are discussed. It is shown that a better-conditioned minimization problem can be obtained if the problem is separated with respect to the linear parameters. This increases the convergence speed of the minimization. The Levenberg-Marquardt minimization method is often found to perform better than the Gauss-Newton and steepest-descent methods on neural network minimization problems. The reason for this is investigated, and it is shown that the Levenberg-Marquardt method divides the parameters into two subsets. For one subset the convergence is almost quadratic, like that of the Gauss-Newton method, while the parameters in the other subset hardly converge at all. In this way fast convergence among the important parameters is obtained.
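
Separating the linear parameters can be illustrated on a toy model y = a * tanh(b * x): for any fixed b, the output weight a has a closed-form least-squares value, so the outer search runs over b alone, which is the better-conditioned reduced problem. The grid search over b and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-3, 3, size=200)
y = 2.0 * np.tanh(1.5 * x) + 0.05 * rng.normal(size=200)

def reduced_cost(b):
    """Cost over b only: the linear parameter a is eliminated in closed form."""
    h = np.tanh(b * x)
    a = float(h @ y / (h @ h))            # least-squares value of a given b
    r = y - a * h
    return float(r @ r), a

# Outer search over the single nonlinear parameter b
b_grid = np.linspace(0.1, 3.0, 200)
costs = [reduced_cost(b)[0] for b in b_grid]
b_hat = float(b_grid[int(np.argmin(costs))])
_, a_hat = reduced_cost(b_hat)
```

In a real network the same separation applies to all output-layer weights at once, replacing the scalar division by a linear least-squares solve.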


Control Engineering Practice | 2003

Iterative controller optimization for nonlinear systems

Jonas Sjöberg; F. De Bruyne; Mukul Agarwal; Brian D. O. Anderson; Michel Gevers; F.J. Kraus; N. Linard

Recently, a data-driven model-free control design method has been proposed in Hjalmarsson et al. (Proceedings of the Conference on Decision and Control, Orlando, FL, 1994, pp. 1735–1740; IEEE Control Systems Mag. 18 (1998) 26) for linear systems. It is based on the minimization of a control criterion with respect to the controller parameters using an iterative gradient technique. In this paper, we extend this method to the case where both the plant and the controller can be nonlinear. It is shown that an estimate of the gradient of the control criterion can be constructed using only signal-based information obtained from closed-loop experiments. The obtained estimate contains a bias that depends on the local nonlinearity of the closed-loop system's noise description; this bias can be expected to be small in many practical situations. As a side effect, the linear model-free control design method is reobtained in a new way.
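
The outer loop of such a method (iterative gradient steps on a control criterion evaluated in closed loop) can be sketched as follows. Note the hedge: the paper constructs the gradient estimate from closed-loop signals, whereas this sketch substitutes a finite-difference approximation of a simulated cost; the plant, controller, and all numbers are invented toy examples.

```python
import numpy as np

def closed_loop_cost(k, steps=200):
    """Tracking cost for a proportional gain k on a mildly nonlinear plant."""
    x, cost = 0.0, 0.0
    ref = 1.0
    for _ in range(steps):
        u = k * (ref - x)                     # proportional controller
        x = 0.8 * x + 0.2 * np.tanh(u)        # nonlinear plant update
        cost += (ref - x) ** 2
    return cost / steps

# Iterative controller optimization: gradient steps on the criterion
k = 0.5
for _ in range(50):
    eps = 1e-4
    grad = (closed_loop_cost(k + eps) - closed_loop_cost(k - eps)) / (2 * eps)
    k -= 0.5 * grad                           # gradient step on the gain

final_cost = closed_loop_cost(k)
```

Each iteration corresponds to re-evaluating the criterion in closed loop; the appeal of the signal-based gradient construction is that no plant model is required for this step.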


IEEE Transactions on Intelligent Transportation Systems | 2011

Predictive Threat Assessment via Reachability Analysis and Set Invariance Theory

Paolo Falcone; Mohammad Ali; Jonas Sjöberg

We propose two model-based threat assessment methods for semi-autonomous vehicles, i.e., human-driven vehicles with autonomous driving capabilities. Based on information about the surrounding environment, we introduce a set of constraints on the vehicle states, which are satisfied under “safe” driving conditions. We then formulate the threat assessment problem as a constraint satisfaction problem. Vehicle and driver mathematical models are used to predict future constraint violation, indicating the possibility of an accident or loss of vehicle control and, hence, the need to assist the driver. The two proposed methods differ in the models used to predict vehicle motion within the surrounding environment. We demonstrate the proposed methods in a roadway departure application and validate them with experimental data.
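
The constraint-satisfaction formulation can be sketched minimally: propagate a vehicle model over a short horizon and flag a threat when any predicted state violates the "safe driving" constraints (here, lane boundaries). The constant-velocity point model and all numbers are illustrative assumptions, not the vehicle and driver models used in the paper.

```python
def predict_threat(y0, vy0, lane_half_width=1.75, horizon=20, dt=0.1):
    """Return True if the predicted lateral position leaves the lane."""
    y, vy = y0, vy0
    for _ in range(horizon):
        y += vy * dt                       # constant lateral-velocity prediction
        if abs(y) > lane_half_width:       # constraint violation => threat
            return True
    return False

safe = predict_threat(0.0, 0.2)            # mild drift, stays in lane
threat = predict_threat(0.5, 1.0)          # fast drift toward the boundary
```

Richer prediction models (vehicle dynamics plus a driver model) change only the propagation step; the threat decision remains a constraint check over the horizon.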

Collaboration


Dive into Jonas Sjöberg's collaborations.

Top Co-Authors

Nikolce Murgovski, Chalmers University of Technology
Paolo Falcone, Chalmers University of Technology
Jonas Fredriksson, Chalmers University of Technology
Giuseppe Giordano, Chalmers University of Technology
Håkan Hjalmarsson, Royal Institute of Technology
Johan Schoukens, Vrije Universiteit Brussel