Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tomas Hrycej is active.

Publication


Featured research published by Tomas Hrycej.


Computational Statistics & Data Analysis | 2006

Multivariate distribution models with generalized hyperbolic margins

Rafael Schmidt; Tomas Hrycej; Eric A. Stützle

Multivariate generalized hyperbolic distributions represent an attractive family of distributions (with exponentially decreasing tails) for multivariate data modelling. However, in a limited data environment, robust and fast estimation procedures are rare. An alternative class of multivariate distributions (with exponentially decreasing tails) is proposed which comprises affine-linearly transformed random vectors with stochastically independent and generalized hyperbolic marginals. The latter distributions possess good estimation properties and have attractive dependence structures which are explored in detail. In particular, dependencies of extreme events (tail dependence) can be modelled within this class of multivariate distributions. In addition, the necessary estimation and random-number generation procedures are provided. Various advantages and disadvantages of both types of distributions are discussed and illustrated via a simulation study.
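
The construction can be sketched in a few lines: draw stochastically independent generalized hyperbolic marginals and apply an affine-linear transformation to induce dependence. The sketch below uses the normal-inverse Gaussian law (a special case of the generalized hyperbolic family, available as scipy.stats.norminvgauss); the parameter values and the mixing matrix A are invented for illustration and are not taken from the paper.

```python
# Minimal sketch: independent GH-type marginals, then an affine-linear map.
import numpy as np
from scipy.stats import norminvgauss

n = 10_000

# stochastically independent marginals (NIG = generalized hyperbolic with lambda = -1/2)
Z = np.column_stack([
    norminvgauss(a=2.0, b=0.5).rvs(size=n, random_state=1),
    norminvgauss(a=1.5, b=-0.3).rvs(size=n, random_state=2),
])

# affine-linear transformation X = Z A^T + mu introduces dependence between components
A = np.array([[1.0, 0.0],
              [0.6, 0.8]])
mu = np.array([0.0, 1.0])
X = Z @ A.T + mu

print("empirical correlation matrix:\n", np.corrcoef(X, rowvar=False))
```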


International Symposium on Neural Networks | 1990

A modular architecture for efficient learning

Tomas Hrycej

A modular architecture for supervised learning is presented. Three ways to modularize the learning are investigated: (1) preprocessing the input by a self-organizing subnetwork to extract strong features from the data, (2) supervised feature extraction, and (3) an iterative learning cycle in which only one layer learns at a time, with the output-layer weights learned by an exact method. With this modular architecture, only a small fraction of the connection weights is determined by the gradient-descent method. A series of computational experiments shows the superiority of the modular model in learning quality and speed. The author begins by considering the decomposition of a feedforward network into a feature-discovery module and a supervised-learning module, then he introduces supervised feature discovery, and finally he describes a modular backpropagation algorithm.
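
A minimal numerical sketch combining two of these ideas: features are extracted without supervision (PCA stands in here for the self-organizing subnetwork), and the output-layer weights are then obtained exactly by least squares rather than by gradient descent. The toy data and dimensions are assumptions for illustration.

```python
# Sketch: unsupervised feature extraction + exact (least-squares) output layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # inputs
Y = X @ rng.normal(size=(10, 3))         # targets of a linear toy problem

# 1) unsupervised feature extraction: project onto the leading principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
H = Xc @ Vt[:5].T                        # five extracted features

# 2) output-layer weights solved exactly, no gradient descent involved
H1 = np.column_stack([H, np.ones(len(H))])   # append a bias column
W, *_ = np.linalg.lstsq(H1, Y, rcond=None)

print("training residual norm:", np.linalg.norm(H1 @ W - Y))
```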


Vehicular Technology Conference | 1992

Neural control of autonomous vehicles

Klaus Mecklenburg; Tomas Hrycej; Uwe Franke; Hans Fritz

Lateral control of an autonomous road vehicle by a neural network is presented. The inputs into the controller, such as relative vehicle position and yaw angle, are delivered by dynamic video scene processing. Nonlinear, conflicting requirements of safety and comfort have to be satisfied by the controller. The controller has been trained by a model-based training algorithm. In contrast to other neural network learning algorithms, it uses an explicit plant model to ensure fast and precise convergence. It does not require large training data sets; one or two representative initial states are usually sufficient. Simulations and practical tests with speeds up to 80 km/h on public highways have confirmed the expectations.
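
A toy sketch of the model-based training idea: roll an explicit plant model forward, then backpropagate the control objective through the plant dynamics to update a controller parameter. The scalar linear plant, the single-gain controller, the quadratic objective and all numbers below are assumptions for illustration only; the paper's vehicle model, controller network and objective are far richer.

```python
# Toy model-based controller training: the gradient of the cost with respect to
# the controller gain k is obtained by backpropagating through the plant model.
a, b, dt, T = 0.5, 1.0, 0.1, 20   # plant x' = a*x + b*u (unstable for u = 0), Euler step dt
k, lr = 0.0, 0.01                 # controller u = -k*x; the gain k is learned

for _ in range(300):
    # forward rollout of the plant model from one representative initial state
    xs = [1.0]
    for _ in range(T):
        x = xs[-1]
        xs.append(x + dt * (a * x + b * (-k * x)))

    # backward pass: adjoint of the cost J = sum_t (x_t**2 + 0.1 * u_t**2)
    dJdk, dJdx = 0.0, 0.0         # dJdx is the adjoint (sensitivity) of the state
    for t in reversed(range(T)):
        x = xs[t]
        dJdk += dJdx * (-dt * b * x) + 0.2 * k * x * x            # via plant and control cost
        dJdx = dJdx * (1.0 + dt * (a - b * k)) + 2.0 * x * (1.0 + 0.1 * k * k)
    k -= lr * dJdk                 # gradient step on the controller gain

print("learned stabilizing gain k =", round(k, 2))
```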


Neurocomputing | 1992

Supporting supervised learning by self-organization

Tomas Hrycej

It is a common hypothesis that human learning makes use of both supervised and unsupervised learning modes. This hypothesis suggests that a decomposition of the neural-network learning task might make learning more efficient. Two ways to decompose learning are investigated: (1) preprocessing the input by a self-organizing subnetwork to extract strong features from the data and (2) ‘supervised feature discovery’. With such a modular architecture, the number of connection weights that have to be determined by the gradient-descent method can be substantially reduced. A series of computational experiments shows the superiority of the modular model in learning quality and speed. In particular, the generalization capability of the model is substantially improved.


International Symposium on Neural Networks | 1990

Self-organization by delta rule

Tomas Hrycej

In a certain two-layer network architecture, the delta learning rule leads to a self-organization of the connection weights. A formal analysis of this learning rule shows that the weight vectors of n second-layer nodes converge to a rotation of the first n principal components. Therefore, the delta-rule-based self-organization performs optimal encoding and decoding of data in the sense of principal component analysis. This property of the delta learning rule has been verified by a series of computational experiments, which also showed good convergence stability of the rule. For data compression tasks, it performs substantially better than a three-layer autoassociative perceptron with linear or nonlinear hidden units.
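
The principal-subspace property can be checked numerically. The sketch below uses a generic two-layer linear network whose weights are adapted by gradient descent on the squared reconstruction error (the delta rule applied layer-wise); this is an assumed stand-in, not necessarily the exact architecture analysed in the paper. After training, the first-layer weights span, up to rotation, the same subspace as the leading principal components.

```python
# Check: delta-rule training of a two-layer linear network vs. the PCA subspace.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8)) @ np.diag([5, 4, 3, 1, 0.5, 0.4, 0.3, 0.2])
X -= X.mean(axis=0)

k, lr = 3, 5e-3
W_enc = rng.normal(scale=0.1, size=(8, k))    # first-layer weights
W_dec = rng.normal(scale=0.1, size=(k, 8))    # second-layer weights

for _ in range(3000):
    H = X @ W_enc                              # hidden activations
    E = H @ W_dec - X                          # reconstruction error
    g_dec = H.T @ E / len(X)                   # delta rule for the output layer
    g_enc = X.T @ (E @ W_dec.T) / len(X)       # error propagated to the first layer
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# compare the learned first-layer subspace with the top-k principal subspace
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T @ Vt[:k]                      # projector onto the top-k PCA subspace
Q, _ = np.linalg.qr(W_enc)
print("subspace gap (small when the subspaces agree):", np.linalg.norm(P_pca - Q @ Q.T))
```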


International Symposium on Neural Networks | 2007

Warranty Cost Forecast Based on Car Failure Data

Tomas Hrycej; Matthias Grabert

A failure and warranty cost model is derived from a failure database. The model combines statistical components with a multi-layer perceptron and a cross-entropy-based learning rule. The model is used for forecasting warranty costs under alternative warranty condition scenarios. The estimate of the forecast variance considers both the individual vehicle risk and the overall manufacturing quality fluctuation risk.
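
As a hedged illustration of the cross-entropy ingredient only: a failure-probability model can be fitted by minimizing binary cross-entropy with gradient descent. A plain logistic model stands in for the multi-layer perceptron, and the features and data below are synthetic; nothing here is taken from the paper.

```python
# Fitting a failure-probability model by minimizing binary cross-entropy.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 3))                  # synthetic features (e.g. mileage, age, usage)
true_w = np.array([1.0, -0.5, 0.3])
p_true = 1 / (1 + np.exp(-(X @ true_w - 1.0)))
y = (rng.uniform(size=1000) < p_true).astype(float)   # observed failures (0/1)

w, bias, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + bias)))       # predicted failure probability
    g_w = X.T @ (p - y) / len(y)                # gradient of mean binary cross-entropy
    g_b = np.mean(p - y)
    w -= lr * g_w
    bias -= lr * g_b

print("estimated coefficients:", np.round(w, 2), " bias:", round(bias, 2))
```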


Industrial Conference on Data Mining | 2004

An early warning system for vehicle related quality data

Matthias Grabert; Markus Prechtel; Tomas Hrycej; Winfried Günther

Vehicle production audit tests, warranty claims and car control unit data are stored in a central data warehouse for data mining analysis. Neural network based part failure rate estimations, adjusted for mileage and seasonality, are used for monitoring warranty claims. Association and sequence analysis connect production audit data, car control unit data and warranty claims for an early detection of quality changes both in production state and car field usage. Calculations are performed via grid computing.


Computational Statistics | 2005

Numerical method for estimating multivariate conditional distributions

Eric A. Stützle; Tomas Hrycej

A computational framework for the estimation of multivariate conditional distributions is presented. It allows the forecast of the joint distribution of target variables as a function of explanatory variables. The concept can be applied to general distribution families such as stable or hyperbolic distributions. The estimation is based on the numerical minimization of the cross entropy, using the Multi-Level Single-Linkage global optimization method. Nonlinear dependencies of the conditional parameters can be modeled with the help of general functional approximators such as multi-layer perceptrons. In applications, the information about the complete distribution of forecasts can be used to quantify the reliability of the forecast or for decision support. This is illustrated by a case study on spare parts demand forecasting. The improvement of the forecast error due to using non-Gaussian distributions is presented in another case study on truck sales forecasting.
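
A heavily reduced sketch of the estimation principle: the conditional distribution parameters are produced by a small nonlinear model, and the average negative log-likelihood (the cross entropy against the empirical distribution) is minimized numerically. The sketch uses a 1-D conditional Gaussian and a local optimizer for brevity; the paper works with general families such as stable or hyperbolic distributions, multi-layer perceptrons, and the Multi-Level Single-Linkage global optimizer.

```python
# Cross-entropy (negative log-likelihood) fit of a conditional Gaussian whose
# mean is a small nonlinear function of the explanatory variable x.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, size=300)
y = np.tanh(2 * x) + rng.normal(scale=0.3, size=300)     # synthetic data

def unpack(theta):
    # one hidden layer with 4 tanh units, constant conditional std
    return theta[:4], theta[4:8], theta[8:12], theta[12], theta[13]

def neg_log_lik(theta):
    w1, b1, w2, b2, log_s = unpack(theta)
    mu = np.tanh(np.outer(x, w1) + b1) @ w2 + b2          # conditional mean
    s = np.exp(log_s)                                      # conditional std
    return np.mean(0.5 * ((y - mu) / s) ** 2 + np.log(s) + 0.5 * np.log(2 * np.pi))

res = minimize(neg_log_lik, rng.normal(scale=0.5, size=14), method="L-BFGS-B")
print("fitted noise standard deviation:", np.exp(unpack(res.x)[-1]))
```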


International Symposium on Neural Networks | 1995

Identifying chaotic attractors with neural networks

Tomas Hrycej

The behavior of chaotic systems cannot be exactly forecast for all state variables by identified models, since a deviation in the model parameters leads to exponentially growing forecast errors. However, under certain conditions a model can be identified that possesses the same strange attractor. A procedure for identifying such models is presented. The procedure is based on evaluating an error volume instead of an additive squared error.


Vehicular Technology Conference | 1992

Neural-network-based car drive train control

Tomas Hrycej

The optimization of the drive train for high comfort requirements involves several control problems. One is the determination of an optimal torque trajectory. Classical solutions to this problem suffer from a strongly nonmonotonic torque trajectory, resulting from the difficulty of formulating the monotonicity requirement in the classical quadratic objective function form. A model-based, neural-network-trainable controller has been applied to this problem. In contrast to previous neural-network approaches, it fully exploits the available information about the plant. By its capability of using an arbitrary nonlinear differentiable control objective function (as well as an arbitrary nonlinear differentiable plant), it allows a direct formulation of the torque monotonicity requirement. The development time for the controller was only two days. No control engineering competence was required; the design procedure is very general and automatic.
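
To make the monotonicity point concrete, here is one assumed way such a requirement can enter a differentiable objective: a smooth penalty on decreases along the torque trajectory, which a gradient-based training procedure can then minimize together with the other cost terms. The penalty form is an illustration, not the formulation used in the paper.

```python
# A differentiable soft penalty on non-monotone (decreasing) torque steps.
import numpy as np

def monotonicity_penalty(u, weight=1.0):
    """Penalize negative steps in the torque trajectory u (1-D array)."""
    steps = np.diff(u)
    return weight * np.sum(np.minimum(steps, 0.0) ** 2)

u = np.array([0.0, 0.2, 0.5, 0.4, 0.9])   # one decreasing step (0.5 -> 0.4)
print(monotonicity_penalty(u))             # 0.01
```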

Collaboration


Dive into Tomas Hrycej's collaboration.

Top Co-Authors

Christian Manuel Strobel

Karlsruhe Institute of Technology
