András Zempléni
Eötvös Loránd University
Publications
Featured research published by András Zempléni.
Journal of Theoretical Probability | 1990
András Zempléni
Denote by D(S) the convolution semigroup of compact-regular probability measures on a topological semigroup S. Hinčin's classical decomposition theorems are extended to finite point processes on a completely regular topological space and to the convolution semigroups D(D(G)), D(D(D(G))), ... where G is a locally compact Hausdorff group. The paper applies the Hun and Hungarian semigroup approach of Ruzsa and Székely; the proofs are carried out in this abstract setting.
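For orientation, the classical Hinčin decomposition that is being extended can be stated as follows; this is a standard formulation on the real line, not a quotation from the paper:

```latex
% Hinčin's factorization theorem on \mathbb{R}: every probability measure
% \mu factors into an anti-irreducible part \mu_0 (one with no
% indecomposable factors) and at most countably many indecomposable
% factors \mu_1, \mu_2, \ldots
\[
  \mu \;=\; \mu_0 * \mu_1 * \mu_2 * \cdots
\]
% The paper extends statements of this type to the convolution semigroups
% D(D(G)), D(D(D(G))), \ldots introduced above.
```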
Studies in Educational Evaluation | 1998
Andrea Kárpáti; András Zempléni; N.D. Verhelst; N.H. Velduijzen; D.W. Schönau
In art education, it is customary to accept the expertise of the art teacher as an impartial and objective evaluator of student achievement. But is he or she really an impartial and reliable judge of quality? Is the jury method generally used for the assessment of works of art (Schönau, 1994) a valid method for authentic assessment? If so, are all genres and topics of child art equally suitable for assessment? In order to modernize final examinations in the visual arts, basic issues like these had to be addressed, as the impressive amount of literature on child art never seems to question the validity of assumptions made by a single evaluator or a body of jurors. Research on child art usually focuses on the description of models for development as evaluated by experts whose training entitles them to judge (Gardner, 1993; Kindler, 1997). We know of no previous research project that has questioned the objectivity and reliability of the judgment of these experts. Our experiences as members of art juries disprove the claim of “expert agreement” reached through negotiations. Usually, it is the rhetoric of one of the jurors, based on blurred definitions of “beauty” and “creativity”, and not the objective application of previously agreed-upon criteria, that decides the merit of a work. This procedure, however, cannot be utilized for examination purposes. But can it be improved? Is there a more reliable way of conducting a jury of works of art?
Mathematical and Computer Modelling | 2009
N. Miklós Arató; Dávid Bozsó; Péter Elek; András Zempléni
In this paper we suggest solutions for actuaries facing the problem of estimating future mortality tables, especially in cases where relevant data are scarce and the tendencies are not easy to estimate directly. We propose the use of external sources of information in the form of other, published mortality tables, and use formal statistical tests to decide among these possible candidates. The procedure can also be applied to check, for example, the goodness of mortality selection factors. We suggest the use of parametric families in modelling, for example the simple two-parameter Azbel model. We conclude the paper with a simulation study which allows for the quantification of the possible risks related to unforeseen future changes in the mortality tables. To calibrate the variances of these models, initial estimates are needed, which we obtain by the Lee-Carter method.
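A minimal sketch of the Lee-Carter fit via SVD, which the abstract cites as the source of initial estimates; the `log_m` array of log mortality rates is a hypothetical stand-in for real data, and the normalisation is one common convention, not necessarily the one used in the paper.

```python
import numpy as np

# Hypothetical input: log central mortality rates, shape (ages, years).
rng = np.random.default_rng(0)
log_m = np.log(np.linspace(0.001, 0.2, 90))[:, None] + rng.normal(0, 0.05, (90, 40))

# Lee-Carter model: log m(x,t) ~ a_x + b_x * k_t, fitted by a rank-1 SVD
# of the age-centred log rates.
a_x = log_m.mean(axis=1)                      # age effect
centred = log_m - a_x[:, None]
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                 # normalise so that sum(b_x) = 1
k_t = s[0] * Vt[0, :] * U[:, 0].sum()         # period (mortality index) effect

fitted_log_m = a_x[:, None] + np.outer(b_x, k_t)   # reconstructed log rates
```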
Computers & Mathematics With Applications | 2008
Péter Elek; András Zempléni
We examine the tail behaviour and extremal cluster characteristics of two-state Markov-switching autoregressive models where the first regime behaves like a random walk, the second regime is a stationary autoregression, and the generating noise is light-tailed. Under additional technical conditions we prove that the stationary solution has an asymptotically exponential tail and that the extremal index is smaller than one. The extremal index and the limiting cluster size distribution of the process are calculated explicitly for some noise distributions and simulated for others. The practical relevance of the results is illustrated by examining the extremal properties of a regime-switching autoregressive process with Gamma-distributed noise, already applied successfully in river flow modelling. The limiting aggregate excess distribution is shown to possess a Weibull-like tail in this special case.
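A rough sketch of the kind of process described, together with a runs estimator of the extremal index; the transition probabilities, AR coefficient, threshold and run length below are illustrative choices of mine, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
P = np.array([[0.98, 0.02],    # regime 0: random-walk-like behaviour
              [0.05, 0.95]])   # regime 1: stationary AR(1)
phi, shape, scale = 0.7, 2.0, 1.0   # illustrative parameters

x = np.zeros(n)
state = 0
for t in range(1, n):
    state = rng.choice(2, p=P[state])
    eps = rng.gamma(shape, scale) - shape * scale   # centred Gamma noise
    coef = 1.0 if state == 0 else phi               # unit root vs. AR(1)
    x[t] = coef * x[t - 1] + eps

# Runs estimator of the extremal index: a cluster starts at an exceedance
# preceded by at least r consecutive non-exceedances.
u = np.quantile(x, 0.99)
exceed = x > u
r, clusters, gap = 50, 0, 50
for e in exceed:
    if e:
        if gap >= r:
            clusters += 1
        gap = 0
    else:
        gap += 1
theta_hat = clusters / max(exceed.sum(), 1)
print(f"estimated extremal index: {theta_hat:.2f}")
```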
Quality and Reliability Engineering International | 2008
Zsolt Robotka; András Zempléni; Csaba Seres; Sándor Balázs
In this paper we describe the development of an image retrieval system that is able to browse, cluster and classify large digital image databases. This work was motivated by the projects of the Visualization Centre of Eötvös Loránd University, where such problems are to be solved. The system's functions are based on a Gaussian mixture model (GMM) representation of the images. Image matching is done by a distance measure on the representations, based on an approximation of the Kullback–Leibler divergence of the GMMs. The GMMs are estimated with an improved expectation-maximization (EM) algorithm that avoids convergence to the boundary of the parameter space. These form the basis of the clustering, where a variant of a genetic algorithm is used. The suggested algorithm is able to work with a large number of images or objects; grid technology is a useful tool for generating several runs simultaneously.
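A minimal sketch of the two ingredients mentioned, a GMM representation and a Monte Carlo approximation of the KL divergence between two GMMs, using scikit-learn; the feature arrays are hypothetical stand-ins for image descriptors, and this is a generic approximation rather than the specific one developed in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical per-image feature vectors (e.g. colour/texture descriptors).
feats_a = rng.normal(0.0, 1.0, size=(2000, 5))
feats_b = rng.normal(0.5, 1.2, size=(2000, 5))

gmm_a = GaussianMixture(n_components=4, reg_covar=1e-4, random_state=0).fit(feats_a)
gmm_b = GaussianMixture(n_components=4, reg_covar=1e-4, random_state=0).fit(feats_b)

def kl_mc(p, q, n_samples=20_000):
    """Monte Carlo approximation of KL(p || q) between two fitted GMMs."""
    x, _ = p.sample(n_samples)
    return np.mean(p.score_samples(x) - q.score_samples(x))

# Symmetrised divergence as a simple image-to-image distance.
dist = kl_mc(gmm_a, gmm_b) + kl_mc(gmm_b, gmm_a)
print(f"symmetrised KL distance: {dist:.3f}")
```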
Archive | 1991
András Zempléni
The aim of this paper is to present some results concerning the algebraic probability theory introduced by I. Z. Ruzsa and G. J. Székely [5], which show that some of the results in [5] and [8] cannot be generalized without further assumptions.
Recent Advances in Stochastic Modeling and Data Analysis | 2007
Pál Rakonczai; András Zempléni
More and more copula models have recently been proposed for describing the behavior of multivariate data sets. However, no effective methods are known for checking the validity of these models, especially in higher dimensions. Our approach is based on the multivariate probability integral transformation of the joint distribution, which reduces the multivariate problem to one dimension. We compare the resulting goodness-of-fit tests to those based on the copula density function. We present the background of the methods as well as simulations of their power.
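A small illustration of the probability-integral-transform idea in a case where everything is available in closed form: for a Clayton copula the transformed values W = C(U1, U2) follow the Kendall distribution K(t) = t + t(1 - t^theta)/theta, which can be checked with a one-sample test. The Clayton choice and the parameter are illustrative; this is not the paper's test procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta, n = 2.0, 5000

# Sample from a bivariate Clayton copula via the Gamma-frailty construction.
v = rng.gamma(1.0 / theta, 1.0, n)
e = rng.exponential(1.0, (n, 2))
u = (1.0 + e / v[:, None]) ** (-1.0 / theta)

# Probability integral transform of the joint distribution:
# W = C(U1, U2) should follow the Kendall distribution K(t).
w = (u[:, 0] ** -theta + u[:, 1] ** -theta - 1.0) ** (-1.0 / theta)

def kendall_cdf(t):
    return t + t * (1.0 - t ** theta) / theta

print(stats.kstest(w, kendall_cdf))   # one-dimensional goodness-of-fit check
```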
arXiv: Statistics Theory | 2017
László Varga; András Zempléni
In an earlier paper, Rakonczai et al. (2014), we emphasized the role of the effective sample size for autocorrelated data. The simulations there were based on the block bootstrap methodology. However, the discreteness of the usual block size did not allow for exact calculations. In this paper we propose a generalisation of the block bootstrap methodology, relate it to existing optimisation procedures and apply it to a temperature data set. Our other focus is on statistical tests, where the actual sample size quite often plays an important role, even in the case of relatively large samples. This is especially true for copulas, which are used for investigating the dependencies among data sets. As the time dependence cannot be neglected in quite a few real applications, we investigated the effect of this phenomenon on the test statistic used. The critical values can be computed by the proposed new block bootstrap simulation, where the block sizes are determined, for example, by fitting a VAR model to the observations. The results are illustrated for models of the temperature data used.
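For context, a sketch of the standard moving block bootstrap with a fixed integer block length, i.e. the baseline that the paper generalises to non-integer block sizes; the function name, parameters and the AR(1) example are mine.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot=1000, seed=0):
    """Resample a time series by concatenating overlapping blocks of fixed
    length, preserving short-range dependence within each block."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = np.arange(n - block_len + 1)
    stats_ = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        sample = np.concatenate([x[s:s + block_len] for s in chosen])[:n]
        stats_[b] = sample.mean()            # statistic of interest
    return stats_

# Example: bootstrap standard error of the mean of an AR(1) series.
rng = np.random.default_rng(4)
y = np.zeros(2000)
for t in range(1, len(y)):
    y[t] = 0.6 * y[t - 1] + rng.normal()
boot_means = moving_block_bootstrap(y, block_len=25)
print("bootstrap s.e. of the mean:", boot_means.std())
```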
Theoretical and Applied Climatology | 2016
László Varga; Pál Rakonczai; András Zempléni
This paper presents applications of the peaks-over-threshold methodology for both the univariate and the recently introduced bivariate case, combined with a novel bootstrap approach. We compare the proposed bootstrap methods to the more traditional profile likelihood. We have investigated 63 years of European Climate Assessment daily precipitation data for five Hungarian grid points, first separately for the summer and winter months, then aiming at the detection of possible changes by investigating 20-year moving windows. We show that significant changes can be observed in both the univariate and the bivariate cases, the most recent period being the most dangerous in several cases, as some return values have increased substantially. We illustrate these effects by bivariate coverage regions.
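A minimal univariate peaks-over-threshold sketch: fit a generalized Pareto distribution to threshold excesses and compute a return level. The synthetic precipitation series, threshold quantile and return period are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical daily precipitation-like series (mm), 63 "years" long.
precip = rng.gamma(0.8, 6.0, size=63 * 365)

u = np.quantile(precip, 0.97)                 # threshold
excess = precip[precip > u] - u
xi, _, sigma = stats.genpareto.fit(excess, floc=0)   # GPD shape and scale

# m-observation return level (Coles-style formula, valid for xi != 0):
# x_m = u + (sigma / xi) * ((m * zeta_u)**xi - 1), zeta_u = exceedance rate.
zeta_u = excess.size / precip.size
m = 100 * 365                                 # 100-year return period, daily data
x_m = u + sigma / xi * ((m * zeta_u) ** xi - 1.0)
print(f"threshold = {u:.1f} mm, 100-year return level ~ {x_m:.1f} mm")
```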
Computational Statistics & Data Analysis | 2004
Charles C. Taylor; András Zempléni
In this paper we present a graphical tool useful for visualizing the cyclic behaviour of bivariate time series. We investigate its properties and link it to the asymmetry of the two variables concerned. We also suggest adding approximate confidence bounds to the points on the plot and investigate the effect of lagging on the chain plot. We conclude our paper with some standard Fourier analysis, relating and comparing this to the chain plot.
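As a generic illustration only, the sketch below plots a bivariate cyclic series as time-ordered connected points, where a phase lag between the two series shows up as an open loop; whether this matches the authors' chain plot and its confidence bounds is an assumption on my part.

```python
import numpy as np
import matplotlib.pyplot as plt

# Two hypothetical daily series sharing an annual cycle, one lagging the other.
t = np.arange(0, 5 * 365)
rng = np.random.default_rng(6)
x = np.sin(2 * np.pi * t / 365) + 0.2 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * (t - 30) / 365) + 0.2 * rng.normal(size=t.size)

# Join the pairs (x_t, y_t) in time order: cyclic dependence with a lag
# traces out a loop rather than points scattered along the diagonal.
plt.plot(x, y, lw=0.3, alpha=0.7)
plt.xlabel("series x")
plt.ylabel("series y")
plt.title("time-ordered scatter of a bivariate cyclic series")
plt.show()
```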