John Lamperti
Dartmouth College
Publications
Featured research published by John Lamperti.
Probability Theory and Related Fields | 1972
John Lamperti
A real-valued random function \(\{x_t\}\), continuous in probability and with \(x_0 = 0\), is called semi-stable if there is a constant \(\alpha > 0\) (called the order of the process) such that for every \(a > 0\) the random functions \(\{x_{at}\}\) and \(\{a^{\alpha} x_t\}\) have the same joint distributions. If \(\{x_t\}\) is Markovian with the stationary transition function \(P_t(x, E)\), it is obvious that this condition holds provided that \(x_0 = 0\) and that
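A concrete illustration, not part of the abstract itself: under this scaling convention, standard Brownian motion \(\{B_t\}\) is semi-stable of order \(\alpha = 1/2\), since for every \(a > 0\)
\[
\{B_{at}\}_{t \ge 0} \stackrel{d}{=} \{a^{1/2} B_t\}_{t \ge 0},
\]
and, more generally, a strictly stable Lévy process of index \(\beta\) is semi-stable of order \(1/\beta\).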
Transactions of the American Mathematical Society | 1958
John Lamperti
Bulletin of the American Mathematical Society | 1961
John Lamperti
The purpose of this department is to provide early announcement of significant new results, with some indications of proof. Although ordinarily a research announcement should be a brief summary of a paper to be published in full elsewhere, papers giving complete proofs of results of exceptional interest are also solicited. A NEW CLASS OF PROBABILITY LIMIT THEOREMS, BY JOHN LAMPERTI. Communicated by J. L. Doob, December 30, 1960. Suppose that \(\{X_n\}\) is a Markov process with states on the nonnegative real axis and stationary transition probabilities. Define (1) \(\mu_k(x) = E[(X_{n+1} - X_n)^k \mid X_n = x]\), \(k = 1, 2, \ldots\);
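To illustrate the moments in (1), assuming \(\mu_k\) denotes the \(k\)-th conditional moment of the increments as reconstructed above (the symbol is garbled in the scanned abstract): for a chain on the nonnegative integers that steps \(+1\) or \(-1\) with probability \(1/2\) each whenever \(X_n = x > 0\),
\[
\mu_1(x) = \tfrac12(+1) + \tfrac12(-1) = 0, \qquad \mu_2(x) = \tfrac12(+1)^2 + \tfrac12(-1)^2 = 1 .
\]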
Archive | 1977
John Lamperti
In this chapter we will consider some general properties of stochastic processes with finite second moments.
Archive | 1977
John Lamperti
This chapter begins the more particular theory of stationary second-order random processes, considered from the viewpoint of correlation theory. In other words, we will study processes which are “stationary in the wide sense” (page 7) and build a theory based on their covariance functions \(K(s) = E(X_{t+s}\overline{X}_t)\) alone. This theory has the flavor of Hilbert space and Fourier analysis, and readers who are familiar with the “spectral theorem” for unitary operators on a Hilbert space will recognize that this theorem is behind the “spectral representation” of a stationary process to be derived below. No advance knowledge of spectral theory is needed, however, and in fact the probabilistic setting can provide an easy and well-motivated introduction to this area of functional analysis.
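A standard special case, not taken from the chapter, shows how the covariance function encodes a spectrum: if \(X_t = \sum_{k=1}^{n} \xi_k e^{i\lambda_k t}\) with uncorrelated, mean-zero coefficients satisfying \(E|\xi_k|^2 = \sigma_k^2\), then
\[
K(s) = E\bigl(X_{t+s}\overline{X}_t\bigr) = \sum_{k=1}^{n} \sigma_k^2 e^{i\lambda_k s},
\]
i.e. the process is wide-sense stationary with a discrete spectral measure placing mass \(\sigma_k^2\) at the frequency \(\lambda_k\).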
Archive | 1977
John Lamperti
The skeleton key which brings order out of all this chaos is the theory of semigroups; its application to Markov processes was developed in the early 1950’s, with W. Feller doing the pioneering work.
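To indicate what the semigroup structure refers to, in a standard formulation rather than a quotation from the chapter: a stationary transition function \(P_t(x, E)\) induces operators
\[
(T_t f)(x) = \int f(y)\, P_t(x, dy),
\]
and the Chapman–Kolmogorov relation makes these a semigroup, \(T_{t+s} = T_t T_s\) for \(s, t \ge 0\), with \(T_0 = I\).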
Archive | 1977
John Lamperti
Suppose that \(\{x_t\}\) is a Markov process. If it is known that \(x_{t_0} = x\), the process can be thought of as “beginning afresh” thereafter as though \(x\) had been its initial state.
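In symbols, a standard way of expressing this restarting property (the transition-function notation \(P_t(x, E)\) follows the neighboring abstracts rather than this one):
\[
P\bigl(x_{t_0 + s} \in E \mid x_u,\ u \le t_0\bigr) = P_s\bigl(x_{t_0}, E\bigr), \qquad s \ge 0 .
\]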
Archive | 1977
John Lamperti
Stationary processes (with T = ℝ¹ or ℤ) were defined in Chapter 1, section 3 as processes whose finite-dimensional distributions are invariant under translations of t. So far we have used only the invariance of the second-order moments (“wide-sense” stationarity), but in this chapter the full strength of stationarity will be needed. The main new probabilistic result will be the strong law of large numbers; through this we make contact with the interesting branch of analysis known as ergodic theory. Of course, if the strictly stationary process has finite second moments the theory developed in Chapter 3 and Chapter 4 will apply as well, but the mathematical flavor of the present chapter is quite different from those earlier ones.
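For reference, a standard statement of the result alluded to (not quoted from the chapter): for a strictly stationary sequence \(\{X_n\}\) with \(E|X_1| < \infty\), the ergodic theorem gives
\[
\frac{1}{n}\sum_{k=1}^{n} X_k \;\longrightarrow\; E\bigl(X_1 \mid \mathcal{I}\bigr) \quad \text{almost surely},
\]
where \(\mathcal{I}\) is the invariant \(\sigma\)-field; when the process is ergodic the limit is the constant \(E(X_1)\), which is the strong law of large numbers.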
Archive | 1977
John Lamperti
Much of the remainder of this book is devoted to Markov processes. A definition of the Markov property was given in Chapter 1, but now we will begin over again in a different spirit. Instead of starting with the general Markov process itself, we will first examine how the transition probability functions of Markov processes can be constructed and studied. While we are doing this in the present chapter and the next one, no probability spaces or random variables will be needed (although the word “probability” will be used informally as a guide to the motivation for the work). Later, in Chapter 8, we will actually construct the processes themselves, and then the connection with the Markov property defined in Chapter 1 will be established.
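The basic consistency requirement on such transition probability functions, standard and implicit in this program though not stated in the abstract, is the Chapman–Kolmogorov equation:
\[
P_{t+s}(x, E) = \int P_t(x, dy)\, P_s(y, E), \qquad s, t \ge 0 .
\]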
Archive | 1977
John Lamperti
In this chapter we will take a closer look at certain important questions about stationary sequences. In the case of interpolation, we assume that a stationary process (discrete time) is being recorded continuously, but that one or more observations are missed (perhaps while the experimenter is out to lunch). It is then desired to reconstruct the missing observations as well as possible using all the others, both earlier and later than the ones which were omitted. For prediction, on the other hand, we assume that the entire history of the process is known up to a certain point in time, and on the basis of these observations one or more of the future values must be estimated as accurately as possible. Still another problem, which we won’t discuss here, involves filtering an observed process which consists of a “desired signal” plus “noise” in order to recover the signal alone from the combination. Once again, Wiener’s Cybernetics is a source of interesting historical background.
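To make the prediction problem concrete, in a standard least-squares formulation not taken verbatim from the chapter: given the observed past \(X_n, X_{n-1}, \ldots\), one seeks coefficients minimizing the mean-square prediction error,
\[
\min_{a_0, a_1, \ldots} \; E\Bigl|\,X_{n+1} - \sum_{k \ge 0} a_k X_{n-k}\Bigr|^2 ,
\]
and interpolation is the analogous least-squares reconstruction of a missed value from observations on both sides of it.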