Terrence L. Fine
Cornell University
Publications
Featured research published by Terrence L. Fine.
IEEE Transactions on Information Theory | 1968
Terrence L. Fine
A discrete-time, nonlinear system composed of an integrator preceded by a binary quantizer with integrated negative feedback, which can model a tracking loop or a single-integration delta modulation communication system, is discussed with regard to the input-output statistics for two types of input processes: independent inputs and independent-increments inputs. A recursion on time for the joint distribution of input and output is obtained for the independent-inputs process and explicitly solved for the time-asymptotic distribution, when it exists. The solution is examined in greater detail for the special case of IID normal inputs. When the system is excited by a process of independent increments, the asymptotic behavior of the input and output (they diverge) is of less interest than that of the difference between input and output, the tracking error. A recursion in time for the characteristic function of the error is developed and the time-asymptotic solution found. The tracking error is interpreted by decomposition into static and dynamic parts, and an exponential bound to its distribution is provided. The particular case of normal-increments input is discussed in additional detail.
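To make the tracking-error behavior concrete, here is a minimal simulation sketch (my construction, with illustrative step size and increment scale, not the paper's analysis): a single-integration delta modulator driven by an independent-increments input, with the tracking error taken as the difference between input and output.

```python
# A minimal sketch: binary quantizer + integrator in a feedback loop,
# tracking an independent-increments (random-walk) input. The step size
# `delta` and the increment scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, delta = 10_000, 0.5
x = np.cumsum(rng.normal(scale=0.3, size=n))   # independent-increments input

y = np.zeros(n)                  # integrator (decoder) output
for t in range(1, n):
    # binary quantizer in the feedback loop: step up or down by delta
    y[t] = y[t - 1] + delta * np.sign(x[t] - y[t - 1])

err = x - y                      # tracking error: input minus output
print(f"mean error: {err.mean():+.3f}")
print(f"rms error:  {np.sqrt((err**2).mean()):.3f}")
```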
IEEE Transactions on Information Theory | 1970
Terrence L. Fine
An explanation is provided for the prevalence of apparently convergent relative frequencies in random sequences. The explanation is based upon the computational-complexity characterization of a random sequence. Apparent convergence is shown to be attributable to a surprising consequence of the selectivity with which relative frequency arguments are applied; it is a consequence of data handling rather than an underlying law or good fortune. The consequences of this understanding for probability and its applications are indicated.
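As a small illustration of the phenomenon the paper sets out to explain (not of its complexity-theoretic argument), the running relative frequency of a simulated Bernoulli sequence appears to converge as the sample grows:

```python
# Running relative frequency of ones in a Bernoulli(0.5) sequence;
# the apparent convergence is the empirical phenomenon at issue.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=100_000)
rel_freq = np.cumsum(bits) / np.arange(1, bits.size + 1)

for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}: relative frequency = {rel_freq[n - 1]:.4f}")
```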
IEEE Transactions on Information Theory | 2005
G. del Angel; Terrence L. Fine
Information-theoretic capacity notions for a slotted Aloha random-access system are considered in this paper, together with joint power and retransmission controls for this protocol. The effect of the bursty nature of the arrival and transmission process on the information-carrying capability and spectral efficiency of the system is studied. The nature of the random-access protocol used to resolve decoding failures determines the system's stability and dynamics, and consequently its capacity. System control is carried out by dynamic control of the retransmission probability and by power control. It is shown that substantial performance improvements can be achieved under this control scheme in terms of throughput and spectral efficiency for a range of channel parameters. The tradeoffs involving coding rate, system throughput, and spectral efficiency are analyzed from an information-theoretic point of view.
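For orientation, here is a sketch of the classical single-reception throughput calculation that underlies retransmission control (my simplification, not the paper's information-theoretic model): with N backlogged users each transmitting with probability p, a slot succeeds when exactly one user transmits, so the throughput N p (1-p)^(N-1) is maximized at p = 1/N and approaches 1/e for large N.

```python
# Throughput of slotted Aloha with N backlogged users, each transmitting
# independently with probability p; success iff exactly one transmits.
def throughput(n_users: int, p: float) -> float:
    """Expected successful packets per slot on a single-reception channel."""
    return n_users * p * (1.0 - p) ** (n_users - 1)

for n in (5, 20, 100):
    p_opt = 1.0 / n                    # optimal retransmission probability
    print(f"N = {n:>3}: p* = {p_opt:.3f}, throughput = {throughput(n, p_opt):.4f}")
```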
IEEE Transactions on Signal Processing | 2006
Chin-Jen Ku; Terrence L. Fine
We propose a Bayesian test for independence among signals where only a small dataset is available. Traditional frequentist approaches often fail in this case due to inaccurate estimation of either the source statistical models or the threshold used by the test statistics. In addition, these frequentist methods cannot incorporate prior information into the computation of the test statistics. Our procedure renders the nonparametric problem of testing for independence parametric by quantizing the observed data samples into a table of cell counts. The test statistic is based on the likelihood of the observed cell counts under the independence hypothesis, where the marginal cell probabilities are modeled by independent symmetric Dirichlet priors. We apply our Bayesian test to validate the solutions to the problem of blind source separation with small datasets, using both synthetic and real-life benchmark data. The experimental results indicate that our approach can overcome the scarcity of data samples and significantly outperform the standard frequentist parametric methods with a proper selection of the prior parameters.
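A hedged sketch of the general idea (the quantizer, priors, and Bayes-factor form here are my assumptions, not necessarily the authors' exact statistic): quantize paired samples into a contingency table, then compare the marginal likelihood of the cell counts under independence, with symmetric Dirichlet priors on the two marginals, against a saturated model with a symmetric Dirichlet prior over all cells.

```python
# Dirichlet-multinomial Bayes factor for independence on a quantized table.
import numpy as np
from scipy.special import gammaln

def log_dirichlet_multinomial(counts, alpha):
    """log E[prod p_k^{n_k}] under p ~ symmetric Dirichlet(alpha)."""
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    return (gammaln(k * alpha) - gammaln(k * alpha + counts.sum())
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

def log_bayes_factor_independence(x, y, bins=4, alpha=1.0):
    """log BF of independence vs. saturated model for paired samples x, y."""
    qx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    qy = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    table = np.zeros((bins, bins))
    np.add.at(table, (qx, qy), 1)        # cell counts from quantized pairs
    log_indep = (log_dirichlet_multinomial(table.sum(axis=1), alpha)
                 + log_dirichlet_multinomial(table.sum(axis=0), alpha))
    log_dep = log_dirichlet_multinomial(table.ravel(), alpha)
    return log_indep - log_dep           # > 0 favors independence

rng = np.random.default_rng(2)
x = rng.normal(size=200)
print("independent:", log_bayes_factor_independence(x, rng.normal(size=200)))
print("dependent:  ", log_bayes_factor_independence(x, x + 0.1 * rng.normal(size=200)))
```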
IEEE Transactions on Information Theory | 1966
Terrence L. Fine
This paper exhibits an optimum strategy for the sequential estimation of, or search for, the location of the maximum M of a unimodal function, when M is initially uniformly distributed over some interval. The explicit search strategy that is found is valid for a variety of expected cost functions that add the expected cost of observation to the expected cost of terminal decision. The search problem possesses some of the features of problems in the areas of sequential analysis and stochastic approximation. The search stopping time can be determined as the search proceeds, as in problems of sequential analysis. However, unlike many sequential analysis problems, the observational outcomes are somewhat within our control through a choice of observation or trial points. In common with problems of stochastic approximation, we attempt to determine the maximum of an unknown regression function. Contrary to many problems in stochastic approximation, though, the observations are noiseless, and the regression function is not required to be smooth or regular in the neighborhood of M. The main result is that the strategy minimizing the expected cost, drawn from the class of randomized, optional stopping strategies, is nonrandomized and of a size that can be fixed in advance of observation.
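The paper's Bayes-optimal strategy is derived for a uniform prior and explicit cost functions; for context, a classical non-Bayesian relative is golden-section search, which also locates the maximizer of a unimodal function from noiseless evaluations, shrinking the interval of uncertainty by a constant factor per observation:

```python
# Golden-section search for the maximizer of a unimodal function
# (a classical fixed-shrinkage scheme, not the paper's optimal strategy).
import math

def golden_section_max(f, lo, hi, tol=1e-6):
    """Return an interval of width <= tol containing the maximizer of f."""
    inv_phi = (math.sqrt(5) - 1) / 2           # 1/phi ~ 0.618
    a, b = lo + (1 - inv_phi) * (hi - lo), lo + inv_phi * (hi - lo)
    fa, fb = f(a), f(b)
    while hi - lo > tol:
        if fa < fb:                            # maximizer lies in (a, hi]
            lo, a, fa = a, b, fb
            b = lo + inv_phi * (hi - lo)
            fb = f(b)
        else:                                  # maximizer lies in [lo, b)
            hi, b, fb = b, a, fa
            a = lo + (1 - inv_phi) * (hi - lo)
            fa = f(a)
    return lo, hi

print(golden_section_max(lambda t: -(t - 0.3) ** 2, 0.0, 1.0))
```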
IEEE Transactions on Information Theory | 1975
Terrence L. Fine
The decoding of efficiently encoded messages, from either probabilistic, nonprobabilistic, or unknown message sources, is shown to be often practically impossible. If $\tau(S)$ is a running-time bound on the computational effort of a decoder $\Psi$ accepting a codeword $P$ for message $S$, and $\gamma[K_{\Psi}(S)]$ is an upper bound to acceptable codeword length $|P|$ when the shortest codeword for $S$ has length $K_{\Psi}(S)$, then for many message sources $\mathcal{M}$ there exist messages $S \in \mathcal{M}$ such that: 1) if the encoder satisfies $\gamma$, then the decoder violates $\tau$; 2) if the decoder satisfies $\tau$, then the encoder violates $\gamma$. These conclusions remain valid even when we allow the decoder to reconstruct only an approximation $S'$ in a neighborhood $\delta(S)$ of $S$. The compatibility of these results with those of information theory rests upon the fact that we are inquiring into the detailed properties of coding systems for individual messages and not into the ensemble average properties.
International Symposium on Information Theory | 2002
G. del Angel; Terrence L. Fine
We consider collision resolution protocols for a random access collision channel with multiplicity feedback. By using Markov decision processes, we provide performance bounds for such systems in terms of the mean number of slots required to resolve a collision of a given multiplicity. We show that recursive binary splitting is strictly suboptimal for all collisions of size n>3, and we find the optimal protocol for the case n=4.
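For reference, the fair-binary-splitting baseline that the paper shows to be suboptimal for n > 3 can be evaluated exactly. A sketch (standard model, my construction): the expected collision-resolution interval L(n) satisfies L(n) = 1 + E[L(K) + L(n-K)] with K ~ Binomial(n, 1/2), where the degenerate splits K = 0 and K = n leave the collision intact and force a fixed-point solve.

```python
# Expected slots to resolve an n-fold collision by fair binary splitting.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def mean_slots(n: int) -> float:
    """Expected collision-resolution interval length, fair binary splitting."""
    if n <= 1:
        return 1.0               # an idle or success slot resolves itself
    p = 0.5 ** n
    inner = sum(comb(n, k) * p * (mean_slots(k) + mean_slots(n - k))
                for k in range(1, n))
    # k = 0 and k = n reproduce the same collision, so solve the fixed
    # point L(n) = 1 + 2p(1 + L(n)) + inner for L(n).
    return (1.0 + 2 * p + inner) / (1.0 - 2 * p)

for n in range(2, 7):
    print(f"n = {n}: {mean_slots(n):.3f} slots on average")   # n = 2 gives 5
```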
International Symposium on Information Theory | 2002
G. del Angel; Terrence L. Fine
We consider a slotted Aloha random-access system with combined coding and power/retransmission control. Under suitable conditions, if M or fewer users transmit simultaneously, they are successfully decoded with high probability, with M depending on the coding rate and SINR. We then show that, among all realizable random-access protocols, slotted Aloha's spectral efficiency and maximum throughput are asymptotically optimal as the coding rate vanishes.
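A small sketch of the multipacket-reception model as I read it (parameters illustrative, not the paper's asymptotic analysis): if all packets in a slot are decoded whenever at most M of the N backlogged users transmit, the expected number of decoded packets per slot is E[K · 1{1 ≤ K ≤ M}] with K ~ Binomial(N, p), which we can scan over the transmission probability p.

```python
# Throughput of slotted Aloha with M-user multipacket reception.
import numpy as np
from math import comb

def throughput(n_users: int, m: int, p: float) -> float:
    """Expected decoded packets per slot when <= m transmissions succeed."""
    return sum(k * comb(n_users, k) * p**k * (1 - p) ** (n_users - k)
               for k in range(1, m + 1))

n_users = 50
for m in (1, 2, 4, 8):
    ps = np.linspace(0.001, 0.999, 999)
    vals = [throughput(n_users, m, p) for p in ps]
    best = int(np.argmax(vals))
    print(f"M = {m}: p* ~ {ps[best]:.3f}, throughput ~ {vals[best]:.3f}")
```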
IEEE Transactions on Information Theory | 1996
Terrence L. Fine
Artificial neural networks are systems motivated by the distributed, massively parallel computation in the brain that enables it to be so successful at complex control and recognition/classification tasks. The biological neural network that accomplishes this can be mathematically modeled/caricatured by a weighted, directed graph of highly interconnected nodes (neurons). The artificial nodes are almost always simple transcendental functions whose arguments are the weighted summation of the inputs to the node; early work on neural networks and some current work uses node functions taking on only binary values. After a period of active development in the 1950s and 1960s, which slowed in the face of the limitations of the networks then being explored, neural networks experienced a renaissance in the 1980s with the work of Hopfield [7] on the use of networks with feedback (graphs with cycles) as associative memories, and that of Rumelhart et al. [13] on backpropagation training and feedforward (acyclic graphs) networks that could “learn” from input-output examples provided in a training set. Learning in this context is carried out by a descent-based algorithm that adjusts the network weights so that the network response closely approximates the desired responses specified by the training set. This ability to learn from training data, rather than needing to be explicitly (heuristically) programmed, was important both for an understanding of the functioning of brains and for progress in a great variety of applications in which practitioners had been unable to embed their qualitative understanding in successful programs. The capabilities of neural networks were quickly exploited in a great number of applications to pattern classification, control, and time-series forecasting. Hopfield’s work on associative memories excited the interest of his fellow statistical physicists, who felt that their methods of analysis would be applicable and productive in studying the asymptotic behavior of neural networks. Unfortunately, many of the applications and studies were either trivial or misguided and earned the field the sobriquet of “hype” for the common appearance of exaggerated claims. Nonetheless, information theory distinguished itself with such solid papers as that of McEliece et al. [8] in providing mathematically sophisticated analyses of network capabilities. The 1990s saw a significant maturation both in applications and in theoretical understanding of performance and limitations. In particular, neural networks provided a wide spectrum of applied statisticians with a new and powerful class of regression and classification functions that, for the first time, allowed them to build successful, truly nonlinear models involving hundreds of variables. The problem of “feature” or “regressor” selection becomes less critical when you do not need to narrow your choices among input variables. A new regime in statistics became accessible, and applied statisticians were no longer restricted in practice either to very simple nonlinear models in a few variables or to larger, but linear, models based solely upon second-order properties.
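A minimal sketch of the training loop described above (network size and learning rate are illustrative): a one-hidden-layer feedforward network fit to input-output examples by gradient descent on squared error, i.e., backpropagation.

```python
# One-hidden-layer feedforward net trained by backpropagation (numpy).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, :1]) * X[:, 1:]          # desired responses (training set)

W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr = 0.1

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden nodes: transcendental fn
    out = h @ W2 + b2                        # network response
    err = out - y
    # backpropagate the squared-error gradient through both layers
    g_out = 2 * err / len(X)
    g_h = (g_out @ W2.T) * (1 - h**2)        # tanh'(z) = 1 - tanh(z)^2
    W2 -= lr * h.T @ g_out;  b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h;    b1 -= lr * g_h.sum(axis=0)

print("final mean squared error:", float((err**2).mean()))
```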
IEEE/ACM Transactions on Networking | 2004
G. del Angel; Terrence L. Fine