Publication


Featured research published by John K. Tsotsos.


Artificial Intelligence | 1995

Modeling visual attention via selective tuning

John K. Tsotsos; Sean M. Culhane; Winky Yan Kei Wai; Yuzhong Lai; Neal Davis; Fernando Nuflo

A model for aspects of visual attention based on the concept of selective tuning is presented. It provides for a solution to the problems of selection in an image, information routing through the visual processing hierarchy and task-specific attentional bias. The central thesis is that attention acts to optimize the search procedure inherent in a solution to vision. It does so by selectively tuning the visual processing network which is accomplished by a top-down hierarchy of winner-take-all processes embedded within the visual processing pyramid. Comparisons to other major computational models of attention and to the relevant neurobiology are included in detail throughout the paper. The model has been implemented; several examples of its performance are shown. This model is a hypothesis for primate visual attention, but it also outperforms existing computational solutions for attention in machine vision and is highly appropriate to solving the problem in a robot vision system.
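The top-down hierarchy of winner-take-all (WTA) processes described above can be sketched in miniature. The following is a hedged toy, not the paper's implementation: the 2x2 average-pooling pyramid, the function names, and the single-channel saliency values are all our assumptions.

```python
# Toy sketch of top-down WTA selection through a processing pyramid
# (illustrative only; the selective tuning model operates on a full
# visual processing hierarchy, not a single saliency channel).

def build_pyramid(image):
    """Average-pool 2x2 repeatedly until one unit remains.
    Assumes a square image whose side is a power of two."""
    levels = [image]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        pooled = [[(prev[2*i][2*j] + prev[2*i][2*j+1] +
                    prev[2*i+1][2*j] + prev[2*i+1][2*j+1]) / 4.0
                   for j in range(n)] for i in range(n)]
        levels.append(pooled)
    return levels

def selective_tune(image):
    """Descend the pyramid top-down: the winner at each level
    restricts the WTA competition one level down to its children,
    pruning (implicitly inhibiting) all other branches."""
    levels = build_pyramid(image)
    wi, wj = 0, 0  # the single coarsest unit always wins
    for level in reversed(levels[:-1]):
        # children of winner (wi, wj) form a 2x2 block one level down
        candidates = [(2*wi + di, 2*wj + dj)
                      for di in (0, 1) for dj in (0, 1)]
        wi, wj = max(candidates, key=lambda p: level[p[0]][p[1]])
    return wi, wj  # attended location at the finest resolution
```

Because each stage only competes among the previous winner's children, the selected pathway is localized without ever running a global competition at full resolution.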


Behavioral and Brain Sciences | 1990

Analyzing vision at the complexity level

John K. Tsotsos

The general problem of visual search can be shown to be computationally intractable in a formal, complexity-theoretic sense, yet visual search is extensively involved in everyday perception, and biological systems manage to perform it remarkably well. Complexity level analysis may resolve this contradiction. Visual search can be reshaped into tractability through approximations and by optimizing the resources devoted to visual processing. Architectural constraints can be derived using the minimum cost principle to rule out a large class of potential solutions. The evidence speaks strongly against bottom-up approaches to vision. In particular, the constraints suggest an attentional mechanism that exploits knowledge of the specific problem being solved. This analysis of visual search performance in terms of attentional influences on visual information processing and complexity satisfaction allows a large body of neurophysiological and psychological evidence to be tied together.
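The intractability-versus-tractability contrast above admits a back-of-envelope illustration. The numbers below are our own, not the paper's: unbounded visual search over arbitrary subsets of image locations grows as 2^P, while restricting candidates to contiguous square windows leaves only a polynomial number.

```python
# Illustrative search-space counts (our toy figures): reshaping the
# problem from arbitrary location subsets to contiguous windows is
# one example of the kind of approximation that restores tractability.

def unbounded_candidates(p):
    """All subsets of p image locations."""
    return 2 ** p

def windowed_candidates(side):
    """Axis-aligned square windows inside a side x side image."""
    return sum((side - w + 1) ** 2 for w in range(1, side + 1))
```

For an 8x8 image this is 204 windows versus 2^64 subsets; the gap only widens with image size.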


Medical Image Analysis | 2008

Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI

Alexander Andreopoulos; John K. Tsotsos

We present a framework for the analysis of short axis cardiac MRI, using statistical models of shape and appearance. The framework integrates temporal and structural constraints and avoids common optimization problems inherent in such high dimensional models. The first contribution is the introduction of an algorithm for fitting 3D active appearance models (AAMs) on short axis cardiac MRI. We observe a 44-fold increase in fitting speed and a segmentation accuracy that is on par with Gauss-Newton optimization, one of the most widely used optimization algorithms for such problems. The second contribution is an investigation of hierarchical 2D+time active shape models (ASMs) that integrate temporal constraints and simultaneously improve the 3D AAM based segmentation. We obtain encouraging results (endocardial/epicardial error 1.43±0.49 mm/1.51±0.48 mm) on 7980 short axis cardiac MR images acquired from 33 subjects. We have placed our dataset online for the community to use and build upon.
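The Gauss-Newton baseline mentioned above can be shown on a deliberately tiny problem. This is only a miniature stand-in for the high-dimensional appearance-model optimization the paper benchmarks; the two-parameter line model and the data are our invention.

```python
# Toy Gauss-Newton loop fitting y = a*x + b by least squares.
# Residuals are r_i = a*x_i + b - y_i, Jacobian rows are [x_i, 1];
# each step solves the normal equations J^T J dx = -J^T r.

def gauss_newton_line(xs, ys, a=0.0, b=0.0, iters=10):
    for _ in range(iters):
        r = [a * x + b - y for x, y in zip(xs, ys)]
        # J^T J is 2x2, J^T r is length 2; solve by Cramer's rule
        jtj = [[sum(x * x for x in xs), sum(xs)],
               [sum(xs), float(len(xs))]]
        jtr = [sum(x * ri for x, ri in zip(xs, r)), sum(r)]
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        da = (-jtr[0] * jtj[1][1] + jtr[1] * jtj[0][1]) / det
        db = (-jtr[1] * jtj[0][0] + jtr[0] * jtj[1][0]) / det
        a, b = a + da, b + db
    return a, b
```

Because this model is linear, one step suffices; in the genuinely nonlinear AAM setting each iteration recomputes the (much larger) Jacobian, which is where the fitting cost the paper reduces comes from.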


Vision Research | 2003

The selective tuning model of attention: psychophysical evidence for a suppressive annulus around an attended item

Florin Cutzu; John K. Tsotsos

The selective tuning model [Artif. Intell. 78 (1995) 507] is a neurobiologically plausible neural network model of visual attention. One of its key predictions is that to simultaneously solve the problems of convergence of neural input and selection of attended items, the portions of the visual neural network that process an attended stimulus must be surrounded by inhibition. To test this hypothesis, we mapped the attentional field around an attended location in a matching task where the subject's attention was directed to a cued target while the distance of a probe item to the target was varied systematically. The main result was that accuracy increased with inter-target separation. The observed pattern of variation of accuracy with distance provided strong evidence in favor of the critical prediction of the model that attention is actively inhibited in the immediate vicinity of an attended location.
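The predicted suppressive annulus can be caricatured as a difference-of-Gaussians attentional field. The parameters and functional form below are illustrative choices of ours, not values fitted to the paper's psychophysical data.

```python
import math

# Toy "Mexican hat" attentional field: excitatory gain at the attended
# location, an inhibitory annulus around it, and recovery at larger
# distances (illustrative parameters, not fitted to the experiment).

def attentional_gain(d, sigma_c=1.0, sigma_s=3.0, w=0.8):
    center = math.exp(-d * d / (2 * sigma_c ** 2))
    surround = w * math.exp(-d * d / (2 * sigma_s ** 2))
    return center - surround
```

Under this toy field, gain is positive at the attended location, dips below zero at intermediate probe-target distances (the annulus), and recovers toward zero far away, qualitatively matching accuracy rising with separation.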


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1980

A framework for visual motion understanding

John K. Tsotsos; John Mylopoulos; H.D. Covvey; Steven W. Zucker

A framework for the abstraction of motion concepts from sequences of images by computer is presented. The framework includes: 1) representation of knowledge for motion concepts that is based on semantic networks; and 2) associated algorithms for recognizing these motion concepts. These algorithms implement a form of feedback by allowing competition and cooperation among local hypotheses. They also allow a change of attention mechanism that is based on similarity links between knowledge units, and a hypothesis ranking scheme based on updating of certainty factors that reflect the hypothesis set inertia. The framework is being realized with a system called ALVEN. The purpose behind this system is to provide an evolving research prototype for experimenting with the analysis of certain classes of biomedical imagery, and for refining and quantifying the body of relevant medical knowledge.
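The hypothesis ranking via certainty-factor updating can be sketched with a MYCIN-style combination rule. This is our simplified scheme, not ALVEN's actual rules: supporting evidence moves a hypothesis's certainty toward 1, conflicting evidence toward -1.

```python
# Hedged sketch of certainty-factor updating for competing motion
# hypotheses (our combination rule, borrowed from the classic
# MYCIN-style scheme, standing in for ALVEN's own updating).

def update(cf, evidence):
    """Combine a current certainty factor with new evidence in [-1, 1]."""
    if cf >= 0 and evidence >= 0:
        return cf + evidence * (1 - cf)
    if cf < 0 and evidence < 0:
        return cf + evidence * (1 + cf)
    return (cf + evidence) / (1 - min(abs(cf), abs(evidence)))

def rank(hypotheses, observations):
    """Accumulate evidence per hypothesis, then rank by certainty."""
    cfs = {h: 0.0 for h in hypotheses}
    for h, e in observations:
        cfs[h] = update(cfs[h], e)
    return sorted(cfs, key=cfs.get, reverse=True)
```

Repeated weak support compounds (0.6 then 0.5 yields 0.8), so a hypothesis accumulating consistent local evidence gains the "inertia" that the ranking exploits.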


Computer Vision and Image Understanding | 2013

50 Years of object recognition: Directions forward

Alexander Andreopoulos; John K. Tsotsos

Object recognition systems constitute a deeply entrenched and omnipresent component of modern intelligent systems. Research on object recognition algorithms has led to advances in factory and office automation through the creation of optical character recognition systems, assembly-line industrial inspection systems, as well as chip defect identification systems. It has also led to significant advances in medical imaging, defence and biometrics. In this paper we discuss the evolution of computer-based object recognition systems over the last fifty years, and overview the successes and failures of proposed solutions to the problem. We survey the breadth of approaches adopted over the years in attempting to solve the problem, and highlight the important role that active and attentive approaches must play in any solution that bridges the semantic gap in the proposed object representations, while simultaneously leading to efficient learning and inference algorithms. From the earliest systems which dealt with the character recognition problem, to modern visually-guided agents that can purposively search entire rooms for objects, we argue that a common thread of all such systems is their fragility and their inability to generalize as well as the human visual system can. At the same time, however, we demonstrate that the performance of such systems in strictly controlled environments often vastly outperforms the capabilities of the human visual system. We conclude our survey by arguing that the next step in the evolution of object recognition algorithms will require radical and bold steps forward in terms of the object representations, as well as the learning and inference algorithms used.


Journal of The Optical Society of America A-optics Image Science and Vision | 1986

Ambient illumination and the determination of material changes

Ron Gershon; Allan D. Jepson; John K. Tsotsos

The task of distinguishing material changes from shadow boundaries in chromatic images is discussed. Although there have been previous attempts at providing solutions to this problem, the assumptions that were adopted were too restrictive. Using a simple reflection model, we show that the ambient illumination cannot be assumed to have the same spectral characteristics as the incident illumination, since it may lead to the classification of shadow boundaries as material changes. In such cases, we show that it is necessary to take into account the spectral properties of the ambient illumination in order to develop a technique that is more robust and stable than previous techniques. This technique uses a biologically motivated model of color vision and, in particular, a set of chromatic-opponent and double-opponent center-surround operators. We apply this technique to simulated test patterns as well as to a chromatic image. It is shown that, given some knowledge about the strength of the ambient illumination, this method provides a better classification of shadow boundaries and material changes.
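The naive baseline that the paper improves on can be made concrete: if ambient and incident illumination shared a spectrum, a shadow would scale all channels equally and leave chromaticity unchanged. The function names and tolerance below are our illustrative choices; the paper's actual operators are biologically motivated center-surround units, not this simple ratio test.

```python
# Naive chromaticity test for shadow vs. material boundaries (the
# baseline assumption the paper shows can fail when the ambient
# illumination's spectrum differs from the incident illumination's).

def is_material_change(left, right, tol=0.05):
    """Classify a boundary from (R, G) measurements on either side.
    A pure intensity change (idealized shadow) scales R and G by the
    same factor and so preserves the chromaticity r / (r + g)."""
    cl = left[0] / (left[0] + left[1])
    cr = right[0] / (right[0] + right[1])
    return abs(cl - cr) > tol
```

A spectrally distinct ambient term shifts the shadowed side's chromaticity even across a shadow boundary, which is exactly why this test misclassifies and why the paper accounts for the ambient spectrum explicitly.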


International Journal of Computer Vision | 1988

A ‘complexity level’ analysis of immediate vision

John K. Tsotsos

This paper demonstrates how serious consideration of the deep complexity issues inherent in the design of a visual system can constrain the development of a theory of vision. We first show how the seemingly intractable problem of visual perception can be converted into a much simpler problem by the application of several physical and biological constraints. For this transformation, two guiding principles are used that are claimed to be critical in the development of any theory of perception. The first is that analysis at the ‘complexity level’ is necessary to ensure that the basic space and performance constraints observed in human vision are satisfied by a proposed system architecture. Second, the ‘maximum power/minimum cost principle’ ranks the many architectures that satisfy the complexity-level constraints and allows the choice of the best one. The best architecture chosen using this principle is completely compatible with the known architecture of the human visual system and, in addition, leads to several predictions. The analysis provides an argument for the computational necessity of attentive visual processes by exposing the computational limits of bottom-up early vision schemes. Further, this argues strongly for the validity of the computational approach to modeling the human visual system. Finally, a new explanation is proposed for the pop-out phenomenon so readily observed in visual search experiments.


Computer Vision and Image Understanding | 1997

Sensor planning for object search

John K. Tsotsos; Yiming Ye

This paper studies the almost-unexplored field of sensor planning for object search. Object search is the task of efficiently searching for a given 3D object in a given 3D environment by an agent equipped with a camera for target detection and, if the environment configuration is not known, a method of calculating depth, such as stereo or a laser range finder. Sensor planning for object search refers to the task of selecting the sensing parameters so as to bring the target into the field of view of the camera and to make the image of the target easily detected by the available recognition algorithms. In this paper, the task of sensor planning for object search is formulated as an optimization problem. This problem is proved to be NP-complete, so an approximate solution employing a one-step look-ahead strategy is proposed. This approximation is equivalent to the optimal solution under certain conditions. The search region is characterized by the probability distribution of the presence of the target. The goal is to find the desired object reliably with minimum effort. The control of the sensing parameters depends on the current state of the search region and the detecting ability of the recognition algorithm. The huge space of possible sensing actions is decomposed into a finite set of actions that must be considered. In order to represent the surrounding environment of the camera and to determine the sensing parameters efficiently over time, a concept called the sensed sphere is proposed, and its construction, using a laser range finder, is derived.
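The one-step look-ahead strategy over a finite action set can be sketched as a greedy loop. This is our simplification: the data structures, the per-unit-cost scoring, and the Bayesian posterior update after a failed detection are illustrative assumptions, not the paper's exact formulation.

```python
# Greedy one-step look-ahead for object search (hedged sketch).
# regions: {region: prior probability the target is there}
# actions: list of (region, detection_prob, cost) sensing operations

def greedy_search(regions, actions, budget):
    total_cost, plan = 0.0, []
    probs = dict(regions)
    remaining = list(actions)
    while remaining and total_cost < budget:
        # pick the action with highest expected detection per unit cost
        best = max(remaining, key=lambda a: probs[a[0]] * a[1] / a[2])
        region, pd, cost = best
        if total_cost + cost > budget:
            break
        plan.append(best)
        total_cost += cost
        probs[region] *= (1 - pd)  # failed look lowers the posterior
        remaining.remove(best)
    return plan
```

The posterior update is what makes the greedy choice adaptive: once a high-prior region has been examined without success, its probability mass shrinks and attention shifts to other regions.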


Computer Vision and Image Understanding | 1997

Active Object Recognition Integrating Attention and Viewpoint Control

Sven J. Dickinson; Henrik I. Christensen; John K. Tsotsos; Göran Olofsson

We present an active object recognition strategy which combines the use of an attention mechanism for focusing the search for a 3D object in a 2D image, with a viewpoint control strategy for disambiguating recovered object features. The attention mechanism consists of a probabilistic search through a hierarchy of predicted feature observations, taking objects into a set of regions classified according to the shapes of their bounding contours. We motivate the use of image regions as a focus-feature and compare their uncertainty in inferring objects with the uncertainty of more commonly used features such as lines or corners. If the features recovered during the attention phase do not provide a unique mapping to the 3D object being searched, the probabilistic feature hierarchy can be used to guide the camera to a new viewpoint from where the object can be disambiguated. The power of the underlying representation is its ability to unify these object recognition behaviors within a single framework. We present the approach in detail and evaluate its performance in the context of a project providing robotic aids for the disabled.
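The viewpoint-control step above can be sketched as choosing the view whose predicted feature observations best separate the remaining object hypotheses. The expected-entropy criterion and all names below are our assumptions; the paper's probabilistic feature hierarchy is richer than this single-feature toy.

```python
import math

# Hedged sketch of viewpoint selection for disambiguation: given, for
# each candidate viewpoint, the probability that each candidate object
# would show a distinguishing feature there, pick the viewpoint that
# minimizes the expected entropy of the object posterior.

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def best_viewpoint(posterior, likelihoods):
    """posterior: {object: prob}
    likelihoods: {view: {object: P(feature observed | object, view)}}"""
    def expected_entropy(view):
        lk = likelihoods[view]
        p_feat = sum(posterior[o] * lk[o] for o in posterior)
        h = 0.0
        for seen, pf in ((True, p_feat), (False, 1 - p_feat)):
            if pf <= 0:
                continue
            post = [posterior[o] * (lk[o] if seen else 1 - lk[o]) / pf
                    for o in posterior]
            h += pf * entropy(post)
        return h
    return min(likelihoods, key=expected_entropy)
```

A view where all candidates look alike leaves the posterior entropy unchanged, so the criterion steers the camera toward views where the candidates predict different observations.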
