D. G. Shaposhnikov
Southern Federal University
Publications
Featured research published by D. G. Shaposhnikov.
Journal of Visual Communication and Image Representation | 2006
Xiaohong W. Gao; Lubov N. Podladchikova; D. G. Shaposhnikov; Kunbin Hong; Natalia A. Shevtsova
Colour and shape are basic characteristics of traffic signs, used both by drivers and by artificial traffic sign recognition systems. However, earlier recognition systems have not represented these features robustly, especially under degraded viewing conditions. In this study, colour information is represented with a human-vision colour appearance model, and shape information with a further development of an existing behavioural model of vision. The colour appearance model CIECAM97 is applied to extract colour information and to segment and classify traffic signs, while shape features are extracted with the FOSTS model, an extension of the Behaviour Model of Vision (BMV). The recognition rate is very high for still images of signs under artificial transformations that imitate real-world distortions (noise levels up to 50%, distances to signs up to 50 m, and perspective disturbances up to 5°). For British traffic signs (n = 98) obtained under various viewing conditions, the recognition rate reaches 95%.
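CIECAM97 implementations are not common in off-the-shelf libraries. As a hedged sketch of the colour-segmentation step only, the following uses an HSV threshold as a simplified stand-in for the colour appearance model; the hue ranges and file name are illustrative, not the authors' values.

```python
import cv2

def segment_sign_colours(bgr_image):
    """Crude colour-based sign segmentation.

    HSV thresholding is a simplified stand-in for the CIECAM97
    colour appearance model used in the paper; the ranges below
    are illustrative guesses, not the authors' values.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    red1 = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    red2 = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    blue = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))
    return cv2.bitwise_or(cv2.bitwise_or(red1, red2), blue)

mask = segment_sign_colours(cv2.imread("sign.png"))  # hypothetical input
```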
EURASIP Journal on Image and Video Processing | 2008
Xiaohong W. Gao; Kunbin Hong; Peter J. Passmore; Lubov N. Podladchikova; D. G. Shaposhnikov
This paper presents a new approach to segmenting traffic signs from the rest of a scene via CIECAM, a colour appearance model. This approach not only takes CIECAM into practical application for the first time since it was standardised in 1998, but also introduces a new way of segmenting traffic signs that improves the accuracy of colour-based approaches. A comparison with other CIE spaces, including CIELUV and CIELAB, and with the RGB colour space is also carried out. The results show that CIECAM performs better than the other three spaces, with accuracy rates of 94%, 90%, and 85% for sunny, cloudy, and rainy days, respectively. The results also confirm that CIECAM predicts colour appearance in a way consistent with average observers.
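A minimal sketch of how the colour-space comparison could be set up for the CIELAB case, classifying pixels against reference sign colours; the reference values and distance threshold are invented for illustration, not taken from the paper.

```python
import numpy as np
from skimage.color import rgb2lab

# Illustrative reference colours (Lab) for sign classes; the paper's
# actual references would come from calibrated sign measurements.
REFS = {"red": np.array([45.0, 65.0, 45.0]),
        "blue": np.array([30.0, 20.0, -55.0])}

def classify_pixels_lab(rgb_image, max_dist=25.0):
    """Label each pixel with the nearest reference colour in CIELAB,
    or leave it unlabelled if no reference is within max_dist."""
    lab = rgb2lab(rgb_image)  # H x W x 3
    dists = np.stack([np.linalg.norm(lab - ref, axis=2)
                      for ref in REFS.values()], axis=0)
    labels = np.argmin(dists, axis=0)
    labels[np.min(dists, axis=0) > max_dist] = -1  # background
    return labels
```

The same classification loop, rerun with CIELUV, RGB, or CIECAM coordinates, would give the per-space accuracy figures being compared.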
International Conference on Artificial Neural Networks | 2003
Xiaohong W. Gao; Lubov N. Podladchikova; D. G. Shaposhnikov
A system for traffic sign recognition has been developed. Both colour and shape information from signs are used for feature extraction. The colour appearance model CIECAM97 is applied to extract colour information and to segment and classify traffic signs, while shape features are extracted with the FOSTS model, an extension of the Behaviour Model of Vision (BMV). The recognition rate is very high: for British traffic signs (n = 98) obtained under various viewing conditions, it reaches 95%.
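The FOSTS feature extraction itself is not specified here; as a rough stand-in for orientation-based shape features around a fixation point, one might compute a gradient-orientation histogram:

```python
import numpy as np
from scipy import ndimage

def orientation_histogram(gray, cx, cy, radius=16, bins=8):
    """Histogram of gradient orientations in a window around (cx, cy).
    A crude stand-in for FOSTS-style foveal shape features; the window
    size and bin count are arbitrary choices."""
    patch = gray[cy - radius:cy + radius, cx - radius:cx + radius].astype(float)
    gx = ndimage.sobel(patch, axis=1)
    gy = ndimage.sobel(patch, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # orientation on [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)
```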
Intelligent Robots and Computer Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling | 1997
Lubov N. Podladchikova; Valentina I. Gusakova; D. G. Shaposhnikov; Alain Faure; Alexander V. Golovan; Natalia A. Shevtsova
Earlier, a biologically plausible active vision model for multiresolutional attentional representation and recognition (MARR) was developed. The model is based on the scanpath theory of Noton and Stark and provides invariant recognition of grey-level images. The present paper considers the algorithm for automatic formation of the image viewing trajectory in the MARR model, the results of psychophysical experiments, and possible applications of the model. The algorithm for automatic trajectory formation is based on imitation of the scanpath formed by a human operator. Several propositions about possible mechanisms for the consecutive selection of fixation points in human visual perception, inspired by computer simulation results and known psychophysical data, were tested and confirmed in our psychophysical experiments. In particular, we found that the gaze may switch (1) to a peripheral part of the visual field containing an edge oriented orthogonally to the edge at the point of fixation, and (2) to a peripheral part of the visual field containing crossing edges. These experimental results were used to optimise the automatic image viewing algorithm in the MARR model. The modified model recognises complex real-world images invariantly with respect to scale, shift, rotation, illumination conditions, and, in part, viewpoint, and can be used to solve some robot vision tasks.
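The fixation-selection rule reported here (a preference for peripheral edges orthogonal to the edge at the current fixation) can be sketched by reusing the orientation_histogram helper above; the scoring rule is an illustrative simplification, not the MARR model's actual mechanism.

```python
import numpy as np

def next_fixation(gray, fix, candidates, radius=16):
    """Pick the candidate point whose dominant edge orientation is
    most nearly orthogonal to the edge at the current fixation.
    Illustrative only; the MARR model's actual rule is richer."""
    def dominant_orientation(p):
        h = orientation_histogram(gray, p[0], p[1], radius)
        centers = (np.arange(len(h)) + 0.5) * np.pi / len(h)
        return centers[np.argmax(h)]

    theta0 = dominant_orientation(fix)

    def orth_score(p):
        # Angular distance on the orientation circle (period pi);
        # maximal at pi/2, i.e. orthogonal edges.
        d = abs(dominant_orientation(p) - theta0) % np.pi
        return min(d, np.pi - d)

    return max(candidates, key=orth_score)
```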
Optical Memory and Neural Networks | 2009
Lubov N. Podladchikova; D. G. Shaposhnikov; A. V. Tikidgji-Hamburyan; T. I. Koltunova; R. A. Tikidgji-Hamburyan; Valentina I. Gusakova; Alexander V. Golovan
A model-based approach to studying the mechanisms of complex image viewing, together with the first results of its implementation, is presented. The most informative regions (MIRs) are chosen according to the results of psychophysical tests with high-accuracy eye-movement tracking. For three test images, the MIRs were determined as the image regions with the maximal density of gaze fixations across all subjects (n = 9). Individual image viewing scanpaths (n = 49) were classified into three basic types ("viewing", "object-consequent", and "object-returned" scanpaths). Task-related and temporal dynamics of eye-movement parameters for the same subjects were found. Artificial image scanpaths similar to the experimental ones were obtained by means of a gaze attraction function.
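A minimal sketch of the MIR idea, assuming MIRs are peaks of a smoothed fixation-density map pooled over subjects; the smoothing width, number of regions, and suppression radius are arbitrary illustrative parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def most_informative_regions(fixations, shape, sigma=20, n_regions=3):
    """Estimate MIRs as peaks of a smoothed fixation-density map.
    fixations: array of (x, y) gaze positions pooled over subjects;
    shape: (height, width) of the image."""
    density = np.zeros(shape)
    for x, y in fixations.astype(int):
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            density[y, x] += 1
    density = gaussian_filter(density, sigma)
    # Greedily take the top peaks, suppressing a neighbourhood each time.
    peaks, d = [], density.copy()
    for _ in range(n_regions):
        y, x = np.unravel_index(np.argmax(d), d.shape)
        peaks.append((x, y))
        d[max(0, y - 2 * sigma):y + 2 * sigma,
          max(0, x - 2 * sigma):x + 2 * sigma] = 0
    return peaks, density
```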
Computer-Based Medical Systems | 2008
Sergey Anishchenko; Vladislav Osinov; D. G. Shaposhnikov; Lubov Podlachikova; Richard Comley; Xiaohong W. Gao
A new approach for detecting head motion during PET scanning is presented. The proposed system includes four modules: an input module, face segmentation, facial landmark detection, and head movement estimation. The developed system was tested on pictures monitoring a subject's head while simulating a PET scan (n = 12) and on face images of subjects with different skin colours (n = 31). Experimental results show that the centres of the chosen facial landmarks (the eye corners and the midpoint of the nose base) can be detected with high precision (1 ± 0.64 pixels). Processing of 2D images with known motion parameters demonstrates that movement, in terms of rotation and translation along the X, Y, and Z directions, can be recovered very accurately by the developed methods.
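Movement estimation from matched landmarks is commonly done with a Kabsch/SVD rigid-transform fit; the abstract does not state the authors' method, so the following is a generic sketch, not their implementation.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~= R @ P + t,
    via the Kabsch/SVD method, for matched 3-D landmark sets (N x 3).
    A generic sketch of movement estimation from facial landmarks."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```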
Perception | 2015
Anatoly Samarin; T. I. Koltunova; Vladislav Osinov; D. G. Shaposhnikov; Lubov N. Podladchikova
Since the first works of Buswell, Yarbus, and Noton and Stark, the scanpath of complex image viewing has been considered a possible key to the objective estimation of cognitive processes and their dynamics. However, modern research has produced evidence both for and against this view. In this article, results supporting the Yarbus-Stark concept are presented. In psychophysical tests, two types of images (three paintings from Yarbus's works and four textures) were used with two instructions: "free viewing" and "search for modified image regions." The analysis of experimental results and modelling focused on local elements of the scanpath. It was shown that each parameter used (the area of the viewing region, S; the distance between the centre of mass of the viewing region and the image centre, R; and the parameter Xi, based on the duration of the current fixation and the angle between the preceding and following saccades) reflects the specificity of both the visual task and the image properties. Additionally, return gaze fixations, which have a set of specific properties and mainly fall within areas of interest in the image, were revealed. These facts can evidently be formalised in an advanced mathematical model as an additional instrument for studying the mechanisms of complex image viewing.
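Two of the scanpath parameters can be computed straightforwardly, assuming S is the convex-hull area of the fixation cloud and R the distance from the fixation centroid to the image centre; the paper's exact definitions may differ.

```python
import numpy as np
from scipy.spatial import ConvexHull

def scanpath_parameters(fixations, image_center):
    """Compute S and R for one scanpath, under the assumption that
    S is the convex-hull area of the fixation cloud (N x 2 array)."""
    S = ConvexHull(fixations).volume  # .volume is the area in 2-D
    centroid = fixations.mean(axis=0)
    R = np.linalg.norm(centroid - np.asarray(image_center))
    return S, R
```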
International Symposium on Neural Networks | 1999
D. G. Shaposhnikov; Lubov N. Podladchikova
Experimental data on local spatial nonuniformity of orientation selectivity and brightness sensitivity in the human peripheral visual field are described. In addition to the known macrodynamics of sensory tuning sharpness from the fovea toward the periphery, estimated over consecutive 10° areas of the visual field, a local alternation of high- and low-selectivity microareas with a period of 2°-8° is shown. Subjects' reaction times to stimuli presented inside high-selectivity microareas are significantly shorter than in low-selectivity microareas (the mean time difference for light-spot stimuli is 120 ms). The revealed local nonuniformity may reflect basic mechanisms of context encoding of foveal information and of choosing the next fixation point before the perceptual importance of an object is estimated. A possible use of the obtained results for the development of advanced biologically motivated artificial foveal systems is considered.
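The reported 2°-8° periodicity invites a simple spectral check; a hedged analysis sketch (not the authors' procedure) for estimating the dominant spatial period of a reaction-time profile sampled along an eccentricity axis:

```python
import numpy as np

def dominant_period(rt_profile, step_deg=0.5):
    """Estimate the dominant spatial period (in degrees) of a
    reaction-time profile via its power spectrum. The sampling
    step is an assumed value for illustration."""
    x = rt_profile - np.mean(rt_profile)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=step_deg)  # cycles per degree
    k = np.argmax(spectrum[1:]) + 1              # skip the DC bin
    return 1.0 / freqs[k]
```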
Neuroscience and Behavioral Physiology | 2017
T. I. Koltunova; Lubov N. Podladchikova; D. G. Shaposhnikov; B. M. Vladimirskii; L. D. Syrkin; B. I. Kryuchkov; V. M. Usov
A method for studying the dynamics of visual attention in humans at different stages of examining and recognizing complex images is described. The method's distinctive features include presenting dynamically formed whole images and distractors in the foveal area of the visual field. Experimental data are presented on acclimation to distractors, the bimodal distribution of fixation durations when distractors are used, and the relationship between the distractor effect and the complexity of the target image. Most EEG leads, except the occipital leads, showed significant decreases in the latency of the P350 component of fixation-linked potentials on simultaneous presentation of whole images and distractors. The potential for using these results to create work-oriented tests for assessing the state of visual attention in human operators, without interfering with their work, is discussed.
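A simplified, single-channel sketch of measuring the P350 latency from fixation-linked potentials, assuming epochs have already been extracted around fixation onsets and baseline-corrected; the search window is an illustrative choice.

```python
import numpy as np

def p350_latency(epochs, fs, window=(0.25, 0.45)):
    """Latency (s) of the positive peak in the averaged fixation-linked
    potential, searched in a window around 350 ms.
    epochs: n_epochs x n_samples array for one channel; fs: sample rate."""
    erp = epochs.mean(axis=0)
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return (i0 + np.argmax(erp[i0:i1])) / fs
```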
Proceedings of SPIE | 2015
Sergey Anishchenko; D. Beylin; P. Stepanov; A. Stepanov; I. N. Weinberg; S. Schaeffer; V. Zavarzin; D. G. Shaposhnikov; Mark F. Smith
Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected for in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras, organized in two stereo pairs, capture video of the patient's head during PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The present work evaluates the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99 ± 0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of motion tracking. The developed system performs tracking with near-millimetre accuracy and can help preserve the resolution of brain PET images in the presence of movement.
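A generic sketch of the stereo step: with calibrated projection matrices, matched facial points from each camera pair can be triangulated to 3-D, after which the 6-DOF pose over time follows from a rigid-transform fit such as the Kabsch sketch above. The function below is illustrative, not the system's implementation.

```python
import cv2
import numpy as np

def triangulate_landmarks(P1, P2, pts1, pts2):
    """Reconstruct 3-D facial landmarks from one stereo pair.
    P1, P2: 3x4 camera projection matrices from calibration;
    pts1, pts2: matched 2-D points (N x 2). Generic sketch only."""
    X = cv2.triangulatePoints(P1, P2,
                              pts1.T.astype(np.float64),
                              pts2.T.astype(np.float64))
    return (X[:3] / X[3]).T  # homogeneous -> Euclidean, N x 3
```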