Lech Szymanski
University of Otago
Publication
Featured research published by Lech Szymanski.
IEEE Transactions on Neural Networks | 2014
Lech Szymanski; Brendan McCane
We present a comparative theoretical analysis of representation in artificial neural networks with two extreme architectures, a shallow wide network and a deep narrow network, devised to maximally decouple their representative power due to layer width and network depth. We show that, given a specific activation function, models with comparable VC-dimension are required to guarantee zero-error modelling of real-valued functions over binary inputs. However, functions that exhibit repeating patterns can be encoded much more efficiently in the deep representation, resulting in a significant reduction in complexity. This paper provides some initial theoretical evidence of when and how depth can be extremely effective.
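The efficiency of depth on repeating patterns can be illustrated with the classic tent-map construction (a sketch of the general phenomenon, not code from the paper): each composition of the tent map is computable by a single 2-neuron ReLU layer, and each extra layer doubles the number of oscillations of the resulting function.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    """Tent map on [0, 1], computable by one 2-neuron ReLU layer:
    tent(x) = 2*relu(x) - 4*relu(x - 0.5)."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_tent(x, depth):
    """Compose the tent map `depth` times. The result has 2**(depth-1)
    peaks on [0, 1], so each extra layer doubles the number of
    repetitions while adding only two neurons."""
    for _ in range(depth):
        x = tent(x)
    return x
```

A shallow ReLU network needs roughly one hidden unit per linear piece, so matching `deep_tent` at depth k in a single layer requires exponentially many neurons, which is the kind of complexity gap the paper formalises.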
Journal of Clinical Neuroscience | 2015
Reuben Johnson; Lech Szymanski; Steven Mills
Technical advances have led to an increase in the use of the endoscope in neurosurgery in recent years, particularly for intraventricular procedures and pituitary and anterior skull base surgery. Recently stereoscopic three-dimensional (3D) endoscopes have become available and may over time replace traditional two-dimensional (2D) imagery. An alternative strategy would be to use computer software algorithms to give monocular 2D endoscopes 3D capabilities. In this study our objective was to recover depth information from 2D endoscopic images using optical flow techniques. Digital images were recorded using a 2D endoscope and a hierarchical structure from motion algorithm was applied to the motion of the endoscope in order to calculate depth information for the generation of 3D anatomical structure. We demonstrate that 3D data can be recovered from 2D endoscopic images taken during endoventricular surgery where there is a mix of rapid camera motion and periods where the camera is nearly stationary. These algorithms may have the potential to give 3D visualization capabilities to 2D endoscopic hardware.
Image and Vision Computing New Zealand | 2014
Steven Mills; Lech Szymanski; Reuben Johnson
We present a hierarchical approach to structure from motion. This approach uses the notion of frame distance, which we define to be the median image displacement of tracked features. Image pairs with a high frame distance are used to initialise reconstruction, as they are expected to have significant associated camera motion and, therefore, give strong geometric constraints on the reconstruction. Additional frames are then added with progressively smaller frame distances to create denser reconstructions. We demonstrate this technique on endoscopic video, where there is a mix of rapid camera motion and periods where the camera is nearly stationary.
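The frame-distance ordering described above can be sketched as follows (an illustrative reconstruction of the stated definition; the function names and the assumption that each frame stores the same tracked features in matched order are ours):

```python
import numpy as np

def frame_distance(tracks_a, tracks_b):
    """Median image displacement of features tracked between two frames.

    tracks_a, tracks_b: (N, 2) arrays of matched feature positions."""
    disp = np.linalg.norm(np.asarray(tracks_b) - np.asarray(tracks_a), axis=1)
    return float(np.median(disp))

def order_pairs_by_distance(tracks):
    """Rank all frame pairs by frame distance, largest first, so the
    widest-baseline pair (strongest geometric constraints) seeds the
    reconstruction and smaller-distance pairs densify it later."""
    pairs = []
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            pairs.append((frame_distance(tracks[i], tracks[j]), i, j))
    return sorted(pairs, reverse=True)
```

The median (rather than the mean) makes the distance robust to a few badly tracked features, which matters on endoscopic footage where tracking failures are common.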
International Symposium on Neural Networks | 2013
Lech Szymanski; Brendan McCane
We propose a folding transformation paradigm for supervised layer-wise learning in deep neural networks by introducing the concepts of internal decision making, mapping and shatter complexity. These concepts aid in the analysis of an individual hidden transformation in a deep architecture and help to map the capabilities of the proposed folding transformations. We justify the increase of VC-dimension due to depth by showing that the extra model complexity is needed to resolve large variability in the input data for complex problems. We provide an implementation and test the architecture's performance on a classification task.
International Symposium on Neural Networks | 2012
Lech Szymanski; Brendan McCane
Deep architecture models are known to be conducive to good generalisation for certain types of classification tasks. Existing unsupervised and semi-supervised training methods do not explain why and when deep internal representations will be effective. We investigate the fundamental principles of representation in deep architectures by devising a method for binary classification in multi-layer feed-forward networks with limited breadth. We show that, given enough layers, a super-narrow neural network, with two neurons per layer, is capable of shattering any separable binary dataset. We also show that datasets that exhibit certain types of symmetry are better suited for deep representation and may require only a few hidden layers to produce the desired classification.
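Why symmetric datasets suit narrow deep networks can be illustrated with a single fold (our sketch; an absolute-value fold is one thing a 2-neuron ReLU layer can compute, not the paper's exact construction): points symmetric about the fold centre are mapped together, so one layer can halve the number of decision regions the rest of the network must handle.

```python
import numpy as np

def fold(x, c=0.0):
    """Fold the real line at c using two ReLUs, as a 2-neuron layer could:
    |x - c| = relu(x - c) + relu(c - x)."""
    return np.maximum(x - c, 0.0) + np.maximum(c - x, 0.0)

# A symmetric 1D problem that is not linearly separable:
x = np.array([-2.0, -1.0, 1.0, 2.0])   # labels: 1, 0, 0, 1
z = fold(x)                            # symmetric points coincide; a
                                       # threshold at 1.5 now separates the classes
```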
Neurocomputing | 2018
Brendan McCane; Lech Szymanski
We prove that radially symmetric functions in d dimensions can be approximated by a deep network with fewer neurons than the previously best known result. Our results are much more efficient in terms of the support radius of the radial function and the error of approximation. Our proofs are all constructive and we specify the network architecture and almost all of the weights. The method relies on space-folding transformations that allow us to approximate the norm of a high dimensional vector using relatively few neurons.
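The norm approximation by space folding can be sketched in 2D (our illustrative reconstruction, not the paper's d-dimensional construction or its weights): repeatedly reflect the vector into an ever-narrower sector around the positive x-axis, using one absolute value (two ReLUs) per fold, so that its x-coordinate converges to its norm.

```python
import numpy as np

def fold_norm_2d(v, n_folds=10):
    """Approximate the Euclidean norm of a 2D vector by space folding.

    Each iteration rotates the current sector to straddle the x-axis and
    folds the lower half onto the upper half with abs(); the angular
    width halves each time, so x -> r*cos(residual angle) -> r."""
    x, y = abs(float(v[0])), abs(float(v[1]))  # fold into the first quadrant
    half = np.pi / 4.0
    for _ in range(n_folds):
        c, s = np.cos(half), np.sin(half)
        x, y = c * x + s * y, c * y - s * x    # clockwise rotation by `half`
        y = abs(y)                             # fold across the x-axis
        half /= 2.0
    return x
```

After k folds the residual angle is at most pi/2**(k+1), so the relative error decays roughly like 4**(-k): linear depth buys exponentially better accuracy, using only a constant number of neurons per layer.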
Image and Vision Computing New Zealand | 2013
Shawn Martin; Lech Szymanski
Manifold clustering is often used to partition a multiple manifold dataset prior to the application of manifold learning. Thus manifold clustering can be seen as a preprocessing step for eliminating singularities in a dataset before doing dimension reduction. In this paper, we propose an algorithm for resolving singularities prior to dimension reduction. We achieve singularity resolution using algebraic blow ups as motivation. With this type of singularity resolution, we are able to simultaneously perform manifold clustering and learning. The algorithm is based on a simple modification of Isomap which identifies and treats singularities before providing reduced dimensional representations. We demonstrate our algorithm with various examples and apply it to problems in molecular conformation, motion segmentation, and face clustering.
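A minimal version of the unmodified Isomap pipeline that the algorithm builds on can be sketched as follows (the singularity detection and blow-up steps from the paper are not reproduced here; this is the standard kNN-graph, geodesic-distance, classical-MDS recipe):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap_embed(X, n_neighbors=6, n_components=2):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    D = squareform(pdist(X))
    n = len(X)
    G = np.full((n, n), np.inf)              # inf marks non-edges
    nearest = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):                       # symmetric kNN graph
        G[i, nearest[i]] = D[i, nearest[i]]
        G[nearest[i], i] = D[i, nearest[i]]
    geo = shortest_path(G, method="D")       # geodesic (graph) distances
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (geo ** 2) @ J            # classical MDS Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

The paper's modification intervenes in the graph-construction stage, detecting singular (intersection) points before the geodesic distances are computed, so clustering and embedding come out of the same pass.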
International Symposium on Neural Networks | 2012
Lech Szymanski; Brendan McCane
Deep architecture neural networks have been shown to generalise well for many classification problems; however, beyond the empirical evidence, it is not entirely clear how deep representation benefits these problems. This paper proposes a supervised cost function for an individual layer in a deep architecture classifier that improves data separability. From this measure, a training algorithm for a multi-layer neural network is developed and evaluated against backpropagation and deep belief net learning. The results confirm that the proposed supervised training objective leads to appropriate internal representation with respect to the classification task, especially for datasets where unsupervised pre-conditioning is not effective. Separability of the hidden layers offers new directions and insights in the quest to illuminate the black box model of deep architectures.
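The paper's specific layer cost is not given here; a generic Fisher-style separability score illustrates the kind of per-layer objective involved (a stand-in of our own, and the actual cost function in the paper may differ): reward hidden representations whose class means are far apart relative to the spread within each class.

```python
import numpy as np

def class_scatter_ratio(H, y):
    """Illustrative separability score for a hidden representation H:
    between-class scatter over within-class scatter (Fisher-style).

    H: (N, d) array of hidden activations; y: (N,) array of class labels."""
    mu = H.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        mc = Hc.mean(axis=0)
        sb += len(Hc) * np.sum((mc - mu) ** 2)   # between-class scatter
        sw += np.sum((Hc - mc) ** 2)             # within-class scatter
    return sb / max(sw, 1e-12)
```

Maximising such a score layer by layer pushes each hidden transformation to untangle the classes a little more before the next layer sees the data, which is the intuition behind supervised layer-wise training.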
arXiv: Learning | 2017
Brendan McCane; Lech Szymanski
arXiv: Computer Vision and Pattern Recognition | 2016
Xiping Fu; Brendan McCane; Steven Mills; Michael H. Albert; Lech Szymanski