Nick Holliman
Durham University
Publications
Featured research published by Nick Holliman.
Journal of the Geological Society | 2005
Ken McCaffrey; Richard R. Jones; R. E. Holdsworth; Robert W. Wilson; Phillip Clegg; Jonathan Imber; Nick Holliman; Immo Trinks
The development of affordable digital technologies that allow the collection and analysis of georeferenced field data represents one of the most significant changes in field-based geoscientific study since the invention of the geological map. Digital methods make it easier to re-use pre-existing data (e.g. previous field data, geophysical surveys, satellite images) during renewed phases of fieldwork. Increased spatial accuracy from satellite and laser positioning systems provides access to geostatistical and geospatial analyses that can inform hypothesis testing during fieldwork. High-resolution geomatic surveys, including laser scanning methods, allow 3D photorealistic outcrop images to be captured and interpreted using novel visualization and analysis methods. In addition, better data management on projects is possible using geospatially referenced databases that match agreed international data standards. Collectively, the new techniques allow 3D models of geological architectures to be constructed directly from field data in ways that are more robust than the abstract models traditionally constructed by geoscientists. This development will permit explicit information on uncertainty to be carried forward from field data to the final product. Current work is focused upon the development and implementation of a more streamlined digital workflow from the initial data acquisition stage to the final project output.
electronic imaging | 2008
Paul W. Gorley; Nick Holliman
We are interested in metrics for automatically predicting the compression settings for stereoscopic images so that we can minimize file size while maintaining an acceptable level of image quality. Initially we investigate how Peak Signal to Noise Ratio (PSNR) measures the quality of variously coded stereoscopic image pairs. Our results suggest that symmetric, as opposed to asymmetric, stereo image compression will produce significantly better results. However, PSNR measures of image quality are widely criticized for correlating poorly with perceived visual quality. We therefore consider computational models of the Human Visual System (HVS) and describe the design and implementation of a new stereoscopic image quality metric. This metric point-matches regions of high spatial frequency between the left and right views of the stereo pair and accounts for HVS sensitivity to contrast and luminance changes at regions of high spatial frequency, using Michelson's formula and Peli's band-limited contrast algorithm. To establish a baseline for comparing our new metric with PSNR we ran a trial measuring stereoscopic image encoding quality with human subjects, using the Double Stimulus Continuous Quality Scale (DSCQS) from the ITU-R BT.500-11 recommendation. The results suggest that our new metric is a better predictor of human image quality preference than PSNR and could be used to predict a threshold compression level for stereoscopic image pairs.
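The abstract does not reproduce the metric's implementation. As a rough illustration of the quantities it names, here is a minimal Python sketch of PSNR for a stereo pair and of Michelson contrast; the function names, the per-eye averaging, and the 8-bit peak value are assumptions for illustration, not the authors' code:

```python
import numpy as np

def psnr(reference, encoded, max_value=255.0):
    """Peak Signal to Noise Ratio between a reference and an encoded image."""
    mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * np.log10(max_value ** 2 / mse)

def stereo_psnr(left_ref, left_enc, right_ref, right_enc):
    """One simple way to score a stereo pair: average the per-eye PSNRs."""
    return 0.5 * (psnr(left_ref, left_enc) + psnr(right_ref, right_enc))

def michelson_contrast(region):
    """Michelson contrast of a luminance region: (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = float(region.max()), float(region.min())
    if lmax + lmin == 0:
        return 0.0
    return (lmax - lmin) / (lmax + lmin)
```

A HVS-based metric such as the one described would apply contrast measures like this locally, at matched high-spatial-frequency regions of the two views, rather than globally as in PSNR.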
electronic imaging | 2007
Nick Holliman; Barbara Froner; Simon P. Liversedge
Desktop 3D displays vary in their optical design, and this results in significant variation in the way stereo images are physically displayed on different 3D displays. When precise depth judgements need to be made, these differences may become critical to task performance. Applications where this is a particular issue include medical imaging, geoscience and scientific visualization. We investigate perceived depth thresholds for four classes of desktop 3D display: full resolution, row interleaved, column interleaved and colour-column interleaved. Given the same input image resolution, we calculate the physical view resolution for each class of display to geometrically predict its minimum perceived depth threshold. To verify our geometric predictions we present the design of a task where viewers are required to judge which of two neighboring squares lies in front of the other. We report results from a trial using this task where participants are randomly asked to judge whether they can perceive one of four levels of image disparity (0, 2, 4 and 6 pixels) on seven different desktop 3D displays. The results show a strong effect, and the task produces reliable results that are sensitive to display differences. However, we conclude that depth judgement performance cannot always be predicted from display geometry alone. Other system factors, including software drivers, electronic interfaces and individual participant differences, must also be considered when choosing a 3D display on which to make critical depth judgements.
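The geometric prediction described above follows from standard stereo viewing geometry: uncrossed screen disparity maps to perceived depth behind the screen by similar triangles, and a display class that subsamples horizontal view resolution enlarges the minimum disparity step. The sketch below is a hypothetical illustration only; the viewing distance, eye separation and pixel pitch are assumed values, and the paper's actual model may differ:

```python
def perceived_depth(disparity_m, viewing_distance_m=0.7, eye_separation_m=0.065):
    """Perceived depth behind the screen plane for an uncrossed screen
    disparity (metres), from similar triangles. Valid while the disparity
    is smaller than the eye separation."""
    return viewing_distance_m * disparity_m / (eye_separation_m - disparity_m)

def min_depth_threshold(pixel_pitch_m, horizontal_subsampling=1,
                        viewing_distance_m=0.7, eye_separation_m=0.065):
    """Smallest geometrically displayable depth step: one addressable pixel
    of disparity. A column-interleaved display halves horizontal view
    resolution, so its minimum disparity step doubles (subsampling=2)."""
    min_disparity = pixel_pitch_m * horizontal_subsampling
    return perceived_depth(min_disparity, viewing_distance_m, eye_separation_m)
```

On this model a column-interleaved display has roughly twice the minimum perceived depth threshold of a full-resolution display of the same panel, which is the kind of geometric prediction the trial was designed to test.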
electronic imaging | 2006
Nick Holliman; Carlton M. Baugh; Carlos S. Frenk; Adrian Jenkins; Barbara Froner; Djamel Hassaine; John C. Helly; N. Metcalfe; Takashi Okamoto
This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges during the stereoscopic production process: 1) controlling the depth presentation, 2) editing the stereoscopic sequences, 3) generating compressed movies in display-specific formats. We conclude that the generation of high quality stereoscopic movie content using desktop tools and equipment is feasible. This does require careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.
Proceedings of SPIE | 2011
Amy Turner; Jonathan Berry; Nick Holliman
The creation of binocular images for stereoscopic display has benefited from significant research and commercial development in recent years. However, perhaps surprisingly, the effect of adding 3D sound to stereoscopic images has rarely been studied. If auditory depth information can enhance or extend the visual depth experience, it could become an important way to extend the limited depth budget on all 3D displays and reduce the potential for fatigue from excessive use of disparity. Objective: As there is limited research in this area, our objective was to ask two preliminary questions. First, what is the smallest difference in forward depth that can be reliably detected using 3D sound alone? Second, does the addition of auditory depth information influence the visual perception of depth in a stereoscopic image? Method: To investigate auditory depth cues we used a simple sound system to test the experimental hypothesis that participants would perform better than chance at judging the depth differences between two speakers a set distance apart. In our second experiment, investigating both auditory and visual depth cues, we set up a sound system and a stereoscopic display to test the experimental hypothesis that participants judge a visual stimulus to be closer if they hear a closer sound when viewing the stimulus. Results: In the auditory depth cue trial every depth difference tested gave significant results, demonstrating that the human ear can hear depth differences between physical sources as small as 0.25 m at a distance of 1 m. In our trial investigating whether audio information can influence the visual perception of depth, we found that participants did report visually perceiving an object to be closer when the sound was played closer to them, even though the image depth remained unchanged.
Conclusion: The positive results in the two trials show that we can hear small differences in forward depth between sound sources, and suggest that it could be practical to extend the apparent depth in a stereoscopic image by using 3D sound, providing a controlled way to compensate for the depth budget limits on 3D displays.
Geosphere | 2007
Tim F. Wawrzyniec; Richard R. Jones; Ken McCaffrey; Jonathan Imber; Nick Holliman; R. E. Holdsworth
Driven by the popularity of easily accessible desktop tools such as Google Earth and in-car satellite navigation systems, we are currently experiencing a global geospatial revolution. In parallel, many areas of geoscience now routinely use geographic information systems (GIS) software and
Eos, Transactions American Geophysical Union | 2005
Ken McCaffrey; R. E. Holdsworth; Jonathan Imber; Phillip Clegg; Nicola De Paola; Richard R. Jones; Richard W. Hobbs; Nick Holliman; Immo Trinks
New digital methods for data capture can now provide photorealistic, spatially precise, and geometrically accurate three-dimensional (3-D) models of rocks exposed at the Earth's surface [Xu et al., 2000; Pringle et al., 2001; Clegg et al., 2005]. These "virtual outcrops" have the potential to create a new form of laboratory-based teaching aids for geoscience students, to help address accessibility issues in fieldwork, and generally to improve public awareness of the spectacular nature of geologic exposures from remote locations worldwide. This article addresses how virtual outcrops can provide calibration, or a quantitative "reality check," for a new generation of high-resolution predictive models for the Earth's subsurface.
BMC Ophthalmology | 2008
Maged Habib; James Lowell; Nick Holliman; Andrew Hunter; Daniella Vaideanu; Anthony Hildreth; David Steel
Background: Stereoscopic assessment of the optic disc morphology is an important part of the care of patients with glaucoma. The aim of this study was to assess stereoviewing of stereoscopic optic disc images using an example of the new technology of autostereoscopic screens compared to liquid crystal shutter goggles. Methods: Independent assessment of glaucomatous disc characteristics and measurement of optic disc and cup parameters whilst using either an autostereoscopic screen or liquid crystal shutter goggles synchronized with a view-switching display. The main outcome measures were inter-modality agreements between the two modalities as evaluated by the weighted kappa test and Bland-Altman plots. Results: Inter-modality agreement for measuring optic disc parameters was good [average kappa coefficient for vertical cup/disc ratio was 0.78 (95% CI 0.62–0.91) and 0.81 (95% CI 0.6–0.92) for observers 1 and 2, respectively]. Agreement between modalities for assessing optic disc characteristics for glaucoma on a five-point scale was very good, with a kappa value of 0.97. Conclusion: This study compared two different methods of stereo viewing. The results of assessment of the different optic disc and cup parameters were comparable using an example of the newly developing autostereoscopic display technologies as compared to the shutter goggle system used. Inter-modality agreement was high. This new technology carries potential clinical usability benefits in different areas of ophthalmic practice.
TPCG | 2005
Barbara Froner; Nick Holliman
The usable perceived depth range of all stereoscopic 3D displays is limited by human factors considerations to a bounded range around the plane of the display. To account for this, our Three Region stereoscopic camera model is able to control the depth mapping from scene to display while allowing a defined region of interest in scene depth to have an improved perceived depth representation compared to other regions of the scene. This can be categorized as a focus+context algorithm that manipulates the stereoscopic depth representation along the viewing axis of the camera. We present a new implementation of the Three Region stereoscopic camera model as a Utility plug-in for the popular modelling and rendering package 3ds max. We describe our user interface, designed to incorporate stereoscopic image generation into the user's natural workflow. The implementation required us to overcome a number of technical challenges, including: accurately measuring scene depth range, simulating asymmetric camera frustums in a system supporting only symmetric frustums, merging multiple renderings, and managing anti-aliasing in layered images. We conclude from our implementation that it is possible to incorporate high quality stereoscopic camera models into standard graphics packages.
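The abstract does not reproduce the Three Region mapping itself. The following piecewise-linear sketch only illustrates the general focus+context idea of giving a region of interest a larger share of the display's bounded depth budget; all parameter names, the linear form, and the budget split are assumptions for illustration:

```python
def three_region_depth_mapping(near, roi_near, roi_far, far,
                               display_near, display_far, roi_fraction=0.6):
    """Piecewise-linear scene-to-display depth mapping: the region of
    interest [roi_near, roi_far] receives roi_fraction of the display
    depth budget, and the near and far context regions share the rest.
    Returns a function mapping scene depth -> display depth."""
    budget = display_far - display_near
    roi_budget = roi_fraction * budget
    context_budget = (budget - roi_budget) / 2.0

    # Display-depth boundaries between the three regions.
    b0 = display_near
    b1 = b0 + context_budget       # where the ROI starts on the display
    b2 = b1 + roi_budget           # where the ROI ends on the display

    def mapping(z):
        if z <= roi_near:          # near context region: compressed
            return b0 + (z - near) / (roi_near - near) * context_budget
        if z <= roi_far:           # region of interest: steeper slope
            return b1 + (z - roi_near) / (roi_far - roi_near) * roi_budget
        return b2 + (z - roi_far) / (far - roi_far) * context_budget
    return mapping
```

The steeper slope inside the region of interest is what gives it an improved perceived depth representation relative to the context regions, at the cost of compressing depth elsewhere in the scene.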
Journal of Experimental Psychology: Human Perception and Performance | 2017
Hayward J. Godwin; Tamaryn Menneer; Simon P. Liversedge; Kyle R. Cave; Nick Holliman; Nick Donnelly
Standard models of visual search have focused upon asking participants to search for a single target in displays where the objects do not overlap one another, and where the objects are presented on a single depth plane. This stands in contrast to many everyday visual searches, wherein variations in overlap and depth are the norm rather than the exception. Here, we addressed whether presenting overlapping objects on different depth planes from one another can improve search performance. Across 4 experiments using different stimulus types (opaque polygons, transparent polygons, opaque real-world objects, and transparent X-ray images), we found that depth was primarily beneficial when the displays were transparent, and this benefit arose in terms of an increase in response accuracy. Although the benefit to search performance only appeared in some cases, across all stimulus types we found evidence of marked shifts in eye-movement behavior. Our results have important implications for current models and theories of visual search, which have not yet provided detailed accounts of the effects that overlap and depth have on guidance and object identification processes. Moreover, our results show that the presence of depth information could aid real-world searches of complex, overlapping displays.