
Publications


Featured research published by Paul Chapman.


Experimental Psychology | 2006

Working memory involvement in emotion-based processes underlying choosing advantageously.

Anna Pecchinenda; Michael N. Dretsch; Paul Chapman

The Iowa Gambling Task (IGT) is widely used to assess decision making under conditions of uncertainty in clinical as well as nonclinical populations. However, there is still debate as to whether normal performance on this task relies on implicit, emotion-based processes that are independent of working memory. To clarify the role of working memory in normal performance on the IGT, participants performed the task under low or high working memory load. We used a modified version of the original task in which the position of the four decks was randomized between trials. Results showed that only participants performing under low memory load chose significantly more advantageously halfway through the task. In addition, when comparing the number of cards chosen from the two decks with frequent losses, one advantageous and one disadvantageous, only participants performing under low memory load chose more cards from the advantageous deck. The present findings indicate that the processes underlying optimal advantageous performance on the IGT rely on working memory functions.
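
To make the task manipulation concrete, here is a minimal sketch of the modified IGT trial loop in Python: four decks whose screen positions are reshuffled on every trial. The payoff schedule and the chooser below are invented for illustration and are not the study's actual parameters.

    import random

    # Minimal sketch of the modified IGT: four decks whose on-screen
    # positions are reshuffled on every trial. The payoff schedule is a
    # common textbook approximation, not the study's actual values.
    DECKS = {
        "A": lambda: 100 - (250 if random.random() < 0.5 else 0),   # disadvantageous, frequent losses
        "B": lambda: 100 - (1250 if random.random() < 0.1 else 0),  # disadvantageous, rare large losses
        "C": lambda: 50 - (50 if random.random() < 0.5 else 0),     # advantageous, frequent small losses
        "D": lambda: 50 - (250 if random.random() < 0.1 else 0),    # advantageous, rare losses
    }

    def run_trial(choose):
        """Randomize deck positions, let the participant pick a screen
        slot, and return (deck_label, net_payoff) for that trial."""
        positions = random.sample(list(DECKS), k=4)  # new spatial layout each trial
        slot = choose(positions)                     # participant picks a slot 0-3
        label = positions[slot]
        return label, DECKS[label]()

    # Example: a random chooser over 100 trials
    tally = {d: 0 for d in DECKS}
    for _ in range(100):
        label, payoff = run_trial(lambda pos: random.randrange(4))
        tally[label] += 1
    print(tally)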


Current Medical Imaging Reviews | 2014

Real-time Medical Visualization of Human Head and Neck Anatomy and its Applications for Dental Training and Simulation

Paul Anderson; Paul Chapman; Minhua Ma; Paul Rea

The Digital Design Studio and NHS Education Scotland have developed ultra-high-definition, real-time interactive 3D anatomy of the head and neck for dental teaching, training and simulation purposes. In this paper we present an established workflow using state-of-the-art 3D laser scanning technology and software for the design and construction of medical data, and describe the workflow practices and protocols in the head and neck anatomy project. Anatomical data were acquired through topographical laser scanning of a destructively dissected cadaver. Each stage of model development was clinically validated to produce a normalised human dataset, which was transformed into a real-time environment capable of large-scale 3D stereoscopic display in medical teaching labs across Scotland, whilst also supporting single users with laptops and PCs. Specific functionality supported within the 3D Head and Neck viewer includes anatomical labelling, guillotine tools and selection tools to expand specific local regions of anatomy. The software environment allows thorough and meaningful investigation of all major and minor anatomical structures and systems, whilst providing the user with the means to record sessions and individual scenes for learning and training purposes. The model and software have also been adapted to permit interactive haptic simulation of the injection of a local anaesthetic.
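
The guillotine tool described above is, in essence, a clipping plane applied to the anatomy model. Below is a minimal sketch of such a plane test over a triangle mesh, assuming vertices and faces stored as NumPy arrays; it is a generic stand-in, not the viewer's actual implementation.

    import numpy as np

    def guillotine(vertices, faces, point, normal):
        """Cull every triangle lying entirely on the hidden side of a
        cutting plane defined by a point and an outward normal (a simple
        stand-in for the viewer's guillotine tool)."""
        normal = normal / np.linalg.norm(normal)
        # Signed distance of each vertex from the plane.
        d = (vertices - point) @ normal
        # Keep a face if at least one of its vertices is on the visible side.
        keep = (d[faces] > 0).any(axis=1)
        return faces[keep]

    # Example: a unit quad split into two triangles, cut by the plane x = 0.5
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
    tris = np.array([[0, 1, 2], [0, 2, 3]])
    visible = guillotine(verts, tris, point=np.array([0.5, 0, 0]),
                         normal=np.array([1.0, 0, 0]))
    print(visible)  # both triangles kept: each has a vertex with x > 0.5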


IEEE Computer Graphics and Applications | 1999

Visualizing underwater environments using multifrequency sonar

Paul Chapman; Derek Wills; Graham R. Brookes; Peter Stevens

This article introduces seabed visualization by describing three case studies that use a high-speed, multifrequency, continuous scan sonar called the Seabed Visualization System. The case studies involve: modeling a harbour wall in Holland, permitting a virtual inspection of the harbour environment; visualizing a sunken military vessel, the SS Richard Montgomery; and visualizing underwater pipelines in Easington, England.


IEEE Computer Graphics and Applications | 2010

We All Live in a Virtual Submarine

Paul Chapman; Kim Bale; Pierre Drap

Our seas and oceans hide a plethora of archaeological sites, such as ancient shipwrecks, that, over time, are being destroyed through activities such as deepwater trawling and treasure hunting. In 2006, a multidisciplinary team of 11 European institutions established the Venus (Virtual Exploration of Underwater Sites) consortium to make underwater sites more accessible by generating thorough, exhaustive 3D records for virtual exploration. Over the past three years, we surveyed several shipwrecks around Europe and investigated advanced techniques for data acquisition using both autonomous and remotely operated vehicles coupled with innovative sonar and photogrammetric equipment. Access to most underwater sites can be difficult and hazardous owing to deep waters. However, this same inhospitable environment offers extraordinary opportunities to archaeologists, because darkness, low temperatures, and low oxygen levels all favour preservation. From a visualization pipeline perspective, this project had two main challenges. First, we had to gather large amounts of raw data from various sources. Then, we had to develop techniques to filter, calibrate, and map the data and bring it all together into a single accurate visual representation.
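
As a rough illustration of the filter-calibrate-map pipeline the abstract describes, here is a minimal sketch that range-filters raw soundings, applies per-sensor offsets, and merges two point sets into one frame; the thresholds, offsets, and sample values are all invented for illustration, not the project's actual processing.

    import numpy as np

    def filter_outliers(points, max_depth=6000.0):
        """Drop physically implausible soundings (simple range check)."""
        return points[(points[:, 2] > -max_depth) & (points[:, 2] < 0)]

    def calibrate(points, offset, scale=1.0):
        """Apply a per-sensor rigid offset and scale into the survey frame."""
        return points * scale + offset

    def merge(*point_sets):
        """Stack calibrated point sets into one representation."""
        return np.vstack(point_sets)

    sonar = np.array([[0.0, 0.0, -41.0], [1.0, 0.5, -40.5], [2.0, 1.0, -9999.0]])
    photo = np.array([[0.2, 0.1, -40.8], [1.1, 0.6, -40.2]])

    seabed = merge(
        calibrate(filter_outliers(sonar), offset=np.array([0.0, 0.0, -0.3])),
        calibrate(filter_outliers(photo), offset=np.array([0.05, -0.02, 0.0])),
    )
    print(seabed.shape)  # (4, 3): one sonar outlier removed, sets merged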


International Conference on Virtual Reality | 2008

Virtual exploration of underwater archaeological sites: visualization and interaction in mixed reality environments

Mahmoud Haydar; Madjid Maidi; David Roussel; Malik Mallem; Pierre Drap; Kim Bale; Paul Chapman

This paper describes ongoing developments in photogrammetry and mixed reality for the Venus European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The idea behind using mixed reality techniques is to offer archaeologists and the general public new insights into the reconstructed sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction. Both activities are based on the same VR engine but differ drastically in the way they present information: general-public activities emphasize the visual and auditory realism of the reconstruction, while archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism. This has led to the development of two parallel VR demonstrators. The paper focuses on several key points developed for the reconstruction process as well as on issues in both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain high-quality terrain reproduction. The second concerns the development of the virtual and augmented reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness both of the artifacts that were found and of the process by which they were discovered, by recreating the dive from ship to seabed.
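
As an illustration of the densification step, the sketch below interpolates sparse seabed points onto a dense regular grid using inverse-distance weighting. This is a generic stand-in chosen for brevity; the project's actual densification works from matched photogrammetric image features.

    import numpy as np

    def densify(points, resolution=0.5, power=2.0):
        """Interpolate sparse (x, y, z) seabed points onto a dense regular
        grid using inverse-distance weighting (illustrative only)."""
        xs = np.arange(points[:, 0].min(), points[:, 0].max() + resolution, resolution)
        ys = np.arange(points[:, 1].min(), points[:, 1].max() + resolution, resolution)
        gx, gy = np.meshgrid(xs, ys)
        grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
        # Distances from every grid node to every survey point.
        d = np.linalg.norm(grid[:, None, :] - points[None, :, :2], axis=2)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        z = (w * points[:, 2]).sum(axis=1) / w.sum(axis=1)
        return gx, gy, z.reshape(gx.shape)

    # Example: five sparse soundings densified to a 0.5 m grid
    sparse = np.array([[0, 0, -40.0], [4, 0, -41.0], [0, 4, -40.5],
                       [4, 4, -42.0], [2, 2, -41.5]])
    gx, gy, gz = densify(sparse)
    print(gz.shape)  # one depth estimate per grid node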


International Conference on Virtual Reality | 2011

Linking evidence with heritage visualization using a large scale collaborative interface

Kim Bale; Daisy Abbott; Ramy Gowigati; Douglas Pritchard; Paul Chapman

The virtual reconstruction of heritage sites and artefacts is a complicated task that requires researchers to gather and assess many different types of historical evidence, which can vary widely in accuracy, authority, completeness, interpretation and opinion. It is now acknowledged that elements of speculation, interpretation and subjectivity form part of 3D reconstruction using primary research sources. Ensuring transparency in the reconstruction process, and therefore the ability to evaluate the purpose, accuracy and methodology of the visualization, is of great importance. Indeed, given the prevalence of 3D reconstruction in recent heritage research, methods of managing and displaying reconstructions alongside their associated metadata and sources have become an emerging area of research. In this paper, we describe the development of techniques that allow research sources to be added as multimedia annotations to a 3D reconstruction of the British Empire Exhibition of 1938. By connecting a series of wireless touchpad PCs to an embedded webserver, we provide users with a unique collaborative interface for semantic description and placement of objects within a 3D scene. Our interface allows groups of users to create annotations simultaneously, whilst also allowing them to move freely within a large-display visualization environment. The development of a unique, life-size, stereo visualization of this lost architecture with spatialised semantic annotations enhances not only engagement with and understanding of this significant event in history, but also the accountability of the research process itself.
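
A spatialised multimedia annotation of the kind described can be pictured as a small record pinned to a 3D position in the scene. The sketch below shows one possible shape for such a record; all field names are invented for illustration, not the project's schema.

    import json
    from dataclasses import dataclass, asdict, field

    # Sketch of a spatialised semantic annotation: a research source pinned
    # to a 3D position in the reconstructed scene. Field names are invented.
    @dataclass
    class Annotation:
        author: str                 # which touchpad user created it
        position: tuple             # (x, y, z) anchor in the 3D scene
        text: str                   # semantic description
        media_url: str = ""         # link to photograph, plan, or document
        sources: list = field(default_factory=list)  # provenance of the evidence

    note = Annotation(
        author="user-07",
        position=(12.4, 0.0, -3.1),
        text="Palace of Engineering, north facade; massing inferred from plans.",
        sources=["1938 exhibition guidebook, p. 41"],
    )

    # Serialised form, as it might travel from a touchpad to the embedded server
    print(json.dumps(asdict(note), indent=2))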


IEEE Virtual Reality Conference | 2004

New perspectives on ancient landscapes: a case study of the Foulness valley

Julien Pansiot; Paul Chapman; Warren J. Viant

The standard method for gathering and representing archaeological information consists of two-dimensional layer managers. This paper presents an archaeological Geographical Information System (GIS) based on an immersive virtual environment. Our goal is to provide an immersive visualisation of multiple datasets relating to the Foulness Valley in East Yorkshire. By maximising the user's visual bandwidth within an immersive virtual environment, we have provided archaeologists with greater insight into the Foulness Valley datasets using both existing and novel visualisation tools and techniques.


Digital Creativity | 2017

Enheduanna – A Manifesto of Falling: first demonstration of a live brain-computer cinema performance with multi-brain BCI interaction for one performer and two audience members

Polina Zioga; Paul Chapman; Minhua Ma; Frank E. Pollick

The new commercial-grade Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) have led to a phenomenal development of applications across health, entertainment and the arts, while an increasing interest in multi-brain interaction has emerged. In the arts, there are already a number of works that involve the interaction of more than one participant through EEG-based BCIs. However, the field of live brain-computer cinema and mixed-media performances is rather new compared to installations and music performances that involve multi-brain BCIs. In this context, we present the particular challenges involved. We discuss Enheduanna – A Manifesto of Falling, the first demonstration of a live brain-computer cinema performance that enables real-time brain-activity interaction between one performer and two audience members, and we take a cognitive perspective on the implementation of a new passive multi-brain EEG-based BCI system to realise our creative concept. This article also presents the preliminary results and future work.
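
As a toy illustration of a passive multi-brain mapping, the sketch below estimates each person's alpha-band power from synthetic EEG and blends the values into one control signal. The sampling rate, signals, and blend rule are all assumptions for illustration; the performance's actual system is not reproduced here.

    import numpy as np

    FS = 256  # assumed sampling rate, Hz

    def alpha_power(eeg, fs=FS):
        """Relative 8-12 Hz power of a single-channel EEG segment."""
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
        freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
        band = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
        return band / spectrum.sum()

    rng = np.random.default_rng(0)
    t = np.arange(FS * 2) / FS  # two seconds of signal per person
    performer = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    audience_1 = rng.standard_normal(t.size)
    audience_2 = rng.standard_normal(t.size)

    # Weight the performer more heavily than the two audience members.
    levels = [alpha_power(s) for s in (performer, audience_1, audience_2)]
    control = 0.5 * levels[0] + 0.25 * levels[1] + 0.25 * levels[2]
    print(round(control, 3))  # could drive a visual parameter in the live mix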


Joint International Conference on Serious Games | 2015

A hypothesis of brain-to-brain coupling in interactive new media art and games using brain-computer interfaces

Polina Zioga; Paul Chapman; Minhua Ma; Frank E. Pollick

Interactive new media art and games belong to distinct fields, but nevertheless share common ground: tools, methodologies, challenges, and goals, such as the use of applications and devices for engaging multiple participants and players and, more recently, electroencephalography (EEG)-based brain-computer interfaces (BCIs). At the same time, an increasing number of neuroscientific studies explore the phenomenon of brain-to-brain coupling: the dynamics and processes of interaction and synchronisation between multiple subjects and their brain activity. In this context, we discuss interactive works of new media art, computer games and serious games that involve the interaction of brain activity, and hypothetically brain-to-brain coupling, between multiple performers, spectators, or participants/players. We also present Enheduanna – A Manifesto of Falling (2015), a new live brain-computer cinema performance using an experimental passive multi-brain BCI system under development. The aim is to explore brain-to-brain coupling between performers and spectators as a means of controlling the audio-visual creative outputs.
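
One common way to quantify coupling between two subjects' signals is the phase-locking value (PLV); the sketch below computes it for synthetic signals. The choice of PLV here is illustrative only; the studies the abstract alludes to use a variety of coupling measures.

    import numpy as np
    from scipy.signal import hilbert

    def plv(x, y):
        """Phase-locking value in [0, 1]; 1 means perfectly locked phases.
        Real EEG would normally be band-pass filtered first."""
        phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * phase_diff)))

    fs = 256
    t = np.arange(fs * 4) / fs
    rng = np.random.default_rng(1)
    subject_a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
    subject_b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * rng.standard_normal(t.size)
    unrelated = rng.standard_normal(t.size)

    print(round(plv(subject_a, subject_b), 2))  # high: shared 10 Hz rhythm
    print(round(plv(subject_a, unrelated), 2))  # low: no coupling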


International Conference on Medical Information Visualisation - BioMedical Visualisation (MedVis'06) | 2006

Kaleidomap Visualizations of Cardiovascular Function in Critical Care Medicine

Kim Bale; Paul Chapman; Jon Purdy; Nizamettin Aydin; Paul Dark

In this paper we consider how Kaleidomaps can facilitate our understanding and interpretation of large, complex multivariate medical datasets relating to cardiovascular function in critical care medicine. Kaleidomaps are a new technique for the visualization of multivariate time-series data. They build upon the classic cascade plot and use the curvature of a line to enhance the detection of periodic patterns within multivariate, dual-periodicity datasets. Kaleidomaps keep user interaction to a minimum, facilitating the rapid identification of periodic patterns not only within individual variants but also across many different sets of variants. By linking this technique with traditional line graphs and signal-processing techniques, we are able to provide medical experts with a set of visualization tools that permit the combination of medical datasets in their raw form and also with the results of mathematical analysis.
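
The curved-cascade idea can be sketched with a simple coordinate mapping: phase within a period becomes angle, and period index becomes radius, so repeating patterns line up along rays. The geometry below is a simplification for illustration, not the published layout.

    import numpy as np

    def kaleidomap_coords(values, samples_per_period, arc=np.pi / 2, r0=1.0, dr=1.0):
        """Return (x, y, v) per sample: angle encodes phase within the
        period, radius encodes which period the sample belongs to."""
        n = np.arange(len(values))
        phase = (n % samples_per_period) / samples_per_period  # 0..1 within a cycle
        cycle = n // samples_per_period
        theta = phase * arc
        r = r0 + cycle * dr
        return r * np.cos(theta), r * np.sin(theta), np.asarray(values)

    # Example: hourly samples over 3 days; repeating daily patterns
    # line up along rays of constant angle.
    t = np.arange(72)
    signal = np.sin(2 * np.pi * t / 24)
    x, y, v = kaleidomap_coords(signal, samples_per_period=24)
    print(x[:3].round(2), y[:3].round(2))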

Collaboration


Dive into Paul Chapman's collaborations.

Top Co-Authors

Pierre Drap

Aix-Marseille University

Minhua Ma

Glasgow School of Art
