Publication


Featured research published by Mark Sagar.


International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) | 2000

Acquiring the reflectance field of a human face

Paul E. Debevec; Tim Hawkins; Chris Tchou; Haarm-Pieter Duiker; Westley Sarokin; Mark Sagar

We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints.
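The relighting step the abstract describes is linear: once a per-pixel reflectance function has been sampled over the light-stage directions, the pixel's appearance under a novel environment is just the illumination-weighted sum of those samples. A minimal sketch under that assumption (array names and shapes are illustrative, not from the paper):

```python
import numpy as np

def relight(reflectance, illumination):
    """Render pixels under a novel lighting environment.

    reflectance:  (H, W, D) array -- per-pixel reflectance function,
                  one sample per light-stage illumination direction.
    illumination: (D,) array -- intensity of the novel environment
                  sampled at the same D directions.
    Returns an (H, W) relit image of linear radiance values.
    """
    # Each pixel is a linear combination of its basis observations,
    # weighted by the novel illumination in each direction.
    return reflectance @ illumination

# Toy example: a 2x2 image captured under 4 illumination directions.
R = np.ones((2, 2, 4)) * 0.25
env = np.array([1.0, 0.0, 0.0, 1.0])  # only two directions lit
img = relight(R, env)
```

Because the combination is linear, any sampled or computed illumination (an environment map resampled at the light-stage directions) can be applied with the same one-line product.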


International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) | 1994

A virtual environment and model of the eye for surgical simulation

Mark Sagar; David P. Bullivant; Gordon Mallinson; Peter Hunter

An anatomically detailed 3-D computer graphic model of the eye and surrounding face within a virtual environment has been implemented for use in a surgical simulator. The simulator forms part of a teleoperated micro-surgical robotic system being developed for eye surgery. The model has been designed to both visually and mechanically simulate features of the human eye by coupling computer graphic realism with finite element analysis. The paper gives an overview of the system with emphasis on the graphical modelling techniques and a computationally efficient framework for representing anatomical details of the eye and for finite element analysis of the mechanical properties. Examples of realistic images coupled to a large-deformation finite element model of the cornea are presented. These images can be rendered sufficiently fast for the virtual reality application.


Presence: Teleoperators & Virtual Environments | 1993

A teleoperated microsurgical robot and associated virtual environment for eye surgery

Ian W. Hunter; Tilemachos D. Doukoglou; Serge R. Lafontaine; Paul G. Charette; Lynette A. Jones; Mark Sagar; Gordon Mallinson; Peter Hunter

We have developed a prototype teleoperated microsurgical robot (MSR-1) and associated virtual environment for eye surgery. Bidirectional pathways relay visual, auditory, and mechanical information between the MSR-1 master and slave. The surgeon wears a helmet (visual master) that is used to control the orientation of a stereo camera system (visual slave) observing the surgery. Images from the stereo camera system are relayed back to the helmet (or adjacent screen) where they are viewed by the surgeon. In each hand the surgeon holds a pseudotool (a shaft shaped like a microsurgical scalpel) that projects from the left and right limbs of a force reflecting interface (mechanical master). Movements of the left and right pseudotools cause corresponding movements (scaled down by 1 to 100 times) in the microsurgical tools held by the left and right limbs of the micromotion robot (mechanical slave) that performs the surgery. Forces exerted on the left and right limbs of the slave microsurgical robot via the microtools are reflected back (after being scaled up by 1 to 100 times) to the pseudotools, and hence to the surgeon, via actuators in the left and right limbs of the mechanical master. This system enables tissue cutting forces to be felt, including those that would normally be imperceptible if they were transmitted directly to the surgeon's hands. The master and slave subsystems (visual, auditory, and mechanical) communicate through a computer system which serves to enhance and augment images, filter hand tremor, perform coordinate transformations, and perform safety checks. The computer system consists of master and slave computers that communicate via an optical fiber connection. As a result, the MSR-1 master and slave may be located at different sites, which permits remote robotic microsurgery to become a reality.
MSR-1 is being used as an experimental testbed for studying the effects of feedforward and feedback delays on remote surgery, and in research on enhancing the accuracy and dexterity of microsurgeons by creating mechanical and visual telepresence.
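The bidirectional scaling described above (hand motion scaled down to the tool, tool forces scaled up to the hand) can be sketched as a single cycle of a bilateral teleoperation loop. The scale factors and names below are illustrative assumptions, not MSR-1's actual parameters:

```python
def teleop_step(master_pos_delta, slave_force,
                motion_scale=50.0, force_scale=50.0):
    """One cycle of a scaled bilateral teleoperation loop.

    master_pos_delta: surgeon's hand displacement (metres).
    slave_force:      force sensed at the microsurgical tool (newtons).

    Motions are scaled DOWN to the slave; sensed forces are scaled
    UP to the master, so cutting forces that would normally be
    imperceptible become clearly felt at the surgeon's hand.
    """
    slave_pos_delta = master_pos_delta / motion_scale  # 1 mm hand -> 20 um tool
    reflected_force = slave_force * force_scale        # 10 mN tool -> 0.5 N hand
    return slave_pos_delta, reflected_force

# 1 mm hand motion, 10 mN tissue-cutting force at the tool tip.
dp, f = teleop_step(0.001, 0.01)
```

In the real system this loop runs between the master and slave computers over the optical fiber link, alongside tremor filtering, coordinate transformation, and safety checks.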


Computers in Biology and Medicine | 1995

Ophthalmic microsurgical robot and associated virtual environment

Ian W. Hunter; Lynette A. Jones; Mark Sagar; Serge R. Lafontaine; Peter Hunter

An ophthalmic virtual environment has been developed as part of a teleoperated microsurgical robot built to perform surgery on the eye. The virtual environment is unique in that it incorporates a detailed continuum model of the anatomical structures of the eye, its mechanics and optical properties, together with a less detailed geometric-mechanical model of the face. In addition to providing a realistic visual display of the eye being operated on, the virtual environment simulates tissue properties during manipulation and cutting and the forces involved are determined by solving a mechanical finite element model of the tissue. These forces are then fed back to the operator via a force reflecting master and so the surgeon can experience both the visual and mechanical sensations associated with performing surgery. The virtual environment can be used to enhance the images produced by the camera on the microsurgical slave robot during surgery and as a surgical simulator in which it replaces these images with computer graphics generated from the eye model.


International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) | 2006

Facial performance capture and expressive translation for King Kong

Mark Sagar

Expression space has several benefits. It provides a transcription of the communicative content of the performance, and is ideal for mapping, in an art-directable way, onto a topologically different character. It provides an intuitive format for motion editing and making performance changes. Because the face is constrained in its motion, fitting to an expression space significantly reduces noise and allows for intelligent filtering.
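The noise-reducing "fitting to an expression space" step can be read as a least-squares projection of each captured frame onto a small basis of expression shapes: any motion component outside the basis is discarded as noise. A toy sketch (the basis and frame are made-up data, not production assets):

```python
import numpy as np

def fit_expression(frame, basis):
    """Project a captured face frame onto an expression basis.

    frame: (M,)   stacked marker/vertex coordinates for one frame.
    basis: (M, K) columns are expression shapes, with K << M.

    Returns (weights, reconstruction). Because the face can only
    move within the span of the basis, capture noise orthogonal
    to the expression space is removed by the projection.
    """
    weights, *_ = np.linalg.lstsq(basis, frame, rcond=None)
    return weights, basis @ weights

# Toy basis of two "expressions" over four coordinates.
B = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
noisy = np.array([0.9, 1.1, 0.45, 0.55])  # expr1 + 0.5*expr2 + noise
w, recon = fit_expression(noisy, B)
```

The recovered weights form a compact, editable transcription of the performance; retargeting to a different character amounts to applying the same weights to that character's own expression shapes.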


Health Psychology | 2015

Do slumped and upright postures affect stress responses? A randomized trial.

Shwetha Nair; Mark Sagar; John J. Sollers; Nathan S. Consedine; Elizabeth Broadbent

OBJECTIVE: The hypothesis that muscular states are related to emotions has been supported predominantly by research on facial expressions. However, body posture may also be important to the initiation and modulation of emotions. This experiment investigated whether an upright seated posture could influence affective and cardiovascular responses to a psychological stress task, relative to a slumped seated posture.

METHOD: Seventy-four participants were randomly assigned to either a slumped or upright seated posture. Their backs were strapped with physiotherapy tape to hold this posture throughout the study. Participants were told a cover story to reduce expectation effects of posture. Participants completed a reading task, the Trier Social Stress speech task, and assessments of mood, self-esteem, and perceived threat. Blood pressure and heart rate were continuously measured.

RESULTS: Upright participants reported higher self-esteem, more arousal, better mood, and lower fear, compared to slumped participants. Linguistic analysis showed slumped participants used more negative emotion words, first-person singular pronouns, affective process words, and sadness words, and fewer positive emotion words and total words during the speech. Upright participants had higher pulse pressure during and after the stressor.

CONCLUSIONS: Adopting an upright seated posture in the face of stress can maintain self-esteem, reduce negative mood, and increase positive mood compared to a slumped posture. Furthermore, sitting upright increases rate of speech and reduces self-focus. Sitting upright may be a simple behavioral strategy to help build resilience to stress. The research is consistent with embodied cognition theories that muscular and autonomic states influence emotional responding.


Archive | 2010

Characterizing facial tissue sliding using ultrasonography

Tim Wu; Kumar Mithraratne; Mark Sagar; Peter Hunter

An integrated structure of gliding spaces and ligament attachments in the facial anatomy provides a fine balance between mobility and stability. In order to understand the sliding kinematics between facial muscular layers, an experimental program was undertaken to infer the way underlying soft tissue motion takes place in the human face in vivo. The motion data of the facial soft tissue structures were acquired using ultrasonography. An optical flow algorithm was implemented to visualize the deformation field and to segment out the region of discontinuity. Finite-element tracking meshes with cubic-Hermite bases were used to measure the displacement and strain at the discontinuous interface. To improve the convergence properties of the tracking mesh, a multi-resolution scheme was adopted. The tracking results from our method have been shown to give better correlation than the optical flow algorithm in the interface region.


Archive | 2010

An efficient heterogeneous continuum model to simulate active contraction of facial soft tissue structures

Kumar Mithraratne; Alice P.-L. Hung; Mark Sagar; Peter Hunter

A computationally efficient and accurate finite element model of a soft tissue continuum with heterogeneous constitutive properties is presented in this article. Cubic-Hermite interpolation functions were used in formulating non-linear, finite elasticity finite element equations. The use of Hermite family elements serves two purposes here. Firstly, the topology of the structure can be accurately represented with fewer degrees of freedom and hence shorter computational time. Secondly, Hermite family elements guarantee derivative continuity of the displacement field across element boundaries, ensuring that no physical laws are violated in large deformation mechanics.
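The derivative continuity that Hermite elements guarantee comes from interpolating both nodal values and nodal derivatives. A minimal 1-D sketch using the standard cubic Hermite basis functions (a simplification for illustration, not the authors' 3-D formulation):

```python
def hermite_interp(x, u0, u1, du0, du1):
    """Cubic Hermite interpolation on the unit element, x in [0, 1].

    u0, u1:   field values at the element's two nodes.
    du0, du1: field derivatives at the nodes. Because adjacent
    elements share nodal values AND derivatives, the interpolated
    field is C1-continuous across element boundaries.
    """
    h00 = 2*x**3 - 3*x**2 + 1   # weights the value at node 0
    h10 = x**3 - 2*x**2 + x     # weights the derivative at node 0
    h01 = -2*x**3 + 3*x**2      # weights the value at node 1
    h11 = x**3 - x**2           # weights the derivative at node 1
    return h00*u0 + h10*du0 + h01*u1 + h11*du1

# Nodal values and slopes are reproduced exactly at the endpoints,
# which is what lets neighbouring elements join smoothly.
assert hermite_interp(0.0, 1.0, 2.0, 0.5, -0.5) == 1.0
assert hermite_interp(1.0, 1.0, 2.0, 0.5, -0.5) == 2.0
```

Because each node carries four pieces of information in 1-D (value plus derivative, doubled in higher dimensions), far fewer elements are needed to capture a smooth anatomical surface than with linear interpolation, which is the efficiency argument made above.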


Nature Communications | 2017

Reinforcement determines the timing dependence of corticostriatal synaptic plasticity in vivo

Simon D. Fisher; Paul Robertson; Melony J. Black; Peter Redgrave; Mark Sagar; Wickliffe C. Abraham; John J. Reynolds

Plasticity at synapses between the cortex and striatum is considered critical for learning novel actions. However, investigations of spike-timing-dependent plasticity (STDP) at these synapses have been performed largely in brain slice preparations, without consideration of physiological reinforcement signals. This has led to conflicting findings, and hampered the ability to relate neural plasticity to behavior. Using intracellular striatal recordings in intact rats, we show here that pairing presynaptic and postsynaptic activity induces robust Hebbian bidirectional plasticity, dependent on dopamine and adenosine signaling. Such plasticity, however, requires the arrival of a reward-conditioned sensory reinforcement signal within 2 s of the STDP pairing, thus revealing a timing-dependent eligibility trace on which reinforcement operates. These observations are validated with both computational modeling and behavioral testing. Our results indicate that Hebbian corticostriatal plasticity can be induced by classical reinforcement learning mechanisms, and might be central to the acquisition of novel actions.

Spike-timing-dependent plasticity (STDP) has been studied extensively in slices, but whether such pairings can induce plasticity in vivo was not known. Here the authors report an experimental paradigm that achieves bidirectional corticostriatal STDP in vivo through modulation by behaviourally relevant reinforcement signals, mediated by dopamine and adenosine signaling.
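The timing-dependent eligibility trace described above can be caricatured as a decaying tag left by an STDP pairing, which reward converts into a weight change only while the tag persists. The time constant and learning rate below are illustrative assumptions, not the paper's fitted values:

```python
import math

def weight_change(pairing_time, reward_time, tau=1.0, lr=0.1):
    """Eligibility-trace reinforcement rule (illustrative sketch).

    An STDP pairing at pairing_time leaves an eligibility trace
    that decays as exp(-dt / tau). A reward converts whatever
    trace remains into plasticity, so only rewards arriving soon
    after the pairing (within ~2 s here) produce an appreciable
    weight change.
    """
    dt = reward_time - pairing_time
    if dt < 0:
        return 0.0  # reward before the pairing: no trace to act on
    return lr * math.exp(-dt / tau)

early = weight_change(0.0, 0.5)  # reward 0.5 s after the pairing
late = weight_change(0.0, 5.0)   # reward after the trace has decayed
```

The key property, matching the finding above, is that the same pairing produces large plasticity when reinforcement is prompt and essentially none when it is delayed.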


International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) | 2014

A neurobehavioural framework for autonomous animation of virtual human faces

Mark Sagar; David P. Bullivant; Paul Robertson; Oleg Efimov; Khurram Jawed; Ratheesh Kalarot; Tim Wu

We describe a neurobehavioural modelling and visual computing framework for the integration of realistic interactive computer graphics with neural systems modelling, allowing real-time autonomous facial animation and interactive visualization of the underlying neural network models. The system has been designed to integrate and interconnect a wide range of computational neuroscience models to construct embodied interactive psychobiological models of behaviour. An example application of the framework combines models of the facial motor system, physiologically based emotional systems, and basic neural systems involved in early interactive behaviour and learning, and embodies them in a virtual infant rendered with realistic computer graphics. The model reacts in real time to visual and auditory input and its own evolving internal processes as a dynamic system. The live state of the model, which generates the resulting facial behaviour, can be visualized through graphs and schematics or by exploring the activity mapped to the underlying neuroanatomy.
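The driving idea, facial behaviour emerging from a continuously updated dynamical system rather than scripted animation clips, can be sketched with a single leaky-integrator state variable driving an expression weight. Both the "emotion" variable and its mapping to a blendshape weight are illustrative assumptions, not the framework's actual models:

```python
def step_emotion(state, stimulus, dt=0.033, tau=0.5):
    """One real-time tick (~30 fps) of a leaky-integrator affect state.

    The state drifts toward the current stimulus with time constant
    tau, so the facial output reflects both the input and the
    model's internal history -- a dynamic system, not clip playback.
    """
    return state + (stimulus - state) * (dt / tau)

def smile_weight(state):
    """Map the internal state to a facial blendshape weight in [0, 1]."""
    return max(0.0, min(1.0, state))

s = 0.0
for _ in range(30):      # ~1 s of frames with a sustained positive stimulus
    s = step_emotion(s, 1.0)
w = smile_weight(s)
```

Exposing a state variable like `s` for inspection at every tick is what makes the live visualization described above possible: the graphs and schematics are views onto the same values that drive the rendered face.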

Collaboration


Dive into Mark Sagar's collaborations.

Top Co-Authors
Tim Wu

University of Auckland


Oleg Efimov

University of Auckland


Lynette A. Jones

Massachusetts Institute of Technology


Serge R. Lafontaine

Massachusetts Institute of Technology
