Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matt Adcock is active.

Publication


Featured research published by Matt Adcock.


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2005

Using collaborative haptics in remote surgical training

Chris Gunn; Matthew A. Hutchins; Duncan Stevenson; Matt Adcock; Patricia Youngblood

We describe the design and trial of a remotely conducted surgical master class, using a haptic virtual environment. The instructor was located in the United States and the class was in Australia. The responses of the audience and participants are presented.


PLOS ONE | 2014

Capturing natural-colour 3D models of insects for species discovery and diagnostics

Chuong V. Nguyen; David Lovell; Matt Adcock

Collections of biological specimens are fundamental to scientific understanding and characterization of natural diversity—past, present and future. This paper presents a system for liberating useful information from physical collections by bringing specimens into the digital domain so they can be more readily shared, analyzed, annotated and compared. It focuses on insects and is strongly motivated by the desire to accelerate and augment current practices in insect taxonomy which predominantly use text, 2D diagrams and images to describe and characterize species. While these traditional kinds of descriptions are informative and useful, they cannot cover insect specimens “from all angles” and precious specimens are still exchanged between researchers and collections for this reason. Furthermore, insects can be complex in structure and pose many challenges to computer vision systems. We present a new prototype for a practical, cost-effective system of off-the-shelf components to acquire natural-colour 3D models of insects from around 3 mm to 30 mm in length. (“Natural-colour” is used to contrast with “false-colour”, i.e., colour generated from, or applied to, gray-scale data post-acquisition.) Colour images are captured from different angles and focal depths using a digital single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images are processed into 3D reconstructions using software based on a visual hull algorithm. The resulting models are compact (around 10 megabytes), afford excellent optical resolution, and can be readily embedded into documents and web pages, as well as viewed on mobile devices. The system is portable, safe, relatively affordable, and complements the sort of volumetric data that can be acquired by computed tomography. This system provides a new way to augment the description and documentation of insect species holotypes, reducing the need to handle or ship specimens. It opens up new opportunities to collect data for research, education, art, entertainment, biodiversity assessment and biosecurity control.
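
The visual hull reconstruction at the core of this pipeline can be sketched as a simple voxel-carving loop: a voxel survives only if it projects inside the specimen's silhouette in every calibrated view. The sketch below is a minimal illustration of that idea, not the authors' implementation; the silhouette masks, camera projection matrices and grid bounds are assumed to be available from the capture rig's calibration.

```python
import numpy as np

def visual_hull(masks, projections, bounds, resolution=128):
    """Carve a voxel grid using silhouette masks from calibrated views.

    masks       -- list of binary silhouette images (H x W bool arrays)
    projections -- list of 3x4 camera projection matrices, one per mask
    bounds      -- ((xmin, ymin, zmin), (xmax, ymax, zmax)) of the specimen
    """
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    axes = [np.linspace(lo[i], hi[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    # Homogeneous coordinates of every voxel centre, shape (4, N).
    pts = np.stack([xs.ravel(), ys.ravel(), zs.ravel(), np.ones(xs.size)])
    inside = np.ones(xs.size, dtype=bool)
    for mask, P in zip(masks, projections):
        uvw = P @ pts                        # project voxel centres into the image
        u = (uvw[0] / uvw[2]).round().astype(int)
        v = (uvw[1] / uvw[2]).round().astype(int)
        h, w = mask.shape
        visible = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros_like(inside)
        hit[visible] = mask[v[visible], u[visible]]
        inside &= hit                        # carve: keep only silhouette hits
    return inside.reshape(xs.shape)
```

In practice a visual hull system refines this with per-view masks from many turntable angles; the surviving voxel set is then meshed and textured from the colour images.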


Presence: Teleoperators & Virtual Environments | 2005

Combating latency in haptic collaborative virtual environments

Chris Gunn; Matthew A. Hutchins; Matt Adcock

Haptic (force) feedback is increasingly being used in surgical-training simulators. The addition of touch provides important extra information that can add another dimension to the realism of the experience. Progress in networking these systems together over long distances has been held back, principally because the latency of the network can induce severe instability in any dynamic objects in the scene. This paper describes techniques allowing long-distance sharing of haptic-enabled, dynamic scenes. At the CSIRO Virtual Environments Laboratory, we have successfully used this system to connect a prototype of a surgical-simulation application between participants on opposite sides of the world in Sweden and Australia, over a standard Internet connection spanning 3 continents and 2 oceans. The users were able to simultaneously manipulate pliable objects in a shared workspace, as well as guide each other's hands (and shake hands!) over 22,000 km (13,620 miles) of Internet connection. The main obstacle to overcome was the latency-induced instability in the system, caused by the delays and jitter inherent in the network. Our system involved a combination of an event-collection mechanism, a network event-forwarding mechanism and a pseudophysics mechanism. We found that the resulting behavior of the interconnected body organs, under simultaneous-user manipulation, was sufficiently convincing to be considered for training surgical procedures.
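
One way to read the event-collection and event-forwarding mechanisms described here is as a coalescing sender: high-rate haptic state updates are collected locally and only the latest state per object is forwarded at a fixed network rate, so jitter never stalls the kilohertz haptic loop. The sketch below is a generic illustration of that pattern under assumed names (`send_packet`, object-keyed states), not the CSIRO implementation.

```python
import threading
import time

class CoalescingSender:
    """Collect high-rate state updates; forward only the newest per object."""

    def __init__(self, send_packet, interval=0.05):
        self._send_packet = send_packet   # assumed network send callable
        self._interval = interval         # forwarding period (20 Hz here)
        self._latest = {}                 # object id -> most recent state
        self._lock = threading.Lock()

    def update(self, obj_id, state):
        # Called from the haptic loop (e.g. 1 kHz); never blocks on the network.
        with self._lock:
            self._latest[obj_id] = state

    def run(self):
        # Network thread: drain and forward the collected states at a fixed rate.
        while True:
            time.sleep(self._interval)
            with self._lock:
                batch, self._latest = self._latest, {}
            if batch:
                self._send_packet(batch)
```

Decoupling the haptic update rate from the network send rate in this way is what lets the local force loop stay stable while remote state arrives late or bunched.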


Australasian Computer-Human Interaction Conference | 2007

Annotating with light for remote guidance

Doug Palmer; Matt Adcock; Jocelyn Smith; Matthew A. Hutchins; Chris Gunn; Duncan Stevenson; Ken Taylor

This paper presents a system that supports remote guidance collaboration, in which a local expert guides a remotely located assistant to perform physical, three-dimensional tasks. The system supports this remote guidance by allowing the expert to annotate, point at and draw upon objects in the remote location, using a pen-and-tablet interface to control a laser projection device. The specific design criteria for this system are drawn from a tele-health scenario involving remote medical examination of patients, and the paper presents the software architecture and implementation details of the associated hardware. In particular, the algorithm for aligning the representation of the laser projection over the video display of the remote scene is described. Early evaluations by medical specialists are presented, the usability of the system in laboratory experiments is discussed and ideas for future developments are outlined.
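
For a roughly planar workspace, the alignment step of mapping the laser's commanded position onto the video view can be approximated by a homography fitted from a few calibration correspondences. The OpenCV sketch below is a generic stand-in for the paper's actual alignment algorithm; the four point pairs are assumed to come from observing the laser dot at known commanded positions.

```python
import numpy as np
import cv2

# Assumed calibration data: laser-controller coordinates and the pixel
# where each laser dot was observed in the video frame.
laser_pts = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])
video_pts = np.float32([[112, 86], [531, 97], [540, 468], [105, 455]])

# Fit a planar homography from laser space to video space.
H, _ = cv2.findHomography(laser_pts, video_pts)

def laser_to_video(x, y):
    """Map a commanded laser position to its expected video pixel."""
    p = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)
    return p[0, 0]

print(laser_to_video(0.5, 0.5))   # expected dot position mid-workspace
```

Inverting `H` gives the opposite mapping, from a pixel the expert draws on to the laser command that places the dot there, which is the direction a tablet-driven pointer needs.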


Virtual Reality Continuum and Its Applications in Industry | 2013

RemoteFusion: real time depth camera fusion for remote collaboration on physical tasks

Matt Adcock; Stuart Anderson; Bruce H. Thomas

Remote guidance systems allow humans to collaborate on physical tasks across large distances and have applications in fields such as medicine, maintenance and working with hazardous substances. Existing systems typically provide two dimensional video streams to remote participants, and these are restricted to viewpoint locations based on the placement of physical cameras. Recent systems have incorporated the ability of a remote expert to annotate their 2D view and for these annotations to be displayed in the physical workspace to the local worker. We present a prototype remote guidance system, called RemoteFusion, which is based on the volumetric fusion of commodity depth cameras. The system incorporates real-time 3D fusion with color, the ability to distinguish and render dynamic elements of a scene whether human or non-human, a multi-touch driven free 3D viewpoint, and a Spatial Augmented Reality (SAR) light annotation mechanism. We provide a physical overview of the system, including hardware and software configuration, and detail the implementation of each of the key features.
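
The real-time volumetric fusion referenced here is commonly implemented as a truncated signed distance function (TSDF) update in the style of KinectFusion: each registered depth frame nudges every visible voxel's stored distance with a weighted running average. The sketch below shows that core update for one frame; the camera intrinsics, pose and grid layout are assumed inputs, and it stands in for, rather than reproduces, the RemoteFusion implementation.

```python
import numpy as np

def fuse_frame(tsdf, weights, voxel_pts, depth, K, T_world_to_cam, trunc=0.03):
    """One TSDF integration step for a registered depth frame.

    tsdf, weights   -- flat arrays, one entry per voxel
    voxel_pts       -- (N, 3) world-space voxel centres
    depth           -- depth image in metres; K -- 3x3 intrinsics
    T_world_to_cam  -- 4x4 camera pose (world -> camera)
    """
    # Transform voxel centres into the camera frame.
    homog = np.c_[voxel_pts, np.ones(len(voxel_pts))]
    cam = (T_world_to_cam @ homog.T).T[:, :3]
    z = cam[:, 2]
    # Project into the depth image.
    u = (K[0, 0] * cam[:, 0] / z + K[0, 2]).round().astype(int)
    v = (K[1, 1] * cam[:, 1] / z + K[1, 2]).round().astype(int)
    h, w = depth.shape
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sdf = depth[v[ok], u[ok]] - z[ok]      # signed distance along the ray
    near = sdf > -trunc                    # ignore voxels far behind the surface
    idx = np.flatnonzero(ok)[near]
    d = np.clip(sdf[near], -trunc, trunc) / trunc
    # Weighted running average fuses the new measurement with earlier ones.
    tsdf[idx] = (tsdf[idx] * weights[idx] + d) / (weights[idx] + 1)
    weights[idx] += 1
```

Running this per frame, per depth camera, into a shared grid is what yields a single fused volume that the remote expert can view from any angle.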


IEEE International Augmented Reality Toolkit Workshop | 2003

Augmented reality haptics: using ARToolKit for display of haptic applications

Matt Adcock; Matthew A. Hutchins; Chris Gunn

This paper describes the integration of the ARToolKit with the Reachin Core Technology API. The result is a system capable of providing a coherent mix of real-world video, computer haptics and computer graphics. A key feature is that new applications can be rapidly developed. Ultimately, this system is used to support rich object-based collaboration between face-to-face and remote participants.


Field and Service Robotics | 2010

The Development of a Telerobotic Rock Breaker

Elliot S. Duff; Con Caris; Adrian Bonchis; Ken Taylor; Chris Gunn; Matt Adcock

This paper describes the development of a telerobotic rock breaker deployed at a mine over 1,000 km from the remote operations centre. This distance introduces a number of technical and cognitive challenges to the design of the system, which have been addressed through shared autonomy in the control system and a mixed reality user interface. A number of trials were conducted, culminating in a production field trial, which demonstrated that the system is safe, productive (sometimes faster) and integrates seamlessly with mine operations.


Australasian Computer-Human Interaction Conference | 2011

Using sticky light technology for projected guidance

Chris Gunn; Matt Adcock

A worker performing a physical task may need to ask for advice and guidance from an expert. This can be a problem if the expert is in some distant location. We describe a system which allows the expert to see the workplace from the worker's point of view, and to draw annotations directly into that workplace using a pico-projector. Since the system can be worn by the worker, these projected annotations may move with the worker's movements. We describe two methods of sticking these annotations to their original positions, thereby compensating for the movement of the worker.
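
Sticking a projected annotation amounts to anchoring it in a fixed workspace frame and re-rendering it through the projector every time the worn projector moves: given the tracked projector pose, the annotation's 3D anchor is reprojected each frame. Below is a minimal sketch of that per-frame reprojection; the pose tracking and projector intrinsics are assumed to be supplied elsewhere, and this is an illustration of the general idea rather than either of the paper's two methods.

```python
import numpy as np

def reproject_annotation(anchor_world, T_world_to_proj, K_proj):
    """Project a world-anchored annotation point into projector pixels.

    anchor_world    -- (3,) annotation anchor in the workspace frame
    T_world_to_proj -- 4x4 current projector pose (from tracking)
    K_proj          -- 3x3 projector intrinsics
    """
    p = T_world_to_proj @ np.append(anchor_world, 1.0)
    if p[2] <= 0:
        return None                # anchor is behind the projector
    uvw = K_proj @ p[:3]
    return uvw[:2] / uvw[2]        # pixel at which to draw the annotation

# Each frame: update T_world_to_proj from the tracker, then redraw every
# annotation at reproject_annotation(...) so it stays on its real-world spot.
```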


Symposium on Spatial User Interaction | 2013

Visualization of off-surface 3D viewpoint locations in spatial augmented reality

Matt Adcock; David Feng; Bruce H. Thomas

Spatial Augmented Reality (SAR) systems can be used to convey guidance in a physical task from a remote expert. Sometimes that remote expert is provided with a single camera view of the workspace but, if they are given a live captured 3D model and can freely control their point of view, the local worker needs to know what the remote expert can see. We present three new SAR techniques, Composite Wedge, Vector Boxes, and Eyelight, for visualizing off-surface 3D viewpoints and supporting the required workspace awareness. Our study showed that the Composite Wedge cue was best for providing location awareness, and the Eyelight cue was best for providing visibility map awareness.


International Conference on Computer Graphics and Interactive Techniques | 2004

Haptic collaboration with augmented reality

Matt Adcock; Matthew A. Hutchins; Chris Gunn

We describe a (face-to-face) collaborative environment that provides a coherent mix of real world video, computer haptics, graphics and audio. This system is a test-bed for investigating new collaborative affordances and behaviours.

Collaboration


Dive into Matt Adcock's collaborations.

Top Co-Authors

Chris Gunn (Commonwealth Scientific and Industrial Research Organisation)
David Lovell (Commonwealth Scientific and Industrial Research Organisation)
Bruce H. Thomas (University of South Australia)
Matthew A. Hutchins (Commonwealth Scientific and Industrial Research Organisation)
Eleanor Gates-Stuart (Commonwealth Scientific and Industrial Research Organisation)
Duncan Stevenson (Commonwealth Scientific and Industrial Research Organisation)
Kazys Stepanas (Commonwealth Scientific and Industrial Research Organisation)
Stephen Barrass (Commonwealth Scientific and Industrial Research Organisation)