
Publication


Featured research published by Ben Ward.


International Conference on Computer Graphics and Interactive Techniques | 2007

VideoTrace: rapid interactive scene modelling from video

Anton van den Hengel; Anthony R. Dick; Thorsten Thormählen; Ben Ward; Philip H. S. Torr

VideoTrace is a system for interactively generating realistic 3D models of objects from video: models that might be inserted into a video game, a simulation environment, or another video sequence. The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model. Each of the sketching operations in VideoTrace provides an intuitive and powerful means of modelling shape from video, and executes quickly enough to be used interactively. Immediate feedback allows the user to model rapidly those parts of the scene which are of interest and to the level of detail required. The combination of automated and manual reconstruction allows VideoTrace to model parts of the scene not visible, and to succeed in cases where purely automated approaches would fail.


IEEE Computer Graphics and Applications | 2011

Depth Director: A System for Adding Depth to Movies

Ben Ward; Sing Bing Kang; Eric Paul Bennett

Depth Director is an interactive system for converting 2D footage to 3D. It integrates recent computer vision advances with specialized tools that let users accurately recreate or stylistically manipulate 3D depths.


International Symposium on Mixed and Augmented Reality | 2009

In situ image-based modeling

Anton van den Hengel; Rhys Hill; Ben Ward; Anthony R. Dick

We present an interactive image-based modelling method for generating 3D models within an augmented reality system. Applying real time camera tracking, and high-level automated image analysis, enables more powerful modelling interactions than have previously been possible. The result is an immersive modelling process which generates accurate three dimensional models of real objects efficiently and effectively. In demonstrating the modelling process on a range of indoor and outdoor scenes, we show the flexibility it offers in enabling augmented reality applications in previously unseen environments.


International Symposium on Mixed and Augmented Reality | 2010

Interactive modelling for AR applications

John W. Bastian; Ben Ward; Rhys Hill; Anton van den Hengel; Anthony R. Dick

We present a method for estimating the 3D shape of an object from a sequence of images captured by a hand-held device. The method is well suited to augmented reality applications in that minimal user interaction is required, and the models generated are of an appropriate form. The method proceeds by segmenting the object in every image as it is captured and using the calculated silhouette to update the current shape estimate. In contrast to previous silhouette-based modelling approaches, however, the segmentation process is informed by a 3D prior based on the previous shape estimate. A voting scheme is also introduced in order to compensate for the inevitable noise in the camera position estimates. The combination of the voting scheme with the closed-loop segmentation process provides a robust and flexible shape estimation method. We demonstrate the approach on a number of scenes where segmentation without a 3D prior would be challenging.
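
The silhouette-voting idea described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: it assumes a voxelised shape estimate, known (possibly noisy) 3x4 camera projection matrices, and boolean silhouette masks, and keeps a voxel only when enough views vote for it, so an occasional bad pose or segmentation does not carve the voxel away.

```python
import numpy as np

def carve_with_voting(voxels, cameras, silhouettes, vote_frac=0.8):
    """Hypothetical sketch of silhouette carving with a voting scheme.

    voxels:      (N, 3) voxel centre coordinates in the world frame
    cameras:     list of 3x4 projection matrices, one per frame
    silhouettes: list of boolean HxW masks (True = inside the object)
    vote_frac:   fraction of views that must agree before a voxel is
                 kept -- voting tolerates noisy camera pose estimates
    """
    n = voxels.shape[0]
    votes = np.zeros(n, dtype=int)
    homog = np.hstack([voxels, np.ones((n, 1))])   # homogeneous coords
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                         # project into the image
        uv = proj[:, :2] / proj[:, 2:3]            # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(n, dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]   # inside the silhouette?
        votes += hit
    # keep voxels that fall inside the silhouette in enough views
    return voxels[votes >= vote_frac * len(cameras)]
```

A plain intersection of all silhouettes would delete a voxel the moment a single noisy pose projects it outside one mask; the `vote_frac` threshold is what makes the estimate robust to such outlier views.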


Workshop on Applications of Computer Vision | 2009

Automatic camera placement for large scale surveillance networks

Anton van den Hengel; Rhys Hill; Ben Ward; Alex Cichowski; Henry Detmold; Christopher S. Madden; Anthony R. Dick; John W. Bastian

Automatic placement of surveillance cameras in arbitrary buildings is a challenging task, and also one that is essential for efficient deployment of large scale surveillance networks. Existing approaches for automatic camera placement are either limited to a small number of cameras, or constrained in terms of the building layouts to which they can be applied. This paper describes a new method for determining the best placement for large numbers of cameras within arbitrary building layouts. The method takes as input a 3D model of the building, and uses a genetic algorithm to find a placement that optimises coverage and (if desired) overlap between cameras. Results are reported for an implementation of the method, including its application to a wide variety of complex buildings, both real and synthetic.
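
A minimal sketch of the genetic-algorithm idea, assuming candidate camera positions have already been reduced to the set of floor cells each would cover (e.g. by visibility tests against the 3D building model). Function and parameter names are hypothetical, and the optional overlap term mentioned in the abstract is omitted for brevity:

```python
import random

def place_cameras(candidates, k, pop_size=20, gens=40, seed=0):
    """Hypothetical GA sketch for camera placement (not the paper's code).

    candidates: list of sets -- the cells each candidate camera covers
    k:          number of cameras to place
    Fitness is simply the number of covered cells.
    """
    rng = random.Random(seed)

    def fitness(ind):
        return len(set().union(*(candidates[i] for i in ind)))

    # a chromosome is a set of k distinct candidate indices
    pop = [rng.sample(range(len(candidates)), k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = sorted(set(a) | set(b))
            child = rng.sample(pool, k)            # crossover: mix parents
            if rng.random() < 0.3:                 # mutation: swap one camera
                unused = [i for i in range(len(candidates)) if i not in child]
                if unused:
                    child[rng.randrange(k)] = rng.choice(unused)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

The appeal of a GA here is that coverage is a black-box objective: it needs no gradients and scales to large camera counts, whereas exhaustive search over placements grows combinatorially with the number of candidates.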


British Machine Vision Conference | 2006

Building models of regular scenes from structure-and-motion

Anton van den Hengel; Anthony R. Dick; Thorsten Thormählen; Ben Ward; Philip H. S. Torr

This paper describes a method for generating a model-based reconstruction of a scene from image data. The method uses the camera models and point cloud typically generated by a structure-and-motion process as a starting point for developing a higher level model of the scene. The method relies on the user to provide a minimal amount of structural seeding information from which more complex geometry is extrapolated. The regularity typically present in man-made environments is used to minimise the interaction required, but also to improve the accuracy of fit. We demonstrate model based reconstructions obtained using this method.


Eurographics | 2006

Rapid Interactive Modelling from Video with Graph Cuts

Anton van den Hengel; Anthony R. Dick; Thorsten Thormählen; Ben Ward; Philip H. S. Torr

We present a method for generating a parameterised model of a scene from a set of images. The method is novel in that it uses information from several sources (video, sparse 3D points and user input) to fit models to a scene. The user drives the process by providing selected high-level scene information, for instance selecting an object in the scene, or specifying the relationship between a pair of objects. The system combines this information with image and 3D data to dynamically update its model of the scene. In doing so it avoids common pitfalls of both automatic structure and motion algorithms, and image-based modelling packages.


Digital Image Computing: Techniques and Applications | 2007

Interactive 3D Model Completion

A. van den Hengel; Anthony R. Dick; Thorsten Thormählen; Ben Ward; Philip H. S. Torr

A common problem when using automated structure from motion techniques is that the object to be modelled can only be partially reconstructed from the video. This can occur because not all of the object is visible in the video, or because of featureless or ambiguous regions on the object's surface. In this paper we present an interactive method for rapidly and intuitively generating a complete 3D model from the output of a structure and motion algorithm. The method combines information obtained from the video data with the partial 3D model and user interaction. It is demonstrated on video containing partially seen objects, including planar and curved surfaces, and indoor and outdoor settings.


International Conference on Computer Graphics and Interactive Techniques | 2007

A shape hierarchy for 3D modelling from video

A. van den Hengel; Anthony R. Dick; Thorsten Thormählen; Ben Ward; Philip H. S. Torr

This paper describes an interactive method for generating a model of a scene from image data. The method uses the camera parameters and point cloud typically generated by structure-and-motion estimation as a starting point for developing a higher level model, in which the scene is represented as a set of parameterised shapes. Classes of shapes are represented in a hierarchy which defines their properties but also the method by which they are localised in the scene, using a combination of user interaction, sampling and optimisation. Relations between shapes, such as adjacency and alignment, are also specified interactively. The method thus provides a modelling process which requires the user to provide only high level scene information, the remaining detail being provided through geometric analysis of the image set. This mixture of guided, yet automated, fitting techniques allows a non-expert user to rapidly and intuitively create a visually convincing 3D model of a real world scene from an image set.


International Conference on Computer Graphics Imaging and Visualisation | 2006

Hierarchical Model Fitting to 2D and 3D Data

A. van den Hengel; Anthony R. Dick; Thorsten Thormählen; Ben Ward; Philip H. S. Torr

We propose a method for interactively generating a model-based reconstruction of a scene from a set of images. The method facilitates the fitting of multiple object models to the data in a manner that provides the best overall fit to the image set. This requires that models are not fit independently, but rather collectively, each potentially impacting upon the fit of the other.

Collaboration


Dive into Ben Ward's collaborations.

Top Co-Authors

Rhys Hill

University of Adelaide
