John Pretlove
ABB Ltd
Publications
Featured research published by John Pretlove.
International Symposium on Mixed and Augmented Reality | 2003
Thomas Pettersen; John Pretlove; Charlotte Skourup; Torbjorn Engedal; Trond Løkstad
Existing practice for programming robots involves teaching the robot a sequence of waypoints, together with process-related events, which defines the complete robot path. The programming process is time-consuming, error-prone and, in most cases, requires several iterations before the program quality is acceptable. By introducing augmented reality technologies into this programming process, the operator gets instant, real-time visual feedback of a simulated process in relation to the real object, resulting in reduced programming time and increased quality of the resulting robot program. This paper presents a demonstrator of a standalone augmented reality pilot system that allows an operator to program robot waypoints and process-specific events related to paint applications. During the programming sequence, the system presents visual feedback of the paint result, allowing the operator to inspect the process result before the robot has performed the actual task.
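To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of how a path of operator-taught waypoints with attached process events, such as toggling the paint gun, might be represented, and how the segments to overlay as simulated paint could be extracted. All names and the data layout are illustrative assumptions.

```python
# Hypothetical data structures for AR-assisted robot path programming:
# a path is a sequence of waypoints, each optionally carrying a
# process-related event (here, whether the paint gun is on).
from dataclasses import dataclass, field


@dataclass
class Waypoint:
    x: float
    y: float
    z: float
    paint_on: bool = False  # process event attached to this waypoint


@dataclass
class RobotPath:
    waypoints: list[Waypoint] = field(default_factory=list)

    def add(self, x: float, y: float, z: float, paint_on: bool = False) -> None:
        """Record a waypoint as the operator points at the real object."""
        self.waypoints.append(Waypoint(x, y, z, paint_on))

    def painted_segments(self):
        """Yield consecutive waypoint pairs where the paint gun is active,
        i.e. the segments an AR overlay would render as simulated paint."""
        for a, b in zip(self.waypoints, self.waypoints[1:]):
            if a.paint_on:
                yield a, b


# Usage: teach a short stroke, then inspect which segments would be painted.
path = RobotPath()
path.add(0.0, 0.0, 0.5, paint_on=True)
path.add(0.3, 0.0, 0.5, paint_on=True)
path.add(0.3, 0.2, 0.5)  # paint off for the repositioning move
for a, b in path.painted_segments():
    print(f"paint stroke from ({a.x}, {a.y}, {a.z}) to ({b.x}, {b.y}, {b.z})")
```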
International Symposium on Mixed and Augmented Reality | 2007
Jeremiah Neubert; John Pretlove; Tom Drummond
Many of the robust visual tracking techniques used by augmented reality applications rely on 3D models and information extracted from images. Models enhanced with image information make it possible to initialize tracking and detect poor registration. Unfortunately, generating 3D CAD models and registering them to image information can be a time-consuming operation, and the process regularly requires multiple trips between the site being modeled and the workstation used to create the model. The system presented in this work eliminates the need for a separately generated 3D model by using modern structure-from-motion techniques to extract the model and the associated image information directly from an image sequence. The technique can be implemented on any handheld device instrumented with a camera and a network connection. Creating the model requires minimal user interaction, in the form of a few cues identifying planar regions on the object of interest. In addition, the system selects a set of keyframes for each region to capture viewpoint-dependent appearance changes. This work also presents a robust tracking framework that takes advantage of these new edge models. The performance of both the modeling technique and the tracking system is verified on several different objects.
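The keyframe-selection step described above can be illustrated with a short, hypothetical sketch: a new keyframe is stored for a region only when the current camera pose differs enough from every stored keyframe pose that the region's appearance may have changed. The pose representation, thresholds, and function names below are assumptions, not the paper's actual code.

```python
# Hedged sketch of viewpoint-based keyframe selection: keep a new keyframe
# when no stored keyframe sees the region from a similar viewpoint.
import numpy as np


def rotation_angle(R_a: np.ndarray, R_b: np.ndarray) -> float:
    """Angle (radians) of the relative rotation between two 3x3 matrices."""
    R_rel = R_a.T @ R_b
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_theta))


def needs_new_keyframe(pose, keyframe_poses,
                       max_angle=np.deg2rad(15.0), max_dist=0.25) -> bool:
    """pose = (R, t); return True if no stored keyframe is close to the
    current viewpoint in both rotation and translation (illustrative
    thresholds)."""
    R, t = pose
    for R_k, t_k in keyframe_poses:
        if (rotation_angle(R, R_k) < max_angle
                and np.linalg.norm(t - t_k) < max_dist):
            return False  # an existing keyframe already covers this view
    return True


# Usage: with an identity start pose stored, a view rotated by 30 degrees
# exceeds the 15-degree threshold and triggers a new keyframe.
R0, t0 = np.eye(3), np.zeros(3)
c, s = np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))
R1 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(needs_new_keyframe((R1, t0), [(R0, t0)]))  # True
```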
Archive | 2005
John Pretlove; Charlotte Skourup; Pierre Öberg; Thomas Pettersen; Christoffer Apneseth
Archive | 2004
John Pretlove; Charlotte Skourup; Thomas Pettersen
Archive | 2003
John Pretlove; Thomas Pettersen
Archive | 2007
Charlotte Skourup; John Pretlove; Trond Løkstad; Torbjorn Engedal
Archive | 2004
Charlotte Skourup; John Pretlove; Thomas Pettersen
Archive | 2003
John Pretlove; Thomas Pettersen
Archive | 2005
Charlotte Skourup; John Pretlove; Thomas Pettersen; Christoffer Apneseth; Pierre Öberg
Archive | 2006
Charlotte Skourup; John Pretlove; Kristoffer Husoy