
Publications


Featured research published by Jens T. Thielemann.


Journal of Near Infrared Spectroscopy | 2006

Non-contact transflectance near infrared imaging for representative on-line sampling of dried salted coalfish (bacalao)

Jens Petter Wold; Ib-Rune Johansen; Karl Henrik Haugholt; Jon Tschudi; Jens T. Thielemann; Vegard Segtnan; Bjørg Narum; Erik Wold

This paper describes a multi-spectral near infrared (NIR) transflectance imaging system developed for on-line determination of the crude chemical composition of highly heterogeneous foods and other bio-materials. The system was evaluated for moisture determination in 70 dried salted coalfish (bacalao), an extremely heterogeneous product. A spectral image cube was obtained for each fish, and different sub-sampling approaches for spectral extraction and partial least squares calibration were evaluated. The best prediction models obtained correlation (R2) values around 0.92 and a root mean square error of cross-validation of 0.70%, which is much more accurate than today's traditional manual grading. The combination of non-contact NIR transflectance measurements with spectral imaging allows deeply penetrating optical sampling as well as great flexibility in spatial sampling patterns and calibration approaches. The technique works well for moisture determination in heterogeneous foods and should, in principle, work for other NIR-absorbing compounds such as fat and protein. Part of this study compares the principles of reflectance, contact transflectance and non-contact transflectance with regard to water determination in a set of 20 well-defined dried salted cod samples. Contact transflectance and non-contact transflectance performed equally well and were superior to reflectance measurements, since the measured light penetrated deeper into the sample.
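As a concrete illustration of the calibration step mentioned above, the following Python sketch fits a partial least squares (PLS) model to spectra and reports cross-validated R2 and RMSECV. The data, the number of NIR channels and the number of PLS components are all invented for the example; the paper's spectral image cube and sub-sampling schemes are not reproduced.

```python
# Hedged sketch of a PLS moisture calibration with cross-validation.
# All data are synthetic; 70 samples echoes the paper's fish count only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_fish, n_wavelengths = 70, 15                 # 15 NIR channels (assumed)
X = rng.normal(size=(n_fish, n_wavelengths))   # mean spectrum per fish (synthetic)
y = 55 + 2 * X[:, 0] + rng.normal(scale=0.5, size=n_fish)  # moisture % (synthetic)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # 10-fold cross-validation

r2 = np.corrcoef(y, y_cv)[0, 1] ** 2
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"R2 = {r2:.2f}, RMSECV = {rmsecv:.2f} % moisture")
```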


Proceedings of SPIE | 2008

Modelling and Compensating Measurement Errors Caused by Scattering in Time-Of-Flight Cameras

Tom Kavli; Trine Kirkhus; Jens T. Thielemann; Borys Jagielski

Recently, range imaging (RIM) cameras have become available that capture high-resolution range images at video rate. Such cameras measure the distance to the scene for each pixel independently, based upon a measured time of flight (TOF). Some cameras, such as the SwissRanger™ SR-3000, measure the TOF from the phase shift of reflected light from a modulated light source. Such cameras are shown to be susceptible to severe distortions in the measured range due to light scattering within the lens and camera. Earlier work introduced compensation for such distortions using a simplified Gaussian point spread function and inverse filtering. In this work, a method is proposed for identifying and using generally shaped empirical models of the point spread function to obtain a more accurate compensation. The otherwise difficult inverse problem is solved by using the forward model iteratively, according to well-established procedures from image restoration. Each iteration is a sequential process, starting with the brightest parts of the image and moving sequentially to the least bright parts, with each step subtracting the estimated scattering effects from the measurements. This approach gives faster and more reliable compensation convergence. An average error reduction of more than 60% is demonstrated on real images. The computational load corresponds to one or two convolutions of the measured complex image with a real filter of the same size as the image.
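The abstract does not give the authors' implementation, but the core idea, applying the forward scatter model iteratively and subtracting the estimated scatter from the measurement, can be sketched as below. The PSF values, image and iteration count are assumptions, and the brightest-first sequential ordering described in the paper is omitted for brevity.

```python
# Minimal sketch of iterative scatter compensation: the measurement is modeled
# as the true complex image plus that image convolved with a scatter PSF.
import numpy as np
from scipy.signal import fftconvolve

def compensate_scatter(measured, scatter_psf, n_iter=5):
    """Fixed-point iteration for (I + H)x = m, where H is convolution with the
    scatter PSF; converges when the PSF energy is well below 1."""
    estimate = measured.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(estimate, scatter_psf, mode="same")
        estimate = measured - scatter          # subtract current scatter estimate
    return estimate

# Synthetic demo: one bright reflector contaminating a dark background.
true_img = np.zeros((64, 64), dtype=complex)
true_img[10, 10] = 100.0
psf = np.full((9, 9), 0.001)                   # assumed empirical scatter kernel
psf[4, 4] = 0.0                                # no self-scatter at the center
measured = true_img + fftconvolve(true_img, psf, mode="same")
restored = compensate_scatter(measured, psf)
print(np.abs(restored - true_img).max())       # residual error, close to zero
```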


Computer Vision and Pattern Recognition | 2008

Pipeline landmark detection for autonomous robot navigation using time-of-flight imagery

Jens T. Thielemann; Gøril Margrethe Breivik; Asbjørn Berge

3D imaging systems provide valuable information for autonomous robot navigation based on landmark detection in pipelines. This paper presents a method for using a time-of-flight (TOF) camera for detection and tracking of pipeline features such as junctions, bends and obstacles. Feature extraction is done by fitting a cylinder to images of the pipeline. Since the data in captured images appear to take a conic rather than cylindrical shape, we adjust the geometric primitive accordingly. Pixels deviating from the estimated cylinder/cone fit are grouped into blobs. Blobs fulfilling constraints on shape and stability over time are then tracked. The usefulness of TOF imagery as a source for landmark detection and tracking in pipelines is evaluated by comparison with auxiliary measurements. Experiments using a model pipeline and a prototype robot show encouraging results.
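A hypothetical sketch of the primitive-fitting step: model the pipe wall as a cone whose radius grows linearly along the viewing axis, fit it by least squares, and flag deviating points as candidate landmark pixels. All geometry, noise levels and thresholds below are invented for illustration.

```python
# Illustrative cone fit to TOF points from a pipeline interior (assumed data).
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, pts):
    """Signed distance of each point from a cone whose radius is linear in z."""
    x0, y0, r0, k = params                 # axis offset, base radius, taper rate
    dx, dy, z = pts[:, 0] - x0, pts[:, 1] - y0, pts[:, 2]
    return np.hypot(dx, dy) - (r0 + k * z)

# Synthetic tapering pipe wall as seen along the camera's z axis.
rng = np.random.default_rng(1)
z = rng.uniform(1.0, 3.0, 500)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
r = 0.5 + 0.05 * z
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])
pts += rng.normal(scale=0.005, size=pts.shape)

fit = least_squares(cone_residuals, x0=[0.0, 0.0, 0.4, 0.0], args=(pts,))
outliers = np.abs(cone_residuals(fit.x, pts)) > 0.03   # candidate landmark blobs
print(fit.x, outliers.sum())
```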


Intelligent Robots and Systems | 2010

A robotic concept for remote maintenance operations: A robust 3D object detection and pose estimation method and a novel robot tool

Aksel Andreas Transeth; Øystein Skotheim; Henrik Schumann-Olsen; Gorm Johansen; Jens T. Thielemann; Erik Kyrkjebø

Future normally unmanned oil platforms offer potentially much lower commissioning and operating costs than their current manned counterparts. The ability to initiate and perform remote inspection and maintenance (I&M) operations is crucial for maintaining such platforms. This paper presents a system solution, including key components such as a 3D robot vision system, a robot tool and a control architecture, for remote I&M operations on processes similar to those on topside oil platforms. In particular, a case study on how to automatically replace the battery in a wireless process sensor is investigated. A novel robot tool for removing and re-attaching the sensor lid has been designed. Moreover, a robot control architecture for remote control of industrial-type robot manipulators is presented. A 3D robot vision system for localizing the sensor lid and the battery has been developed. The system utilizes structured light, using an off-the-shelf projector and a standard machine vision camera. A novel, robust and fast vision algorithm called 3D-MaMa has been adapted for object localization and pose estimation in complex scenes, in our case the process equipment in our lab facility. Experimental results from our lab facility are presented, describing a series of battery replacement operations for various unknown positions of the wireless sensor, and we report accuracies and success rates. The experiments demonstrate that the described vision system is able to recover the full pose (position and orientation) of an object, and that the results are directly applicable for controlling advanced robot contact operations. Moreover, the custom-built lid operation tool demonstrates successful results.
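The abstract does not specify how the recovered pose is handed to the robot controller, but a standard pattern, chaining homogeneous transforms from the hand-eye calibration, the vision-estimated object pose and a tool offset, is sketched below with invented frames and numbers.

```python
# Hedged sketch of the vision-to-robot hand-over: all transforms are assumed.
import numpy as np

T_robot_camera = np.eye(4)                 # hand-eye calibration (assumed known)
T_camera_lid = np.array([                  # object pose from vision (invented)
    [0.0, -1.0, 0.0, 0.30],
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.75],
    [0.0,  0.0, 0.0, 1.00],
])
T_lid_grasp = np.eye(4)
T_lid_grasp[2, 3] = -0.05                  # tool offset: approach 5 cm above lid

# Chain the transforms to obtain the tool target in the robot base frame.
T_robot_grasp = T_robot_camera @ T_camera_lid @ T_lid_grasp
print(T_robot_grasp[:3, 3])                # translation to send to the robot
```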


Advanced Concepts for Intelligent Vision Systems | 2007

System for estimation of pin bone positions in pre-rigor salmon

Jens T. Thielemann; Trine Kirkhus; Tom Kavli; Henrik Schumann-Olsen; Oddmund Haugland; Harry Westavik

Current systems for automatic processing of salmon are not able to remove all bones from freshly slaughtered salmon. This is because some of the bones are attached to the flesh by tendons, so the fillet is damaged or the bones break if the bones are pulled out. This paper describes a camera-based system for determining the tendon positions in the tissue, so that the tendons can be cut with a knife and the bones removed. The location of the tendons deep in the tissue is estimated from the position of a texture pattern on the fillet surface. Algorithms for locating this line-like pattern, in the presence of several other similar-looking lines and significant additional texture, are described. The algorithm uses a model of the pattern's location to achieve precision and speed, followed by a RANSAC/MLESAC-inspired line fitting procedure. Close to the neck, where the pattern is barely visible, this is handled through a greedy search algorithm. We achieve a precision better than 3 mm for 78% of the fish, using a maximum of 2 seconds of processing time.
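The authors' code is not available, so the sketch below shows only the generic RANSAC-style line fitting the abstract alludes to: repeatedly sample two points, hypothesize a line, and keep the hypothesis with the most inliers. Point data, the inlier tolerance and the iteration count are assumptions.

```python
# Minimal RANSAC-style 2D line fit (illustrative, not the paper's method).
import numpy as np

def ransac_line(points, n_iter=200, tol=2.0, seed=2):
    """Return ((point, unit direction), inlier mask) of the best-supported line."""
    rng = np.random.default_rng(seed)
    best_line, best_inliers = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.hypot(*d)
        if norm == 0.0:
            continue
        n = np.array([-d[1], d[0]]) / norm       # unit normal of candidate line
        dist = np.abs((points - points[i]) @ n)  # point-to-line distances
        inliers = dist < tol                     # tol in pixels (assumed)
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_line, best_inliers = (points[i], d / norm), inliers
    return best_line, best_inliers

# Demo: a faint line among clutter, mimicking the fillet surface pattern.
xs = np.arange(100.0)
line_pts = np.column_stack([xs, 0.3 * xs + 20.0])
clutter = np.random.default_rng(4).uniform(0.0, 100.0, size=(60, 2))
(_, direction), inliers = ransac_line(np.vstack([line_pts, clutter]))
print(direction, inliers.sum())
```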


Proceedings of SPIE | 2010

Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

Øystein Skotheim; Jens T. Thielemann; Asbjørn Berge; Arne Sommerfelt

Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. Such a system should be able to recognize and locate objects of a predefined shape and estimate their position with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object localization and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidate object matches. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for the candidate search is easily reconfigurable for arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, exemplified here by the localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
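The internals of the manifold-pair search are not given in the abstract, so as a stand-in the sketch below shows a generic pose-verification step that any such pipeline needs: score a candidate pose by the fraction of transformed model points that land close to the scene cloud. The tolerance and data are invented.

```python
# Generic candidate-pose scoring via KD-tree nearest neighbours (assumed step).
import numpy as np
from scipy.spatial import cKDTree

def pose_score(model_pts, scene_pts, T, tol=0.002):
    """Fraction of model points within tol metres of the scene under pose T."""
    homo = np.column_stack([model_pts, np.ones(len(model_pts))])
    transformed = (homo @ T.T)[:, :3]          # apply the 4x4 candidate pose
    dist, _ = cKDTree(scene_pts).query(transformed)
    return np.mean(dist < tol)

# Toy check: the identity pose on identical clouds scores 1.0.
pts = np.random.default_rng(5).normal(size=(200, 3))
print(pose_score(pts, pts, np.eye(4)))
```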


Electronic Imaging | 2008

A flexible 3D vision system based on structured light for in-line product inspection

Øystein Skotheim; Jens Olav Nygaard; Jens T. Thielemann; Thor Vollset

A flexible and highly configurable 3D vision system targeted at in-line product inspection is presented. The system includes a low-cost 3D camera based on structured light and a set of flexible software tools that automate the measurement process. The measurement tasks are specified in an initial manual step: the user selects regions of the point cloud to analyze and specifies primitives to be characterized within these regions. After all measurement tasks have been specified, measurements can be carried out on successive parts automatically and without supervision. As a test case, a measurement cell for inspection of a V-shaped car component has been developed. The car component consists of two steel tubes attached to a central hub, and each of the tubes has an additional bushing clamped to its end. A measurement is performed in a few seconds and results in an ordered point cloud with 1.2 million points. The software is configured to fit cylinders to each of the steel tubes as well as to the inside of the bushings of the car part. The size, position and orientation of the fitted cylinders allow us to measure and verify a series of dimensions specified on the CAD drawing of the component with sub-millimetre accuracy.
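As an illustration of one configured measurement (not the system's actual code), the sketch below fits a circle algebraically to a tube cross-section, assuming the points have already been projected along the nominal cylinder axis, and compares the recovered radius against an assumed CAD value and tolerance.

```python
# Hedged sketch of a single primitive-fitting measurement (synthetic data).
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, radius)."""
    A = np.column_stack([2.0 * xy[:, 0], 2.0 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

# Synthetic tube cross-section, already projected along the nominal axis.
rng = np.random.default_rng(3)
t = rng.uniform(0.0, 2.0 * np.pi, 400)
xy = np.column_stack([12.5 * np.cos(t) + 3.0, 12.5 * np.sin(t) - 1.0])
xy += rng.normal(scale=0.05, size=xy.shape)

cx, cy, radius = fit_circle(xy)
nominal_r, tol = 12.5, 0.2                     # assumed CAD radius and tolerance, mm
print(f"radius {radius:.3f} mm, in tolerance: {abs(radius - nominal_r) < tol}")
```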


Advanced Concepts for Intelligent Vision Systems | 2007

Adaptive image content-based exposure control for scanning applications in radiography

Helene Schulerud; Jens T. Thielemann; Trine Kirkhus; Kristin Kaspersen; J.M. Østby; M Metaxas; Gary J. Royle; Jennifer A. Griffiths; Emily Cook; Colin Esbrand; S. Pani; C. Venanzi; Paul F. van der Stelt; G. Li; R. Turchetta; A. Fant; Sergios Theodoridis; Harris V. Georgiou; G. Hall; M. Noy; John Jones; J. Leaver; F. A. Triantis; A. Asimidis; N. Manthos; Renata Longo; A. Bergamaschi; Robert D. Speller

I-ImaS (Intelligent Imaging Sensors) is a European project which has designed and developed a new adaptive X-ray imaging system that uses on-line exposure control to create locally optimized images. The I-ImaS system allows for real-time image analysis during acquisition, thus enabling real-time exposure adjustment. This adaptive imaging system has the potential to create images with optimal information within a given dose constraint and to acquire optimally exposed images of objects with variable density in a single scan. In this paper we present the control system and results from initial tests on mammographic and cephalographic images. Furthermore, algorithms for visualization of the resulting images, which consist of unevenly exposed image regions, are developed and tested. The preliminary results show that the same image quality can be achieved at 30-70% lower dose with the I-ImaS system compared to conventional mammography systems.
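The I-ImaS control law itself is not given in the abstract; the sketch below only illustrates the general idea of content-based exposure control during a scan, scaling the next row's exposure from low-dose scout data so that denser regions receive more dose. All parameters are invented.

```python
# Generic content-based exposure control sketch (not the I-ImaS algorithm).
import numpy as np

def next_row_exposure(scout_rows, base_exposure=1.0, target=0.5, cap=3.0):
    """scout_rows: recent detector rows, normalized to [0, 1] transmission.
    Returns a per-column exposure factor for the next scan row."""
    transmission = np.clip(scout_rows.mean(axis=0), 1e-3, 1.0)
    scale = target / transmission              # denser tissue -> more exposure
    return np.clip(base_exposure * scale, 0.0, cap)

row = np.array([[0.9, 0.6, 0.25, 0.1]])        # synthetic scout transmission
print(next_row_exposure(row))                  # more dose where the object is dense
```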


Computer Vision and Pattern Recognition | 2011

A motion based real-time foveation control loop for rapid and relevant 3D laser scanning

Gøril Margrethe Breivik; Jens T. Thielemann; Asbjørn Berge; Øystein Skotheim; Trine Kirkhus

We present an implementation of a novel foveating 3D sensor concept, inspired by the human eye, which is intended to allow future robots to better interact with their surroundings. The sensor is based on time-of-flight laser scanning technology, where each range measurement is performed individually for increased quality. Micro-mirrors enable detailed control of where and when each sample point in the scene is acquired. By finding regions of interest (ROIs) and concentrating the data acquisition there, the spatial resolution or frame rate within these ROIs can be significantly increased compared to a non-foveating system. Foveation is enabled through a real-time implementation of a feedback control loop for the sensor hardware, based on vision algorithms for 3D scene analysis. In this paper, we describe and apply an algorithm for detecting ROIs based on motion detection in range data using background modeling. Heuristics are incorporated to cope with camera motion. We report first results from applying this algorithm to scenes with moving objects, and show that the foveation capability allows the frame rate to be increased by a factor of up to 8.2 compared to a non-foveating sensor, utilizing up to 99% of the potential frame rate increase. The incorporated heuristics significantly improve the foveation performance for moving-camera scenes.
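A minimal sketch of the ROI detection idea, under the assumption of a simple running-average background model over range images: pixels whose range deviates from the model are flagged as moving and would be prioritized for dense sampling. The update rate and threshold are invented, and the paper's camera-motion heuristics are omitted.

```python
# Assumed background-modeling scheme for motion-based ROI detection in range data.
import numpy as np

class RangeBackgroundModel:
    """Running-average background over range images; deviating pixels are ROIs."""
    def __init__(self, alpha=0.05, thresh=0.10):
        self.bg = None
        self.alpha, self.thresh = alpha, thresh

    def update(self, frame):
        if self.bg is None:
            self.bg = frame.astype(float).copy()
        moving = np.abs(frame - self.bg) > self.thresh   # candidate ROI pixels
        self.bg = (1.0 - self.alpha) * self.bg + self.alpha * frame
        return moving

model = RangeBackgroundModel()
for _ in range(50):                            # static scene: background settles
    model.update(np.full((32, 32), 2.0))
frame = np.full((32, 32), 2.0)
frame[10:14, 10:14] = 1.5                      # an object moves into view
print(model.update(frame).sum())               # 16 pixels flagged for foveation
```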


Proceedings of SPIE | 2009

System for conveyor belt part picking using structured light and 3D pose estimation

Jens T. Thielemann; Øystein Skotheim; Jens Olav Nygaard; T. Vollset

Automatic picking of parts is an important challenge in factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
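For illustration, a bare-bones Iterative Closest Point loop is sketched below, alternating nearest-neighbour matching with a closed-form SVD (Kabsch) rigid alignment. The paper's geometric-primitive pre-processing, which is what makes the real system converge robustly, is omitted, and the data are synthetic.

```python
# Minimal ICP sketch: NN matching + SVD (Kabsch) rigid update per iteration.
import numpy as np
from scipy.spatial import cKDTree

def icp(template, scene, n_iter=20):
    """Align template onto scene by iterating match-then-align."""
    tree = cKDTree(scene)
    src = template.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)               # match each point to the scene
        dst = scene[idx]
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:      # guard against reflections
            Vt[-1] *= -1.0
        R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_d        # apply the rigid update
    return src

# Demo: recover a small translation of an identical point cloud.
rng = np.random.default_rng(6)
scene = rng.normal(size=(300, 3))
template = scene + np.array([0.1, 0.0, 0.0])
print(np.abs(icp(template, scene) - scene).max())   # shrinks toward zero
```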
