Thomas Olsvik Opsahl
Norwegian Defence Research Establishment
Publication
Featured research published by Thomas Olsvik Opsahl.
Proceedings of SPIE | 2010
T. Skauli; Trym Vegard Haavardsholm; Ingebjørg Kåsen; Gunnar Arisholm; Amela Kavara; Thomas Olsvik Opsahl; Atle Skaugen
An airborne system for hyperspectral target detection is described. The main sensor is a HySpex pushbroom hyperspectral imager for the visible and near-infrared spectral range with 1600 pixels across track, supplemented by a panchromatic line imager. An optional third sensor can be added, either a SWIR hyperspectral camera or a thermal camera. In real time, the system performs radiometric calibration and georeferencing of the images, followed by image processing for target detection and visualization. The current version of the system implements only spectral anomaly detection, based on normal mixture models. Image processing runs on a PC with a multicore Intel processor and an Nvidia graphics processing unit (GPU). The processing runs in a software framework optimized for large sustained data rates. The platform is a Cessna 172 aircraft based close to FFI, modified with a camera port in the floor.
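The spectral anomaly detection step can be illustrated with a minimal single-Gaussian Mahalanobis detector. This is a hypothetical sketch, not the system's implementation: the system itself uses normal mixture models, and all names below are illustrative.

```python
import numpy as np

def anomaly_scores(cube):
    """Mahalanobis-distance anomaly scores for a hyperspectral cube
    of shape (rows, cols, bands), using a single-Gaussian background
    model estimated from the whole image."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(b))  # regularized inverse
    d = x - mu
    # Quadratic form d_i^T C^{-1} d_i for every pixel i
    scores = np.einsum("ij,jk,ik->i", d, inv, d)
    return scores.reshape(h, w)
```

A mixture-model detector would evaluate this distance per mixture component and combine the results, but the thresholding principle is the same: pixels whose spectra are unlikely under the background model are flagged as anomalies.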
Applied Optics | 2014
T. Skauli; Hans Erling Torkildsen; Stephane Nicolas; Thomas Olsvik Opsahl; Trym Vegard Haavardsholm; Ingebjørg Kåsen; Atle Rognmo
A multispectral camera concept is presented. The concept is based on a patterned filter in the focal plane, combined with scanning of the field of view. The filter layout has stripes of different bandpass filters extending orthogonally to the scan direction. The pattern of filter stripes is such that all bands are sampled multiple times, while minimizing the total duration of the sampling of a given scene point. As a consequence, the filter needs only a small part of the area of an image sensor, and the remaining area can be used for conventional 2D imaging. A demonstrator camera has been built with six bands in the visible and near infrared, as well as a panchromatic 2D imaging capability. Image recording and reconstruction are demonstrated, but the quality of image reconstruction is expected to be a main challenge for systems based on this concept. An important advantage is that the camera can potentially be made very compact and low cost. It is shown that, under reasonable assumptions, the proposed camera concept can be much smaller than a conventional imaging spectrometer: in principle, smaller in volume by a factor on the order of several hundred while collecting the same amount of light per multispectral band. This makes the proposed camera concept very interesting for small airborne platforms and other applications requiring compact spectral imagers.
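The stripe-pattern sampling scheme can be sketched under an assumed idealized geometry in which the scene advances exactly one sensor row per frame, so filter row r sees scene line y at frame y - r. The function and the pattern layout below are illustrative, not the demonstrator's actual processing:

```python
import numpy as np

def reconstruct_bands(frames, pattern):
    """Rebuild per-band images from a striped-filter scan.
    frames: (n_frames, n_rows, n_cols) raw sensor readout;
    pattern[r] gives the band behind filter row r. Each band occurs
    in several stripes, so every scene line is sampled multiple
    times per band; repeated samples are averaged."""
    n_frames, n_rows, n_cols = frames.shape
    n_lines = n_frames - n_rows + 1   # scene lines seen by every row
    y0 = n_rows - 1                   # first fully sampled scene line
    out, cnt = {}, {}
    for r, b in enumerate(pattern):
        t = y0 + np.arange(n_lines) - r   # frame where row r sees each line
        out[b] = out.get(b, 0) + frames[t, r, :]
        cnt[b] = cnt.get(b, 0) + 1
    return {b: out[b] / cnt[b] for b in out}
```

Because each band appears in multiple stripes, the repeated samples of the same scene line can also be compared against each other as a consistency check before averaging.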
IEEE Transactions on Image Processing | 2014
Erik Ringaby; Ola Friman; Per-Erik Forssén; Thomas Olsvik Opsahl; Trym Vegard Haavardsholm; Ingebjørg Kåsen
This paper deals with fast and accurate visualization of pushbroom image data from airborne and spaceborne platforms. A pushbroom sensor acquires images in a line-scanning fashion, and this results in scattered input data that need to be resampled onto a uniform grid for geometrically correct visualization. To this end, we model the anisotropic spatial dependence structure caused by the acquisition process. Several methods for scattered data interpolation are then adapted to handle the induced anisotropic metric and compared on the pushbroom image rectification problem. A trick that exploits the semi-ordered line structure of pushbroom data to reduce the computational complexity by several orders of magnitude is also presented.
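One way an anisotropic metric can be folded into a standard scattered-data interpolator is to rescale point offsets along and across the scan direction before computing distances. The inverse-distance-weighting sketch below is illustrative only; the function name, parameters, and scale factors are assumptions, not the paper's adapted methods:

```python
import numpy as np

def idw_anisotropic(points, values, query, along, s_along=1.0, s_across=0.2, k=8):
    """Inverse-distance interpolation at `query` under an anisotropic
    metric: offsets are scaled by s_along along the unit scan
    direction `along` and by s_across perpendicular to it, so
    neighbors in the tightly sampled direction count as 'closer'."""
    across = np.array([-along[1], along[0]])   # perpendicular unit vector
    d = points - query                          # (n, 2) offsets to query
    u = (d @ along) / s_along                   # scaled along-track part
    v = (d @ across) / s_across                 # scaled across-track part
    dist2 = u**2 + v**2
    idx = np.argsort(dist2)[:k]                 # k nearest in the new metric
    w = 1.0 / (dist2[idx] + 1e-12)
    return float(np.sum(w * values[idx]) / np.sum(w))
```

The same rescaling idea carries over to other interpolators (nearest neighbor, kriging-like weights): only the distance computation changes, not the interpolation machinery.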
Proceedings of SPIE | 2011
Thomas Olsvik Opsahl; Trym Vegard Haavardsholm; Ingebrigt Winjum
The paper describes the georeferencing part of an airborne hyperspectral imaging system based on pushbroom scanning. Using ray-tracing methods from computer graphics and a highly efficient representation of the digital elevation model (DEM), georeferencing of high resolution pushbroom images runs in real time by a large margin. By adapting the georeferencing to match the DEM resolution, the camera field of view and the flight altitude, the method has potential to provide real time georeferencing, even for HD video on a high resolution DEM when a graphics processing unit (GPU) is used for processing.
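The core geometric operation, intersecting a pixel's view ray with the terrain, can be sketched as simple ray marching over a height grid. The paper uses far more efficient ray tracing on a compact DEM representation; the function below and its parameters are purely illustrative:

```python
import numpy as np

def intersect_dem(origin, direction, dem, cell=1.0, step=0.5, max_dist=1e4):
    """March a view ray from the camera until it drops to or below the
    terrain surface, and return that crossing point.
    dem[i, j] is the terrain height at x = j*cell, y = i*cell."""
    p = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    t = 0.0
    while t < max_dist:
        q = p + t * d
        i, j = int(q[1] / cell), int(q[0] / cell)
        if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
            return None                  # ray left the DEM extent
        if q[2] <= dem[i, j]:
            return q                     # first sample at/below terrain
        t += step
    return None
```

A real-time implementation replaces the fixed-step march with hierarchical traversal of a multiresolution DEM, so that large terrain-free stretches of the ray are skipped in a few steps.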
Proceedings of SPIE | 2013
Ingmar Renhorn; Véronique Achard; Maria Axelsson; Koen W. Benoist; Dirk Borghys; Xavier Briottet; R.J. Dekker; Alwin Dimmeler; Ola Friman; Ingebjørg Kåsen; Stefania Matteoli; Maria Lo Moro; Thomas Olsvik Opsahl; Mark van Persie; Salvatore Resta; Hendrik Schilling; Piet B. W. Schwering; Michal Shimoni; Trym Vegard Haavardsholm; Françoise Viallefont
Seven countries within the European Defence Agency (EDA) framework are joining effort in a four year project (2009-2013) on Detection in Urban scenario using Combined Airborne imaging Sensors (DUCAS). Data has been collected in a joint field trial including instrumentation for 3D mapping, hyperspectral and high resolution imagery together with in situ instrumentation for target, background and atmospheric characterization. Extensive analysis with respect to detection and classification has been performed. Progress in performance has been shown using combinations of hyperspectral and high spatial resolution sensors.
Conference on Lasers and Electro-Optics | 2011
Torbjørn Skauli; Trym Vegard Haavardsholm; Ingebjørg Kåsen; Thomas Olsvik Opsahl; Amela Kavara; Atle Skaugen
Hyperspectral imaging exploits the information contained in the spectrum of light, and has many applications. Systems require specialized cameras and application-specific image processing. As an example, we describe an airborne system with real-time image processing.
Proceedings of SPIE | 2013
Thomas Olsvik Opsahl; Trym Vegard Haavardsholm
Images from airborne cameras can be a valuable resource for data fusion, but this typically requires them to be georeferenced. This usually implies that the information of every pixel should be accompanied by a single geographical position describing where the center of the pixel is located in the scene. This geospatial information is well suited for tasks like target positioning and orthorectification. But when it comes to fusion, a detailed description of the area on the ground contributing to the pixel signal would be preferable over a single position. In this paper we present a method for estimating these regions. Simple Monte Carlo simulations are used to combine the influences of the main geometrical aspects of the imaging process, such as the point spread function, the camera’s motion and the topography in the scene. Since estimates of the camera motion are uncertain to some degree, this is incorporated in the simulations as well. For every simulation, a pixel’s sampling point in the scene is estimated by intersecting a randomly sampled line of sight with a 3D-model of the scene. Based on the results of numerous simulations, the pixel’s sampling region can be represented by a suitable probability distribution. This will be referred to as the pixel’s footprint distribution (PFD). We present results for high resolution hyperspectral pushbroom images of an urban scene.
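The Monte Carlo scheme can be sketched for the simplified case of a flat scene at z = 0, where every sampled ray has a closed-form ground intersection. All names and noise models below are illustrative assumptions; the paper intersects sampled lines of sight with a full 3D scene model:

```python
import numpy as np

def footprint_samples(cam_pos, los, ang_sigma, pos_sigma, n=1000, rng=None):
    """Draw Monte Carlo samples of a pixel's ground footprint on the
    plane z = 0: perturb the camera position and the line-of-sight
    direction, intersect each sampled ray with the ground, and return
    the 2D hit points. Their empirical distribution approximates the
    pixel footprint distribution (PFD)."""
    rng = np.random.default_rng() if rng is None else rng
    hits = []
    for _ in range(n):
        p = cam_pos + rng.normal(0.0, pos_sigma, 3)   # position uncertainty
        d = los + rng.normal(0.0, ang_sigma, 3)       # pointing uncertainty
        d = d / np.linalg.norm(d)
        if d[2] >= 0:                                 # ray misses the ground
            continue
        t = -p[2] / d[2]                              # reach z = 0
        hits.append(p[:2] + t * d[:2])
    return np.array(hits)
```

In the full method the closed-form intersection is replaced by ray tracing against the scene model, and additional effects such as the point spread function are sampled the same way; the cloud of hit points is then summarized by a fitted probability distribution.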
ieee aerospace conference | 2017
Aiden Morrison; Nadezda Sokolova; Trym Vegard Haavardsholm; Ove Kent Hagen; Thomas Olsvik Opsahl; Kjetil Bergh Ånonsen; Kapteinløytnant Erik H. Eriksen
First responders and other emergency services personnel must often enter buildings that block GPS and other satellite navigation signals. Loss of navigation capability, combined with the fact that the buildings are often unknown to the personnel in question, makes it more difficult for individual team members to coordinate with one another, and difficult or impossible for the team leader to monitor and direct the actions of each team member. While inertial navigation and pedestrian dead reckoning provide some degree of navigation in GPS-denied environments, these solutions degrade with time and may require prohibitively large and expensive inertial sensors to navigate over extended periods; each user also accumulates independent positioning errors and thereby appears to ‘drift away’ from the others. This paper presents an implementation of a collaborative navigation system that combines user-to-user radio links, Global Navigation Satellite Systems (GNSS) when available, inertial navigation, pedestrian dead reckoning, and camera-based Simultaneous Localization and Mapping (SLAM) to provide a team of users with absolute and relative situational awareness of themselves and their team. Applying collaborative navigation to such a team provides the triple benefit of improved absolute navigation accuracy, improved relative navigation accuracy, and greatly enhanced situational awareness for all cooperating team members.
Proceedings of SPIE | 2016
Hans Erling Torkildsen; Trym Vegard Haavardsholm; Thomas Olsvik Opsahl; Urmila Datta; Atle Skaugen; T. Skauli
Cameras with filters in the focal plane provide the most compact solution for multispectral imaging. A small UAV can carry multiple such cameras, providing large area coverage rate at high spatial resolution. We investigate a camera concept where a patterned bandpass filter with six bands provides multiple interspersed recordings of all bands, enabling consistency checks for improved spectral integrity. A compact sensor payload has been built with multiple cameras and a data acquisition computer. Recorded imagery demonstrates the potential for large area coverage with good spectral integrity.
International Workshop on the Analysis of Multi-temporal Remote Sensing Images | 2011
Salvatore Resta; Nicola Acito; Marco Diani; Giovanni Corsini; Thomas Olsvik Opsahl; Trym Vegard Haavardsholm