
Publications


Featured research published by Håkon Kvale Stensland.


ACM SIGMM Conference on Multimedia Systems | 2013

Bagadus: an integrated system for arena sports analytics: a soccer case study

Pål Halvorsen; Simen Sægrov; Asgeir Mortensen; David K. C. Kristensen; Alexander Eichhorn; Magnus Stenhaug; Stian Dahl; Håkon Kvale Stensland; Vamsidhar Reddy Gaddam; Carsten Griwodz; Dag Johansen

Sports analytics is a growing area of interest, both from a computer systems view, to manage the technical challenges, and from a sports performance view, to aid the development of athletes. In this paper, we present Bagadus, a prototype sports analytics application using soccer as a case study. Bagadus integrates a sensor system, a soccer analytics annotation system and a video processing system using a video camera array. A prototype is currently installed at Alfheim Stadium in Norway, and in this paper, we describe how the system can follow and zoom in on particular players, play out events from the games using stitched panorama video or a camera-switching mode, and create video summaries based on queries to the sensor system. Furthermore, we evaluate the system from a systems point of view, benchmarking different approaches, algorithms and trade-offs.
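
The summary-generation step can be sketched as follows: given event timestamps returned by a query to the sensor system, cut a padded clip around each event and merge overlapping clips. This is a minimal illustration only, not the Bagadus implementation; the event types, padding values and query results are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # e.g. "goal", "sprint" (illustrative event types)
    t: float        # event time in seconds from kickoff

def summary_intervals(events, pre=10.0, post=15.0):
    """Turn event timestamps into merged (start, end) clip intervals."""
    raw = sorted((max(0.0, e.t - pre), e.t + post) for e in events)
    merged = []
    for start, end in raw:
        if merged and start <= merged[-1][1]:          # clips overlap: extend the last one
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Hypothetical events that a query to the sensor system might return.
events = [Event("sprint", 312.4), Event("sprint", 318.0), Event("goal", 1245.7)]
for start, end in summary_intervals(events):
    print(f"clip {start:.1f}s - {end:.1f}s")
```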


ACM Transactions on Multimedia Computing, Communications, and Applications | 2014

Bagadus: An integrated real-time system for soccer analytics

Håkon Kvale Stensland; Vamsidhar Reddy Gaddam; Marius Tennøe; Espen Helgedagsrud; Mikkel Næss; Henrik Kjus Alstad; Asgeir Mortensen; Ragnar Langseth; Sigurd Ljødal; Øystein Landsverk; Carsten Griwodz; Pål Halvorsen; Magnus Stenhaug; Dag Johansen

The importance of winning has increased the role of performance analysis in the sports industry, and this underscores how statistics and technology keep changing the way sports are played. Thus, this is a growing area of interest, both from a computer systems view, in managing the technical challenges, and from a sports performance view, in aiding the development of athletes. In this respect, Bagadus is a real-time prototype of a sports analytics application using soccer as a case study. Bagadus integrates a sensor system, a soccer analytics annotation system, and a video processing system using a video camera array. A prototype is currently installed at Alfheim Stadium in Norway, and in this article, we describe how the system can be used in real time to play back events. The system supports both stitched panorama video and camera-switching modes and creates video summaries based on queries to the sensor system. Moreover, we evaluate the system from a systems point of view, benchmarking different approaches, algorithms, and trade-offs, and show how the system runs in real time.


ACM SIGMM Conference on Multimedia Systems | 2014

Soccer video and player position dataset

Svein Arne Pettersen; Dag Johansen; Håvard D. Johansen; Vegard Berg-Johansen; Vamsidhar Reddy Gaddam; Asgeir Mortensen; Ragnar Langseth; Carsten Griwodz; Håkon Kvale Stensland; Pål Halvorsen

This paper presents a dataset of body-sensor traces and corresponding videos from several professional soccer games captured in late 2013 at the Alfheim Stadium in Tromsø, Norway. Player data, including field position, heading, and speed, are sampled at 20 Hz using the highly accurate ZXY Sport Tracking system. Additional per-player statistics, like total distance covered and distance covered in different speed classes, are also included at a 1 Hz sampling rate. The provided videos are in high definition and captured using two stationary camera arrays mounted at an elevated position above the tribune area close to the center of the field. The camera array is configured to cover the entire soccer field, and each camera can be used individually or combined into a stitched panorama video. This combination of body-sensor data and videos enables computer-vision algorithms for feature extraction, object tracking, background subtraction, and similar tasks to be tested against the ground truth contained in the sensor traces.
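
As a rough illustration of how the 20 Hz sensor traces can be aligned with the video, the sketch below maps sample timestamps to frame indices. The frame rate, column names and file name are assumptions for illustration and are not taken from the dataset documentation.

```python
import csv

FPS = 30.0           # assumed video frame rate
SENSOR_HZ = 20.0     # sampling rate of the ZXY position traces

def frame_for_timestamp(t_seconds: float, fps: float = FPS) -> int:
    """Map a sensor timestamp (seconds from video start) to a frame index."""
    return int(round(t_seconds * fps))

def load_positions(path: str):
    """Read (timestamp, x, y) rows from a CSV file with assumed column names."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield float(row["timestamp"]), float(row["x_pos"]), float(row["y_pos"])

# Example: annotate each sample with the video frame it falls on.
# for t, x, y in load_positions("player_7.csv"):   # hypothetical file name
#     print(frame_for_timestamp(t), x, y)
print(frame_for_timestamp(2.0))  # a sample at 2 s maps to frame 60 at 30 fps
```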


International Symposium on Multimedia | 2011

Improved Multi-Rate Video Encoding

Dag Haavi Finstad; Håkon Kvale Stensland; Håvard Espeland; Pål Halvorsen

Adaptive HTTP streaming is frequently used for both live and on-demand video delivery over the Internet. Adaptiveness is often achieved by encoding the video stream in multiple qualities (and thus bit rates), and then transparently switching between the qualities according to the bandwidth fluctuations and the amount of resources available for decoding the video content on the end device. For this kind of video delivery over the Internet, H.264 is currently the most used codec, but VP8 is an emerging open-source codec expected to compete with H.264 in the streaming scenario. The challenge is that, when encoding video for adaptive video streaming, both VP8 and H.264 run once for each quality layer, consuming both time and resources, which is especially problematic in a live video delivery scenario. In this paper, we address these resource consumption issues by proposing a method for reusing redundant steps in a video encoder, emitting multiple outputs with varying bit rates and qualities. It shares and reuses the computationally heavy analysis steps, notably macro-block mode decision, intra prediction and inter prediction, between the instances, and outputs video at several rates. The method has been implemented in the VP8 reference encoder, and experimental results show that we can encode the different quality layers at the same rates and qualities as the VP8 reference encoder, while reducing the encoding time significantly.
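
The core idea, running the heavy analysis once and reusing its decisions for every output rate, can be illustrated with a toy sketch that has nothing to do with the real VP8 code: a per-block "mode decision" is computed once and then shared across several quantization levels.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = np.zeros((64, 64), dtype=np.float32)
frame[:, 32:] = rng.integers(0, 256, size=(64, 32))      # right half textured, left half flat
BLOCK = 16

def analyse_block(block):
    """Toy stand-in for the heavy analysis step (e.g. mode decision):
    call a block 'flat' if it has low variance, 'detail' otherwise."""
    return "flat" if block.var() < 500 else "detail"

def encode_block(block, mode, qp):
    """Toy stand-in for the per-rate encoding: coarser quantization for flat blocks."""
    step = qp * 2 if mode == "flat" else qp
    return np.round(block / step) * step

# Run the expensive analysis ONCE per block ...
modes = {}
for y in range(0, frame.shape[0], BLOCK):
    for x in range(0, frame.shape[1], BLOCK):
        modes[(y, x)] = analyse_block(frame[y:y + BLOCK, x:x + BLOCK])

# ... then reuse the decisions for every quality layer (bit rate).
for qp in (4, 8, 16):                                     # three toy quality layers
    out = np.empty_like(frame)
    for (y, x), mode in modes.items():
        out[y:y + BLOCK, x:x + BLOCK] = encode_block(frame[y:y + BLOCK, x:x + BLOCK], mode, qp)
    print(f"qp={qp}: mean abs error {np.abs(out - frame).mean():.1f}")
```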


International Symposium on Multimedia | 2013

Efficient Implementation and Processing of a Real-Time Panorama Video Pipeline

Marius Tennøe; Espen Helgedagsrud; Mikkel Næss; Henrik Kjus Alstad; Håkon Kvale Stensland; Vamsidhar Reddy Gaddam; Dag Johansen; Carsten Griwodz; Pål Halvorsen

High-resolution, wide field-of-view video generated from multiple camera feeds has many use cases. However, processing the different steps of a panorama video pipeline in real time is challenging due to the high data rates and stringent timeliness requirements. We use panorama video in a sports analysis system where video events must be generated in real time. In this respect, we present a system for real-time panorama video generation from an array of low-cost CCD HD video cameras. We describe how we have implemented the different components and evaluated alternatives. We also present performance results with and without co-processors like graphics processing units (GPUs), and we evaluate each individual component and show how the entire pipeline is able to run in real time on commodity hardware.
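
The paper describes a custom GPU pipeline; purely as a point of reference for the stitching step, a minimal offline sketch using OpenCV's stock stitcher (not the authors' pipeline, and far from real-time) might look like this. The input file names are placeholders.

```python
import cv2

# Placeholder paths standing in for one synchronised frame from each camera.
frame_paths = ["cam0.png", "cam1.png", "cam2.png", "cam3.png"]
frames = [cv2.imread(p) for p in frame_paths]

if any(f is None for f in frames):
    raise SystemExit("missing input frames - replace the placeholder paths")

# OpenCV's general-purpose stitcher; the real-time pipeline in the paper instead
# relies on fixed camera geometry and GPU processing to meet its latency budget.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.png", panorama)
else:
    print("stitching failed with status", status)
```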


International Conference on Parallel Processing | 2011

P2G: A Framework for Distributed Real-Time Processing of Multimedia Data

Håvard Espeland; Paul B. Beskow; Håkon Kvale Stensland; Preben N. Olsen; Ståle Kristoffersen; Carsten Griwodz; Pål Halvorsen

The computational demands of multimedia data processing are steadily increasing as consumers call for progressively more complex and intelligent multimedia services. New multi-core hardware architectures provide the required resources, but writing parallel, distributed applications remains a labor-intensive task compared to their sequential counterparts. For this reason, Google and Microsoft implemented their respective processing frameworks, MapReduce and Dryad, which allow developers to think sequentially yet benefit from parallel and distributed execution. An inherent limitation in the design of these batch processing frameworks is their inability to express arbitrarily complex workloads. The dependency graphs of the frameworks are often limited to directed acyclic graphs, or even pre-determined stages. This is particularly problematic for video encoding and other algorithms that depend on iterative execution. With the Nornir runtime system for parallel programs, which is a Kahn Process Network implementation, we addressed and solved several of these limitations. However, it is more difficult to use than other frameworks due to its complex programming model. In this paper, we build on the knowledge gained from Nornir and present a new framework, called P2G, designed specifically for developing and processing distributed real-time multimedia data. P2G supports arbitrarily complex dependency graphs with cycles, branches and deadlines, and provides both data- and task-parallelism. The framework is implemented to scale transparently with available (heterogeneous) resources, a concept familiar from the cloud computing paradigm. We have implemented an (interchangeable) P2G kernel language to ease development. In this paper, we present a proof-of-concept implementation of a P2G execution node and some experimental examples using complex workloads like Motion JPEG and K-means clustering. The results show that the P2G system is a feasible approach to multimedia processing.
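
P2G's kernel language is not shown here; instead, the sketch below only illustrates the kind of iterative (cyclic) workload the framework targets, using K-means clustering, one of the paper's example workloads, with a data-parallel assignment step and a sequential reduction step. All names and parameters are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def assign_chunk(args):
    """Data-parallel kernel: assign each point in a chunk to its nearest centre."""
    chunk, centres = args
    return [min(range(len(centres)), key=lambda c: (p - centres[c]) ** 2) for p in chunk]

def kmeans(points, k=3, iterations=10, workers=4):
    centres = random.sample(points, k)
    chunks = [points[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(iterations):              # the cycle in the dependency graph
            labels_per_chunk = list(pool.map(assign_chunk, [(c, centres) for c in chunks]))
            # Reduction step: recompute the centres from all assignments.
            sums, counts = [0.0] * k, [0] * k
            for chunk, labels in zip(chunks, labels_per_chunk):
                for p, label in zip(chunk, labels):
                    sums[label] += p
                    counts[label] += 1
            centres = [sums[c] / counts[c] if counts[c] else centres[c] for c in range(k)]
    return centres

if __name__ == "__main__":
    data = [random.gauss(mu, 1.0) for mu in (0, 10, 20) for _ in range(200)]
    print(kmeans(data))
```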


ACM Multimedia | 2016

Computer aided disease detection system for gastrointestinal examinations

Michael Riegler; Konstantin Pogorelov; Jonas Markussen; Mathias Lux; Håkon Kvale Stensland; Thomas de Lange; Carsten Griwodz; Pål Halvorsen; Dag Johansen; Peter T. Schmidt; Sigrun Losada Eskeland

In this paper, we present the computer-aided diagnosis part of the EIR system [9], which can support medical experts in the task of detecting diseases and anatomical landmarks in the gastrointestinal (GI) system. This includes automatic detection of important findings in colonoscopy videos and marking them for the doctors. EIR is designed in a modular way so that it can easily be extended for other diseases. For this demonstration, we will focus on polyp detection, as our system is trained with the ASU-Mayo Clinic polyp database [5].
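
The detection models themselves are outside the scope of this sketch; as a hedged illustration of the "marking findings for the doctors" step only, the snippet below groups consecutive frames whose hypothetical per-frame polyp scores exceed a threshold into review intervals. The scores, threshold and frame rate are assumptions, not part of the EIR system.

```python
def findings_from_scores(scores, threshold=0.8, fps=25.0):
    """Group consecutive frames with score >= threshold into findings,
    reported as (start_seconds, end_seconds) intervals for review."""
    findings, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            findings.append((start / fps, i / fps))
            start = None
    if start is not None:
        findings.append((start / fps, len(scores) / fps))
    return findings

# Hypothetical per-frame polyp probabilities from a detector.
scores = [0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.88, 0.91]
print(findings_from_scores(scores))   # -> [(0.08, 0.2), (0.28, 0.36)]
```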


ACM SIGMM Conference on Multimedia Systems | 2014

Be your own cameraman: real-time support for zooming and panning into stored and live panoramic video

Vamsidhar Reddy Gaddam; Ragnar Langseth; Håkon Kvale Stensland; Pierre Gurdjos; Vincent Charvillat; Carsten Griwodz; Dag Johansen; Pål Halvorsen

High-resolution panoramic video with a wide field of view is popular in many contexts. However, in many settings, like surveillance and sports, it is often desirable to zoom and pan into the generated video. A challenge in this respect is real-time support, but in this demo, we present an end-to-end real-time panorama system with interactive zoom and panning. Our system, installed at Alfheim Stadium, home of a Norwegian premier league soccer team, generates a cylindrical panorama live from five 2K cameras, and the perspective is corrected in real time when the view is presented to the client. This gives a better and more natural zoom compared to existing systems that use perspective panoramas and implement zoom as a plain crop. Our experimental results indicate that virtual views can be generated well within the per-frame budget; on a GPU, the processing requirement is about 10 milliseconds per frame. The proposed demo lets participants interactively zoom and pan into stored panorama videos generated at Alfheim Stadium and from a live two-camera array on-site.
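
A simplified version of the virtual-view generation can be sketched as a remapping from a cylindrical panorama to a perspective view for a given pan angle. The projection model, field of view and focal length below are assumptions for illustration, not the system's calibrated parameters, and this sketch makes no attempt to match the reported 10 ms GPU budget.

```python
import cv2
import numpy as np

def perspective_view(pano, fov_pano_deg, pan_deg, out_size=(1280, 720), f_out=1000.0):
    """Extract a perspective-corrected virtual view from a cylindrical panorama.
    Assumes the panorama covers fov_pano_deg horizontally and uses the same
    pixels-per-radian scale vertically; pan_deg rotates the virtual camera."""
    h_p, w_p = pano.shape[:2]
    w_o, h_o = out_size
    fov = np.radians(fov_pano_deg)
    px_per_rad = w_p / fov                         # cylindrical focal length in pixels
    pan = np.radians(pan_deg)

    u, v = np.meshgrid(np.arange(w_o), np.arange(h_o))
    x = u - w_o / 2.0                              # ray directions of the virtual camera
    y = v - h_o / 2.0
    z = np.full_like(x, f_out, dtype=np.float64)

    # Rotate the rays around the vertical axis by the pan angle.
    X = x * np.cos(pan) + z * np.sin(pan)
    Z = -x * np.sin(pan) + z * np.cos(pan)

    theta = np.arctan2(X, Z)                       # angle on the cylinder
    height = y / np.sqrt(X * X + Z * Z)            # normalised height on the cylinder

    map_x = (theta + fov / 2.0) * px_per_rad
    map_y = h_p / 2.0 + height * px_per_rad
    return cv2.remap(pano, map_x.astype(np.float32), map_y.astype(np.float32),
                     cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)

# Usage (placeholder file name):
# view = perspective_view(cv2.imread("pano.png"), fov_pano_deg=160, pan_deg=20)
```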


International Conference on Parallel Processing | 2012

LEARS: A Lockless, Relaxed-Atomicity State Model for Parallel Execution of a Game Server Partition

Kjetil Raaen; Håvard Espeland; Håkon Kvale Stensland; Andreas Petlund; Pål Halvorsen; Carsten Griwodz

Supporting thousands of interacting players in a virtual world poses huge processing challenges. Existing work that addresses the challenge uses a variety of spatial partitioning algorithms to distribute the load. If, however, a large number of players needs to interact tightly across an area of the game world, spatial partitioning cannot subdivide this area without incurring massive communication costs, latency or inconsistency. It is a major challenge for game engines to scale such areas to the largest possible number of players. In a deviation from earlier thinking, we apply parallelism on multi-core architectures to increase scalability. In this paper, we evaluate the design and implementation of our game server architecture, called LEARS, which allows for lock-free parallel processing of a single spatial partition by considering every game cycle an atomic tick. Our prototype is evaluated using traces from live game sessions where we measure the server response time for all objects that need timely updates. We also measure how the response time for the multi-threaded implementation varies with the number of threads used. Our results show that the challenge of scaling up a game server can be an embarrassingly parallel problem.
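
The LEARS code itself is not reproduced here; the sketch below only illustrates the relaxed-atomicity tick model: within one game cycle, all object updates run in parallel and communicate exclusively through per-object message queues, which are applied at the start of the next tick, so no locks are taken on shared state. The classes and the toy damage rule are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

class Player:
    def __init__(self, name):
        self.name = name
        self.health = 100
        self.inbox = Queue()           # thread-safe queue of incoming effects

    def apply_inbox(self):
        """Start of tick: apply the effects queued during the previous tick."""
        while not self.inbox.empty():
            self.health += self.inbox.get()

    def update(self, others):
        """One tick of toy game logic: attack everyone else by queueing damage.
        No shared state is written directly, so no locks are needed."""
        for other in others:
            if other is not self:
                other.inbox.put(-1)    # queue 1 point of damage

def run_ticks(players, ticks=5, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(ticks):
            for p in players:
                p.apply_inbox()        # sequential phase between ticks
            list(pool.map(lambda p: p.update(players), players))  # parallel phase

players = [Player(f"p{i}") for i in range(4)]
run_ticks(players)
# Each player took 3 damage in each of the 4 applied ticks (-> 88);
# the final tick's damage is still queued for the next cycle.
print([(p.name, p.health) for p in players])
```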


ACM Multimedia | 2016

Right inflight?: a dataset for exploring the automatic prediction of movies suitable for a watching situation

Michael Riegler; Martha Larson; Concetto Spampinato; Pål Halvorsen; Mathias Lux; Jonas Markussen; Konstantin Pogorelov; Carsten Griwodz; Håkon Kvale Stensland

In this paper, we present Right Inflight, a dataset developed to support the exploration of the match between video content and the situation in which that content is watched. Specifically, we look at videos that are suitable to be watched on an airplane, where the main assumption is that viewers watch movies with the intent of relaxing and letting time pass quickly, despite the inconvenience and discomfort of flight. The aim of the dataset is to support the development of recommender systems, as well as computer vision and multimedia retrieval algorithms capable of automatically predicting which videos are suitable for inflight consumption. Our ultimate goal is to promote a deeper understanding of how people experience video content, and of how technology can support people in finding or selecting video content that helps them regulate their internal states in certain situations. Right Inflight consists of 318 human-annotated movies, for which we provide links to trailers, a set of pre-computed low-level visual, audio and text features, as well as user ratings. The annotation was performed by crowdsourcing workers, who were asked to judge the appropriateness of movies for inflight consumption.
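
As a hedged sketch of the intended use of such a dataset, the snippet below cross-validates a simple classifier on feature vectors and binary suitability labels. The features and labels are synthetic stand-ins; the dataset's actual file layout and feature definitions are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_movies, n_features = 318, 20                 # 318 movies, as in the dataset
X = rng.normal(size=(n_movies, n_features))    # stand-in for visual/audio/text features
y = (X[:, 0] + 0.5 * rng.normal(size=n_movies) > 0).astype(int)  # stand-in suitability labels

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```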

Collaboration


Dive into Håkon Kvale Stensland's collaborations.

Top Co-Authors

Jonas Markussen (Simula Research Laboratory)