Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Stefan Axelsson is active.

Publication


Featured research published by Stefan Axelsson.


Availability, Reliability and Security | 2010

Using Normalized Compression Distance for Classifying File Fragments

Stefan Axelsson

We have applied the generalized and universal distance measure NCD (Normalized Compression Distance) to the problem of determining the types of file fragments by example. A corpus of files that can be redistributed to other researchers in the field was developed, and NCD, with k-nearest-neighbour as the classification algorithm, was applied to a random selection of file fragments. The experiment covered circa 2000 fragments from 17 different file types. While the overall accuracy of the n-valued classification only improved on the prior probability of the class, from approximately 6% to circa 50% overall, the classifier reached accuracies of 85-100% for the most successful file types.
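The NCD itself is simple to compute: NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the length of the compressed input. A minimal sketch, using zlib as the compressor and a simple majority-vote k-NN (the paper's corpus, compressor choice and parameters are not reproduced here):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance with zlib as the compressor C."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(fragment: bytes, examples: list[tuple[bytes, str]], k: int = 3) -> str:
    """Label a fragment by majority vote among its k NCD-nearest examples."""
    nearest = sorted(examples, key=lambda ex: ncd(fragment, ex[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

Similar data compresses well when concatenated, so fragments of the same file type tend to have a lower NCD to each other than to fragments of other types.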


Digital Investigation | 2014

VMI-PL: A monitoring language for virtual platforms using virtual machine introspection

Florian Westphal; Stefan Axelsson; Christian Neuhaus; Andreas Polze

With the growth of virtualization and cloud computing, more and more forensic investigations rely on being able to perform live forensics on a virtual machine using virtual machine introspection (VMI). Inspecting a virtual machine through its hypervisor enables investigation without risking contamination of the evidence, crashing the computer, etc. To make these techniques more accessible to the investigator/researcher, we have developed a new VMI monitoring language. This language is based on a review of the most commonly used VMI techniques to date, and it enables the user to monitor the virtual machine's memory, events and data streams. A prototype of our monitoring system was implemented on KVM, though implementation on any hypervisor that uses the common x86 virtualization hardware assistance should be straightforward. Our prototype outperforms the proprietary VMware VProbes in many cases, with a maximum performance loss of 18% for a realistic test case, which we consider acceptable. Our implementation is freely available under a liberal software distribution license.


Knowledge and Information Systems | 2012

Similarity assessment for removal of noisy end user license agreements

Niklas Lavesson; Stefan Axelsson

In previous work, we have shown the possibility to automatically discriminate between legitimate software and spyware-associated software by performing supervised learning of end user license agreements (EULAs). However, the number of false positives (spyware classified as legitimate software) was too large for practical use. In this study, the false positives problem is addressed by removing noisy EULAs, which are identified by performing similarity analysis of the previously studied EULAs. Two candidate similarity analysis methods for this purpose are experimentally compared: cosine similarity assessment in conjunction with latent semantic analysis (LSA) and normalized compression distance (NCD). The results show that the number of false positives can be reduced significantly by removing noise identified by either method. However, the experimental results also indicate subtle performance differences between LSA and NCD. To improve the performance even further and to decrease the large number of attributes, the categorical proportional difference (CPD) feature selection algorithm was applied. CPD managed to greatly reduce the number of attributes while at the same time increasing classification performance on the original data set, as well as on the LSA- and NCD-based data sets.
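As an illustration of the first candidate method, a minimal sketch of cosine similarity over plain term-frequency vectors (the study applies it to LSA-projected document vectors; the bag-of-words simplification here is an assumption for illustration only):

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between simple term-frequency vectors of two texts."""
    ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(ta[w] * tb[w] for w in ta.keys() & tb.keys())
    norm_a = math.sqrt(sum(v * v for v in ta.values()))
    norm_b = math.sqrt(sum(v * v for v in tb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

A EULA whose similarity to every other EULA in its class is unusually low would be a candidate "noisy" document for removal.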


Operations Research/Computer Science Interfaces Series | 2015

Potential Fields in Modeling Transport over Water

Ewa Osekowska; Stefan Axelsson; Bengt Carlsson

Without an explicit road-like regulation, following the proper sailing routes and practices is still a challenge mostly addressed using seamen's know-how and experience. This chapter focuses on the problem of modeling ship movements over water with the aim to extract and represent this kind of knowledge. The purpose of the developed modeling method, inspired by the theory of potential fields, is to capture the process of navigation and piloting through the observation of ship behaviors in transport over water on narrow waterways. When successfully modeled, that knowledge can be subsequently used for various purposes. Here, the models of typical ship movements and behaviors are used to provide a visual insight into the actual normal traffic properties (maritime situational awareness) and to warn about potentially dangerous traffic behaviors (anomaly detection). A traffic modeling and anomaly detection prototype system, STRAND, implements the potential-field-based method for a collected set of AIS data. A quantitative case study was carried out to evaluate the applicability and performance of the implemented modeling method. The case study focuses on quantifying the detections for varying geographical resolution of the detection process. The potential fields extract and visualize the actual behavior patterns, such as the right-hand sailing rule and speed limits, without any prior assumptions or information introduced in advance. The display of patterns of correct (normal) behavior aids the choice of an optimal path, in contrast to the anomaly detection which notifies about possible traffic incidents. A tool visualizing the potential fields may aid traffic surveillance and incident response, help recognize traffic regulation and legislative issues, and facilitate the process of waterways development and maintenance.
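A toy sketch of the grid-based idea (an assumed simplification, not STRAND's actual model): observed ship positions deposit charges in grid cells, the cell size plays the role of the geographical resolution, and a position falling in a low-potential cell is flagged as anomalous.

```python
from collections import Counter

class PotentialGrid:
    """Toy potential field: count AIS position reports per grid cell."""

    def __init__(self, cell_size: float, threshold: int = 1):
        self.cell_size = cell_size      # geographical resolution, in degrees
        self.threshold = threshold      # minimum charge considered "normal"
        self.charges: Counter = Counter()

    def _cell(self, lat: float, lon: float) -> tuple[int, int]:
        return (int(lat // self.cell_size), int(lon // self.cell_size))

    def observe(self, lat: float, lon: float) -> None:
        """Each observed position leaves a charge in its cell."""
        self.charges[self._cell(lat, lon)] += 1

    def is_anomalous(self, lat: float, lon: float) -> bool:
        """A position in a cell with too little accumulated potential is flagged."""
        return self.charges[self._cell(lat, lon)] < self.threshold
```

A coarser `cell_size` smooths the field and tolerates more deviation; a finer one detects smaller departures from the established lanes, at the cost of more false alarms, which mirrors the resolution trade-off studied in the chapter.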


Digital Investigation | 2014

Key-hiding on the ARM platform

Alexander Nilsson; Marcus Andersson; Stefan Axelsson

To combat the problem of encryption key recovery from main memory using cold boot attacks, various solutions have been suggested, but most of these have been implemented on the x86 architecture, which is not prevalent in the smartphone market, where instead ARM dominates. One solution does exist for the ARM architecture, but it is limited to key sizes of 128 bits because it cannot utilise the full width of the CPU registers used for key storage. We developed a test implementation of CPU-bound key storage with 256-bit capacity, without using more hardware resources than the previous solution. We also show that access to the key can be restricted for programs executing outside the kernel space.


Advanced Data Mining and Applications | 2012

Using Data Mining for Static Code Analysis of C

Hannes Tribus; Irene Morrigl; Stefan Axelsson

Static analysis of source code is one way to find bugs and problems in large software projects. Many approaches to static analysis have been proposed. We have proposed a novel way of performing static analysis: instead of methods based on semantic/logic analysis, we apply machine learning directly to the problem. This has many benefits. Learning by example means trivial programmer adaptability (a problem with many other approaches), and learning systems also have the advantage of being able to generalise and find problematic source code constructs that are not exactly as the programmer initially thought, to name a few. Due to the general interest in code quality and the availability of large open source code bases as test and development data, we believe this problem should be of interest to the larger data mining community. In this work we extend our previous approach and investigate a new way of doing feature selection, and test the suitability of many different learning algorithms. We do this on a selection of problems adapted from large, publicly available open source projects. Many algorithms were much more successful than our previous proof-of-concept, and deliver practical levels of performance. This is clearly an interesting and minable problem.


International Conference on Digital Forensics | 2015

Do Data Loss Prevention Systems Really Work?

Sara Ghorbanian; Glenn Fryklund; Stefan Axelsson

The threat of insiders stealing valuable corporate data continues to escalate. The inadvertent exposure of internal data has also become a major problem. Data loss prevention systems are designed to monitor and block attempts at exposing sensitive data to the outside world. They have become very popular, to the point where forensic investigators have to take these systems into account. This chapter describes the first experimental analysis of data loss prevention systems that attempts to ascertain their effectiveness at stopping the unauthorized exposure of sensitive data and the ease with which the systems could be circumvented. Four systems are evaluated (three of them in detail). The results point to considerable weaknesses in terms of general effectiveness and the ease with which the systems could be disabled.


International Conference on Digital Forensics | 2013

File Fragment Analysis Using Normalized Compression Distance

Stefan Axelsson; Kamran Ali Bajwa; Mandhapati Venkata Srikanth

The first step when recovering deleted files using file carving is to identify the file type of a block, also called file fragment analysis. Several researchers have demonstrated the applicability of Kolmogorov complexity methods such as the normalized compression distance (NCD) to this problem. NCD methods compare the results of compressing a pair of data blocks with the compressed concatenation of the pair. One parameter that is required is the compression algorithm to be used. Prior research has identified the NCD compressor properties that yield good performance. However, no studies have focused on its applicability to file fragment analysis. This paper describes the results of experiments on a large corpus of files and file types with different block lengths. The experimental results demonstrate that, in the case of file fragment analysis, compressors with the desired properties do not perform statistically better than compressors with less computational complexity.


Digital Investigation | 2010

The Normalised Compression Distance as a file fragment classifier

Stefan Axelsson


The 27th annual workshop of the Swedish Artificial Intelligence Society (SAIS); 14-15 May 2012; Örebro; Sweden | 2012

Money Laundering Detection using Synthetic Data

Edgar Alonso Lopez-Rojas; Stefan Axelsson

Collaboration


Dive into Stefan Axelsson's collaboration.

Top Co-Authors

Edgar Alonso Lopez-Rojas

Blekinge Institute of Technology

Alexander Nilsson

Blekinge Institute of Technology

Bengt Carlsson

Blekinge Institute of Technology

Dan Gorton

Blekinge Institute of Technology

Ewa Osekowska

Blekinge Institute of Technology

Florian Westphal

Blekinge Institute of Technology

Hannes Tribus

Blekinge Institute of Technology

Irene Morrigl

Blekinge Institute of Technology

Marcus Andersson

Blekinge Institute of Technology
