Dag Stranneby
Örebro University
Publications
Featured research published by Dag Stranneby.
Electronic Components and Technology Conference | 2002
Katarina Boustedt; Katrin Persson; Dag Stranneby
With the recent trend in microelectronics towards incorporating more and more MEMS (micro electro mechanical systems) structures, lowering the overall cost becomes vital. One major cost driver in today's MEMS is the packaging. Many MEMS structures require some level of low pressure for full-quality operation, and some may even need vacuum to function properly. Different MEMS packaging strategies exist on the market, and they can be divided into two approaches. The first protects the wafer temporarily during wafer scribing or dicing; the second provides a permanent seal through full wafer bonding before scribing and dicing. The latter, permanent method allows very low-cost packaging to be selected without hermeticity as a requirement, whereas in the temporary-seal methods the seal is removed after dicing and the sensitive structures become unprotected again. Using flip chip for MEMS has the benefit of providing MEMS structures with a covering lid, the chip itself. A number of flip chip MEMS interconnection methods presented in the literature are described.
Journal of Rehabilitation Research and Development | 2009
Parivash Ranjbar; Dag Stranneby; Erik Borg
This study compared three different signal-processing principles (transposing, modulating, and filtering), realized as eight basic algorithms, to find the principle(s)/algorithm(s) that resulted in the best tactile identification of environmental sounds. The subjects were 19 volunteers (9 female/10 male) who were between 18 and 50 years old and profoundly hearing impaired. We processed sounds produced by 45 representative environmental events with the different algorithms and presented them to subjects as tactile stimuli using a wide-band stationary vibrator. We compared eight algorithms based on the three principles (plus one unprocessed, as reference). The subjects identified the stimuli by choosing among 10 alternatives drawn from the 45 events. We found that algorithm and subject were significant factors affecting the results (repeated-measures analysis of variance, p < 0.001). We also found large differences between individuals regarding which algorithm was best. The test-retest variability was small (mean ± 95% confidence interval = 8 ± 3 percentage units), and no correlation was noted between identification score and individual vibratory thresholds. One transposing algorithm and two modulating algorithms led to significantly better results than did the unprocessed signals (p < 0.05). Thus, the two principles of transposing and modulating were appropriate, whereas filtering was unsuccessful. In future work, the two transposing algorithms and the modulating algorithm will be used in tests with a portable vibrator for people with dual sensory impairment (hearing and vision).
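As an illustration of the modulating principle, the following Python sketch imposes a sound's amplitude envelope on a low-frequency carrier that a skin vibrator can reproduce. It is not one of the study's eight algorithms; the 250 Hz carrier and 50 Hz envelope cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulate_for_vibrator(sound, fs, carrier_hz=250.0, env_cutoff_hz=50.0):
    """Illustrative 'modulating' principle: the sound's amplitude envelope
    is imposed on a low-frequency carrier a skin vibrator can reproduce."""
    env = np.abs(hilbert(sound))                  # envelope via analytic signal
    b, a = butter(4, env_cutoff_hz / (fs / 2))    # smooth the envelope
    env = filtfilt(b, a, env)
    t = np.arange(len(sound)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
doorbell = np.sin(2 * np.pi * 880 * t) * (t < 0.3)   # toy 'environmental sound'
vib = modulate_for_vibrator(doorbell, fs)
```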
Digital Signal Processing and Applications (Second Edition) | 2004
Dag Stranneby; William Walker
This chapter discusses recursive least squares (RLS) estimation and the underlying idea of Kalman filters. When it comes to filtering of stochastic (random) signals, it is difficult to extract or reject the desired parts of the spectra to obtain the required filtering action. In such a situation, a Kalman filter is very helpful, as signals are filtered according to their statistical properties rather than their frequency contents. The Kalman filter has other interesting properties. The filter contains a signal model, a type of “simulator” that produces the output signal. When the quality of the input signal is good, the signal is used to generate the output signal and the internal model is adjusted to follow the input signal. When, on the other hand, the input signal is poor, it is ignored and the output signal from the filter is mainly produced by the model. In this way, the filter can produce a reasonable output signal even during dropouts of the input signal. Further, once the model has converged to the input signal, the filter can be used to simulate future output signals, i.e., the filter can be used for prediction. Kalman filters are often used to condition transducer signals and in control systems for satellites, aircraft, and missiles. The filter is also used in fields such as economics, medicine, chemistry, and sociology.
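The measurement-versus-model weighting is easiest to see in the scalar case. Below is a minimal one-dimensional Python sketch, not taken from the book; the random-walk signal model and the noise variances q and r are illustrative assumptions.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    # q: process noise variance (how much the internal model may drift)
    # r: measurement noise variance (how noisy the input signal is)
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: propagate model uncertainty
        k = p / (p + r)           # gain: k -> 1 trusts the measurement,
        x = x + k * (z - x)       #       k -> 0 trusts the internal model
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy constant signal: the estimate settles near the true value 5.0.
rng = np.random.default_rng(0)
z = 5.0 + rng.normal(0.0, 0.7, 200)
print(kalman_1d(z)[-1])
```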
IEEE Design & Test of Computers | 2003
Mahnaz Salamati; Dag Stranneby
When we test boards, we usually think in terms of traditional electrical test (in-circuit, flying probe) and nonelectrical test (optical, x-ray). This Örebro University article develops an alternative, connectionless technique based on scanning the electromagnetic field generated by active on-board devices. Could this make it into industry as an additional diagnostic tool?
Military Communications Conference | 1993
Dag Stranneby; Per Källquist
A frequency hopping scheme (1 bit per chip) is proposed and studied in some detail for channels in the HF band. This band is subject to interference that varies with time and frequency, due to varying transmission and channel propagation conditions. To make efficient use of the available channel resource, an adaptive frequency hopping algorithm has been implemented by means of a neural network that uses the available channel information from the link quality analysis (LQA) to select the frequency slot, in contrast to a uniformly distributed selection process. The proposed algorithm combines the advantages of low probability of detection during transmission and reduced fading and interference disturbance when detecting transmitted signals. The disturbances not eliminated by the channel selection are further reduced by using block codes with first-order soft-decision decoding. The results on residual bit error rate and average signal-to-interference ratio are promising, with an overall reduction in bit error rate of more than two orders of magnitude at moderate signal-to-noise and signal-to-interference ratios, for a coarse channel model based upon the results by P. J. Laycock et al. (1988).
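The core idea, biasing the hop pattern toward slots the LQA reports as clean while keeping the pattern randomized, can be sketched as follows. This is an illustration only: the paper uses a neural network on the LQA data, whereas the sketch substitutes a simple softmax weighting, and the LQA scores are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical link-quality-analysis (LQA) scores for 10 HF frequency
# slots; higher means less interference recently observed on that slot.
lqa = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.3, 0.6, 0.1, 0.9, 0.4])

def pick_slot_uniform():
    # Baseline: uniformly distributed slot selection.
    return rng.integers(len(lqa))

def pick_slot_adaptive(temperature=0.2):
    # Adaptive hop: favor good slots while keeping the pattern randomized
    # (preserving low probability of detection). The softmax weighting
    # stands in for the paper's neural network.
    w = np.exp(lqa / temperature)
    return rng.choice(len(lqa), p=w / w.sum())

hops = [pick_slot_adaptive() for _ in range(1000)]
print(np.bincount(hops, minlength=len(lqa)))  # good slots dominate, none fixed
```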
International Journal of Audiology | 2008
Parivash Ranjbar; Erik Borg; Lennart Philipson; Dag Stranneby
The goal of the present study was to compare six transposing signal-processing algorithms based on different principles (Fourier-based and modulation-based), and to choose the algorithm that best enables identification of environmental sounds, i.e. improves the ability to monitor events in the surroundings. Ten children (12–15 years) and ten adults (21–33 years) with normal hearing listened to 45 representative environmental sounds (events) processed using the six algorithms, and identified them in three different listening experiments involving an increasing degree of experience. The sounds were selected based on their importance for normal-hearing and deaf-blind subjects. Results showed that the algorithm based on transposition of 1/3 octaves (fixed frequencies) with large bandwidth was better (p<0.015) than the algorithms based on modulation. There was also a significant effect of experience (p<0.001). Adults were significantly (p<0.05) better than children for two algorithms. No clear gender difference was observed. It is concluded that the algorithm based on transposition with large bandwidth and fixed frequencies is the most promising for the development of hearing aids to monitor environmental sounds.
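A rough Python sketch of the transposition principle follows: the energy in each 1/3-octave band drives a fixed low-frequency tone. This is not the study's actual algorithm; the band centers and output frequencies are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def transpose_third_octaves(sound, fs, centers=(500, 630, 800, 1000, 1250),
                            out_freqs=(100, 140, 180, 220, 260)):
    """Illustrative transposition: the envelope of each 1/3-octave band
    around 'centers' drives a fixed low-frequency tone from 'out_freqs'."""
    t = np.arange(len(sound)) / fs
    out = np.zeros(len(sound))
    for fc, fo in zip(centers, out_freqs):
        lo, hi = fc / 2**(1/6), fc * 2**(1/6)   # 1/3-octave band edges
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, sound)
        env = np.abs(band)                       # crude envelope
        out += env * np.sin(2 * np.pi * fo * t)
    return out

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
alarm = np.sin(2 * np.pi * 1000 * t)             # toy 'environmental sound'
vib = transpose_third_octaves(alarm, fs)
```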
Digital Signal Processing and Applications (Second Edition) | 2004
Dag Stranneby; William Walker
This chapter presents common methods for spectral analysis of temporal signals. Spectral analysis, by estimation of the power spectrum or spectral power density of a deterministic or random signal, involves performing a squaring function. Obtaining a good estimate of the spectrum, i.e., the signal power contents as a function of frequency, is not entirely easy in practice. The main problem is that in most cases one only has access to a limited set of samples of the signal; in other cases, one is forced to limit the number of samples in order to be able to perform the calculations in a reasonable time. Spectral analysis using the Fourier transform, a non-parametric method commonly used in digital signal processing (DSP) applications today, is based on the discrete Fourier transform (DFT). A smart algorithm for calculating the DFT, causing less computational load for a digital computer, is the fast Fourier transform (FFT). Fourier-based methods assume that data outside the observed window is either periodic or zero. The estimate is therefore not only an estimate of the observed data samples, but also of the “unknown” data samples outside the window. Alternative estimation methods can be found in the class of model-based, parametric spectrum analysis methods. Another topic addressed in this chapter is modulation. Quite often in analog and digital telecommunication systems, the information signals cannot be transmitted as is. They have to be encoded (modulated) onto a carrier signal suited to the transmission medium in question. Some common modulation methods are demonstrated.
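A minimal non-parametric estimate of the kind described, a windowed FFT periodogram, can be sketched in a few lines of Python. The Hann window and the test signal are illustrative choices, not examples from the book.

```python
import numpy as np

def periodogram(x, fs):
    """Non-parametric spectrum estimate: window, FFT, then square.
    The Hann window tempers the leakage caused by the finite window."""
    n = len(x)
    w = np.hanning(n)
    X = np.fft.rfft(x * w)
    psd = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))   # power spectral density
    return np.fft.rfftfreq(n, 1 / fs), psd

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 123 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
f, p = periodogram(x, fs)
print(f[np.argmax(p)])   # spectral peak found near 123 Hz
```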
Digital Signal Processing and Applications (Second Edition) | 2004
Dag Stranneby; William Walker
This chapter overviews the background of the channel capacity and explores a number of methods and algorithms that enable communication as close to the maximum information transfer rate as possible. The issue is to find smart methods to minimize the probability of communication errors without slowing the communication process. Such methods are called error-correcting codes or error-control codes (ECC). This chapter denotes both data transmission (in space) and data storage (transmission in time) by a common mechanism called a channel. The maximum average mutual information possible for a channel is called the channel capacity. There are a number of different error-correcting codes. These codes can be divided into classes, depending on their properties. A rough classification is to divide the codes into two groups: block codes and convolution codes. A general problem with the traditional way of designing codes, like block codes and convolution codes relying on algebraic structures, is the block or constraint length required to approach the channel capacity. Long blocks imply transmission delays and complex decoders, as pointed out above. Two ways to boost performance while avoiding overly long blocks are concatenated codes and turbo codes. Concatenated coding relies on the old concept of “divide and conquer.” Turbo codes are a form of concatenated codes and can be viewed as a parallel way of doing concatenated coding. Another way of viewing turbo codes is as a mix between block codes and convolution codes.
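As a concrete instance of a block code (chosen for illustration, not presented as an example from the book), the classic Hamming(7,4) code encodes four data bits into seven and corrects any single bit error:

```python
import numpy as np

# Hamming(7,4): a small block code correcting any single bit error.
G = np.array([[1,0,0,0,1,1,0],   # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],   # parity-check matrix; all columns distinct
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data4):
    return data4 @ G % 2

def decode(word7):
    syndrome = H @ word7 % 2
    if syndrome.any():                       # nonzero syndrome locates the error:
        for i in range(7):                   # it equals the i-th column of H
            if np.array_equal(H[:, i], syndrome):
                word7 = word7.copy()
                word7[i] ^= 1
                break
    return word7[:4]                         # systematic code: data in first 4 bits

msg = np.array([1, 0, 1, 1])
tx = encode(msg)
tx[2] ^= 1                                   # channel flips one bit
assert np.array_equal(decode(tx), msg)       # decoder recovers the data
```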
Digital Signal Processing and Applications (Second Edition) | 2004
Dag Stranneby; William Walker
This chapter focuses on hardware issues associated with digital signal processor chips, and compares the characteristics of a digital signal processor (DSP) with those of a conventional, general-purpose microprocessor. The chapter further discusses software issues and some common algorithms. There are mainly four different ways of implementing the required hardware in a digital signal processing application: a conventional microprocessor, a DSP chip, a bit-slice or word-slice approach, and dedicated hardware such as a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC). If higher processing capacity is required, it is common to connect a number of processors working in parallel in a larger system. This can be done in different ways, either in a single instruction multiple data (SIMD) or in a multiple instruction multiple data (MIMD) structure. In an SIMD structure, all the processors execute the same instruction but on different data streams. Such systems are sometimes also called vector processors. In an MIMD system, the processors may be executing different instructions. Most common processors today are of the complex instruction set computer (CISC) type. Further, these instructions often require more than one machine cycle to execute. In many cases, reduced instruction set computer (RISC)-type processors may perform better in signal processing applications. In an RISC processor, no instruction occupies more than one memory word; it can be fetched in one bus cycle and executes in one machine cycle. On the other hand, many RISC instructions may be needed to perform the same function as one CISC-type instruction, but in the RISC case, the cost of complexity is incurred only when it is actually needed.
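The SIMD idea, one instruction applied to many data elements, can be mimicked in software. The Python sketch below contrasts a scalar multiply-accumulate (MAC) loop with a vectorized equivalent; the analogy to hardware SIMD units and DSP MAC instructions is illustrative, not an example from the chapter.

```python
import numpy as np

# A scalar loop issues one multiply-accumulate per element ...
def fir_scalar(x, h):
    y = np.zeros(len(x) - len(h) + 1)
    for n in range(len(y)):
        acc = 0.0
        for k in range(len(h)):
            acc += h[k] * x[n + k]           # one MAC per iteration
        y[n] = acc
    return y

# ... whereas a vectorized form applies the same multiply-accumulate to
# whole blocks of data at once, the way an SIMD unit (or a DSP chip's
# single-cycle MAC with hardware loops) would.
def fir_vector(x, h):
    return np.convolve(x, h[::-1], mode="valid")

x = np.random.default_rng(0).normal(size=1000)
h = np.ones(8) / 8                            # 8-tap moving-average FIR
assert np.allclose(fir_scalar(x, h), fir_vector(x, h))
```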
Digital Signal Processing and Applications (Second Edition) | 2004
Dag Stranneby; William Walker
This chapter discusses the theory of adaptive filters and related systems. An adaptive signal processing system is a system that has the ability to change its processing behavior so as to maximize a given performance measure. An adaptive system is self-adjusting and is, by its nature, a time-varying and non-linear system. A simple example of an adaptive system is the automatic gain control (AGC) in a radio receiver (RX). A generic adaptive signal processing system consists of three parts: the processor, the performance function, and the adaptation algorithm. The processor is the part of the system that is responsible for the actual processing of the input signal, thus generating the output signal. The processor can, for instance, be a digital finite impulse response (FIR) filter. The performance function is a quality measure of the adaptive system. In optimization theory, this function corresponds to the objective function, and in control theory it corresponds to the cost function. The task of the adaptation algorithm is to change the parameters of the processor in such a way that the performance is maximized. This chapter also includes example applications of adaptive filters, such as interference canceling, equalizers, and beam-forming systems. Adaptive filters are common in telecommunication applications such as high-speed modems and cell phones.
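A minimal sketch of the interference-canceling application follows, using the least-mean-squares (LMS) algorithm as the adaptation algorithm; LMS is a standard choice for FIR adaptive filters, but the signals, step size mu, and filter length here are invented for illustration.

```python
import numpy as np

def lms(x, d, taps=16, mu=0.01):
    """LMS adaptation of the chapter's generic structure: processor = FIR
    filter, performance = mean squared error, adaptation = gradient step."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # current and recent reference samples
        y[n] = w @ u                      # processor output
        e[n] = d[n] - y[n]                # error drives the adaptation
        w += mu * e[n] * u                # steepest-descent weight update
    return e, w

# Interference canceling: d = speech + hum, x = a reference pick-up of the
# hum alone; the error signal e converges toward the clean speech.
rng = np.random.default_rng(0)
n = np.arange(5000)
speech = rng.normal(0, 0.3, n.size)       # white-noise stand-in for speech
hum = np.sin(2 * np.pi * 0.01 * n)
e, _ = lms(x=hum, d=speech + hum)
print(np.mean(e[-1000:] ** 2))            # close to the speech power (0.09)
```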