Publication


Featured research published by Matti Niskanen.


Machine Vision and Applications | 2003

Wood inspection with non-supervised clustering

Olli Silvén; Matti Niskanen; Hannu Kauppinen

The appearance of sawn timber has huge natural variations that the human inspector easily compensates for mentally when determining the types of defects and the grade of each board. For automatic wood inspection systems, however, these variations are a major source of complications, which makes it difficult to use textbook methodologies for visual inspection. These methodologies generally aim at systems that are trained in a supervised manner with samples of defects and good material, but selecting and labeling the samples is an error-prone process that limits the accuracy that can be achieved. We present a non-supervised, clustering-based approach for detecting and recognizing defects in lumber boards. A key idea is to employ a self-organizing map (SOM) for discriminating between sound wood and defects, so the human involvement needed for training is minimal. The approach has been tested with color images of lumber boards, and the achieved false detection and error escape rates are low. The approach also provides an intuitive visual user interface.
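
As a rough illustration of the SOM-based idea, the minimal sketch below (plain NumPy) trains a small map on unlabeled feature vectors and classifies new samples by the label of their best-matching unit. The map size, feature dimension, and the unit-labelling step are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Minimal self-organizing map (SOM) sketch for surface inspection.
# Feature extraction, map size and the labelling step are illustrative
# assumptions, not the exact configuration used in the paper.

class SimpleSOM:
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((rows * cols, dim))
        # Grid coordinates of each map unit, used for the neighborhood.
        r, c = np.divmod(np.arange(rows * cols), cols)
        self.grid = np.column_stack([r, c]).astype(float)

    def bmu(self, x):
        # Best-matching unit: nearest code vector in feature space.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, data, iters=5000, lr0=0.5, sigma0=3.0):
        for t in range(iters):
            x = data[np.random.randint(len(data))]
            b = self.bmu(x)
            frac = t / iters
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            # Gaussian neighborhood on the 2-D map grid.
            d2 = np.sum((self.grid - self.grid[b]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            self.weights += lr * h[:, None] * (x - self.weights)

# Unsupervised step: train on unlabeled feature vectors from board regions.
features = np.random.rand(2000, 16)           # placeholder: e.g. color percentiles
som = SimpleSOM(rows=8, cols=8, dim=16)
som.train(features)

# A human then labels map units (not individual samples) as sound wood or
# defect, and new regions inherit the label of their best-matching unit.
unit_label = {u: "sound" for u in range(64)}  # illustrative labelling
unit_label[0] = "defect"
print(unit_label[som.bmu(features[0])])
```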


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Human Motion Tracking by Registering an Articulated Surface to 3D Points and Normals

Radu Horaud; Matti Niskanen; Guillaume Dewaele; Edmond Boyer

We address the problem of human motion tracking by registering a surface to 3-D data. We propose a method that iteratively computes two things: maximum-likelihood estimates of the kinematic and free-motion parameters of an articulated object, and probabilities that the data are assigned either to an object part or to an outlier cluster. We introduce a new metric between observed points and normals on one side and a parameterized surface on the other side, the latter being defined as a blending over a set of ellipsoids. We claim that this metric is well suited when one deals with either visual-hull or visual-shape observations. We illustrate the method by tracking human motions using sparse visual-shape data (3-D surface points and normals) gathered from imperfect silhouettes.
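
The alternation described above can be pictured as an EM-style loop: soft-assign each 3-D observation to a body part or to an outlier class, then re-estimate the part parameters with those assignment weights. The sketch below is a heavily simplified stand-in; the distance function, part representation, and uniform outlier term are placeholder assumptions, not the paper's blended-ellipsoid metric.

```python
import numpy as np

# EM-style alternation sketch: E-step soft-assigns observations to parts
# or an outlier class, M-step re-estimates part parameters from the
# responsibility-weighted data. All modelling choices here are placeholders.

def part_distance(point, normal, part_center):
    # Placeholder for the point+normal-to-surface metric of the paper.
    return np.linalg.norm(point - part_center)

def e_step(points, normals, parts, sigma=0.05, outlier_density=0.1):
    K, N = len(parts), len(points)
    resp = np.zeros((N, K + 1))                      # last column = outliers
    for k, p in enumerate(parts):
        d = np.array([part_distance(x, n, p) for x, n in zip(points, normals)])
        resp[:, k] = np.exp(-d ** 2 / (2 * sigma ** 2))
    resp[:, K] = outlier_density
    return resp / resp.sum(axis=1, keepdims=True)

def m_step(points, resp, parts):
    # Weighted mean as a crude stand-in for the kinematic parameter update.
    new_parts = []
    for k in range(len(parts)):
        w = resp[:, k:k + 1]
        new_parts.append((w * points).sum(axis=0) / max(w.sum(), 1e-9))
    return new_parts

points = np.random.rand(500, 3)
normals = np.random.rand(500, 3)
parts = [np.random.rand(3) for _ in range(5)]        # crude "part centers"
for _ in range(10):
    resp = e_step(points, normals, parts)
    parts = m_step(points, resp, parts)
```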


Sixth International Conference on Quality Control by Artificial Vision | 2003

Comparison of dimensionality reduction methods for wood surface inspection

Matti Niskanen; Olli Silvén

Dimensionality reduction methods for visualization map the original high-dimensional data typically into two dimensions. To be useful, the mapping should preserve the important information in the data and meet the needs of a human observer. We have proposed a self-organizing map (SOM)-based approach for visual surface inspection. The method provides the advantages of unsupervised learning and an intuitive user interface that allows one to easily set and tune the class boundaries based on observations made on the visualization, for example, to adapt to changing conditions or material. There are, however, some problems with a SOM: it does not preserve the true distances between data points, and it tends to ignore rare samples in the training set in favor of a more accurate representation of common samples. In this paper, some alternatives to the SOM are evaluated. These methods, PCA, MDS, LLE, ISOMAP, and GTM, are used to reduce dimensionality in order to visualize the data. Their principal differences are discussed and their performance is quantitatively evaluated in a few special classification cases, such as wood inspection using centile features. For the test material experimented with, the SOM and GTM outperform the others when classification performance is considered. For data-mining kinds of applications, ISOMAP and LLE appear to be more promising methods.
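
For a concrete picture of such a comparison, the sketch below projects a sample dataset to 2-D with several of the listed methods and scores a simple classifier in the reduced space. The dataset and the k-NN scoring are stand-ins for the wood centile features and class boundaries used in the paper, and SOM/GTM are omitted because scikit-learn does not provide them.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, Isomap, LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Compare 2-D projections by how well a simple classifier separates the
# classes in the reduced space. Dataset and scoring are illustrative.
X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]          # subsample to keep MDS fast

methods = {
    "PCA": PCA(n_components=2),
    "MDS": MDS(n_components=2),
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10),
    "ISOMAP": Isomap(n_components=2, n_neighbors=10),
}
for name, method in methods.items():
    Z = method.fit_transform(X)
    score = cross_val_score(KNeighborsClassifier(n_neighbors=5), Z, y, cv=3).mean()
    print(f"{name}: 2-D k-NN accuracy {score:.3f}")
```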


International Conference on Multimedia and Expo | 2006

Video Stabilization Performance Assessment

Matti Niskanen; Olli Silvén; Marius Tico

Shooting video with a hand-held camera introduces shake, which inevitably reduces video quality. Digital video stabilization is a process that compensates for camera motion by means of image processing. In the best case, it not only removes the image motion but also reduces image distortion caused by unintentional camera motion. In practice, removing only the unwanted jitter cannot be achieved precisely. Furthermore, the stabilization process itself often introduces additional distortion in images instead of removing it. In this paper, various means to automatically evaluate the performance of the video stabilization process are proposed, based on measuring the divergence and jitter of the remaining unintentional motion and on measuring blur with the point spread function (PSF). This helps, for example, in tuning system parameters for better quality.
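
One way to realize a jitter measure of this kind is sketched below: estimate the residual frame-to-frame translation of a stabilized clip and report the high-frequency part of the resulting trajectory. Phase correlation and the moving-average window are illustrative choices, and the PSF-based blur measure from the paper is not reproduced here.

```python
import cv2
import numpy as np

# Residual-jitter sketch: cumulative interframe motion minus its smoothed
# version approximates the remaining unintentional motion.

def interframe_motion(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = np.float32(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
    shifts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        (dx, dy), _ = cv2.phaseCorrelate(prev, gray)  # translation estimate
        shifts.append((dx, dy))
        prev = gray
    cap.release()
    return np.array(shifts)

def jitter(shifts, window=15):
    # Cumulative trajectory minus its moving average = unintended motion.
    traj = np.cumsum(shifts, axis=0)
    kernel = np.ones(window) / window
    smooth = np.column_stack(
        [np.convolve(traj[:, i], kernel, mode="same") for i in range(2)]
    )
    return float(np.sqrt(np.mean((traj - smooth) ** 2)))

# Example (file name is hypothetical):
# print(jitter(interframe_motion("stabilized.avi")))
```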


British Machine Vision Conference | 2005

Articulated motion capture from 3-D points and normals

Matti Niskanen; Edmond Boyer; Radu Horaud

In this paper we address the problem of tracking the motion of articulated objects from their 2-D silhouettes gathered with several cameras. The vast majority of existing approaches rely on a single camera or on stereo; we describe a new method that requires at least two cameras. The method relies on (i) building 3-D observations (points and normals) from image silhouettes and (ii) fitting an articulated object model to these observations by minimizing their discrepancies. The objective function sums these discrepancies, taking into account both the scaled algebraic distance from data points to the model surface and the offset in orientation between observed normals and model normals. The combination of a feed-forward reconstruction technique with a robust model-tracking method results in a reliable and efficient method for articulated motion capture.
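
The two-term objective can be illustrated on a toy case: fitting a single sphere (a stand-in for one articulated part) to observed points and normals, with one residual per point for the surface distance and one for the normal-orientation offset. The primitive, weighting, and optimizer below are illustrative simplifications, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Two-term fitting sketch: surface-distance residuals plus normal-offset
# residuals, minimized jointly over the primitive's parameters.

def residuals(params, points, normals, w_normal=0.5):
    center, radius = params[:3], params[3]
    diff = points - center
    dist = np.linalg.norm(diff, axis=1)
    surface_term = dist - radius                      # distance to the surface
    model_normals = diff / dist[:, None]              # outward sphere normals
    normal_term = 1.0 - np.sum(normals * model_normals, axis=1)
    return np.concatenate([surface_term, w_normal * normal_term])

# Synthetic observations from a sphere of radius 0.5 centered at (1, 2, 3).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = np.array([1.0, 2.0, 3.0]) + 0.5 * dirs
normals = dirs

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0, 1.0], args=(points, normals))
print(fit.x)   # should approach [1, 2, 3, 0.5]
```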


Machine Vision Applications | 2002

Real-time aspects of SOM-based visual surface inspection

Matti Niskanen; Hannu Kauppinen; Olli Silvén

We have developed a self-organizing map (SOM)-based approach for training and classification in visual surface inspection applications. The approach combines the advantages of non-supervised and supervised training and offers an intuitive visual user interface. The training is less sensitive to human errors, since labeling of large amounts of individual training samples is not necessary. In the classification, the user interface allows on-line control of class boundaries. Earlier experiments show that our approach gives good results in wood inspection. In this paper, we evaluate its real-time capability. When quite simple features are used, the bottleneck in real-time inspection is the nearest SOM code vector search during the classification phase. In experiments, we compare acceleration techniques that are suitable for the high-dimensional nearest-neighbor search typical of the method. We show that even simple acceleration techniques can improve the speed considerably, and that the SOM approach can be used in real time on a standard PC.
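
The bottleneck mentioned above, nearest code vector search, can be illustrated by timing an exhaustive search against a k-d tree on a synthetic codebook. The map size and feature dimension are illustrative, and the paper's own acceleration techniques (such as partial distance search) are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree
from time import perf_counter

# Nearest code vector search: exhaustive distance computation versus a
# k-d tree over the same synthetic SOM codebook.
codebook = np.random.rand(16 * 16, 12)     # 16x16 SOM, 12-D features
queries = np.random.rand(50_000, 12)

t0 = perf_counter()
# Exhaustive search via the expanded squared-distance formula.
d2 = ((queries ** 2).sum(1, keepdims=True)
      - 2 * queries @ codebook.T
      + (codebook ** 2).sum(1))
brute = np.argmin(d2, axis=1)
t1 = perf_counter()

tree = cKDTree(codebook)
_, kd = tree.query(queries)
t2 = perf_counter()

print(f"agreement: {np.mean(brute == kd):.3f}")
print(f"exhaustive: {t1 - t0:.3f}s, k-d tree: {t2 - t1:.3f}s")
```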


Sixth International Conference on Quality Control by Artificial Vision | 2003

Texture-based paper characterization using nonsupervised clustering

Markus Turtinen; Matti Pietikäinen; Olli Silvén; Topi Mäenpää; Matti Niskanen

A non-supervised, clustering-based method for classifying paper according to its quality is presented. The method is simple to train, requiring minimal human involvement. The approach is based on self-organizing maps and texture features that effectively discriminate the texture of paper. Multidimensional texture feature vectors are first extracted from paper images. The dimensionality of the data is then reduced by a self-organizing map (SOM). In dimensionality reduction, the feature data are projected to a two-dimensional space and clustered according to their similarity. The clusters represent different paper qualities and can be labeled according to the quality information of the training samples. After that, it is easy to find the quality class of the inspected paper by checking where a sample is placed in the low-dimensional space. Tests based on images taken in a laboratory environment from four different paper quality classes provided very promising results. Local binary pattern (LBP) texture features combined with the SOM-based approach classified the test data almost perfectly: the error percentage was only 0.2% with the multiresolution version of LBP and 1.6% with the regular LBP. The improvement over the texture features previously used in paper inspection is huge: the classification error is reduced by a factor of more than 40. In addition to the excellent classification accuracy, the method also offers an intuitive user interface and a synthetic view of the inspected data.
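
The feature side of the method can be sketched as per-patch LBP histograms followed by unsupervised clustering. In the sketch below, k-means stands in for the SOM projection used in the paper, and the synthetic patches stand in for real paper images.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

# LBP histogram per patch, then unsupervised clustering of the histograms.
# KMeans is a stand-in for the SOM; patches are synthetic placeholders.

def lbp_histogram(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
patches = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(200)]
features = np.array([lbp_histogram(p) for p in patches])

clusters = KMeans(n_clusters=4, n_init=10).fit_predict(features)
# Clusters are then labelled with quality classes using a few reference
# samples, and new patches inherit the label of their cluster.
print(np.bincount(clusters))
```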


International Conference on Pattern Recognition | 2006

View Dependent Enhancement of the Dynamic Range of Video

Matti Niskanen

There are many computer vision applications in which the observed scene contains a wide range of brightness. Often, the low dynamic range of a camera limits the accuracy of the information that can be extracted from the video. Frames may contain saturated pixels on bright targets, poor resolution and noisy data in dark regions, or both. In this paper, we propose a method for generating high dynamic range (HDR) video by combining successive frames. The first phase is to set the exposures for each frame contributing to one HDR frame. The exposures are automatically adapted according to the image contents to provide a maximum amount of information about the target. The frames are then combined in a maximum-likelihood manner, based on the noise model of image acquisition. Experiments on texture-based classification show that, using the proposed methodology, even HDR videos built in real time benefit many topical vision systems.
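
The maximum-likelihood combination step can be sketched as inverse-variance-weighted fusion of per-frame radiance estimates, assuming a linear camera response. The noise model, exposure values, and saturation handling below are illustrative assumptions, not the paper's exact acquisition model.

```python
import numpy as np

# Fuse differently exposed frames: each frame gives radiance = pixel / exposure,
# and estimates are combined with weights inversely proportional to their
# noise variance; saturated pixels are excluded.

def combine_hdr(frames, exposures, read_noise=2.0, sat_level=250):
    frames = [f.astype(np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for f, t in zip(frames, exposures):
        radiance = f / t                              # per-frame radiance estimate
        var = (f + read_noise ** 2) / t ** 2          # shot + read noise, scaled
        w = np.where(f < sat_level, 1.0 / var, 0.0)   # drop saturated pixels
        num += w * radiance
        den += w
    return num / np.maximum(den, 1e-12)

# Example with synthetic frames at two exposures (values are hypothetical).
rng = np.random.default_rng(1)
scene = rng.uniform(1, 100, size=(120, 160))
frames = [np.clip(scene * t + rng.normal(0, 2, scene.shape), 0, 255)
          for t in (0.5, 2.0)]
hdr = combine_hdr(frames, exposures=(0.5, 2.0))
```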


Electronic Imaging | 2008

New video applications on mobile communication devices

Olli Silvén; Jari Hannuksela; Miguel Bordallo-López; Markus Turtinen; Matti Niskanen; Jani Boutellier; Markku Vehviläinen; Marius Tico

Video applications on mobile communication devices have usually been designed for content creation, access, and playback. For instance, many recent mobile devices replicate the functionalities of portable video cameras, video recorders, and digital TV receivers. These are all demanding uses, but nothing new from the consumer point of view. However, many current devices have two cameras built in: one for capturing high-resolution images, and the other for lower-resolution, typically VGA (640x480 pixels), video telephony. We employ video to enable new applications and describe four actual solutions implemented on mobile communication devices. The first is a real-time motion-based user interface that can be used for browsing large images or documents, such as maps, on small screens; the motion information is extracted from the image sequence captured by the camera. The second solution is a real-time panorama builder, while the third assembles document panoramas, both from individual video frames. The fourth solution is a real-time face and eye detector. It provides another type of foundation for motion-based user interfaces, as knowledge of the presence and motion of human faces in the camera's view can be a powerful application enabler.
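
The motion-based browsing idea can be sketched as follows: estimate the dominant translation between consecutive camera frames and use it to scroll the displayed document. The feature tracking below (goodFeaturesToTrack plus pyramidal Lucas-Kanade) is a generic stand-in for the motion estimators actually used on the devices described above.

```python
import cv2
import numpy as np

# Dominant-translation sketch for a motion-based user interface: track a
# set of corners between frames and take the median flow as the global shift.

def dominant_shift(prev_gray, gray):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return 0.0, 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if not np.any(good):
        return 0.0, 0.0
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    dx, dy = np.median(flow, axis=0)          # robust global translation
    return float(dx), float(dy)

# In a device loop, the scroll offset of the displayed map or document would
# be updated with (-dx, -dy) for each new camera frame.
```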


Sixth International Conference on Quality Control by Artificial Vision | 2003

Framework for industrial visual surface inspections

Olli Silvén; Matti Niskanen

A key problem in using automatic visual surface inspection in industry is training and tuning the systems to perform in a desired manner. This may take from minutes up to a year after installation and can be a major cost. Based on our experience, the training issues need to be taken into account from the very beginning of system design. In this presentation we consider approaches to visual surface inspection and system training, and advocate a non-supervised-learning-based visual training method.
