Publication


Featured research published by Luka Šajn.


Computer Methods and Programs in Biomedicine | 2005

Computerized segmentation of whole-body bone scintigrams and its use in automated diagnostics

Luka Šajn; Matjaž Kukar; Igor Kononenko; Metka Milčinski

Bone scintigraphy, or whole-body bone scan, has been one of the most common diagnostic procedures in nuclear medicine for the last 25 years. Pathological conditions, technically poor image resolution and artefacts necessitate that algorithms use sufficient background knowledge of anatomy and spatial relations of bones in order to work satisfactorily. A robust knowledge-based methodology for detecting reference points of the main skeletal regions, applied simultaneously to anterior and posterior whole-body bone scintigrams, is presented. Expert knowledge is represented as a set of parameterized rules which are used to support standard image-processing algorithms. Our study includes 467 consecutive, non-selected scintigrams, which is, to our knowledge, the largest number of images ever used in such studies. Automatic analysis of whole-body bone scans using our segmentation algorithm gives more accurate and reliable results than previous studies. The obtained reference points are used for automatic segmentation of the skeleton, which is applied to automatic (machine learning) or manual (expert physicians) diagnostics. Preliminary experiments show that an expert system based on machine learning closely mimics the results of expert physicians.
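The sketch below is a minimal, hypothetical illustration of the general idea described above: an anatomical prior encoded as a parameterized rule (a search window plus an uptake criterion) steering a standard image operation toward a skeletal reference point. The rule names, window fractions and thresholds are invented for illustration and do not reproduce the authors' rule set.

```python
# Illustrative sketch only: a simplified, hypothetical take on "parameterized rules"
# guiding reference-point detection in a bone scintigram. Names, windows, and
# criteria are invented and do not reproduce the authors' system.
from dataclasses import dataclass
import numpy as np

@dataclass
class ReferencePointRule:
    name: str
    row_range: tuple          # search window as fractions of image height (anatomical prior)
    col_range: tuple          # search window as fractions of image width
    min_relative_uptake: float  # accept only sufficiently bright maxima

def apply_rule(image: np.ndarray, rule: ReferencePointRule):
    """Return (row, col) of the brightest pixel inside the rule's window, or None."""
    h, w = image.shape
    r0, r1 = int(rule.row_range[0] * h), int(rule.row_range[1] * h)
    c0, c1 = int(rule.col_range[0] * w), int(rule.col_range[1] * w)
    window = image[r0:r1, c0:c1]
    if window.size == 0:
        return None
    r, c = np.unravel_index(np.argmax(window), window.shape)
    if window[r, c] < rule.min_relative_uptake * image.max():
        return None  # uptake too low: this rule does not fire
    return (r0 + r, c0 + c)

# Toy usage on a synthetic "scan": look for the skull top in the upper part of the image.
scan = np.random.rand(512, 128)
skull_rule = ReferencePointRule("skull_top", (0.0, 0.15), (0.25, 0.75), 0.5)
print(apply_rule(scan, skull_rule))
```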


Computer Methods and Programs in Biomedicine | 2011

Image processing and machine learning for fully automated probabilistic evaluation of medical images

Luka Šajn; Matjaž Kukar

The paper presents the results of our long-term study on using image processing and data mining methods in medical imaging. Since evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates the results to an even higher accuracy level, which represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is the improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on the cost-effectiveness of tests.
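As a concrete illustration of the pre-test/post-test reasoning mentioned above, the snippet below applies the standard likelihood-ratio form of Bayes' rule; the sensitivity, specificity and pre-test probability are assumed example values, not figures from the study.

```python
# Standard diagnostic-test arithmetic (not the authors' data): turning a pre-test
# probability into a post-test probability via the positive likelihood ratio.
def post_test_probability(pre_test_prob, sensitivity, specificity):
    """Post-test probability of disease after a positive test result."""
    lr_positive = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)  # probability -> odds
    post_odds = pre_odds * lr_positive                # Bayes' rule on the odds scale
    return post_odds / (1.0 + post_odds)              # odds -> probability

# Hypothetical numbers, purely illustrative:
print(round(post_test_probability(pre_test_prob=0.30, sensitivity=0.85, specificity=0.80), 3))
# -> 0.646: a positive result raises the probability of disease from 30% to about 65%.
```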


EURASIP Journal on Advances in Signal Processing | 2008

Multiresolution image parametrization for improving texture classification

Luka Šajn; Igor Kononenko

The paper presents an innovative approach to automatic image parametrization at multiple resolutions, based on texture description with specialized association rules and image evaluation with machine learning methods. The algorithm ArTex, which parameterizes textures with association rules and belongs to the family of structural parametrization algorithms, was developed. To improve classification accuracy, a multiresolution approach is used, and the algorithm ARes, which finds more informative resolutions based on the SIFT algorithm, is described. The presented algorithms are evaluated on several public domains and the results are compared to other well-known parametrization algorithms from the statistical and spectral families. A significant improvement of classification results was observed for most parametrization algorithms when combining parametrization attributes at several image resolutions. Our results show that multiresolution image parametrization should be considered whenever an improvement of classification accuracy in textural domains is required. The resolutions, however, have to be selected carefully and may depend on the domain itself.
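The following sketch illustrates the general principle of multiresolution parametrization, i.e. computing texture attributes at several image scales and concatenating them into one feature vector for a classifier. It is a generic illustration with placeholder statistics, not the ArTex or ARes algorithms.

```python
# A minimal sketch of the general idea of multiresolution parametrization: compute
# texture attributes at several image scales and concatenate them into one feature
# vector for a classifier. Generic illustration only, not ArTex/ARes.
import numpy as np
from scipy.ndimage import zoom

def texture_attributes(img: np.ndarray) -> list:
    """A few simple texture statistics for one resolution (placeholder features)."""
    gy, gx = np.gradient(img.astype(float))
    return [img.mean(), img.std(), np.abs(gx).mean() + np.abs(gy).mean()]

def multiresolution_features(img: np.ndarray, scales=(1.0, 0.5, 0.25)) -> np.ndarray:
    """Concatenate attributes computed at each chosen resolution."""
    feats = []
    for s in scales:
        resized = img if s == 1.0 else zoom(img, s, order=1)  # downscale the texture
        feats.extend(texture_attributes(resized))
    return np.array(feats)

# Toy usage: one feature vector per texture image, ready for any standard classifier.
texture = np.random.rand(128, 128)
print(multiresolution_features(texture).shape)   # (9,) with the default three scales
```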


International Convention on Information and Communication Technology, Electronics and Microelectronics | 2017

Augmented Coaching Ecosystem for Non-obtrusive Adaptive Personalized Elderly Care on the basis of Cloud-Fog-Dew computing paradigm

Yu. Gordienko; Sergii Stirenko; Oleg Alienin; Karolj Skala; Z. Sojat; Anis Rojbi; J.R. Lopez Benito; E. Artetxe González; U. Lushchyk; Luka Šajn; A. Llorente Coto; G. Jervan

The concept of an augmented coaching ecosystem for non-obtrusive adaptive personalized elderly care is proposed on the basis of the integration of new and available ICT approaches. These include a multimodal user interface (MMUI), augmented reality (AR), machine learning (ML), the Internet of Things (IoT), and machine-to-machine (M2M) interactions. The ecosystem is based on services of the Cloud-Fog-Dew computing paradigm, providing a full symbiosis by integrating the whole range from low-level sensors up to high-level services and exploiting the integration efficiency inherent in the synergistic use of the applied technologies. Inside this ecosystem, all of them are encapsulated in the following network layers: the Dew, Fog, and Cloud computing layers. Instead of “spaghetti connections”, a “mosaic of buttons”, “puzzles of output data”, etc., the proposed ecosystem provides a strict division into the following dataflow channels: a consumer interaction channel, a machine interaction channel, and a caregiver interaction channel. This concept makes it possible to decrease the physical, cognitive, and mental load on elderly care stakeholders by reducing the secondary human-to-human (H2H), human-to-machine (H2M), and machine-to-human (M2H) interactions in favor of M2M interactions and a distributed Dew computing services environment. This allows the non-obtrusive augmented reality ecosystem to be applied for effective personalized elderly care, preserving the physical, cognitive, mental and social well-being of the elderly.
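As a purely illustrative sketch of the layering described above (an assumed structure, not the authors' implementation), the snippet below models a Dew node that reads a sensor locally, a Fog node that aggregates nearby Dew nodes, and a Cloud service that produces a caregiver-facing summary.

```python
# Purely illustrative sketch (not the authors' implementation) of the layered idea:
# a Dew node handles one sensor locally, a Fog node aggregates several Dew nodes,
# and the Cloud layer exposes a high-level service on the caregiver interaction channel.
class DewNode:
    """On-device layer: reads one sensor and keeps working even when offline."""
    def __init__(self, sensor_id):
        self.sensor_id = sensor_id
    def read(self):
        return {"sensor": self.sensor_id, "heart_rate": 72}  # dummy M2M message

class FogNode:
    """Local aggregation layer: combines several Dew nodes in the home or ward."""
    def __init__(self, dew_nodes):
        self.dew_nodes = dew_nodes
    def aggregate(self):
        return [node.read() for node in self.dew_nodes]

class CloudService:
    """High-level service layer: analytics exposed to caregivers."""
    def caregiver_report(self, fog_data):
        rates = [m["heart_rate"] for m in fog_data]
        return {"mean_heart_rate": sum(rates) / len(rates), "alerts": []}

fog = FogNode([DewNode("wrist-1"), DewNode("chest-1")])
print(CloudService().caregiver_report(fog.aggregate()))
```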


Computerized Medical Imaging and Graphics | 2007

Computerized segmentation and diagnostics of whole-body bone scintigrams

Luka Šajn; Igor Kononenko; Metka Milčinski

Bone scintigraphy or whole-body bone scan is one of the most common diagnostic procedures in nuclear medicine. Since expert physicians evaluate the images manually, an automated procedure for pathology detection is desired. A robust knowledge-based methodology for segmenting body scans into the main skeletal regions is presented. The algorithm is simultaneously applied to anterior and posterior whole-body bone scintigrams. Expert knowledge is represented as a set of parameterized rules, used to support standard image processing algorithms. The segmented bone regions are parameterized with pattern classification algorithms so that pathologies can be classified with machine learning algorithms. This approach enables automatic evaluation of pathological changes in scintigraphy, so that in addition to point-like high-uptake lesions, other types of lesions can also be discovered. Our study includes 467 consecutive, non-selected scintigrams. Automatic analysis of whole-body bone scans using our segmentation algorithm gives more accurate and reliable results than previous studies. Preliminary experiments show that our expert system based on machine learning closely mimics the results of expert physicians.


Journal of Microscopy | 2015

Comparison of two automatic cell-counting solutions for fluorescent microscopic images

Jasna Lojk; Uros Cibej; D. Karlaš; Luka Šajn; Mojca Pavlin

Cell counting in microscopic images is one of the fundamental analysis tools in life sciences, but it is usually tedious, time consuming and prone to human error. Several programs for automatic cell counting have been developed so far, but most of them demand additional training or data input from the user, and most do not allow users to monitor the counting results online. Therefore, we designed two straightforward, simple-to-use cell-counting programs that also allow users to correct the detection results. In this paper, we present the Cellcounter and Learn123 programs for automatic and semiautomatic counting of objects in fluorescent microscopic images (cells or cell nuclei) with a user-friendly interface. While Cellcounter is based on a predefined and fine-tuned set of filters optimized on sets of chosen experiments, Learn123 uses an evolutionary algorithm to adapt the filter parameters based on a learning set of images. Cellcounter also includes an extension for analysis of overlaying images. The efficiency of both programs was assessed on images of cells stained with different fluorescent dyes by comparing automatically obtained results with results manually annotated by an expert. With both programs, the correlation between automatic and manual counting was very high (R² > 0.9), although Cellcounter had some difficulties processing images with no cells or weakly stained cells, where the background noise was sometimes recognized as an object of interest. Nevertheless, the differences between manual and automatic counting were small compared to the variations between experimental repeats. Both programs significantly reduced the time required to process the acquired images, from hours to minutes. The programs enable consistent, robust, fast and accurate detection of fluorescent objects and can therefore be applied to a range of applications in different fields of life sciences where fluorescent labelling is used for quantification of various phenomena. Moreover, the Cellcounter overlay extension also enables fast analysis of related images that would otherwise require image merging for accurate analysis, whereas Learn123's evolutionary algorithm can adapt counting parameters to specific sets of images from different experimental settings.
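For readers unfamiliar with how such automatic counting typically works, the sketch below shows a generic threshold-and-label pipeline (smoothing, thresholding, connected components, size filtering). The filter choices and thresholds are arbitrary assumptions; this is not the Cellcounter or Learn123 pipeline.

```python
# A minimal, generic illustration of automatic counting of fluorescently stained
# nuclei (thresholding + connected components). Not the Cellcounter/Learn123 pipeline;
# the smoothing, threshold and size cut-off are arbitrary assumptions.
import numpy as np
from scipy import ndimage

def count_nuclei(image: np.ndarray, threshold: float, min_area: int = 20) -> int:
    """Count bright blobs larger than min_area pixels in a grayscale image."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=2)  # suppress noise
    mask = smoothed > threshold                                       # foreground mask
    labels, n = ndimage.label(mask)                                   # connected components
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))          # blob sizes
    return int(np.sum(areas >= min_area))                             # drop tiny specks

# Toy usage on a synthetic image with three bright spots.
img = np.zeros((100, 100))
for y, x in [(20, 20), (50, 70), (80, 30)]:
    img[y-3:y+3, x-3:x+3] = 1.0
print(count_nuclei(img, threshold=0.2))   # -> 3
```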


Artificial Intelligence in Medicine in Europe | 2005

Automatic segmentation of whole-body bone scintigrams as a preprocessing step for computer assisted diagnostics

Luka Šajn; Matjaž Kukar; Igor Kononenko; Metka Milčinski

Bone scintigraphy or whole-body bone scan has been one of the most common diagnostic procedures in nuclear medicine for the last 25 years. Pathological conditions, technically poor quality images and artifacts necessitate that algorithms use sufficient background knowledge of anatomy and spatial relations of bones in order to work satisfactorily. We present a robust knowledge-based methodology for detecting reference points of the main skeletal regions that simultaneously processes anterior and posterior whole-body bone scintigrams. Expert knowledge is represented as a set of parameterized rules which are used to support standard image processing algorithms. Our study includes 467 consecutive, non-selected scintigrams, which is, to our knowledge, the largest number of images ever used in such studies. Automatic analysis of whole-body bone scans using our knowledge-based segmentation algorithm gives more accurate and reliable results than previous studies. The obtained reference points are used for automatic segmentation of the skeleton, which is used for automatic (machine learning) or manual (expert physicians) diagnostics. Preliminary experiments show that an expert system based on machine learning closely mimics the results of expert physicians.


International Convention on Information and Communication Technology, Electronics and Microelectronics | 2014

Automatic Cell Counter for cell viability estimation

Jasna Lojk; Luka Šajn; Uros Cibej; Mojca Pavlin

Despite the several methods that exist in different fields of life sciences, certain biotechnological applications still require microscopic analysis of the samples and, in many instances, counting of cells. Examples include drug delivery, transfection, or analysis of mechanisms, where fluorescent probes are used to detect cell viability, the efficiency of a specific drug delivery, or some other effect. For analysis and quantification of these results it is necessary to count and analyze microscope images either manually or automatically. However, in everyday use many researchers still count cells manually, since existing solutions require either specific knowledge of computer vision and/or manual fine-tuning of various parameters. Here we present a new software solution (named CellCounter) for automatic and semi-automatic cell counting in fluorescent microscopic images. The application is specifically designed for counting fluorescently stained cells and enables counting of cell nuclei or cell cytoplasm stained with different fluorescent stains. This simplifies image analysis for several biotechnological applications where fluorescent microscopy is used. We present and validate the automatic cell counting program for a cell viability application, giving empirical results that show the efficiency of the proposed solution by comparing manual counts with the results returned by automated counting. We also show how the results can be further improved by combining manual and automated counts.


Displays | 2017

User interface for a better eye contact in videoconferencing

Aleš Jaklič; Franc Solina; Luka Šajn

When people talk to each other, eye contact is very important for trustful and efficient communication. Videoconferencing systems were invented to enable such communication over large distances, recently mostly over the Internet with personal computers. Despite the low cost of such solutions, broader acceptance and use of these communication means has not happened yet. One of the most important reasons is that it is almost impossible to establish eye contact between distant parties on the most common hardware configurations of such videoconferencing systems, where the camera for face capture is usually mounted above the computer monitor on which the face of the correspondent is observed. Different hardware and software solutions to this problem of missing eye contact have been proposed over the years. In this article we propose a simple solution that can improve the subjective feeling of eye contact, based on how people perceive 3D scenes displayed on slanted surfaces, and offer some experiments in support of the hypothesis.
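The basic geometry behind the missing eye contact can be illustrated with a short calculation; the camera offset and viewing distance below are assumed example values, not measurements from the paper.

```python
# Illustrative arithmetic only (numbers are assumptions, not from the paper): the
# apparent gaze error arises because the camera sits above the point on the screen
# where the remote person's eyes are shown. The error angle is atan(offset / distance).
import math

def gaze_error_deg(camera_to_eyes_on_screen_cm: float, viewing_distance_cm: float) -> float:
    """Vertical angle between looking at the on-screen eyes and looking into the camera."""
    return math.degrees(math.atan(camera_to_eyes_on_screen_cm / viewing_distance_cm))

# E.g. a webcam 10 cm above the displayed eyes, viewed from 60 cm away:
print(round(gaze_error_deg(10, 60), 1))   # -> 9.5 degrees of downward gaze seen by the camera
```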


International Convention on Information and Communication Technology, Electronics and Microelectronics | 2015

Automatic adaptation of filter sequences for cell counting

Uros Cibej; Jasna Lojk; Mojca Pavlin; Luka Šajn

Manual cell counting in microscopic images is usually tedious, time consuming and prone to human error. Several programs for automatic cell counting have been developed so far, but most of them demand specific knowledge of image analysis and/or manual fine-tuning of various parameters. Even if a set of filters is found and fine-tuned for a specific application, small changes in the image attributes can make the automatic counter very unreliable. The goal of this article is to present a new application that overcomes this problem by learning the set of parameters for each application, thus making it more robust to changes in the input images. Users must provide only a small representative subset of images and their manual counts, and the program offers a set of automatic counters learned from the given input. The user can check the counters and choose the most suitable one. The resulting application (which we call Learn123) is specifically tailored to practitioners, i.e., even though the typical workflow is more complex, the application is easy to use for non-technical experts.
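A toy sketch of the general idea follows; it is not the Learn123 algorithm itself. It evolves a single counting parameter (an intensity threshold) so that automatic counts match a small manually counted learning set, whereas learning a whole filter sequence is of course a much richer search space.

```python
# Toy illustration (not the Learn123 algorithm) of evolving a counting parameter so
# that automatic counts match a small set of manual counts. Only one parameter
# (an intensity threshold) is evolved here, a deliberate simplification.
import random
import numpy as np
from scipy import ndimage

def automatic_count(image, threshold):
    """Count connected bright regions above the threshold."""
    labels, n = ndimage.label(image > threshold)
    return n

def counting_error(threshold, images, manual_counts):
    """Total absolute disagreement with the manual counts."""
    return sum(abs(automatic_count(img, threshold) - m)
               for img, m in zip(images, manual_counts))

def evolve_threshold(images, manual_counts, generations=50, pop_size=10):
    population = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: counting_error(t, images, manual_counts))
        parents = population[:pop_size // 2]                       # keep the best half
        children = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))  # mutate the parents
                    for p in parents]
        population = parents + children
    return min(population, key=lambda t: counting_error(t, images, manual_counts))

# Toy learning set: synthetic images with known (manual) counts.
rng = np.random.default_rng(0)
def synthetic(count):
    img = rng.random((100, 100)) * 0.3                 # background noise
    for _ in range(count):
        y, x = rng.integers(5, 95, size=2)
        img[y-2:y+2, x-2:x+2] = 1.0                    # one bright "cell"
    return img

images = [synthetic(c) for c in (3, 5, 8)]
best = evolve_threshold(images, manual_counts=[3, 5, 8])
print(round(best, 2), [automatic_count(i, best) for i in images])
```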

Collaboration


Dive into Luka Šajn's collaborations.

Top Co-Authors

Jasna Lojk, University of Ljubljana
Mojca Pavlin, University of Ljubljana
Uros Cibej, University of Ljubljana
Franc Solina, University of Ljubljana
Peter Peer, University of Ljubljana