
Publication


Featured research published by Jana Novovicová.


Pattern Recognition Letters | 1999

Adaptive floating search methods in feature selection

Petr Somol; Pavel Pudil; Jana Novovicová; Pavel Paclík

A new suboptimal search strategy for feature selection is presented. It represents a more sophisticated version of the "classical" floating search algorithms (Pudil et al., 1994): it attempts to remove some of their potential deficiencies and facilitates finding a solution even closer to the optimal one.
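As a rough illustration of the floating idea, here is a minimal sketch of sequential floating forward selection against a generic, caller-supplied criterion function. The names are illustrative, not the authors' implementation, and the adaptive refinements introduced in this paper are omitted.

```python
def sffs(features, criterion, target_size):
    """Sequential floating forward selection: after each forward step,
    conditionally exclude features while doing so improves on the best
    criterion value previously seen for that subset size."""
    selected = []
    best = {}  # best criterion value recorded per subset size
    while len(selected) < target_size:
        # forward step: add the feature that maximizes the criterion
        cand = max((f for f in features if f not in selected),
                   key=lambda f: criterion(selected + [f]))
        selected.append(cand)
        best[len(selected)] = criterion(selected)
        # backward (floating) step: drop features while that helps
        while len(selected) > 2:
            worst = max(selected,
                        key=lambda f: criterion([g for g in selected if g != f]))
            reduced = [g for g in selected if g != worst]
            if criterion(reduced) > best.get(len(reduced), float("-inf")):
                selected = reduced
                best[len(reduced)] = criterion(reduced)
            else:
                break
    return selected
```

With a monotone criterion the backward step never fires and the method reduces to plain sequential forward selection; its value shows only when features interact.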


scandinavian conference on image analysis | 2000

Road sign classification using Laplace kernel classifier

Pavel Paclík; Jana Novovicová; Pavel Pudil; Petr Somol

Driver support systems (DSS) of intelligent vehicles will predict potentially dangerous situations in heavy traffic, help with navigation and vehicle guidance, and interact with a human driver. Road signs convey much of the information needed to understand a traffic situation. A new kernel rule has been developed for road sign classification using the Laplace probability density. Smoothing parameters of the Laplace kernel are optimized by the pseudo-likelihood cross-validation method, and an Expectation-Maximization algorithm is used to maximize the pseudo-likelihood function. The algorithm has been tested on a dataset of more than 4900 noisy images. A comparison to other classification methods is also given.
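The classification rule can be sketched as follows, assuming a fixed smoothing parameter `h` rather than the pseudo-likelihood cross-validation used in the paper; all names are illustrative, not the authors' code.

```python
import numpy as np

def laplace_kernel_classify(X_train, y_train, X_test, h=1.0):
    """Kernel-density classifier with a product Laplace kernel.
    Each class density is the average of per-prototype kernels
    prod_d (1/(2h)) * exp(-|x_d - t_d| / h); classification is by
    maximum prior-weighted density."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        T = X_train[y_train == c]                          # class-c prototypes
        diff = np.abs(X_test[:, None, :] - T[None, :, :])  # (n_test, n_c, d)
        log_k = -diff.sum(axis=2) / h - T.shape[1] * np.log(2 * h)
        dens = np.exp(log_k).mean(axis=1)                  # kernel density estimate
        prior = len(T) / len(X_train)
        scores.append(prior * dens)
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]
```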


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

Divergence based feature selection for multimodal class densities

Jana Novovicová; Pavel Pudil; Josef Kittler

A new feature selection procedure based on the Kullback J-divergence between two class conditional density functions approximated by a finite mixture of parameterized densities of a special type is presented. This procedure is suitable especially for multimodal data. Apart from finding a feature subset of any cardinality without involving any search procedure, it also simultaneously yields a pseudo-Bayes decision rule. Its performance is tested on real data.
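Under strong simplifying assumptions (a single Gaussian per class instead of the finite mixtures of a special type used in the paper), a per-feature J-divergence score can be sketched like this; the names are illustrative only.

```python
import numpy as np

def j_divergence_scores(X, y):
    """Score each feature by the symmetric Kullback J-divergence between
    the two class-conditional marginals, fitted as univariate Gaussians.
    J(p, q) = KL(p||q) + KL(q||p), computed per feature in closed form."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    v0 = X0.var(axis=0) + 1e-12   # small floor avoids division by zero
    v1 = X1.var(axis=0) + 1e-12
    return 0.5 * (m0 - m1) ** 2 * (1 / v0 + 1 / v1) + 0.5 * (v0 / v1 + v1 / v0) - 1

def select_top_k(X, y, k):
    """Keep the k features with the largest divergence, no search needed."""
    return np.argsort(j_divergence_scores(X, y))[::-1][:k]
```

As in the paper, no combinatorial search is involved: every feature is scored independently and the top-scoring subset of any cardinality can be read off directly.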


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Evaluating Stability and Comparing Output of Feature Selectors that Optimize Feature Subset Cardinality

Petr Somol; Jana Novovicová

Stability (robustness) of feature selection methods is a topic of recent interest whose importance is often neglected, despite its direct impact on the reliability of machine learning systems. We investigate the problem of evaluating the stability of feature selection processes that yield subsets of varying size. We introduce several novel feature selection stability measures and adjust some existing measures within a unifying framework that offers broad insight into the stability problem. We study in detail the properties of the considered measures and demonstrate on various examples what information about the feature selection process can be gained. We also introduce an alternative approach to feature selection evaluation in the form of measures that enable comparing the similarity of two feature selection processes, e.g., the output of two feature selection methods, or two runs of one method with different parameters. The information obtained using the considered stability and similarity measures is shown to be usable for assessing feature selection methods (or criteria) as such.
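One simple measure in this spirit, sketched here for illustration only (the paper studies a whole family of measures, and this function is not taken from it), is the average pairwise Tanimoto (Jaccard) similarity between the subsets produced by repeated runs; it naturally handles subsets of varying size.

```python
def stability(subsets):
    """Average pairwise Tanimoto (Jaccard) similarity over the feature
    subsets produced by repeated selection runs: 1.0 means every run
    picked the same subset, 0.0 means the subsets are pairwise disjoint."""
    sets = [set(s) for s in subsets]
    total, pairs = 0.0, 0
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            union = sets[i] | sets[j]
            total += len(sets[i] & sets[j]) / len(union) if union else 1.0
            pairs += 1
    return total / pairs if pairs else 1.0
```

The same function also serves as a similarity measure between two selection processes: feed it the two output subsets directly.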


Pattern Recognition | 1995

Feature selection based on the approximation of class densities by finite mixtures of special type

Pavel Pudil; Jana Novovicová; N. Choakjarernwanit; Josef Kittler

A new method of feature selection based on the approximation of class-conditional densities by a mixture of parameterized densities of a special type, suitable especially for multimodal data, is presented. No search procedure is needed when using the proposed method. Its performance is tested on both real and simulated data.


international conference on pattern recognition | 1992

Multistage pattern recognition with reject option

Pavel Pudil; Jana Novovicová; Svatopluk Bláha; Josef Kittler

The idea of constructing a multistage pattern classification system with a reject option is presented. Conditions, stated as upper bounds on the cost of higher-stage measurements, under which a multistage classifier yields lower decision risk than a single-stage classifier are derived.
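The reject-option cascade can be sketched as follows; `stages` and `threshold` are hypothetical names, and the paper's cost-based conditions for when such a cascade pays off are not modeled here.

```python
def multistage_classify(x, stages, threshold=0.9):
    """Multistage classification with a reject option: each stage returns
    class posteriors from its own (increasingly costly) measurements; a
    sample is passed to the next stage only when the current stage is not
    confident enough. `stages` is a list of callables x -> {class: posterior}.
    Returns (label, index of the deciding stage)."""
    for k, stage in enumerate(stages):
        posteriors = stage(x)
        label, p = max(posteriors.items(), key=lambda kv: kv[1])
        if p >= threshold or k == len(stages) - 1:
            return label, k  # decide here; the last stage must decide
```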


iberoamerican congress on pattern recognition | 2007

Conditional mutual information based feature selection for classification task

Jana Novovicová; Petr Somol; Michal Haindl; Pavel Pudil

We propose a sequential forward feature selection method to find a subset of features that are most relevant to the classification task. Our approach uses a novel estimate of the conditional mutual information between a candidate feature and the classes, given the subset of already selected features, as a classifier-independent criterion for evaluating feature subsets. The proposed mMIFS-U algorithm is applied to a text classification problem and compared with the MIFS method of Battiti and the MIFS-U method of Kwak and Choi. Our feature selection algorithm outperforms both MIFS and MIFS-U in experiments on high-dimensional Reuters textual data.
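A greedy MIFS-style loop, using Battiti's simpler relevance-minus-redundancy criterion rather than the refined conditional-mutual-information estimate of this paper, can be sketched like this for discrete features; all names are illustrative.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """I(X;Y) in nats for two discrete sequences, from empirical counts."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def mifs_select(features, labels, k, beta=0.5):
    """Greedy forward selection: at each step pick the feature maximizing
    relevance to the class minus beta * accumulated redundancy with the
    already selected features. `features` maps name -> discrete column."""
    remaining = dict(features)
    selected = []
    while remaining and len(selected) < k:
        def score(name):
            rel = mutual_info(remaining[name], labels)
            red = sum(mutual_info(remaining[name], features[s]) for s in selected)
            return rel - beta * red
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected
```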


Pattern Recognition Letters | 2002

Feature selection toolbox software package

Pavel Pudil; Jana Novovicová; Petr Somol

Recent advances in the statistical methodology for selecting optimal subsets of features for data representation and classification are presented. The paper attempts to provide a guideline on which approach to choose with respect to the extent of a priori knowledge of the problem. Two basic approaches are reviewed and the conditions under which they should be used are specified. One approach involves the use of the computationally effective floating search methods. The alternative approach trades the requirement for a priori information for the requirement of sufficient data to represent the distributions involved; owing to its nature, it is particularly suitable for cases when the underlying probability distributions are not unimodal. This approach attempts to achieve simultaneous feature selection and decision rule inference. According to the criterion adopted, there are two variants allowing the selection of features either for optimal representation or for discrimination. A consulting system aimed at guiding a user to choose a proper method for the problem at hand is being prepared.


Lecture Notes in Computer Science | 2004

Feature Selection Using Improved Mutual Information for Text Classification

Jana Novovicová; Antonín Malík; Pavel Pudil

A major characteristic of the text document classification problem is the extremely high dimensionality of text data. In this paper we present two algorithms for feature (word) selection for the purpose of text classification. We use sequential forward selection methods based on the improved mutual information measures introduced by Battiti [1] and by Kwak and Choi [6] for non-textual data. These feature evaluation functions take into consideration how features work together. Their performance is compared to that of information gain, which evaluates features individually. We present experimental results using a naive Bayes classifier based on the multinomial model on the Reuters data set, and analyze the results from various perspectives, including F1-measure, precision, and recall. Preliminary experimental results indicate the effectiveness of the proposed feature selection algorithms in a text classification problem.
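The individual-word baseline that the paper compares against, information gain computed from a word's presence or absence in each document, can be sketched as follows; the names are illustrative only.

```python
import math
from collections import Counter

def information_gain(docs, labels):
    """Information gain of each word for class prediction, treating a
    word's presence/absence as a binary feature evaluated in isolation.
    docs: list of token lists; labels: parallel list of class labels."""
    n = len(docs)

    def entropy(lbls):
        return -sum(c / len(lbls) * math.log(c / len(lbls))
                    for c in Counter(lbls).values()) if lbls else 0.0

    h_c = entropy(labels)
    gain = {}
    for w in set(w for d in docs for w in d):
        with_w = [l for d, l in zip(docs, labels) if w in d]
        without = [l for d, l in zip(docs, labels) if w not in d]
        gain[w] = h_c - (len(with_w) / n * entropy(with_w)
                         + len(without) / n * entropy(without))
    return gain
```

A word that perfectly splits the classes scores the full class entropy H(C); the improved mutual information criteria in the paper go beyond this by penalizing words that are redundant with already selected ones.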


Archive | 2010

Efficient Feature Subset Selection and Subset Size Optimization

Petr Somol; Jana Novovicová; Pavel Pudil

A broad class of decision-making problems can be solved by a learning approach. This can be a feasible alternative when neither an analytical solution exists nor a mathematical model can be constructed. In these cases the required knowledge can be gained from past data, which form the so-called learning or training set. The formal apparatus of statistical pattern recognition can then be used to learn the decision-making. The first and essential step of statistical pattern recognition is to solve the problem of feature selection (FS) or, more generally, dimensionality reduction (DR). The problem of feature selection in statistical pattern recognition is the primary focus of this chapter. The problem fits in the wider context of dimensionality reduction (Section 2), which can be accomplished either by a linear or nonlinear mapping from the measurement space to a lower-dimensional feature space, or by measurement subset selection. This chapter will focus on the latter (Section 3). The main aspects of the problem as well as the choice of the right feature selection tools will be discussed (Sections 3.1 to 3.3). Several optimization techniques will be reviewed, with emphasis on the framework of sequential selection methods (Section 4). Related topics of recent interest will also be addressed, including the problem of subset size determination (Section 4.7), search acceleration through hybrid algorithms (Section 5), and the problem of feature selection stability and feature over-selection (Section 6).

Collaboration


An overview of Jana Novovicová's collaborations and top co-authors.

Top Co-Authors

Pavel Pudil
Academy of Sciences of the Czech Republic

Petr Somol
Academy of Sciences of the Czech Republic

Antonín Malík
Academy of Sciences of the Czech Republic

Pavel Paclík
Delft University of Technology

Jirí Grim
Academy of Sciences of the Czech Republic

Svatopluk Bláha
Czechoslovak Academy of Sciences

Robert P. W. Duin
Delft University of Technology