Petr Musilek
University of Alberta
Publications
Featured research published by Petr Musilek.
Knowledge Engineering Review | 2006
Lukasz Kurgan; Petr Musilek
Knowledge Discovery and Data Mining is a very dynamic research and development area that is reaching maturity. As such, it requires stable and well-defined foundations, which are well understood and popularized throughout the community. This survey presents a historical overview, description and future directions concerning a standard for a Knowledge Discovery and Data Mining process model. It presents a motivation for use and a comprehensive comparison of several leading process models, and discusses their applications to both academic and industrial problems. The main goal of this review is the consolidation of the research in this area. The survey also proposes to enhance existing models by embedding other current standards to enable automation and interoperability of the entire process.
ACM SIGAPP Applied Computing Review | 2000
Petr Musilek; Witold Pedrycz; Giancarlo Succi; Marek Reformat
Estimation of the effort/cost required for development of software products is inherently associated with uncertainty. In this paper, we are concerned with a fuzzy set-based generalization of the COCOMO model (f-COCOMO). The inputs of the standard COCOMO model include an estimation of project size and an evaluation of other parameters. Rather than using a single number, the software size can be regarded as a fuzzy set (fuzzy number), yielding the cost estimate also in the form of a fuzzy set. The paper includes detailed results in this regard, relating the fuzzy set of project size to the fuzzy set of effort. The analysis is carried out for several commonly encountered classes of membership functions (such as triangular and parabolic fuzzy sets). The issue of designer-friendliness of the f-COCOMO model is discussed in detail. Here we emphasize the propagation of uncertainty and the ensuing visualization of the resulting effort (cost). Furthermore, we augment the model by admitting software systems to belong partially to the three main categories (namely embedded, semidetached and organic), discuss key implications of this generalization, and highlight its links with a generalized sensitivity analysis. The experimental part of the study illustrates the approach and contrasts it with the standard numeric version of the COCOMO model.
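The core idea of propagating a fuzzy size estimate through the effort equation can be sketched as follows. This is a minimal illustration, not the paper's f-COCOMO model: it assumes the basic single-variable COCOMO form effort = a·size^b with illustrative organic-mode coefficients (a = 2.4, b = 1.05), a triangular fuzzy size, and interval arithmetic on alpha-cuts (valid here because the effort equation is monotone in size).

```python
def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (l, m, u) -> interval [lo, hi]."""
    l, m, u = tri
    return (l + alpha * (m - l), u - alpha * (u - m))

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort in person-months (illustrative organic-mode coefficients)."""
    return a * kloc ** b

def fuzzy_effort(size_tri, alphas=(0.0, 0.5, 1.0)):
    """Propagate a triangular fuzzy size through the (monotone increasing)
    effort equation by mapping the endpoints of each alpha-cut."""
    cuts = {}
    for alpha in alphas:
        lo, hi = alpha_cut(size_tri, alpha)
        cuts[alpha] = (cocomo_effort(lo), cocomo_effort(hi))
    return cuts

# Size estimated as "about 32 KLOC, somewhere between 25 and 40":
cuts = fuzzy_effort((25.0, 32.0, 40.0))
for a, (lo, hi) in sorted(cuts.items()):
    print(f"alpha={a:.1f}: effort in [{lo:.1f}, {hi:.1f}] person-months")
```

At alpha = 1 the interval collapses to the crisp estimate for the most plausible size; lower alpha levels widen the effort interval, which is the visualization of propagated uncertainty the abstract refers to.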
IEEE International Software Metrics Symposium | 2002
Petr Musilek; Witold Pedrycz; Nan Sun; Giancarlo Succi
Software cost estimation techniques predict the amount of effort required to develop a software system. Cost estimates are needed throughout the software lifecycle to determine the feasibility of software projects and to provide for appropriate allocation or reallocation of available resources. To assess the effect of imprecise evaluations, a comprehensive sensitivity analysis was performed on a major cost estimation model, COCOMO II. Results of this analysis are described and explicated in this paper. To reduce the risk of drawing biased conclusions, three different methods for sensitivity analysis were employed: mathematical analysis of the estimating equation, Monte Carlo simulation, and error propagation. The results of the first two methods are very consistent and confirm the expected result that the model is most sensitive to imprecision in the size estimate. Error propagation allows determination of the combined impact of imprecision in multiple inputs and is therefore the most valuable from a practical point of view. The results obtained by this technique also indicate very strong sensitivity to imprecision in size estimates. A possible way to cope with imprecise information in software cost estimation is also indicated.
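A Monte Carlo sensitivity check of this kind can be sketched in a few lines. This is not the paper's experiment: the effort equation, coefficients (a = 2.94, b = 1.10), the 100 KLOC project size, and the ±10% noise range are all illustrative. It compares the relative spread of effort when noise is injected into the size versus into a multiplicative cost driver; because effort grows as size^1.10, size noise is amplified slightly more.

```python
import random
import statistics

def effort(size_kloc, multiplier=1.0, a=2.94, b=1.10):
    """COCOMO II-style nominal effort equation (coefficients illustrative)."""
    return a * multiplier * size_kloc ** b

def rel_spread(perturb, n=20000, seed=1):
    """Coefficient of variation of effort when one input carries +/-10% uniform noise."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        noise = 1.0 + rng.uniform(-0.10, 0.10)
        if perturb == "size":
            out.append(effort(100.0 * noise))
        else:  # perturb a multiplicative effort driver instead
            out.append(effort(100.0, multiplier=noise))
    return statistics.pstdev(out) / statistics.mean(out)

print("size noise ->", round(rel_spread("size"), 4))
print("mult noise ->", round(rel_spread("mult"), 4))
```

The size-perturbed spread comes out about 10% larger than the multiplier-perturbed spread, mirroring the exponent b = 1.10, consistent with the abstract's finding that size imprecision dominates.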
IEEE International Conference on Pervasive Computing and Communications | 2006
Farooq Ahmad; Petr Musilek
The widespread adoption of mobile electronic devices and the advent of wearable computing have encouraged the development of compact alternatives to the keyboard and mouse. These include one-handed keyboards, digitizing tablets, and glove-based devices. This paper describes a combined pointer position and non-chorded keystroke input device that relies on miniature wrist-worn wireless video cameras that track finger position. A hidden Markov model is used to correlate finger movements to keystrokes during a brief training phase, after which the user can type in the air or above a flat surface as if typing on a standard keyboard. Language statistics are used to help disambiguate keystrokes, allowing the assignment of multiple unique keys to each finger and obviating chorded input. In addition, the system can be trained to recognize certain finger positions for switching between input modes; for example, from typing mode to pointer movement mode. In the latter mode of operation, the position of the mouse pointer is controlled by hand movement. The camera motion is estimated by tracking environmental features and is used to control pointer position. This allows fast switching between keystroke mode and pointer control mode.
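The disambiguation step, where language statistics pick the most likely letter sequence from ambiguous finger observations, is classic Viterbi decoding over a hidden Markov model. The sketch below is a toy stand-in for the paper's system: the two-finger layout, the four-letter alphabet, and the bigram probabilities are all invented, and the emission model is simplified to a hard finger-to-key assignment.

```python
import math

# Toy layout: two fingers, each assigned two keys (the real system assigns
# multiple keys per finger; this layout and the bigrams below are invented).
FINGER_OF = {"a": 0, "c": 0, "h": 1, "t": 1}
LETTERS = sorted(FINGER_OF)

START = {l: 0.25 for l in LETTERS}   # uniform start probabilities
BIGRAM = {                           # toy P(next letter | previous letter)
    "a": {"a": 0.05, "c": 0.15, "h": 0.05, "t": 0.75},
    "c": {"a": 0.20, "c": 0.05, "h": 0.70, "t": 0.05},
    "h": {"a": 0.70, "c": 0.05, "h": 0.05, "t": 0.20},
    "t": {"a": 0.30, "c": 0.30, "h": 0.30, "t": 0.10},
}

def emit(letter, finger):
    """P(observed finger | letter): 1 if the letter sits on that finger."""
    return 1.0 if FINGER_OF[letter] == finger else 0.0

def viterbi(fingers):
    """Most likely letter sequence given a sequence of finger observations."""
    # trellis[letter] = (log-prob of best path ending in letter, that path)
    trellis = {l: (math.log(START[l] * emit(l, fingers[0]) + 1e-300), [l])
               for l in LETTERS}
    for f in fingers[1:]:
        nxt = {}
        for l in LETTERS:
            nxt[l] = max(
                (trellis[p][0] + math.log(BIGRAM[p][l] * emit(l, f) + 1e-300),
                 trellis[p][1] + [l])
                for p in LETTERS)
        trellis = nxt
    return "".join(max(trellis.values())[1])

print(viterbi([0, 1, 0, 1]))  # the finger sequence for c-h-a-t
```

Even though each observation is ambiguous between two letters, the bigram statistics resolve the sequence 0-1-0-1 to "chat" rather than, say, "atch".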
IEEE Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '04) | 2004
Yifan Li; Petr Musilek; Loren Wyard-Scott
The underlying artificial intelligence of computer games is constantly in need of improvement to meet the ever-increasing demands of game players. This paper discusses how intelligent agents and fuzzy logic can help increase the quality and amount of a computer game's most important element: interaction. The applications of fuzzy logic in behavior design are illustrated in detail through the implementation of an arcade-style game.
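A minimal sketch of fuzzy behavior design for a game agent follows. It is not taken from the paper: the rule base, the membership-function ranges, and the Sugeno-style weighted-average defuzzification are illustrative choices showing how smooth, human-readable rules replace brittle if/else thresholds.

```python
def falling(x, a, b):
    """Left-shoulder membership: 1 below a, 0 above b, linear in between."""
    return max(0.0, min(1.0, (b - x) / (b - a)))

def rising(x, a, b):
    """Right-shoulder membership: 0 below a, 1 above b, linear in between."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def aggression(health, distance):
    """Sugeno-style fuzzy rule base for an NPC (all ranges invented):
       IF health high AND enemy near -> attack (aggression 1.0)
       IF health low                 -> flee   (aggression 0.0)
       IF enemy far                  -> wander (aggression 0.5)
    Output is the firing-strength-weighted average of the rule outputs."""
    rules = [
        (min(rising(health, 40, 80), falling(distance, 10, 40)), 1.0),
        (falling(health, 20, 50), 0.0),
        (rising(distance, 30, 60), 0.5),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.5

print(aggression(90, 5))    # healthy and close: fully aggressive
print(aggression(10, 50))   # wounded and far: mostly flee, some wander
```

Because the memberships overlap, aggression varies continuously as health and distance change, which is what makes the resulting behavior feel less mechanical than crisp thresholds.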
Engineering Applications of Artificial Intelligence | 2012
Ashkan Zarnani; Petr Musilek; Xiaoyu Shi; Xiaodi Ke; Hua He; Russell Greiner
Ice accretion on power transmission and distribution lines is one of the major causes of power grid outages in northern regions. While such icing events are rare, they are very costly. Thus, it would be useful to predict how much ice will accumulate. Many current ice accretion forecasting systems use precipitation-type prediction and physical ice accretion models. These systems are based on expert knowledge and experimentations. An alternative strategy is to learn the patterns of ice accretion based on observations of previous events. This paper presents two different forecasting systems that are obtained by applying the learning algorithm of Support Vector Machines to the outputs of a Numerical Weather Prediction model. The first forecasting system relies on an icing model, just as the previous algorithms do. The second system learns an effective forecasting model directly from meteorological features. We use a rich data set of eight different icing events (from 2002 to 2008) to empirically compare the performance of the various ice accretion forecasting systems. Several experiments are conducted to investigate the effectiveness of the forecasting algorithms. Results indicate that the proposed forecasting system is significantly more accurate than other state-of-the-art algorithms.
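The idea of learning an icing classifier directly from meteorological features can be sketched with a bare-bones linear SVM trained by Pegasos-style subgradient descent. This is a stand-in, not the paper's system: the features (air temperature, precipitation rate), the training examples, and all hyperparameters are invented, and the real work used richer NWP outputs and kernel SVMs for regression of accretion amounts.

```python
import random

def train_svm(X, y, lam=0.1, epochs=300, seed=0):
    """Pegasos-style subgradient descent for a linear soft-margin SVM.
    X: feature vectors whose last component is fixed at 1.0 (acts as bias);
    y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1.0:                            # hinge-loss subgradient
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Invented NWP-style training data: (air temp [C], precip rate [mm/h], bias);
# +1 = icing observed during the event, -1 = no icing.
train = [([-3.0, 2.0, 1.0], 1), ([-1.0, 4.0, 1.0], 1),
         ([-5.0, 1.0, 1.0], 1), ([0.0, 3.0, 1.0], 1),
         ([6.0, 2.0, 1.0], -1), ([4.0, 0.0, 1.0], -1),
         ([9.0, 5.0, 1.0], -1), ([3.0, 0.0, 1.0], -1)]
w = train_svm([x for x, _ in train], [c for _, c in train])
print(predict(w, [-10.0, 4.0, 1.0]), predict(w, [15.0, 0.0, 1.0]))
```

The learned weights end up penalizing high temperatures, so clearly freezing, wet conditions are flagged as icing while warm, dry conditions are not.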
Canadian Conference on Electrical and Computer Engineering | 2010
Demian Pimentel; Petr Musilek
Traditional power management for battery-powered devices does not meet the requirements of energy harvesting systems. Transducers that extract energy from the environment differ significantly from batteries in that their power output is limited. In addition, the energy source can be of variable nature, while energy availability can be virtually infinite. Electronic systems that rely on energy harvesting sources have to be designed in a harvesting-aware manner, from both the hardware and software perspectives. Proper Harvesting Aware Power Management (HAPM) should allow a harvesting system to operate indefinitely and within the expected operational utility. An energy-neutral mode of operation can guarantee that the system will operate forever, but not that the achieved utility meets the desired level. This paper presents an introduction to HAPM, including topologies of Energy Harvesting Systems, the Energy Neutrality Principle, and power management techniques.
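The energy neutrality principle reduces to a simple constraint: average consumption must not exceed average harvest, subject to the battery never depleting. The sketch below illustrates this with invented numbers (a 50 mW active mode, 0.5 mW sleep mode, 5 mW average harvest); it is a simplified model, ignoring conversion losses and leakage.

```python
def energy_neutral_duty_cycle(p_harvest_avg, p_active, p_sleep):
    """Largest duty cycle D satisfying D*p_active + (1-D)*p_sleep <= p_harvest_avg,
    i.e. long-run consumption no greater than long-run harvest."""
    if p_harvest_avg <= p_sleep:
        return 0.0
    return min(1.0, (p_harvest_avg - p_sleep) / (p_active - p_sleep))

def feasible(harvest, duty, p_active, p_sleep, capacity, soc0):
    """Check that the energy buffer never depletes over a harvest trace
    (per-slot energy units; surplus above capacity is discarded)."""
    soc = soc0
    for h in harvest:
        soc += h - (duty * p_active + (1 - duty) * p_sleep)
        soc = min(soc, capacity)
        if soc < 0:
            return False
    return True

# Invented figures: 5 mW average harvest, 50 mW active, 0.5 mW sleep.
d = energy_neutral_duty_cycle(5.0, 50.0, 0.5)
print(f"energy-neutral duty cycle: {d:.2%}")
print("feasible over a flat trace:", feasible([5.0] * 24, d, 50.0, 0.5, 100.0, 50.0))
```

The duty cycle bounds the achievable utility: the system runs forever at roughly 9% activity here, which may or may not meet the application's desired utility, exactly the gap the abstract points out.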
Electrical Power and Energy Conference | 2009
Pawel Pytlak; Petr Musilek; Edward P. Lozowski
This paper presents a precipitation-based conductor cooling model for use in power line ampacity rating applications. It aims to better model a conductor's temperature by incorporating line cooling resulting from precipitation falling on power lines. The improved calculations provide gains in additional line capacity for power transmission networks incorporating advanced Dynamic Thermal Rating systems. Depending on the precipitation rate and other atmospheric variables, the initial work presented in this paper suggests that line cooling gains ranging from 1°C to over 20°C may be obtained. The precipitation-based cooling model shows that the highest gains are observed for the largest line loads, thus providing cooling where it is needed most.
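The sensible-heat part of precipitation cooling, heating intercepted rain from air temperature up to conductor temperature, can be sketched as an extra term in the conductor heat balance. This is a simplified illustration, not the paper's model: it assumes all rain falling on the conductor's projected area is heated to conductor temperature, and it omits evaporative cooling, which in practice can be the larger contribution.

```python
def precip_cooling(precip_rate_mm_h, t_cond, t_air, diameter=0.03):
    """Simplified sensible-heat loss to impinging rain, in W per metre of
    conductor. diameter is the conductor diameter in metres (value invented)."""
    c_w = 4186.0                                   # J/(kg*K), water
    rate_m_s = precip_rate_mm_h / 3.6e6            # mm/h -> m/s of rainfall
    flux = rate_m_s * 1000.0                       # kg/(m^2*s), water density
    m_dot = flux * diameter                        # kg/s intercepted per metre
    return m_dot * c_w * (t_cond - t_air)          # W/m

# Illustrative conditions: 10 mm/h rain, 60 C conductor, 25 C air.
print(round(precip_cooling(10.0, 60.0, 25.0), 2), "W/m")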
IEEE Power and Energy Society General Meeting | 2011
Jana Heckenbergerova; Petr Musilek; Konstantin Filimonenkov
Deterministic static thermal ratings of overhead transmission lines are usually conservative, causing underutilization of their potential capacity. Efforts to overcome this limitation have led to the development of alternative rating strategies based on probabilistic and dynamic methods. One such strategy is the seasonal static thermal rating. It uses a probabilistic rating approach with explicit treatment of seasonal effects on conductor temperature. In this paper, we present several variants of seasonal ratings and analyze their performance with respect to other rating approaches. Seasonal ratings use a set of predetermined probabilistic ratings that are engaged according to the season of the year or the time of day. By alternating among several ratings, transmission lines can be operated closer to their actual ampacity. In addition, seasonal ratings can reduce the risk of thermal overload compared to the uniform probabilistic rating, which remains constant at all times. Despite the risk reduction, and counter to common belief, they still pose a significant risk of conductor thermal overload. Characteristics of several seasonal rating strategies are illustrated using a case study involving a power transmission line in Newfoundland, Canada. Simulation results show that seasonal ratings can provide a modest increase in transmission line throughput compared to the basic probabilistic rating. However, they also confirm the high levels of residual risk.
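The core mechanics, choosing a rating as a low percentile of a weather-driven ampacity distribution, per season rather than uniformly, can be sketched as follows. The ampacity samples, the seasons, and the 95% exceedance target are all invented for illustration; the real study derives ampacities from a conductor thermal model and observed weather.

```python
def probabilistic_rating(ampacities, exceedance=0.95):
    """Static rating chosen so the true ampacity exceeds it ~95% of the
    time: the 5th percentile of the sampled ampacity distribution."""
    s = sorted(ampacities)
    k = int((1.0 - exceedance) * (len(s) - 1))
    return s[k]

# Hypothetical hourly ampacity samples (A) from a weather-driven simulation:
# colder winter air gives systematically higher ampacity than summer.
winter = [900 + 10 * (i % 30) for i in range(300)]
summer = [700 + 10 * (i % 30) for i in range(300)]

seasonal = {"winter": probabilistic_rating(winter),
            "summer": probabilistic_rating(summer)}
uniform = probabilistic_rating(winter + summer)
print("seasonal:", seasonal, " uniform:", uniform)
```

The uniform rating is pinned down by the worst (summer) conditions, so the winter seasonal rating permits substantially more current, while each season still carries the same residual 5% exceedance risk, the point the abstract stresses.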
IEEE International Conference on Tools with Artificial Intelligence | 2003
Bin Shen; Xiaoyuan Su; Russell Greiner; Petr Musilek; Corrine Cheng
Greiner and Zhou (2002) presented ELR, a discriminative parameter-learning algorithm that maximizes conditional likelihood (CL) for a fixed Bayesian belief network (BN) structure, and demonstrated that it often produces classifiers that are more accurate than the ones produced using the generative approach (OFE), which finds maximal likelihood parameters. This is especially true when learning parameters for incorrect structures, such as naive Bayes (NB). In searching for algorithms to learn better BN classifiers, this paper uses ELR to learn parameters of more nearly correct BN structures - e.g., of a general Bayesian network (GBN) learned from a structure-learning algorithm by Greiner and Zhou (2002). While OFE typically produces more accurate classifiers with GBN (vs. NB), we show that ELR does not, when the training data is not sufficient for the GBN structure learner to produce a good model. Our empirical studies also suggest that the better the BN structure is, the smaller the advantage ELR has over OFE for classification purposes. ELR learning on NB (i.e., with little structural knowledge) still performs about the same as OFE on GBN in classification accuracy, over a large number of standard benchmark datasets.
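The generative baseline, OFE, is simply maximum-likelihood parameter estimation by counting. The sketch below shows it for a naive Bayes structure on an invented toy dataset (with Laplace smoothing so unseen values get nonzero probability); ELR would instead start from such parameters and adjust them by gradient ascent on conditional likelihood, which is not shown here.

```python
from collections import Counter, defaultdict

def ofe_nb(data, alpha=1.0):
    """Observed Frequency Estimate (Laplace-smoothed counting) for a naive
    Bayes structure. data: list of (attribute tuple, class label) pairs.
    Returns the class prior, a conditional P(attr_i = v | class) function,
    and a classifier using argmax of prior * product of conditionals."""
    n = len(data)
    class_n = Counter(c for _, c in data)
    cond = defaultdict(Counter)                  # (attr index, class) -> counts
    domains = [set() for _ in data[0][0]]
    for x, c in data:
        for i, v in enumerate(x):
            cond[(i, c)][v] += 1
            domains[i].add(v)
    prior = {c: k / n for c, k in class_n.items()}

    def p(i, v, c):
        return (cond[(i, c)][v] + alpha) / (class_n[c] + alpha * len(domains[i]))

    def classify(x):
        def score(c):
            s = prior[c]
            for i, v in enumerate(x):
                s *= p(i, v, c)
            return s
        return max(prior, key=score)

    return prior, p, classify

# Invented two-attribute, two-class toy data.
data = [((1, 0), "y"), ((1, 1), "y"), ((1, 0), "y"),
        ((0, 0), "n"), ((0, 1), "n"), ((0, 0), "n")]
prior, p, classify = ofe_nb(data)
print(prior["y"], p(0, 1, "y"), classify((1, 1)))
```

OFE needs only one pass over the data, which is why it is so cheap compared to ELR's iterative optimization; the paper's question is when that extra optimization effort actually pays off.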