
Publication


Featured research published by Ron S. Kenett.


Journal of Applied Statistics | 2009

Bayesian networks of customer satisfaction survey data

Silvia Salini; Ron S. Kenett

A Bayesian network (BN) is a probabilistic graphical model that represents a set of variables and their probabilistic dependencies. Formally, BNs are directed acyclic graphs whose nodes represent variables and whose arcs encode the conditional dependencies among the variables. Nodes can represent any kind of variable, be it a measured parameter, a latent variable, or a hypothesis; they are not restricted to representing random variables, which form the “Bayesian” aspect of a BN. Efficient algorithms exist that perform inference and learning in BNs. BNs that model sequences of variables are called dynamic BNs. In this context, [A. Harel, R. Kenett, and F. Ruggeri, Modeling web usability diagnostics on the basis of usage statistics, in Statistical Methods in eCommerce Research, W. Jank and G. Shmueli, eds., Wiley, 2008] provide a comparison between Markov chains and BNs in the analysis of web usability from e-commerce data. A comparison of regression models, structural equation models, and BNs is presented in Anderson et al. [R.D. Anderson, R.D. Mackoy, V.B. Thompson, and G. Harrell, A Bayesian network estimation of the service–profit chain for transport service satisfaction, Decision Sciences 35(4), (2004), pp. 665–689]. In this article we apply BNs to the analysis of customer satisfaction surveys and demonstrate the potential of the approach. In particular, BNs offer advantages in implementing models of cause and effect over other statistical techniques designed primarily for testing hypotheses. Other advantages include the ability to conduct probabilistic inference for prediction and diagnostic purposes, with output that can be intuitively understood by managers.
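
To make the structure concrete, here is a minimal sketch of a discrete BN for customer-satisfaction data, written with the open-source pgmpy library. The variables, states, and probabilities are illustrative assumptions, not the model estimated in the paper.

```python
# Minimal Bayesian network sketch for hypothetical customer-satisfaction
# variables (illustrative only; not the model from the paper).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# DAG: perceived Service quality and Price fairness both drive Overall satisfaction.
model = BayesianNetwork([("Service", "Overall"), ("Price", "Overall")])

cpd_service = TabularCPD("Service", 2, [[0.3], [0.7]])  # P(low)=0.3, P(high)=0.7
cpd_price = TabularCPD("Price", 2, [[0.4], [0.6]])
# P(Overall | Service, Price); columns enumerate the parent state combinations.
cpd_overall = TabularCPD(
    "Overall", 2,
    [[0.9, 0.6, 0.5, 0.1],   # P(Overall = low  | Service, Price)
     [0.1, 0.4, 0.5, 0.9]],  # P(Overall = high | Service, Price)
    evidence=["Service", "Price"], evidence_card=[2, 2],
)
model.add_cpds(cpd_service, cpd_price, cpd_overall)
assert model.check_model()

# Diagnostic inference: given a dissatisfied customer, which driver is likely low?
infer = VariableElimination(model)
print(infer.query(["Service"], evidence={"Overall": 0}))
```

Queries of this kind, i.e. the distribution of a satisfaction driver given an observed outcome, are the diagnostic inferences the abstract refers to.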


Quality Technology and Quantitative Management | 2006

Achieving Robust Design from Computer Simulations

R. A. Bates; Ron S. Kenett; David M. Steinberg; Henry P. Wynn

Computer simulations are widely used during product development. In particular, computer experiments are often conducted in order to optimize both product and process performance while respecting imposed constraints. Several methods for achieving robust design in this context are described and compared with the aid of a simple example problem. The methods presented span classical as well as modern approaches and introduce the idea of a ‘stochastic response’ to aid the search for robust solutions. Emphasis is placed on the efficiency of each method with respect to computational cost and on the ability to formulate objectives that encapsulate the notion of robustness.
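
As a rough illustration of the ‘stochastic response’ idea, the sketch below optimizes a toy simulator over a design variable while penalizing sensitivity to a noise factor. The simulator, the mean-plus-two-standard-deviations objective, and all constants are assumptions for exposition, not the paper’s example problem.

```python
# Robust-design sketch over a noisy computer simulation (illustrative toy
# simulator and objective; not the example from the paper).
import numpy as np
from scipy.optimize import minimize

def simulator(x, noise):
    """Stand-in for an expensive simulation: the response depends on the
    design variable x and an uncontrollable noise factor."""
    return (x - 2.0) ** 2 + 0.5 * noise * x

# Common random numbers: fix one noise sample so the objective is deterministic.
noise = np.random.default_rng(0).standard_normal(200)

def stochastic_response(x):
    """Summarize the response distribution over the noise sample: a design
    is robust if it has both a good mean and low variability."""
    y = simulator(x[0], noise)
    return y.mean() + 2.0 * y.std()

result = minimize(stochastic_response, x0=[0.0], method="Nelder-Mead")
print("robust design variable:", result.x[0])
```

Fixing the noise sample (common random numbers) keeps the search well behaved; re-sampling noise at every evaluation would make the objective itself random.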


European Journal of Operational Research | 1995

The impact of defects on a process with rework

Saligrama R. Agnihothri; Ron S. Kenett

In this paper, we attempt to quantify the impact of defects on various system performance measures for a production process with 100% inspection followed by rework. In the literature, the number of defects (feedback loops) is assumed to be a random variable having a geometric distribution. We model the number of defects as a random variable having any general discrete distribution, and investigate the impact of the defect distribution on system performance measures such as yield, production lead time, and work-in-process inventory. We provide management guidelines for short-term control decisions, such as identifying potential bottlenecks under increased workloads and allocating additional resources to relieve them. To meet the long-term goal of continuously decreasing defect levels, we propose a budget allocation method for process improvement projects.
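
A quick way to see how the choice of defect distribution drives these measures is to simulate. The sketch below uses an arbitrary discrete rework distribution and a Little’s-law estimate; all rates and times are illustrative assumptions (the paper treats such measures analytically).

```python
# Monte Carlo sketch of a station with 100% inspection and rework
# (illustrative numbers; the paper derives such measures analytically).
import numpy as np

rng = np.random.default_rng(1)
t_process = 1.0        # processing time per pass
arrival_rate = 0.5     # job arrivals per unit time

# Number of rework passes per job: any discrete distribution, not only geometric.
support = np.array([0, 1, 2, 3])
probs = np.array([0.6, 0.25, 0.1, 0.05])
reworks = rng.choice(support, size=100_000, p=probs)

passes = 1 + reworks                 # first pass plus rework passes
lead_time = passes * t_process       # processing time a job accumulates

print(f"mean passes per job: {passes.mean():.3f}")
print(f"mean lead time:      {lead_time.mean():.3f}")
# Little's law gives a rough work-in-process figure (ignoring queueing delays).
print(f"WIP estimate:        {arrival_rate * lead_time.mean():.3f}")
```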


Journal of the American Statistical Association | 1983

On Sequential Detection of a Shift in the Probability of a Rare Event

Ron S. Kenett; Moshe Pollak

Suppose one is monitoring a sequence of observations for a possible increase in the probability of a rare event, and that it is not possible to immediately stop the process under observation or influence it to return to its normal state. One would then desire a scheme that takes advantage of observations occurring after a detection of a change is proclaimed. A modification of Page's CUSUM procedure is developed, taking account of these additional observations. A table is given enabling one to select a modified Page procedure with a specified rate of false alarms. A comparison is made between the modified Page procedure and a procedure proposed by Chen (1978).
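
For reference, the standard (unmodified) Page CUSUM for a rising Bernoulli event probability looks like the sketch below. The probabilities and threshold are illustrative, and the paper’s modification, which keeps using post-alarm observations, is not reproduced here.

```python
# Page's CUSUM for detecting an increase in the probability of a rare event
# (standard procedure; illustrative parameters).
import math
import numpy as np

p0, p1 = 0.01, 0.05        # in-control and out-of-control event probabilities
h = 4.0                    # alarm threshold (governs the false-alarm rate)

# Log-likelihood ratio increments for event / no-event observations.
llr_event = math.log(p1 / p0)
llr_no_event = math.log((1 - p1) / (1 - p0))

rng = np.random.default_rng(2)
obs = np.concatenate([rng.random(500) < p0,    # in-control segment
                      rng.random(500) < p1])   # segment after the shift

s = 0.0
for n, x in enumerate(obs, start=1):
    s = max(0.0, s + (llr_event if x else llr_no_event))  # CUSUM recursion
    if s > h:
        print(f"alarm at observation {n}")
        break
```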


Archive | 2010

Operational risk management: a practical approach to intelligent data analysis

Ron S. Kenett; Yossi Raanan

Foreword. Preface. Introduction. Notes on Contributors. List of Acronyms.

Part I: Introduction to Operational Risk Management
1. Risk management: a general view (Ron S. Kenett, Richard Pike and Yossi Raanan). 1.1 Introduction. 1.2 Definitions of risk. 1.3 Impact of risk. 1.4 Types of risk. 1.5 Enterprise risk management. 1.6 State of the art in enterprise risk management. 1.7 Summary. References.
2. Operational risk management: an overview (Yossi Raanan, Ron S. Kenett and Richard Pike). 2.1 Introduction. 2.2 Definitions of operational risk management. 2.3 Operational risk management techniques. 2.4 Operational risk statistical models. 2.5 Operational risk measurement techniques. 2.6 Summary. References.

Part II: Data for Operational Risk Management and Its Handling
3. Ontology-based modelling and reasoning in operational risks (Christian Leibold, Hans-Ulrich Krieger and Marcus Spies). 3.1 Introduction. 3.2 Generic and axiomatic ontologies. 3.3 Domain-independent ontologies. 3.4 Standard reference ontologies. 3.5 Operational risk management. 3.6 Summary. References.
4. Semantic analysis of textual input (Horacio Saggion, Thierry Declerck and Kalina Bontcheva). 4.1 Introduction. 4.2 Information extraction. 4.3 The general architecture for text engineering. 4.4 Text analysis components. 4.5 Ontology support. 4.6 Ontology-based information extraction. 4.7 Evaluation. 4.8 Summary. References.
5. A case study of ETL for operational risks (Valerio Grossi and Andrea Romei). 5.1 Introduction. 5.2 ETL (Extract, Transform and Load). 5.3 Case study specification. 5.4 The ETL-based solution. 5.5 Summary. References.
6. Risk-based testing of web services (Xiaoying Bai and Ron S. Kenett). 6.1 Introduction. 6.2 Background. 6.3 Problem statement. 6.4 Risk assessment. 6.5 Risk-based adaptive group testing. 6.6 Evaluation. 6.7 Summary. References.

Part III: Operational Risk Analytics
7. Scoring models for operational risks (Paolo Giudici). 7.1 Background. 7.2 Actuarial methods. 7.3 Scorecard models. 7.4 Integrated scorecard models. 7.5 Summary. References.
8. Bayesian merging and calibration for operational risks (Silvia Figini). 8.1 Introduction. 8.2 Methodological proposal. 8.3 Application. 8.4 Summary. References.
9. Measures of association applied to operational risks (Ron S. Kenett and Silvia Salini). 9.1 Introduction. 9.2 The arules R script library. 9.3 Some examples. 9.4 Summary. References.

Part IV: Operational Risk Applications and Integration with Other Disciplines
10. Operational risk management beyond AMA: new ways to quantify non-recorded losses (Giorgio Aprile, Antonio Pippi and Stefano Visinoni). 10.1 Introduction. 10.2 Non-recorded losses in a banking context. 10.3 Methodology. 10.4 Performing the analysis: a case study. 10.5 Summary. References.
11. Combining operational risks in financial risk assessment scores (Michael Munsch, Silvia Rohe and Monika Jungemann-Dorner). 11.1 Interrelations between financial risk management and operational risk management. 11.2 Financial rating systems and scoring systems. 11.3 Data management for rating and scoring. 11.4 Use case: business retail ratings for assessment of probabilities of default. 11.5 Use case: quantitative financial ratings and prediction of fraud. 11.6 Use case: money laundering and identification of the beneficial owner. 11.7 Summary. References.
12. Intelligent regulatory compliance (Marcus Spies, Rolf Gubser and Markus Schacher). 12.1 Introduction to standards and specifications for business governance. 12.2 Specifications for implementing a framework for business governance. 12.3 Operational risk from a BMM/SBVR perspective. 12.4 Intelligent regulatory compliance based on BMM and SBVR. 12.5 Generalization: capturing essential concepts of operational risk in UML and BMM. 12.6 Summary. References.
13. Democratisation of enterprise risk management (Paolo Lombardi, Salvatore Piscuoglio, Ron S. Kenett, Yossi Raanan and Markus Lankinen). 13.1 Democratisation of advanced risk management services. 13.2 Semantic-based technologies and enterprise-wide risk management. 13.3 An enterprise-wide risk management vision. 13.4 Integrated risk self-assessment: a service to attract customers. 13.5 A real-life example in the telecommunications industry. 13.6 Summary. References.
14. Operational risks, quality, accidents and incidents (Ron S. Kenett and Yossi Raanan). 14.1 The convergence of risk and quality management. 14.2 Risks and the Taleb quadrants. 14.3 The quality ladder. 14.4 Risks, accidents and incidents. 14.5 Operational risks in the oil and gas industry. 14.6 Operational risks: data management, modelling and decision making. 14.7 Summary. References.

Index.


Journal of Applied Statistics | 1996

Data-analytic aspects of the Shiryayev-Roberts control chart: Surveillance of a non-homogeneous Poisson process

Ron S. Kenett; Moshe Pollak

The Shiryayev-Roberts control chart has been proposed, on theoretical grounds, as a powerful competitor of the Shewhart control chart and the CUSUM procedure. We demonstrate here the application of a Shiryayev-Roberts control chart to a non-homogeneous Poisson process. We show that, from a data-analytic point of view, the Shiryayev-Roberts surveillance scheme has several advantages over classical CUSUM charts. A case study of power failure times in a computer centre is used to illustrate our main points.
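
The scheme rests on a simple recursion: the Shiryayev-Roberts statistic multiplies the accumulated evidence by each new likelihood ratio. The sketch below applies it to inter-event times with a rate shift; the rates, threshold, and homogeneous-Poisson simplification are illustrative assumptions (the paper handles the non-homogeneous case).

```python
# Shiryayev-Roberts surveillance of inter-event times for a rate shift
# (illustrative homogeneous-Poisson simplification).
import numpy as np

lam0, lam1 = 1.0, 2.0      # pre- and post-change event rates
A = 100.0                  # alarm threshold

rng = np.random.default_rng(3)
gaps = np.concatenate([rng.exponential(1 / lam0, 50),   # in-control gaps
                       rng.exponential(1 / lam1, 50)])  # after the rate shift

r = 0.0
for n, t in enumerate(gaps, start=1):
    # Likelihood ratio of one exponential inter-event time under lam1 vs lam0.
    lr = (lam1 / lam0) * np.exp(-(lam1 - lam0) * t)
    r = (1.0 + r) * lr     # Shiryayev-Roberts recursion
    if r > A:
        print(f"alarm after event {n}")
        break
```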


Archive | 2010

Process Improvement and CMMI for Systems and Software

Ron S. Kenett; Emanuel Baker

Presenting the state of the art in strategic planning and process improvement, Process Improvement and CMMI for Systems and Software provides a workable approach for achieving cost-effective process improvements for systems and software. Focusing on planning, implementation, and management in system and software processes, it supplies a brief overview of basic strategic planning models and covers fundamental concepts and approaches for system and software measurement, testing, and improvements. The book represents the significant cumulative experience of the authors who were among the first to introduce quality management to the software development processes. It introduces CMMI and various other software and systems process models. It also provides readers with an easy-to-follow methodology for evaluating the status of development and maintenance processes and for determining the return on investment for process improvements. The authors examine beta testing and various testing and usability programs. They highlight examples of useful metrics for monitoring process improvement projects and explain how to establish baselines against which to measure achieved improvements. Divided into four parts, this practical resource covers: Strategy and basics of quality and process improvement Assessment and measurement in systems and software Improvements and testing of systems and software Managing and reporting data The text concludes with a realistic case study that illustrates how the process improvement effort is structured and brings together the methods, tools, and techniques discussed. Spelling out how to lay out a reasoned plan for process improvement, this book supplies readers with concrete action plans for setting up process improvement initiatives that are effective, efficient, and sustainable.


IEEE Transactions on Reliability | 1986

A Semi-Parametric Approach to Testing for Reliability Growth, with Application to Software Systems

Ron S. Kenett; Moshe Pollak

We consider the following general model for reliability growth: the distribution of times between failures belongs to a known parametric family (not necessarily exponential), and the parameter corresponding to the distribution of a particular time between failures is either an unknown constant or an unobservable random variable with a (possibly unknown) distribution which can depend on past observations. We propose that acceptable reliability can sometimes be formalized as a state in which the value of the parameter is below a level set before testing begins. We apply sequential detection methodology to the problem of ascertaining that an acceptable state of reliability has been attained, and illustrate our approach by applying it to testing for reliability growth of a software system, using actual data.
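
In symbols, the setup can be written as follows; the notation is an assumption for exposition, not the paper's own:

```latex
% Times between failures X_1, X_2, \dots from a known parametric family
% \{F_\theta\}; the parameter of the i-th gap may itself be random and may
% depend on the history \mathcal{F}_{i-1} = \sigma(X_1, \dots, X_{i-1}).
\[
  X_i \mid \theta_i, \mathcal{F}_{i-1} \sim F_{\theta_i},
  \qquad
  \theta_i \sim G_i(\,\cdot \mid \mathcal{F}_{i-1}).
\]
% Acceptable reliability is declared once the parameter drops below a level
% \theta^{*} fixed before testing; detection targets the epoch
\[
  \nu = \inf\{\, i \ge 1 : \theta_i < \theta^{*} \,\}.
\]
```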


Quality and Reliability Engineering International | 2007

Joseph M. Juran, a perspective on past contributions and future impact

A. Blanton Godfrey; Ron S. Kenett

This paper combines presentations by the authors in a special session dedicated to the work of Joseph M. Juran at the sixth annual conference of the European Network for Business and Industrial Statistics in Wroclaw, Poland. The paper offers a historical perspective on the contributions of J. M. Juran to management science, emphasizing aspects of cause-and-effect relationships and Integrated Models. Specifically, the paper presents the Juran concepts of Management Breakthrough, the Pareto Principle, the Juran Trilogy® and Six Sigma. The impact of these contributions, put in the historical perspective of key thinkers who investigated cause-and-effect relationships, is then discussed, and their contribution to modern Integrated Models is assessed.


International Conference on Software and Data Technologies | 2013

Managing risk in open source software adoption

Javier Franch Gutiérrez; Angelo Susi; Maria Carmela Annosi; Claudia Patricia Ayala Martínez; Ruediger Glott; Daniel Gross; Ron S. Kenett; Fabio Mancinelli; Pop Ramsany; Cedric Thomas; David Ameller; Stijn Bannier; Nili Bergida; Yehuda Blumenfeld; Olivier Bouzereau; Dolors Costal Costa; Manuel Dominguez; Kirsten Haaland; Lidia López Cuesta; Mirko Mourandini; Alberto Siena

By 2016 an estimated 95% of all commercial software packages will include Open Source Software (OSS). Despite this widespread adoption, failure rates in OSS projects remain as high as 50%. Inadequate risk management has been identified among the top mistakes to avoid when implementing OSS-based solutions. Understanding, managing and mitigating OSS adoption risks is therefore crucial to avoid potentially significant adverse impacts on the business. In this position paper we give a short report of work in progress on risk management in OSS adoption processes. We present a risk-aware technical decision-making management platform integrated in a business-oriented decision-making framework; together they support placing technical OSS adoption decisions into their organizational, business-strategy and broader OSS community context. The platform will be validated against a collection of use cases coming from different types of organizations: big companies, SMEs, public administrations, consolidated OSS communities and emergent small OSS products.

Collaboration


Dive into Ron S. Kenett's collaborations.

Top Co-Authors

Galit Shmueli, National Tsing Hua University

Angelo Susi, Fondazione Bruno Kessler

Xavier Franch, Polytechnic University of Catalonia