Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yun-Chao Bai is active.

Publication


Featured research published by Yun-Chao Bai.


International Conference on Machine Learning and Cybernetics | 2003

The sub-key theorem on credibility measure space

Ming-Hu Ha; Yun-Chao Bai; Wen-Guang Tang

In the 1970s, Vladimir N. Vapnik proposed statistical learning theory, which is regarded as the optimal framework for statistical estimation and predictive learning from small samples. It systematically investigates the conditions under which the empirical risk minimization principle is sound and the relation between the empirical risk and the expected risk for finite samples. The key theorem of learning theory plays a central role in statistical learning theory: its importance lies in paving the way for the subsequent theory and applications. However, some of these results and definitions apply only to a fixed probability measure, which restricts the theorem's range of application. In this paper, we broaden that range by replacing the probability measure space with a credibility measure space, on which we introduce new concepts and prove a new theorem built on the classical theoretical foundation.
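For context, the classical key theorem of learning theory on an ordinary probability space, the result this paper transfers to credibility measure spaces, can be sketched as follows (standard Vapnik formulation; the paper's version replaces the probability measure):

```latex
% Classical key theorem (probability-space version), stated as a sketch.
\[
  R(\alpha) = \int Q(z,\alpha)\, dP(z), \qquad
  R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell}\sum_{i=1}^{\ell} Q(z_i,\alpha).
\]
ERM is consistent for the class $\{Q(z,\alpha) : \alpha \in \Lambda\}$ iff the
one-sided uniform convergence
\[
  \lim_{\ell \to \infty}
  P\Bigl\{ \sup_{\alpha \in \Lambda}
  \bigl( R(\alpha) - R_{\mathrm{emp}}(\alpha) \bigr) > \varepsilon \Bigr\} = 0
  \qquad \text{for all } \varepsilon > 0
\]
holds.
```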


International Conference on Machine Learning and Cybernetics | 2003

The key theorem of learning theory about fuzzy examples

Ming-Hu Ha; Wen-Guang Tang; Yun-Chao Bai

With the establishment of the key theorem of learning theory, statistical learning theory took shape. However, the examples it studies are all random vectors; in theory there is no treatment of fuzzy random vectors. In other words, when the features are very difficult to extract precisely (that is, the features are fuzzy), we have to take fuzzy random vectors as the examples. In this paper, we give a method to deal with this kind of problem and demonstrate its feasibility.


International Conference on Machine Learning and Cybernetics | 2006

Further Discussion on Quasi-Probability

Ming-Hu Ha; Zhi-Fang Feng; Er-Ling Du; Yun-Chao Bai

Quasi-probability is a typical non-additive measure: it has a structure similar to that of probability and is widely used in practical applications. In this paper, some new properties of quasi-probability are presented. The concepts of distribution function, expected value and variance on quasi-probability space are put forward, and some related properties are given and proven.
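A hedged sketch of one common formulation of quasi-probability in this line of work, via a so-called T-function; the paper's precise definition may differ in detail:

```latex
% Assumption: the T-function construction is used.
% A T-function theta is continuous and strictly increasing on [0,1],
% with theta(0) = 0 and theta(1) = 1.
A set function $\mu$ on a measurable space $(\Omega,\mathcal{F})$ is a
\emph{quasi-probability} if there exists a T-function $\theta$ such that
$\theta \circ \mu$ is a ($\sigma$-additive) probability measure $P$, i.e.
\[
  \mu(A) = \theta^{-1}\bigl(P(A)\bigr), \qquad
  \mu(\Omega) = \theta^{-1}(1) = 1 .
\]
```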


International Conference on Machine Learning and Cybernetics | 2009

On the application of rough sets to data mining in economic practice

Qun-Feng Zhang; Suyun Zhao; Yun-Chao Bai

Mathematical models play an important role in the study of modern economics, but in many fields of economics it is difficult to build mathematical models for complex phenomena. Data mining is therefore increasingly popular for discovering potential patterns of economic knowledge in databases, and rough set theory has been widely used as a powerful tool for it. In this research, we draw guidelines from several cases of rough set application in economic practice. Furthermore, to avoid the drawbacks of existing methods, we develop a methodology for rough analysis in the economic sector by combining the advantages of the fuzzy variable precision rough set model.
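The core rough-set machinery the paper builds on can be sketched in a few lines. This is the minimal classical (crisp) approximation, not the fuzzy variable precision model the authors combine; the objects and attribute function below are invented purely for illustration:

```python
from collections import defaultdict

def approximations(universe, attr, target):
    """Classical rough-set lower/upper approximations of `target`.

    `attr` maps each object to a hashable attribute value; objects with
    the same value are indiscernible and form one equivalence class.
    """
    classes = defaultdict(set)
    for obj in universe:
        classes[attr(obj)].add(obj)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:        # class certainly inside the target
            lower |= cls
        if cls & target:         # class possibly inside the target
            upper |= cls
    return lower, upper

# Hypothetical example: six firms, indiscernible in pairs under a coarse
# rating attribute; the target set is the firms that defaulted.
firms = {1, 2, 3, 4, 5, 6}
lower, upper = approximations(firms, lambda x: (x - 1) // 2, {1, 2, 3})
# lower == {1, 2}; upper == {1, 2, 3, 4}; {3, 4} is the boundary region
```

The boundary region (upper minus lower) is exactly where rough analysis flags objects that the available attributes cannot classify with certainty.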


International Conference on Machine Learning and Cybernetics | 2008

Structural risk minimization principle on Sugeno space

Yun-Chao Bai; Qun-Feng Zhang

In this paper, the idea of structural risk minimization (SRM) on Sugeno measure space is presented; the Borel-Cantelli lemma is proven on Sugeno measure space; and a theorem is proven to answer the following question: is the structural risk minimization principle consistent on Sugeno measure space?
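The Sugeno measure underlying the paper is the λ-fuzzy measure: it is determined by densities on singletons together with the rule g(A ∪ B) = g(A) + g(B) + λ·g(A)·g(B) for disjoint A, B. A small numeric sketch (the densities are invented; the code assumes their sum is below 1, which forces λ > 0):

```python
import math

def solve_lambda(densities):
    """Find lambda > 0 with prod(1 + lam*g_i) = 1 + lam, i.e. the
    normalization g(universe) = 1. Assumes sum(densities) < 1."""
    def f(lam):
        return math.prod(1 + lam * g for g in densities) - (1 + lam)
    lo, hi = 1e-9, 1.0
    while f(hi) < 0:             # expand until the root is bracketed
        hi *= 2
    for _ in range(200):         # plain bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def sugeno(subset, densities, lam):
    """g_lambda of a set of indices, via the closed product formula."""
    prod = math.prod(1 + lam * densities[i] for i in subset)
    return (prod - 1) / lam

dens = [0.2, 0.3, 0.1]           # invented densities, sum < 1
lam = solve_lambda(dens)
# sugeno({0, 1, 2}, dens, lam) is 1.0 (the whole space), and
# sugeno({0, 1}, dens, lam) exceeds dens[0] + dens[1] since lam > 0
```

The strict super-additivity when λ > 0 (and sub-additivity when λ < 0) is exactly what distinguishes Sugeno measures from probability and motivates re-proving results such as the Borel-Cantelli lemma on this space.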


International Conference on Machine Learning and Cybernetics | 2006

The Key Theorem of Learning Theory with Samples Corrupted by Equality-Expect Noise on Quasi-Probability Space

Ming-Hu Ha; Er-Ling Du; Zhi-Fang Feng; Yun-Chao Bai

Based on statistical learning theory on probability space and the good properties of quasi-probability, some important inequalities are proven on quasi-probability space in this paper. Furthermore, some new concepts of learning theory are introduced, and the key theorem of statistical learning theory is given and proven for samples corrupted by equality-expect noise on quasi-probability space.


International Conference on Machine Learning and Cybernetics | 2005

The key theorem of statistical learning theory on possibility spaces

Yun-Chao Bai; Ming-Hu Ha

In this paper, we further discuss the properties of the credibility measure and give Tchebycheff's inequality and a law of large numbers. On possibility measure spaces, we give new concepts of the empirical risk functional, the expected risk functional and the empirical risk minimization (ERM) inductive principle, following classical statistical learning theory. Finally, we state and prove the key theorem of statistical learning on possibility measure spaces.
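The credibility measure used in this setting is the average of possibility and necessity, Cr{A} = ½(Pos{A} + Nec{A}); its self-duality is what makes Chebyshev-type inequalities and laws of large numbers workable. A sketch on a finite possibility distribution (the distribution values below are invented):

```python
def credibility(mu, event):
    """Cr{event} = (Pos + Nec) / 2 for a finite possibility distribution.

    `mu` maps each point to its possibility grade (the max should be 1.0).
    Pos{A} = sup_{x in A} mu(x);  Nec{A} = 1 - sup_{x not in A} mu(x).
    """
    pos = max((g for x, g in mu.items() if x in event), default=0.0)
    nec = 1.0 - max((g for x, g in mu.items() if x not in event), default=0.0)
    return 0.5 * (pos + nec)

mu = {"a": 0.5, "b": 1.0, "c": 0.7}   # invented possibility distribution
# Self-duality: credibility(mu, {"b"}) + credibility(mu, {"a", "c"}) sums to 1
```

Unlike pure possibility (which is only sub-additive), credibility of an event and of its complement always sum to one, mirroring the role probability plays in the classical theory.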


International Conference on Machine Learning and Cybernetics | 2010

The bounds on the risk for real-valued loss functions on possibility space

Peng Wang; Yun-Chao Bai; Chun-Qin Zhang; Cai-Li Zhou

Statistical learning theory on probability space is an important part of machine learning. Building on the key theorem, the bounds of uniform convergence carry significant meaning: they determine the generalization ability of learning machines that use the empirical risk minimization induction principle. In this paper, the bounds on the risk for real-valued loss functions of learning processes on possibility space are discussed, and the rate of uniform convergence is estimated.
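For orientation, the classical probability-space risk bound (indicator loss, VC dimension h, sample size ℓ, confidence 1 − η) has the following shape; the possibility-space bounds of this paper parallel this template with the measure replaced:

```latex
% Classical Vapnik bound (probability space, indicator loss), shown only
% as the template that possibility-space analogues follow.
With probability at least $1-\eta$, simultaneously for all $\alpha$,
\[
  R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\left(\ln\frac{2\ell}{h}+1\right)-\ln\frac{\eta}{4}}{\ell}} .
\]
```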


International Conference on Machine Learning and Cybernetics | 2010

Convergence rate of structural risk minimization principle on quasi-probability space

Yun-Chao Bai; Peng Wang

In this paper, the concepts of annealed entropy, growth function and VC dimension are proposed on quasi-probability space, and the rate of uniform convergence of the structural risk minimization principle based on VC dimension on quasi-probability space is given and proven. These results lay the theoretical basis for systematically establishing statistical learning theory and constructing support vector machines on quasi-probability space.
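The classical probability-space versions of these capacity concepts, which the paper transplants to quasi-probability space, are as follows. Here N^Λ(z₁,…,z_ℓ) counts the distinct dichotomies the function class induces on a sample:

```latex
% Classical definitions (probability-space versions, shown for reference).
\[
  H_{\mathrm{ann}}^{\Lambda}(\ell)
    = \ln \mathbb{E}\, N^{\Lambda}(z_1,\dots,z_\ell),
  \qquad
  G^{\Lambda}(\ell)
    = \ln \max_{z_1,\dots,z_\ell} N^{\Lambda}(z_1,\dots,z_\ell),
\]
and the VC dimension $h$ is the largest $\ell$ with
$G^{\Lambda}(\ell) = \ell \ln 2$ (infinite if no such maximum exists).
```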


International Conference on Machine Learning and Cybernetics | 2006

Structural Risk Minimization Principle on Credibility Space

Yun-Chao Bai; Ming-Hu Ha; Jun-Hua Li

In this paper, the idea of structural risk minimization (SRM) on credibility space is presented, and two theorems are proven to answer two questions. Is the structural risk minimization principle consistent on credibility space, i.e., does the risk for the functions chosen according to this principle converge to the smallest possible risk for the set S as the number of observations increases? And what is the bound on the (asymptotic) rate of convergence?
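The SRM principle referred to here operates over a nested structure of function classes; a sketch of the classical scheme:

```latex
% Classical SRM scheme: a nested structure with non-decreasing capacity.
\[
  S_1 \subset S_2 \subset \cdots \subset S_n \subset \cdots, \qquad
  S_k = \{\, Q(z,\alpha) : \alpha \in \Lambda_k \,\}, \qquad
  h_1 \le h_2 \le \cdots
\]
For each $k$, minimize the empirical risk within $S_k$; then select the
element whose guaranteed risk (empirical risk plus the capacity-dependent
confidence term) is smallest.
```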

Collaboration


Dive into Yun-Chao Bai's collaborations.

Top Co-Authors
