Publication


Featured research published by Edward W. Wild.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Multisurface proximal support vector machine classification via generalized eigenvalues

Olvi L. Mangasarian; Edward W. Wild

A new approach to support vector machine (SVM) classification is proposed wherein each of two data sets is proximal to one of two distinct planes that are not parallel to each other. Each plane is generated so that it is closest to one of the two data sets and as far as possible from the other. Each of the two nonparallel proximal planes is obtained by a single MATLAB command as the eigenvector corresponding to the smallest eigenvalue of a generalized eigenvalue problem. Classification by proximity to two distinct nonlinear surfaces generated by a nonlinear kernel also leads to two simple generalized eigenvalue problems. The effectiveness of the proposed method is demonstrated on simple examples as well as on a number of public data sets. These examples show the advantages of the proposed approach in both computation time and test set correctness.
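The eigenvalue computation described above can be sketched in a few lines. The following is an illustrative sketch, not the authors' code: the toy 2-D clusters, the `proximal_plane` helper, and the Tikhonov regularization constant `delta` are all assumptions; `scipy.linalg.eigh(G, H)` solves the generalized eigenvalue problem G z = λ H z.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# Toy 2-D clusters, one per class (illustrative data, not from the paper)
A = rng.normal([0.0, 0.0], 0.3, size=(40, 2))   # class +1
B = rng.normal([2.0, 2.0], 0.3, size=(40, 2))   # class -1

def proximal_plane(A, B, delta=1e-3):
    """Plane w'x = gamma closest to the rows of A and far from the rows of B."""
    E = np.hstack([A, -np.ones((A.shape[0], 1))])   # [A  -e]
    F = np.hstack([B, -np.ones((B.shape[0], 1))])   # [B  -e]
    G = E.T @ E + delta * np.eye(E.shape[1])        # Tikhonov regularization
    H = F.T @ F + delta * np.eye(F.shape[1])
    vals, vecs = eigh(G, H)            # generalized eigenproblem G z = lambda H z
    z = vecs[:, 0]                     # eigenvector of the smallest eigenvalue
    return z[:-1], z[-1]               # (w, gamma)

w1, g1 = proximal_plane(A, B)          # plane proximal to class +1
w2, g2 = proximal_plane(B, A)          # plane proximal to class -1

def predict(x):
    """Classify a point by its distance to the two nonparallel planes."""
    d1 = abs(x @ w1 - g1) / np.linalg.norm(w1)
    d2 = abs(x @ w2 - g2) / np.linalg.norm(w2)
    return 1 if d1 < d2 else -1
```

Each call to `proximal_plane` minimizes the Rayleigh quotient of the two regularized scatter matrices, which is exactly the single-eigenproblem step the abstract refers to.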


IEEE Transactions on Neural Networks | 2007

Nonlinear Knowledge in Kernel Approximation

Olvi L. Mangasarian; Edward W. Wild

Prior knowledge over arbitrary general sets is incorporated into nonlinear kernel approximation problems in the form of linear constraints in a linear program. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on two synthetic examples and an important lymph node metastasis prediction problem. On all these problems, introducing prior knowledge yields marked improvements over nonlinear kernel approximation approaches that do not utilize such knowledge.
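The core mechanism, once the implications have been reduced to linear inequalities, is an ordinary linear program. The sketch below is a simplified illustration of that idea, not the paper's exact formulation: a 1-norm kernel fit posed as an LP, with the prior knowledge "f(x) ≥ 1 on a region" imposed as linear constraints at sampled points. The data, Gaussian kernel width `g`, and knowledge region are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
# 1-D toy data: noisy samples of sin(x) (illustrative, not the paper's data)
X = np.linspace(0.0, 4.0, 15)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)

def kernel(U, V, g=2.0):
    """Gaussian kernel matrix between two 1-D point sets."""
    return np.exp(-g * (U[:, None] - V[None, :]) ** 2)

K = kernel(X, X)
Xs = np.linspace(2.0, 3.0, 5)      # sampled points of the knowledge region
Ks = kernel(Xs, X)

m = len(X)
# variables: v (m), bias (1), e (m); minimize the 1-norm fit error sum(e)
cobj = np.concatenate([np.zeros(m + 1), np.ones(m)])
I, ones = np.eye(m), np.ones((m, 1))
A_ub = np.vstack([
    np.hstack([ K,  ones, -I]),                                        #  Kv + b - y <= e
    np.hstack([-K, -ones, -I]),                                        # -(Kv + b - y) <= e
    np.hstack([-Ks, -np.ones((len(Xs), 1)), np.zeros((len(Xs), m))]),  # Ks v + b >= 1
])
b_ub = np.concatenate([y, -y, -np.ones(len(Xs))])
bounds = [(None, None)] * (m + 1) + [(0, None)] * m
res = linprog(cobj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
v, bias = res.x[:m], res.x[m]
f_knowledge = Ks @ v + bias        # fitted values on the knowledge region
```

The knowledge constraints are just extra rows of `A_ub`, which is why arbitrary (even conflicting) prior knowledge folds into the same LP that does the fitting.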


ACM Transactions on Knowledge Discovery From Data | 2008

Privacy-preserving classification of vertically partitioned data via random kernels

Olvi L. Mangasarian; Edward W. Wild; Glenn Fung

We propose a novel privacy-preserving support vector machine (SVM) classifier for a data matrix A whose input feature columns are divided into groups belonging to different entities. Each entity is unwilling to share its group of columns or make it public. Our classifier is based on the concept of a reduced kernel K(A, B′), where B′ is the transpose of a random matrix B. The column blocks of B corresponding to the different entities are privately generated by each entity and never made public. The proposed linear or nonlinear SVM classifier, which is public but does not reveal any of the privately held data, has accuracy comparable to that of an ordinary SVM classifier that uses the entire set of input features directly.
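For a linear kernel, the reduced kernel is simply K(A, B′) = AB′, which decomposes over column blocks, so each entity can publish only its partial product A_jB_j′. A minimal numeric sketch under that assumption (the block sizes and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
m, col_counts = 6, (3, 2, 4)       # rows; columns held by three entities (hypothetical)
A_blocks = [rng.normal(size=(m, n)) for n in col_counts]   # private column groups

k = 5                              # rows of the random matrix B
# Each entity privately draws its column block of B and publishes only A_j @ B_j'
B_blocks = [rng.normal(size=(k, n)) for n in col_counts]
K_public = sum(Aj @ Bj.T for Aj, Bj in zip(A_blocks, B_blocks))  # K(A, B') = A B'
```

Summing the partial products reproduces the full-data kernel AB′ without any entity revealing its A_j or B_j; `K_public` can then be fed to an ordinary linear SVM.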


IEEE Transactions on Neural Networks | 2008

Nonlinear Knowledge-Based Classification

Olvi L. Mangasarian; Edward W. Wild

In this brief, prior knowledge over general nonlinear sets is incorporated into nonlinear kernel classification problems as linear constraints in a linear program. These linear constraints are imposed at arbitrary points, not necessarily where the prior knowledge is given. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on publicly available classification data sets, including a cancer prognosis data set. Nonlinear kernel classifiers for these data sets exhibit marked improvements upon the introduction of nonlinear prior knowledge compared to nonlinear kernel classifiers that do not utilize such knowledge.


Data Mining | 2010

Privacy-Preserving Random Kernel Classification of Checkerboard Partitioned Data

Olvi L. Mangasarian; Edward W. Wild

We propose a privacy-preserving support vector machine (SVM) classifier for a data matrix A whose input feature columns as well as individual data point rows are divided into groups belonging to different entities. Each entity is unwilling to make public its group of columns and rows. Our classifier utilizes the entire data matrix A while maintaining the privacy of each block. This classifier is based on the concept of a random kernel K(A, B′), where B′ is the transpose of a random matrix B, as well as the reduction of a possibly complex pattern of data held by each entity into a checkerboard pattern. The proposed nonlinear SVM classifier, which is public but does not reveal any of the privately held data, has accuracy comparable to that of an ordinary SVM classifier based on the entire set of input features and data points all made public.
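In the checkerboard setting the blocking runs over both rows and columns. For a linear kernel, the row block i of K(A, B′) = AB′ is the sum over column entities of A_ij B_j′, so each cell publishes only its partial product. A small sketch under that simplification (grid sizes and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
row_counts, col_counts = (4, 3), (2, 3)   # 2x2 checkerboard of private blocks (hypothetical)
A_cells = [[rng.normal(size=(r, c)) for c in col_counts] for r in row_counts]

k = 4                                     # rows of the random matrix B
B_blocks = [rng.normal(size=(k, c)) for c in col_counts]  # private per column entity

# Each cell (i, j) publishes only its partial product A_ij @ B_j'
K_public = np.vstack([
    sum(A_cells[i][j] @ B_blocks[j].T for j in range(len(col_counts)))
    for i in range(len(row_counts))
])
```

Stacking the row sums reassembles the full random kernel AB′ even though no entity ever reveals its cell of A or its block of B.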


Optimization | 2011

Exactness conditions for a convex differentiable exterior penalty for linear programming

Olvi L. Mangasarian; Edward W. Wild

Sufficient conditions are given for a classical dual exterior penalty function of a linear program (LP) to be independent of its penalty parameter. This ensures that an exact solution to the primal LP can be obtained by minimizing the dual exterior penalty function. The sufficient conditions give a precise value to such a penalty parameter introduced in [O.L. Mangasarian, Exact 1-norm support vector machines via unconstrained convex differentiable minimization, J. Machine Learn. Res. 7, 2006, pp. 1517–1530], where no quantification of the parameter was given. Computational results on LPs with up to one million variables or constraints compare favourably to ILOG CPLEX 9.0 [ILOG CPLEX 9.0 User's Manual, ILOG, Incline Village, Nevada, 2003] and validate the proposed approach.
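A tiny numeric illustration of the idea, assuming the primal form min c′x s.t. Ax ≥ b, x ≥ 0 and the corresponding classical dual exterior penalty; the LP instance, the fixed ε, and the variable names below are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny LP (hypothetical): min c'x  s.t.  Ax >= b, x >= 0; solution x* = (0, 0.5)
c = np.array([1.0, 1.0])
A = np.array([[1.0, 2.0]])
b = np.array([1.0])

eps = 1e-2   # penalty parameter; the paper's conditions make the result eps-independent

def penalty(u):
    """Exterior penalty for the dual LP: max b'u s.t. A'u <= c, u >= 0."""
    r = np.maximum(A.T @ u - c, 0.0)   # violation of A'u <= c
    s = np.maximum(-u, 0.0)            # violation of u >= 0
    return -eps * (b @ u) + 0.5 * (r @ r) + 0.5 * (s @ s)

u = minimize(penalty, np.zeros(len(b)), method="BFGS").x
x = np.maximum(A.T @ u - c, 0.0) / eps   # primal solution recovered from u
```

For this instance the recovered x is (0, 0.5) up to solver tolerance, and it stays there as ε shrinks further, which is the ε-independence the abstract quantifies.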


National Conference on Artificial Intelligence | 2005

Giving advice about preferred actions to reinforcement learners via knowledge-based kernel regression

Richard Maclin; Jude W. Shavlik; Lisa Torrey; Trevor Walker; Edward W. Wild


Journal of Machine Learning Research | 2004

Knowledge-Based Kernel Approximation

Olvi L. Mangasarian; Jude W. Shavlik; Edward W. Wild


Journal of Optimization Theory and Applications | 2008

Multiple Instance Classification via Successive Linear Programming

Olvi L. Mangasarian; Edward W. Wild


DMIN | 2008

Privacy-Preserving Classification of Horizontally Partitioned Data via Random Kernels

Olvi L. Mangasarian; Edward W. Wild

Collaboration


Dive into Edward W. Wild's collaborations.

Top Co-Authors

Jude W. Shavlik
University of Wisconsin-Madison

Lisa Torrey
University of Wisconsin-Madison

Trevor Walker
University of Wisconsin-Madison