
Publication


Featured research published by Yehuda Vardi.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Mixed group ranks: preference and confidence in classifier combination

Ofer Melnik; Yehuda Vardi; Cun-Hui Zhang

Classifier combination holds the potential of improving performance by combining the results of multiple classifiers. For domains with very large numbers of classes, such as biometrics, we present an axiomatic framework of desirable mathematical properties for combination functions of rank-based classifiers. This framework represents a continuum of combination rules, including the Borda Count, Logistic Regression, and Highest Rank combination methods as extreme cases. Intuitively, the framework captures how the two complementary concepts of general preference for specific classifiers and the confidence a classifier has in any specific result (as indicated by ranks) can be balanced while maintaining a consistent rank interpretation. Mixed Group Ranks (MGR) is a new combination function that balances preference and confidence by generalizing these other functions. We demonstrate that MGR is an effective combination approach through multiple experiments on data sets with large numbers of classes and classifiers from the FERET face recognition study.
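Of the extreme cases named above, the Borda Count and Highest Rank rules are easy to sketch. The snippet below is an illustrative implementation of those two baselines only (not of MGR itself); each classifier's output is represented as a hypothetical class-to-rank dictionary, with rank 1 being best.

```python
def borda_count(rankings):
    """Borda Count combination: sum each class's ranks across all
    classifiers (rank 1 = best) and pick the class with the lowest
    total."""
    totals = {}
    for ranking in rankings:          # one rank dictionary per classifier
        for cls, rank in ranking.items():
            totals[cls] = totals.get(cls, 0) + rank
    return min(totals, key=totals.get)

def highest_rank(rankings):
    """Highest Rank combination: give each class its best (lowest)
    rank from any classifier, then pick the class with the lowest
    such rank.  Ties are broken by dictionary insertion order in
    this simple sketch."""
    best = {}
    for ranking in rankings:
        for cls, rank in ranking.items():
            best[cls] = min(best.get(cls, rank), rank)
    return min(best, key=best.get)
```

For example, with three classifiers each ranking classes `a`, `b`, and `c`, both rules reduce to a single dictionary pass over the rank lists.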


Mathematical Programming | 2001

A modified Weiszfeld algorithm for the Fermat-Weber location problem

Yehuda Vardi; Cun-Hui Zhang

This paper gives a new, simple, monotonically convergent algorithm for the Fermat-Weber location problem, with extensions covering more general cost functions.
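For context, the classical Weiszfeld iteration that the paper modifies can be sketched as follows. This is the textbook version, with a crude guard for the case where an iterate lands on a data point (the situation the paper's modification handles properly); function and variable names are illustrative.

```python
import numpy as np

def weiszfeld(points, tol=1e-8, max_iter=1000):
    """Classical Weiszfeld iteration for the Fermat-Weber point,
    i.e. the point minimizing the sum of Euclidean distances to
    the rows of `points`.  Not the paper's modified algorithm."""
    y = points.mean(axis=0)                      # start at the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)   # distances to data
        if np.any(d < 1e-12):                    # iterate hit a data point
            return y                             # crude stop (see paper)
        w = 1.0 / d                              # inverse-distance weights
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

Each step is a weighted average of the data points, so the objective decreases monotonically as long as no iterate coincides with a data point.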


IEEE Signal Processing Letters | 2004

Metrics useful in network tomography studies

Yehuda Vardi

To facilitate the development of statistical methods for analyzing network data, it is important to have metrics that define and measure distances between a network's links, its paths, and also between different networks. This is particularly important in the rapidly growing area of network tomography, which plays a central role in studies of data, communication, and Internet traffic. We propose such metrics, outline some of their properties, and motivate them with two very recent applications. The proposed metrics are simple yet have appealing properties.


Pattern Recognition Letters | 1996

Efficient illumination normalization of facial images

P. Jonathon Phillips; Yehuda Vardi

This paper presents a simple method for normalizing non-linear illumination variations in facial images. Facial pixels are modelled by an empirical probability distribution, which is estimated by kernel density estimation. The illumination is normalized by transforming the empirical distribution function.
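The distribution-function transform described above is, in its simplest form, a map of each pixel through the empirical CDF of the image's intensities. A minimal sketch, using a plain ECDF in place of the paper's kernel-smoothed estimate (the function name is illustrative):

```python
import numpy as np

def normalize_illumination(image):
    """Map pixel intensities through their empirical distribution
    function, producing values in (0, 1] that are approximately
    uniform.  A plain ECDF stands in here for the paper's kernel
    density estimate; tied pixel values receive arbitrary distinct
    ranks in this simple sketch."""
    flat = image.ravel()
    ranks = flat.argsort().argsort()   # rank of each pixel, 0..n-1
    ecdf = (ranks + 1) / flat.size     # empirical CDF value per pixel
    return ecdf.reshape(image.shape)
```

Because the output depends only on the ordering of intensities, any monotone illumination change (for example, a gamma shift) leaves the normalized image unchanged.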


International Journal of Cancer | 1997

Radioimmunotherapy of experimental pancreatic cancer with 131I‐labeled monoclonal antibody PAM4

David V. Gold; Thomas M. Cardillo; Yehuda Vardi; Rosalynn Blumenthal

We examined the therapeutic efficacy of 131I‐labeled murine monoclonal antibody (MAb) PAM4 against human pancreatic cancers carried as xenografts in athymic nude mice. Animals bearing the CaPanI tumor (0.2 cm3) were either untreated or were given 131I‐labeled nonspecific Ag8 antibody. By week 7, mean tumor size had grown 16.5 ± 8.4‐fold and 4.2 ± 2.5‐fold for the untreated and 131I‐Ag8‐treated animals, respectively. In contrast, animals administered 131I‐PAM4 exhibited marked regression of tumors to an average of 15% of initial tumor volume. Since most pancreatic cancer patients present with large tumor burdens, the limitation of 131I‐PAM4 treatment with respect to initial tumor size was investigated in animals bearing tumors of approximately 0.5 cm3, 1.0 cm3 and 2.0 cm3. Significant extension of survival time (>3‐fold increase) was noted for both the 0.5 cm3 and 1.0 cm3 131I‐PAM4‐treated groups, compared to their respective untreated controls. Even in the group bearing large 2.0‐cm3 tumors, survival was increased 2‐fold over the control group. To further improve anti‐tumor effects in large tumors, 2 injections of 131I‐PAM4 were administered at a 4‐week interval to animals bearing tumors of approximately 1.0 cm3. Significantly extended survival was noted for the group receiving 2 doses when compared to the group receiving only 1 dose. Int. J. Cancer 71:660‐667, 1997.


Archive | 2002

A Robust Clustering Method and Visualization Tool Based on Data Depth

Rebecka Jörnsten; Yehuda Vardi; Cun-Hui Zhang

We present a robust clustering method based on a modified Weiszfeld algorithm for the multivariate median, and associated data depth. The multivariate medians are used to represent the clusters, while the induced relative L1-depths are used to identify outliers and to select the number of clusters. We develop a cluster validation and visualization tool based on the within-cluster data depths, and the cluster data depths with respect to competing clusters. We apply our method to high-dimensional gene expression data and several simulated data sets. Our method successfully identifies the number of clusters in noisy data sets and generates accurate cluster assignments.


Information Retrieval | 2006

Extreme value theory applied to document retrieval from large collections

David Madigan; Yehuda Vardi; Ishay Weissman

We consider text retrieval applications that assign query-specific relevance scores to documents drawn from particular collections. Such applications represent a primary focus of the annual Text Retrieval Conference (TREC), where the participants compare the empirical performance of different approaches. P(K), the proportion of the top K documents that are relevant, is a popular measure of retrieval effectiveness. Participants in the TREC Very Large Corpus track have observed that when the target is a random sample from a collection, P(K) is substantially smaller than when the target is the entire collection. Hawking and Robertson (2003) confirmed this finding in a number of experimental settings. Hawking et al. (1999) posed as an open research question the cause of this phenomenon and proposed five possible explanatory hypotheses. In this paper, we present a mathematical analysis that sheds some light on these hypotheses and complements the experimental work of Hawking and Robertson (2003). We also introduce C(L), contamination at L, the number of irrelevant documents amongst the top L relevant documents, and describe its properties. Our analysis shows that while P(K) typically will increase with collection size, the phenomenon is not universal. That is, the asymptotic behavior of P(K) and C(L) depends on the score distributions and relative proportions of relevant and irrelevant documents in the collection.
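The two measures discussed, P(K) and C(L), are straightforward to compute from a ranked list of relevance judgments. The sketch below assumes a simple representation of the ranked results as a list of 1/0 flags (relevant/irrelevant, in rank order); function names are illustrative.

```python
def precision_at_k(relevance, k):
    """P(K): the proportion of the top-K retrieved documents that
    are relevant.  `relevance` is a list of 1/0 flags in rank order."""
    return sum(relevance[:k]) / k

def contamination_at_l(relevance, l):
    """C(L): the number of irrelevant documents ranked above the
    L-th relevant document."""
    seen_relevant = 0
    irrelevant = 0
    for flag in relevance:
        if flag:
            seen_relevant += 1
            if seen_relevant == l:
                return irrelevant
        else:
            irrelevant += 1
    return irrelevant  # fewer than l relevant documents in the list
```

For a ranked list `[1, 0, 1, 0, 0, 1]`, P(3) counts two relevant documents among the top three, while C(2) counts the single irrelevant document ranked above the second relevant one.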


Archive | 2002

On the Bitplane Compression of Microarray Images

Rebecka Jörnsten; Yehuda Vardi; Cun-Hui Zhang

Microarray image technology is a new and powerful tool for studying the expression of thousands of genes simultaneously. Methods for image processing and statistical analysis are still under development, and results on microarray data from different sources are therefore rarely comparable. The urgent need for data formats and standards is recognized by researchers in the field. To facilitate the development of such standards, methods for efficient data sharing and transmission are necessary; that is, compression. Microarray images come in pairs: two high-precision 16-bits-per-pixel intensity scans (“red” and “green”). The genetic information is extracted from the two scans via segmentation, background correction, and normalization of red-to-green image intensities. We present a compression scheme for microarray images that is based on an extension of the JPEG2000 lossless standard, used in conjunction with a robust L1 vector quantizer. The L1 vector quantizer is trained on microarray image data from a replicate experiment; thus, the image pairs are encoded jointly. This ensures that the genetic information extraction is only marginally affected by the compression at compression ratios of 8:1.


International Journal of Imaging Systems and Technology | 1998

Discrete Radon transform and its approximate inversion via the EM algorithm

Yehuda Vardi; D. Lee

The problem of reconstructing a binary function f, defined on a finite subset of a lattice ℤ, from an arbitrary collection of its partial sums is considered. The approach we present is based on (a) relaxing the binary constraints f(z) = 0 or 1 to interval constraints 0 ≤ f(z) ≤ 1, z ∈ ℤ, and (b) applying a minimum-distance method (using the Kullback-Leibler information divergence as our distance function) to find an f, say f̂, for which the distance between the observed and the theoretical partial sums is as small as possible. (Turning this f̂ into a binary function can be done as a separate postprocessing step: for instance, through thresholding, or through some additional Bayesian modeling.) To derive this minimum-distance solution, we develop a new EM algorithm. This algorithm differs from the often-studied EM/maximum-likelihood algorithm in emission tomography and other positively constrained linear inverse problems because of the additional upper-bound constraint (f ≤ 1) on the signal f. Properties of the algorithm, as well as similarities with and differences from some other methods, such as the linear-programming approach or the algebraic reconstruction technique, are discussed. The methodology is demonstrated on three recently studied phantoms, and the simulation results are very promising, suggesting that the method could also work well under field conditions that may include a small or moderate level of measurement noise in the observed partial sums. The methodology has important applications in high-resolution electron microscopy for reconstructing the atomic structure of crystals from their projections.
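For reference, the often-studied emission-tomography EM/maximum-likelihood update that the abstract contrasts with can be sketched as a single multiplicative step. This is the standard update for the positively constrained (f ≥ 0) problem only, not the paper's modified algorithm with the extra f ≤ 1 bound; names and the small-denominator guard are illustrative.

```python
import numpy as np

def ml_em_step(f, A, g, eps=1e-12):
    """One iteration of the classical ML-EM update for the Poisson
    linear inverse problem g ≈ A f with f ≥ 0: a multiplicative
    correction by the back-projected ratio of observed to predicted
    sums.  The paper's algorithm adds an upper-bound constraint."""
    Af = A @ f                                   # predicted partial sums
    ratio = g / np.maximum(Af, eps)              # observed / predicted
    norm = np.maximum(A.T @ np.ones_like(g), eps)
    return f * (A.T @ ratio) / norm              # multiplicative update
```

With A equal to the identity the update recovers the data exactly in one step, which illustrates why the iteration preserves nonnegativity but, by itself, enforces no upper bound on f.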


Annals of the Institute of Statistical Mathematics | 2000

Positron Emission Tomography and Random Coefficients Regression

Andrey Feuerverger; Yehuda Vardi

We further explore the relation between random coefficients regression (RCR) and computerized tomography. Recently, Beran et al. (1996, Ann. Statist., 24, 2569–2592) explored this connection to derive an estimation method for the non-parametric RCR problem which is closely related to image reconstruction methods in X-ray computerized tomography. In this paper we emphasize the close connection of the RCR problem with positron emission tomography (PET). Specifically, we show that the RCR problem can be viewed as an idealized (continuous) version of a PET experiment, by demonstrating that the nonparametric likelihood of the RCR problem is equivalent to that of a specific PET experiment. Consequently, methods independently developed for either of the two problems can be adapted from one problem to the other. To demonstrate the close relation between the two problems we use the estimation method of Beran, Feuerverger and Hall for image reconstruction in PET.

Collaboration

Top co-authors of Yehuda Vardi:

Ishay Weissman (Technion – Israel Institute of Technology)
Rebecka Jörnsten (Chalmers University of Technology)
David V. Gold (University of Rochester Medical Center)
Hani Doss (Florida State University)
J. Qin (Memorial Sloan Kettering Cancer Center)
Jacqueline Spiegel-Cohen (Icahn School of Medicine at Mount Sinai)