Publication


Featured research published by Byron P. Roe.


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2005

Boosted decision trees as an alternative to artificial neural networks for particle identification

Byron P. Roe; Hai Jun Yang; J. Zhu; Y. Liu; I. Stancu; Gordon McGregor

The efficacy of particle identification is compared using artificial neural networks and boosted decision trees. The comparison is performed in the context of MiniBooNE, an experiment at Fermilab searching for neutrino oscillations. Based on studies of Monte Carlo samples of simulated data, particle identification with boosting algorithms has better performance than that with artificial neural networks for the MiniBooNE experiment. Although the tests in this paper were for one experiment, it is expected that boosting algorithms will find wide application in physics.


Physical Review Letters | 2007

A search for electron neutrino appearance at the Δm² ~ 1 eV² scale

A. A. Aguilar-Arevalo, A. O. Bazarko, S. J. Brice, B. C. Brown, L. Bugel, J. Cao, L. Coney, J. M. Conrad, D. C. Cox, A. Curioni, Z. Djurcic, D. A. Finley, B. T. Fleming, R. Ford, F. G. Garcia, G. T. Garvey, C. Green, J. A. Green, T. L. Hart, E. Hawker, R. Imlay, R. A. Johnson, P. Kasper, T. Katori, T. Kobilarcik, I. Kourbanis, S. Koutsoliotas, E. M. Laird, J. M. Link, Y. Liu, Y. Liu, W. C. Louis, K. B. M. Mahn, W. Marsh, P. S. Martin, G. McGregor, W. Metcalf, P. D. Meyers, F. Mills, G. B. Mills, J. Monroe, C. D. Moore, R. H. Nelson, P. Nienaber, S. Ouedraogo, R. B. Patterson, D. Perevalov, C. C. Polly, E. Prebys, J. L. Raaf, H. Ray, B. P. Roe, A. D. Russell, V. Sandberg, R. Schirato, D. Schmitz, M. H. Shaevitz, F. C. Shoemaker, D. Smith, M. Sorel, P. Spentzouris, I. Stancu, R. J. Stefanski, M. Sung, H. A. Tanaka, R. Tayloe, M. Tzanov, R. Van de Water, M. O. Wascko, D. H. White, M. J. Wilking, H. J. Yang, G. P. Zeller, E. D. Zimmerman
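Short-baseline appearance searches like this one probe the standard two-flavor vacuum oscillation probability P = sin²(2θ) · sin²(1.27 Δm² L/E), with Δm² in eV², L in km, and E in GeV. A minimal sketch of that formula follows; the parameter values are purely illustrative, not the experiment's measured results:

```python
import math

def appearance_probability(delta_m2_ev2, sin2_2theta, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability P(nu_mu -> nu_e).

    Standard approximation: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km, and E in GeV.
    """
    return sin2_2theta * math.sin(1.27 * delta_m2_ev2 * L_km / E_GeV) ** 2

# Illustrative inputs only: Delta m^2 ~ 1 eV^2 (the scale in the title),
# a small assumed mixing, and a MiniBooNE-like baseline and energy.
print(appearance_probability(1.0, 0.004, 0.54, 0.8))
```

The probability is bounded above by sin²(2θ) and vanishes as Δm² → 0, which is why a search at a given L/E is sensitive to a specific Δm² scale.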


Technometrics | 1993

Probability and Statistics in Experimental Physics

Byron P. Roe

Preface. 1. Basic Probability Concepts. 2. Some Initial Definitions. 3. Some Results of Specific Distributions. 4. Discrete Distributions and Combinatorials. 5. Specific Discrete Distributions. 6. The Normal (or Gaussian) Distribution and Other Continuous Distributions. 7. Generating Functions and Characteristic Functions. 8. The Monte Carlo Method: Computer Simulation of Experiments. 9. Queueing Theory and Other Probability Questions. 10. Two-Dimensional and Multidimensional Distributions. 11. The Central Limit Theorem. 12. Inverse Probability; Confidence Limits. 13. Methods for Estimating Parameters: Least Squares and Maximum Likelihood. 14. Curve Fitting. 15. Bartlett S Function; Estimating Likelihood Ratios Needed for an Experiment. 16. Interpolating Functions and Unfolding Problems. 17. Fitting Data with Correlations and Constraints. 18. Beyond Maximum Likelihood and Least Squares: Robust Methods. References.
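The Monte Carlo method of Chapter 8, computer simulation of experiments, can be illustrated with a minimal toy sketch (this example is illustrative and not taken from the book): estimate π from the fraction of uniformly sampled points falling inside a quarter circle.

```python
import random

def mc_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform points in the
    unit square that fall inside the unit quarter circle, times 4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

print(mc_pi(100_000))
```

The statistical error of such an estimate shrinks as 1/√N, a scaling treated in the book's discussion of simulated experiments.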


Physical Review Letters | 2008

Measurement of muon neutrino quasielastic scattering on carbon

A. A. Aguilar-Arevalo, A. O. Bazarko, S. J. Brice, B. C. Brown, L. Bugel, J. Cao, L. Coney, J. M. Conrad, D. C. Cox, A. Curioni, Z. Djurcic, D. A. Finley, B. T. Fleming, R. Ford, F. G. Garcia, G. T. Garvey, C. Green, J. A. Green, T. L. Hart, E. Hawker, R. Imlay, R. A. Johnson, P. Kasper, T. Katori, T. Kobilarcik, I. Kourbanis, S. Koutsoliotas, E. M. Laird, J. M. Link, Y. Liu, Y. Liu, W. C. Louis, K. B. M. Mahn, W. Marsh, P. S. Martin, G. McGregor, W. Metcalf, P. D. Meyers, F. Mills, G. B. Mills, J. Monroe, C. D. Moore, R. H. Nelson, P. Nienaber, S. Ouedraogo, R. B. Patterson, D. Perevalov, C. C. Polly, E. Prebys, J. L. Raaf, H. Ray, B. P. Roe, A. D. Russell, V. Sandberg, R. Schirato, D. Schmitz, M. H. Shaevitz, F. C. Shoemaker, D. Smith, M. Sorel, P. Spentzouris, I. Stancu, R. J. Stefanski, M. Sung, H. A. Tanaka, R. Tayloe, M. Tzanov, R. Van de Water, M. O. Wascko, D. H. White, M. J. Wilking, H. J. Yang, G. P. Zeller, E. D. Zimmerman
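Charged-current quasielastic (CCQE) analyses commonly reconstruct the neutrino energy from the outgoing muon kinematics alone, E_ν = (2 M E_μ − m_μ²) / (2 (M − E_μ + p_μ cos θ_μ)). A minimal sketch of that standard formula, neglecting nuclear binding energy; the numerical inputs are illustrative, not measured values from this paper:

```python
import math

M_N = 0.9396   # target nucleon (neutron) mass [GeV], approximate
M_MU = 0.1057  # muon mass [GeV]

def e_nu_qe(E_mu, cos_theta_mu):
    """Reconstructed neutrino energy [GeV] from muon energy and scattering
    angle, using the standard CCQE two-body formula (binding energy neglected)."""
    p_mu = math.sqrt(E_mu ** 2 - M_MU ** 2)  # muon momentum [GeV]
    return (2 * M_N * E_mu - M_MU ** 2) / (2 * (M_N - E_mu + p_mu * cos_theta_mu))

# Illustrative: a 0.5 GeV muon emitted at 20 degrees.
print(e_nu_qe(0.5, math.cos(math.radians(20.0))))
```

Because only the muon is used, the formula is exact for scattering off a free nucleon at rest; on carbon, Fermi motion and binding smear the reconstruction.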


Physical Review D | 2000

Setting confidence belts

Byron P. Roe; Michael Woodroofe

We propose using a Bayes procedure with a uniform improper prior to determine credible belts for the mean of a Poisson distribution in the presence of background, and for the continuous problem of measuring a non-negative quantity with a normally distributed measurement error. Within the Bayesian framework, these belts are optimal. The credible limits are then examined from a frequentist point of view and found to have good frequentist and conditional frequentist properties.


Physical Review Letters | 2017

Dark Matter Search in a Proton Beam Dump with MiniBooNE

A. A. Aguilar-Arevalo; M. Backfish; A. Bashyal; Brian Batell; B. C. Brown; R. Carr; A. Chatterjee; R. L. Cooper; Patrick deNiverville; R. Dharmapalan; Z. Djurcic; R. Ford; F. G. Garcia; G. T. Garvey; J. Grange; J. A. Green; W. Huelsnitz; I.L. de Icaza Astiz; G. Karagiorgi; T. Katori; W. Ketchum; T. Kobilarcik; Q. Liu; W. C. Louis; W. Marsh; C. D. Moore; G. B. Mills; J. Mirabal; P. Nienaber; Z. Pavlovic

The MiniBooNE-DM Collaboration searched for vector-boson-mediated production of dark matter using the Fermilab 8-GeV Booster proton beam in a dedicated run with 1.86×10^{20} protons delivered to a steel beam dump. The MiniBooNE detector, 490 m downstream, is sensitive to dark matter via elastic scattering with nucleons in the detector mineral oil. Analysis methods developed for previous MiniBooNE scattering results were employed, and several constraining data sets were simultaneously analyzed to minimize systematic errors from neutrino flux and interaction rates. No excess of events over background was observed, leading to a 90% confidence limit on the dark matter cross section parameter, Y = ε^{2}α_{D}(m_{χ}/m_{V})^{4} ≲ 10^{-8}, for α_{D} = 0.5 and for dark matter masses of 0.01 < m_{χ} < 0.3 GeV in a vector portal model of dark matter. This is the best limit from a dedicated proton beam dump search in this mass and coupling range and extends below the mass range of direct dark matter searches. These results demonstrate a novel and powerful approach to dark matter searches with beam dump experiments.


phystat | 2006

Boosted decision trees, a powerful event classifier

Byron P. Roe; Hai Jun Yang; J. Zhu

Consider the problem of classifying events as signal or background, given a number of particle identification (PID) variables. A decision tree is a sequence of binary splits of the data. To train the tree, a set of known training events is used; the results are measured using a separate set of known testing events. Consider all of the data to be on one node. The best PID variable, and the best place on that variable at which to split the data into separate signal and background, is found; there are then two nodes. The process is repeated on these new nodes and continued until a given number of final nodes (called "leaves") is obtained, until all leaves are pure, or until a node has too few events.

There are several popular criteria for determining the best PID variable and the best place on which to split a node; the Gini criterion is used here. Suppose that event i has weight W_i. The purity P of a node is defined as the weight of signal events on the node divided by the total weight of events on that node. For a given node, Gini = P(1 − P) Σ_i W_i, which is zero for P = 1 or P = 0. The best split is chosen as the one which minimizes Gini, and the next node to split is chosen by finding the node whose splitting maximizes the change in Gini. In this way a decision tree is built. Leaves with P ≥ 0.5 are signal leaves and the rest are background leaves.

Decision trees are powerful but unstable: a small change in the training data can produce a large change in the tree. This is remedied by the use of boosting. For boosting, the training events which were misclassified (a signal event fell on a background leaf or vice versa) have their weights increased (boosted), and a new tree is formed. This procedure is then repeated for the new tree, so that many trees are built up. The score from the mth individual tree T_m is taken as +1 if the event falls on a signal leaf and −1 if it falls on a background leaf. The final score is taken as a weighted sum of the scores of the individual trees.

Two methods for boosting are considered here. The first is called AdaBoost. Define err_m = (weight misclassified)/(total weight) for tree m. Let α_m = β log[(1 − err_m)/err_m], where β is a constant. In the statistical literature β has been taken as one, but for the MiniBooNE experiment β = 0.5 has been found to be the optimum value. The misclassified events have their weight multiplied by e^{α_m}. The weights are then renormalized so that the sum of all of the training event weights is one. The final score is T = Σ_{m=1}^{N_tree} α_m T_m.

The second method of boosting considered here is called ε-boost or, sometimes, shrinkage. Misclassified events have their weight multiplied by e^{2ε}, where ε is a constant; for the MiniBooNE experiment, ε = 0.03 has been optimum. (The results vary only mildly as β or ε are changed a bit.) The final score is T = Σ_{m=1}^{N_tree} T_m. ε-boost changes weights a little at a time, while AdaBoost can be shown to try to optimize each change in weights to minimize the expectation of e^{−yT}, where T is the score and y is +1 for a signal event and −1 for a background event. The optimum value is F = log[prob/(1 − prob)], where prob is the probability that y = 1, given the observed PID variables. In practice, for MiniBooNE, the two boosting methods have performed almost equally well.

Boosting is described as using many weak classifiers to build a strong classifier. This is seen in Figure 1: after the first few trees, the misclassification fraction for an individual tree is above 40%. In the MiniBooNE experiment some hundreds of possible PID variables have been suggested. The most powerful of these have been selected by accepting those which are used most often as the splitting variable. Some care needs to be taken, as some…


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2007

Studies of stability and robustness for artificial neural networks and boosted decision trees

Hai Jun Yang; Byron P. Roe; J. Zhu

In this paper, we compare the performance, stability, and robustness of Artificial Neural Networks (ANN) and Boosted Decision Trees (BDT) using MiniBooNE Monte Carlo samples. These methods attempt to classify events given a number of identification variables. The BDT algorithm has been discussed by us in previous publications. Testing is done in this paper by smearing and shifting the input variables of the testing samples. Based on these studies, BDT has better particle identification performance than ANN, and the degradation of classification caused by shifting or smearing the variables of the testing samples is smaller for BDT than for ANN.


Physics Today | 1996

Particle Physics at the New Millennium

Byron P. Roe; Alexander Firestone


Physical Review D | 2017

Matter density versus distance for the neutrino beam from Fermilab to Lead, South Dakota, and comparison of oscillations with variable and constant density

Byron P. Roe
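The AdaBoost procedure described above (weak tree classifiers with α_m = β log[(1 − err_m)/err_m], misclassified weights multiplied by e^{α_m} and renormalized, final score T = Σ α_m T_m) can be sketched in pure Python, with depth-1 threshold "stumps" standing in for small decision trees. The data and settings here are toy illustrations, not the MiniBooNE analysis:

```python
import math
import random

def train_stump(X, y, w):
    """Weak classifier: the best single-variable threshold cut by weighted
    error. Returns (err, feature, threshold, sign), where sign = +1 means
    'above threshold is signal'."""
    best = None
    for j in range(len(X[0])):
        for thr in sorted({x[j] for x in X}):
            for sign in (+1, -1):
                pred = [sign if x[j] > thr else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, n_trees=20, beta=0.5):
    """AdaBoost as described in the text, with beta = 0.5 (the MiniBooNE
    choice): alpha_m = beta*log((1-err_m)/err_m), misclassified event
    weights *= exp(alpha_m), then renormalize to total weight one."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(n_trees):
        err, j, thr, sign = train_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)  # guard the logarithm
        alpha = beta * math.log((1 - err) / err)
        pred = [sign if x[j] > thr else -sign for x in X]
        w = [wi * math.exp(alpha) if p != yi else wi
             for wi, p, yi in zip(w, pred, y)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def score(ensemble, x):
    """Final score T = sum_m alpha_m * T_m; T > 0 classifies x as signal."""
    return sum(alpha * (sign if x[j] > thr else -sign)
               for alpha, j, thr, sign in ensemble)

# Toy data: signal (y = +1) clustered at larger values of both variables.
rng = random.Random(0)
X = ([[rng.gauss(1, 1), rng.gauss(1, 1)] for _ in range(100)]
     + [[rng.gauss(-1, 1), rng.gauss(-1, 1)] for _ in range(100)])
y = [+1] * 100 + [-1] * 100
model = adaboost(X, y)
acc = sum(1 for xi, yi in zip(X, y)
          if (score(model, xi) > 0) == (yi > 0)) / len(y)
print("training accuracy:", acc)
```

Each stump alone is a weak classifier, but reweighting the misclassified events forces later stumps to concentrate on the hard cases, which is exactly the "many weak classifiers build a strong classifier" behavior the text describes.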

Collaboration


Dive into Byron P. Roe's collaboration.

Top Co-Authors

G. H. Trilling, University of California
J. Kadyk, University of California
John L. Brown, University of California
T. Katori, Queen Mary University of London
G. T. Garvey, Los Alamos National Laboratory
J. M. Conrad, Massachusetts Institute of Technology