Publication


Featured research published by Benjamin R. Barsdell.


Science | 2013

A Population of Fast Radio Bursts at Cosmological Distances

David J. Thornton; B. W. Stappers; M. Bailes; Benjamin R. Barsdell; S. D. Bates; N. D. R. Bhat; M. Burgay; S. Burke-Spolaor; D. J. Champion; P. Coster; N. D'Amico; A. Jameson; S. Johnston; M. J. Keith; M. Kramer; Lina Levin; S. Milia; C. Ng; A. Possenti; W. van Straten

Mysterious Radio Bursts (editor's summary): It has been uncertain whether the single, short, bright bursts of radio emission that have been observed are celestial or terrestrial in origin. Thornton et al. (p. 53; see the Perspective by Cordes) report the detection of four nonrepeating radio transient events of millisecond duration in data from the 64-meter Parkes radio telescope in Australia. The properties of these radio bursts indicate that they originated outside our galaxy, but it is not possible to tell what caused them. Because the intergalactic medium affects the characteristics of the bursts, it will be possible to use them to study its properties.

Searches for transient astrophysical sources often reveal unexpected classes of objects that are useful physical laboratories. In a recent survey for pulsars and fast transients, we have uncovered four millisecond-duration radio transients, all more than 40° from the Galactic plane. The bursts’ properties indicate that they are of celestial rather than terrestrial origin. Host galaxy and intergalactic medium models suggest that they have cosmological redshifts of 0.5 to 1 and distances of up to 3 gigaparsecs. No temporally coincident x- or gamma-ray signature was identified in association with the bursts. Characterization of the source population and identification of host galaxies offers an opportunity to determine the baryonic content of the universe.
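For context (a standard pulsar-astronomy relation, not taken from the abstract above), the extragalactic interpretation rests on the dispersion measure: free electrons along the line of sight delay the burst at lower frequencies by

```latex
\Delta t \;\simeq\; 4.15\,\mathrm{ms}\times\mathrm{DM}
\left[\left(\frac{\nu_{\mathrm{lo}}}{\mathrm{GHz}}\right)^{-2}
     -\left(\frac{\nu_{\mathrm{hi}}}{\mathrm{GHz}}\right)^{-2}\right],
\qquad
\mathrm{DM}=\int_{0}^{d} n_e\,\mathrm{d}l \ \ [\mathrm{pc\,cm^{-3}}],
```

so bursts whose DM greatly exceeds the maximum Galactic contribution along their high-latitude sightlines are naturally attributed to extragalactic distances.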


New Astronomy | 2010

Teraflop per second gravitational lensing ray-shooting using graphics processing units

Alexander C. Thompson; Christopher J. Fluke; David G. Barnes; Benjamin R. Barsdell

Gravitational lensing calculation using a direct inverse ray-shooting approach is a computationally expensive way to determine magnification maps, caustic patterns, and light-curves (e.g. as a function of source profile and size). However, as an easily parallelisable calculation, gravitational ray-shooting can be accelerated using programmable graphics processing units (GPUs). We present our implementation of inverse ray-shooting for the NVIDIA G80 generation of graphics processors using the NVIDIA Compute Unified Device Architecture (CUDA) software development kit. We also extend our code to multiple GPU systems, including a 4-GPU NVIDIA S1070 Tesla unit. We achieve sustained processing performance of 182 Gflop/s on a single GPU, and 1.28 Tflop/s using the Tesla unit. We demonstrate that billion-lens microlensing simulations can be run on a single computer with a Tesla unit in timescales of order a day without the use of a hierarchical tree-code.
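As a concrete illustration of the brute-force approach described above, here is a minimal CUDA sketch of direct inverse ray-shooting (a simplified stand-in, not the authors' code): each thread propagates one ray through a field of point-mass lenses via the lens equation and bins it into a source-plane count map. All names, parameters, and the toy lens field are hypothetical; convergence/shear terms and the normalisation to magnification are omitted.

```cuda
// Minimal, illustrative sketch of direct inverse ray-shooting on a GPU.
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cuda_runtime.h>

__global__ void shoot_rays(const float2 *lens_pos, const float *lens_mass,
                           int n_lens, int rays_per_side, float ray_extent,
                           unsigned int *src_map, int map_side, float src_extent)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= rays_per_side || iy >= rays_per_side) return;

    // Image-plane position of this ray (Einstein-radius units).
    float x1 = (ix + 0.5f) / rays_per_side * 2.f * ray_extent - ray_extent;
    float x2 = (iy + 0.5f) / rays_per_side * 2.f * ray_extent - ray_extent;

    // Point-mass lens equation: y = x - sum_j m_j (x - x_j) / |x - x_j|^2
    float y1 = x1, y2 = x2;
    for (int j = 0; j < n_lens; ++j) {
        float d1 = x1 - lens_pos[j].x, d2 = x2 - lens_pos[j].y;
        float w  = lens_mass[j] / (d1 * d1 + d2 * d2 + 1e-12f);
        y1 -= d1 * w;  y2 -= d2 * w;
    }

    // Accumulate the deflected ray in the source-plane count map.
    int px = (int)((y1 + src_extent) / (2.f * src_extent) * map_side);
    int py = (int)((y2 + src_extent) / (2.f * src_extent) * map_side);
    if (px >= 0 && px < map_side && py >= 0 && py < map_side)
        atomicAdd(&src_map[py * map_side + px], 1u);
}

int main()
{
    const int n_lens = 256, rays = 1024, map_side = 128;
    std::vector<float2> h_pos(n_lens);
    std::vector<float>  h_mass(n_lens, 1.0f);
    for (int j = 0; j < n_lens; ++j) {            // toy random star field
        h_pos[j].x = 20.f * rand() / RAND_MAX - 10.f;
        h_pos[j].y = 20.f * rand() / RAND_MAX - 10.f;
    }

    float2 *d_pos; float *d_mass; unsigned int *d_map;
    cudaMalloc(&d_pos,  n_lens * sizeof(float2));
    cudaMalloc(&d_mass, n_lens * sizeof(float));
    cudaMalloc(&d_map,  map_side * map_side * sizeof(unsigned int));
    cudaMemcpy(d_pos,  h_pos.data(),  n_lens * sizeof(float2), cudaMemcpyHostToDevice);
    cudaMemcpy(d_mass, h_mass.data(), n_lens * sizeof(float),  cudaMemcpyHostToDevice);
    cudaMemset(d_map, 0, map_side * map_side * sizeof(unsigned int));

    dim3 block(16, 16), grid((rays + 15) / 16, (rays + 15) / 16);
    shoot_rays<<<grid, block>>>(d_pos, d_mass, n_lens, rays, 12.f, d_map, map_side, 6.f);
    cudaDeviceSynchronize();

    unsigned int corner;
    cudaMemcpy(&corner, d_map, sizeof(unsigned int), cudaMemcpyDeviceToHost);
    printf("rays landing in corner pixel: %u\n", corner);
    cudaFree(d_pos); cudaFree(d_mass); cudaFree(d_map);
    return 0;
}
```

Because every ray applies the same deflection sum independently, the problem maps naturally onto one thread per ray, which is what makes the brute-force approach attractive on GPUs.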


Monthly Notices of the Royal Astronomical Society | 2013

The High Time Resolution Universe Pulsar Survey – VIII. The Galactic millisecond pulsar population

L. Levin; M. Bailes; Benjamin R. Barsdell; S. D. Bates; N. D. R. Bhat; M. Burgay; S. Burke-Spolaor; D. J. Champion; P. Coster; N. D'Amico; A. Jameson; S. Johnston; M. J. Keith; M. Kramer; S. Milia; C. Ng; Andrea Possenti; B. W. Stappers; David J. Thornton; W. van Straten

We have used millisecond pulsars (MSPs) from the southern High Time Resolution Universe (HTRU) intermediate latitude survey area to simulate the distribution and total population of MSPs in the Galaxy. Our model makes use of the scale factor method, which estimates the ratio of the total number of MSPs in the Galaxy to the known sample. Using our best-fit value for the z-height, z = 500 pc, we find an underlying population of MSPs of (8.3 ± 4.2) × 10^4 sources down to a limiting luminosity of L_min = 0.1 mJy kpc^2 and a luminosity distribution with a steep slope of d log N / d log L = −1.45 ± 0.14. However, at the low end of the luminosity distribution, the uncertainties introduced by small-number statistics are large. By omitting very low luminosity pulsars, we find a Galactic population above L_min = 0.2 mJy kpc^2 of only (3.0 ± 0.7) × 10^4 MSPs. We have also simulated pulsars with periods shorter than any known MSP, and estimate the maximum number of sub-MSPs in the Galaxy to be (7.8 ± 5.0) × 10^4 pulsars at L = 0.1 mJy kpc^2. In addition, we estimate that the high- and low-latitude parts of the southern HTRU survey will detect 68 and 42 MSPs respectively, including 78 new discoveries. Pulsar luminosity, and hence flux density, is an important input parameter in the model. Some of the published flux densities for the pulsars in our sample do not agree with the observed flux densities from our data set, and we have instead calculated average luminosities from archival data from the Parkes Telescope. We found many luminosities to be very different from their catalogue values, leading to very different population estimates. Large variations in flux density highlight the importance of including scintillation effects in MSP population studies.
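Schematically (a hedged sketch of the general scale factor method, not the paper's exact implementation), each detected pulsar i is assigned a weight equal to the ratio of the modelled Galactic population to the modelled detectable population with similar period and luminosity, and the weights are summed:

```latex
\xi_i \;=\; \frac{N_{\mathrm{mod}}(P_i, L_i)}{N_{\mathrm{det,mod}}(P_i, L_i)},
\qquad
N_{\mathrm{tot}} \;\approx\; \sum_{i\,\in\,\mathrm{detected}} \xi_i .
```

Faint pulsars carry the largest scale factors, which is why the quoted population is so sensitive to the adopted L_min and to small-number statistics at the low-luminosity end.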


Monthly Notices of the Royal Astronomical Society | 2012

Accelerating incoherent dedispersion

Benjamin R. Barsdell; M. Bailes; David G. Barnes; Christopher J. Fluke

Incoherent dedispersion is a computationally intensive problem that appears frequently in pulsar and transient astronomy. For current and future transient pipelines, dedispersion can dominate the total execution time, meaning its computational speed acts as a constraint on the quality and quantity of science results. It is thus critical that the algorithm be able to take advantage of trends in commodity computing hardware. With this goal in mind, we present an analysis of the ‘direct’, ‘tree’ and ‘sub-band’ dedispersion algorithms with respect to their potential for efficient execution on modern graphics processing units (GPUs). We find all three to be excellent candidates, and proceed to describe implementations in C for CUDA using insight gained from the analysis. Using recent CPU and GPU hardware, the transition to the GPU provides a speed-up of nine times for the direct algorithm when compared to an optimized quad-core CPU code. For realistic recent survey parameters, these speeds are high enough that further optimization is unnecessary to achieve real-time processing. Where further speed-ups are desirable, we find that the tree and sub-band algorithms are able to provide three to seven times better performance at the cost of certain smearing, memory consumption and development time trade-offs. We finish with a discussion of the implications of these results for future transient surveys. Our GPU dedispersion code is publicly available as a C library at http://dedisp.googlecode.com/.
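To make the 'direct' algorithm concrete, below is a minimal CUDA sketch (a simplified stand-in, not the dedisp library itself): one thread computes one (DM trial, time sample) of the output by summing the filterbank across frequency channels with precomputed dispersion delays. The band, DM grid, and data are toy values chosen only for illustration.

```cuda
// Minimal, illustrative sketch of 'direct' (brute-force) incoherent dedispersion.
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

__global__ void dedisperse_direct(const float *filterbank, // [n_chan][n_samp]
                                  const int *delay,        // [n_dm][n_chan], in samples
                                  float *out,              // [n_dm][n_out]
                                  int n_chan, int n_samp, int n_dm, int n_out)
{
    int t  = blockIdx.x * blockDim.x + threadIdx.x;  // output time sample
    int dm = blockIdx.y;                             // DM trial index
    if (t >= n_out || dm >= n_dm) return;

    // Sum over channels with the channel- and DM-dependent delay applied.
    float sum = 0.f;
    for (int c = 0; c < n_chan; ++c)
        sum += filterbank[c * n_samp + t + delay[dm * n_chan + c]];
    out[dm * n_out + t] = sum;
}

int main()
{
    const int n_chan = 512, n_samp = 4096, n_dm = 64;
    const double f_hi = 1.5, f_lo = 1.2, dt = 64e-6;  // GHz, GHz, s (toy values)

    // Delay relative to the top of the band:
    // t(f) = 4.15e-3 s * DM * (f^-2 - f_hi^-2), f in GHz, DM in pc cm^-3.
    std::vector<int> h_delay(n_dm * n_chan);
    int max_delay = 0;
    for (int dm = 0; dm < n_dm; ++dm)
        for (int c = 0; c < n_chan; ++c) {
            double f  = f_hi - (f_hi - f_lo) * c / (n_chan - 1);
            double dm_trial = 1.0 * dm;               // toy DM grid, 1 pc cm^-3 steps
            double d  = 4.15e-3 * dm_trial * (1.0 / (f * f) - 1.0 / (f_hi * f_hi));
            int    ds = (int)std::round(d / dt);
            h_delay[dm * n_chan + c] = ds;
            if (ds > max_delay) max_delay = ds;
        }
    const int n_out = n_samp - max_delay;

    std::vector<float> h_fb(n_chan * n_samp, 1.0f);   // toy filterbank data

    float *d_fb, *d_out; int *d_delay;
    cudaMalloc(&d_fb,    n_chan * n_samp * sizeof(float));
    cudaMalloc(&d_delay, n_dm * n_chan * sizeof(int));
    cudaMalloc(&d_out,   n_dm * n_out * sizeof(float));
    cudaMemcpy(d_fb,    h_fb.data(),    n_chan * n_samp * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_delay, h_delay.data(), n_dm * n_chan * sizeof(int),     cudaMemcpyHostToDevice);

    dim3 block(256), grid((n_out + 255) / 256, n_dm);
    dedisperse_direct<<<grid, block>>>(d_fb, d_delay, d_out, n_chan, n_samp, n_dm, n_out);
    cudaDeviceSynchronize();

    float first;
    cudaMemcpy(&first, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("first dedispersed sample (DM trial 0): %g\n", first);
    cudaFree(d_fb); cudaFree(d_delay); cudaFree(d_out);
    return 0;
}
```

The tree and sub-band variants mentioned in the abstract reduce the per-sample channel sum by sharing partial sums between nearby DM trials, at the cost of extra smearing and memory, which is the trade-off the paper quantifies.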


Publications of the Astronomical Society of Australia | 2011

Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

Christopher J. Fluke; David G. Barnes; Benjamin R. Barsdell; Amr H. Hassan

General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy stand to reap great benefits.


Monthly Notices of the Royal Astronomical Society | 2012

The High Time Resolution Universe Pulsar Survey – VI. An artificial neural network and timing of 75 pulsars

S. D. Bates; M. Bailes; Benjamin R. Barsdell; N. D. R. Bhat; M. Burgay; S. Burke-Spolaor; D. J. Champion; P. Coster; N. D'Amico; A. Jameson; S. Johnston; M. J. Keith; M. Kramer; Lina Levin; A. G. Lyne; S. Milia; C.-Y. Ng; C. Nietner; Andrea Possenti; B. W. Stappers; David J. Thornton; W. van Straten

We present 75 pulsars discovered in the mid-latitude portion of the High Time Resolution Universe survey, 54 of which have full timing solutions. All the pulsars have spin periods greater than 100 ms, and none of those with timing solutions are in binaries. Two display particularly interesting behaviour: PSR J1054−5944 is found to be an intermittent pulsar, and PSR J1809−0119 has glitched twice since its discovery. In the second half of the paper we discuss the development and application of an artificial neural network in the data-processing pipeline for the survey. We discuss the tests that were used to generate scores and find that our neural network was able to reject over 99% of the candidates produced in the data processing, and able to blindly detect 85% of pulsars. We suggest that improvements to the accuracy should be possible if further care is taken when training an artificial neural network.


New Astronomy | 2010

Computational advances in gravitational microlensing: a comparison of CPU, GPU, and parallel, large data codes

N. F. Bate; Christopher J. Fluke; Benjamin R. Barsdell; H. Garsden; Geraint F. Lewis

To assess how future progress in gravitational microlensing computation at high optical depth will rely on both hardware and software solutions, we compare a direct inverse ray-shooting code implemented on a graphics processing unit (GPU) with both a widely-used hierarchical tree code on a single-core CPU, and a recent implementation of a parallel tree code suitable for a CPU-based cluster supercomputer. We examine the accuracy of the tree codes through comparison with a direct code over a much wider range of parameter space than has been feasible before. We demonstrate that all three codes present comparable accuracy, and choice of approach depends on considerations relating to the scale and nature of the microlensing problem under investigation. On current hardware, there is little difference in the processing speed of the single-core CPU tree code and the GPU direct code; however, the recent plateau in single-core CPU speeds means the existing tree code is no longer able to take advantage of Moore’s law-like increases in processing speed. Instead, we anticipate a rapid increase in GPU capabilities in the next few years, which is advantageous to the direct code. We suggest that progress in other areas of astrophysical computation may benefit from a transition to GPUs through the use of “brute force” algorithms, rather than attempting to port the current best solution directly to a GPU language – for certain classes of problems, the simple implementation on GPUs may already be no worse than an optimised single-core CPU version.
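For reference (a hedged restatement of standard microlensing background, ignoring external convergence and shear), both the direct and tree codes evaluate the point-mass lens equation

```latex
\boldsymbol{y} \;=\; \boldsymbol{x} \;-\; \sum_{j=1}^{N_*} m_j\,
\frac{\boldsymbol{x}-\boldsymbol{x}_j}{\lvert\boldsymbol{x}-\boldsymbol{x}_j\rvert^{2}},
```

the direct code by summing all N_* deflections for every ray (cost scaling as N_rays × N_*), and the tree codes by replacing distant groups of lenses with multipole approximations, reducing the per-ray cost to roughly log N_* at the price of a controllable approximation error.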


Monthly Notices of the Royal Astronomical Society | 2010

Analysing astronomy algorithms for graphics processing units and beyond

Benjamin R. Barsdell; David G. Barnes; Christopher J. Fluke

Astronomy depends on ever increasing computing power. Processor clock-rates have plateaued, and increased performance is now appearing in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics Processing Units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning-curve and the significant speedups exhibited by massively-parallel hardware architectures. We present a generalised approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively-parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: Högbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion and volume rendering. Algorithms with well-defined memory access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively-parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power.
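The arithmetic intensity invoked above has a standard (roofline-model) definition, given here for context rather than taken from the paper:

```latex
I \;=\; \frac{\text{floating-point operations performed}}
             {\text{bytes moved between memory and the processor}}
\quad [\mathrm{FLOP/byte}].
```

For example, direct inverse ray-shooting reuses each lens position across many rays, so I is large and the kernel is compute-bound (well suited to GPUs), whereas a simple element-wise pass over a large array performs of order one operation per several bytes loaded and remains memory-bandwidth-bound on any architecture.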


Monthly Notices of the Royal Astronomical Society | 2012

Three-dimensional shapelets and an automated classification scheme for dark matter haloes

Christopher J. Fluke; A. L. Malec; P. D. Lasky; Benjamin R. Barsdell

We extend the two-dimensional Cartesian shapelet formalism to d-dimensions. Concentrating on the three-dimensional case, we derive shapelet-based equations for the mass, centroid, root mean square radius, and components of the quadrupole moment and moment of inertia tensors. Using cosmological N-body simulations as an application domain, we show that three-dimensional shapelets can be used to replicate the complex sub-structure of dark matter haloes and demonstrate the basis of an automated classification scheme for halo shapes. We investigate the shapelet decomposition process from an algorithmic viewpoint, and consider opportunities for accelerating the computation of shapelet-based representations using graphics processing units.
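For readers unfamiliar with the formalism, the Cartesian shapelet basis being extended consists of Gauss-Hermite functions, with the three-dimensional basis built as a product over coordinates (standard conventions shown; the paper's exact normalisation may differ):

```latex
\phi_n(x;\beta) \;=\; \bigl[2^{n}\, n!\, \sqrt{\pi}\, \beta\bigr]^{-1/2}
H_n\!\left(\frac{x}{\beta}\right) e^{-x^{2}/2\beta^{2}},
\qquad
\Phi_{\boldsymbol{n}}(\boldsymbol{x};\beta) \;=\; \prod_{k=1}^{3} \phi_{n_k}(x_k;\beta),
\qquad
f(\boldsymbol{x}) \;\approx\; \sum_{\boldsymbol{n}} f_{\boldsymbol{n}}\,
\Phi_{\boldsymbol{n}}(\boldsymbol{x};\beta),
```

where H_n is the nth Hermite polynomial and β sets the characteristic scale; quantities such as the mass and centroid then follow as linear combinations of the coefficients f_n.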

Collaboration


Dive into Benjamin R. Barsdell's collaborations.

Top Co-Authors

Christopher J. Fluke, Swinburne University of Technology
F. K. Schinzel, University of New Mexico
G. B. Taylor, University of New Mexico
A. Jameson, Swinburne University of Technology
M. Bailes, Swinburne University of Technology
J. Dowell, University of New Mexico
Jonathon Kocz, Australian National University