Studies Supporting PMT Characterization for the IceCube Collaboration: Precise Photodiode Calibration
Honours Award in Sciences
Wouter Van De Pontseele
Abstract
A laboratory set-up has been developed to more precisely measure the DOM optical sensitivity as a function of angle and wavelength. DOMs are calibrated in water using a broad beam of light whose intensity is measured with a NIST-calibrated photodiode. This study will refine the current knowledge of the IceCube response and lay a foundation for future precision upgrades to the detector. A good understanding of the photodiode readout is indispensable for the DOM calibration. Corrections to the photodiode measurements due to the amplifier circuit were investigated. To accomplish this, a general software structure has been added to the existing framework. Because the set of parameters in the source sector is still growing, modularity and a high level of automation were important objectives. The software features a large array of graphical tools to intercept problems at low level, while the analysis can easily be adapted to the needs of foreseeable situations.
Christopher Wendt - University of Wisconsin–Madison
Prof. Dr. Dirk Ryckbosch - Ghent University
October 2015

Introduction
The neutrino was discovered by Cowan and Reines during an experiment based on inverse β-decay in 1956 [1]. Soon after the observation, it was realised that the particle could be used as an ideal astronomical messenger. Neutrinos are leptons which have no electrical charge and almost no mass; as a result, they only interact through the weak force. On their journey from the edges of the Universe they travel essentially without absorption or deflection by magnetic fields. This gives them an advantage over photons, which also interact electromagnetically. So, high-energy neutrinos may reach us unscathed from cosmic distances, from the inner neighbourhood of black holes, and, hopefully, tell us things about the furnaces where cosmic rays are born. Another thing to keep in mind is that cosmogenic neutrinos are signatures of hadronic interactions, while the origin of photons is more ambiguous.

On the other hand, the above-mentioned aspects of neutrinos make them very hard to detect. Huge detectors are needed to accumulate statistically significant data from experiments. Theoretically based estimates predict that the observation of potential cosmic accelerators such as gamma-ray bursts and quasars requires a cubic-kilometre-sized detector; at the time this was realised, around 1970, a daunting technical challenge. Given the detector's required size, it seemed logical to use large volumes of natural water and transform them into Cherenkov detectors that catch the light produced by neutrinos interacting with nuclei. The first early effort was the DUMAND Project (Deep Underwater Muon And Neutrino Detector Project). DUMAND was a proposed underwater neutrino telescope to be built in the Pacific Ocean, off the shore of the island of Hawaii, five kilometres beneath the surface. It would have included thousands of optical sensors occupying a cubic kilometre of the ocean. Work began in about 1976, at Keahole Point, but the project was cancelled in 1995 due to technical difficulties.
Despite this discontinuation, DUMAND paved the way for later efforts by pioneering many of the detector technologies in use today, and by inspiring the deployment of a smaller instrument in Lake Baikal, as well as efforts to commission neutrino telescopes in the Mediterranean Sea. The first telescope on the scale envisaged by the DUMAND collaboration was realized by transforming a large volume of the extremely transparent, natural deep Antarctic ice into a particle detector, the Antarctic Muon and Neutrino Detector Array (AMANDA). In operation from 2000 to 2009, it represented a proof of concept for the kilometre-scale neutrino observatory, IceCube [2].

The energy of a given neutrino, whether it is low or extremely high, gives us some clues about how and where it was produced. The different origins are depicted in figure 1.
Low-energy neutrinos can be divided into two categories. The first is the cosmic neutrino background (CνB), a remnant of the Big Bang, which cannot be detected yet. Nevertheless, its characteristics can be predicted from cosmological models. The CνB has a very high flux but extremely low energy; it is indicated in figure 1 as cosmological ν. The second category is dominant from keV up to several MeV. Those neutrinos mainly find their origin in nuclear processes, such as the ones produced in nuclear reactors, the Sun or the centre of an exploding supernova.

High-energy neutrinos come dominantly from the decay of pions and kaons; those heavier particles have their origin in collisions of cosmic rays with nitrogen and oxygen in the atmosphere. These neutrinos are therefore called “atmospheric” in figure 1. Their energy extends from a few MeV up to tenths of a PeV. The first generations of Cherenkov detectors made clear that these atmospheric neutrinos would be a major background, at least for energies below 1 PeV, to searches for non-thermal astronomical sources where cosmic rays are accelerated. The spectrum of cosmic neutrinos from these sources extends to energies beyond those characteristic of atmospheric neutrinos.

Figure 1: Measured and expected fluxes of natural and reactor neutrinos. The energy range from keV to several GeV is the domain of underground detectors. The region from tens of GeV to about 100 PeV, with its much smaller fluxes, is addressed by Cherenkov light detectors, underwater and in ice. The predicted highest energies are only accessible with detectors one to three orders of magnitude larger than IceCube [3].
Very-high-energy and ultra-high-energy (UHE) neutrinos are categories that start around a few TeV but can reach the PeV scale on rare occasions; the record for the highest-energy neutrino ever captured is a muon neutrino of more than 2.6 PeV at IceCube in August 2015. Neutrinos with such a high energy cannot be created from within the solar system and are called cosmogenic. They can be divided into two kinds. The first kind of cosmogenic neutrinos are directly created in hadronic processes in or near the most extreme objects in our Universe, those powered by black holes and neutron stars. The second kind are decay products of pions produced by the interactions of cosmic rays with CMB photons. Cosmic rays above a threshold of around 4 × 10¹⁹ eV interact with the microwave background, introducing a cut-off feature in the cosmic-ray spectrum, the Greisen-Zatsepin-Kuzmin (GZK) cut-off [4]. The value of this cut-off is estimated in appendix A. This phenomenon limits the mean free path of extragalactic cosmic rays propagating in the microwave background to roughly 75 megaparsec. Therefore, neutrinos are the only probe of the UHE sources at longer distances. The GZK flux shares the high-energy neutrino sky with neutrinos from gamma-ray bursts and active galactic nuclei.

However advanced the detector may be, most neutrinos will stream through it without any kind of interaction. The few that interact with a nucleus create secondary muons or even hadronic or electromagnetic showers. The charged particles with enough kinetic energy radiate Cherenkov light. Cherenkov light is radiated by charged particles moving faster than the speed of light in the medium; in ice, this is 75% of the speed of light in a vacuum. A typical Cherenkov detector consists of some thousand photomultiplier tubes (PMTs), which detect this blue and near-UV light. With a sufficient density of PMTs, neutrinos with energies of only a few MeV may be reconstructed.
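The Cherenkov condition quoted above (light in ice travels at about 75% of the vacuum speed) fixes a minimum energy for a charged particle to radiate. A minimal sketch of that threshold calculation, using the standard electron rest mass as input (the function name and structure are mine, not from this set-up):

```python
import math

def cherenkov_threshold_total_energy_mev(rest_mass_mev, n):
    """Minimum total energy for a charged particle to emit Cherenkov
    light in a medium with refractive index n.
    Condition: beta > 1/n, so gamma_th = 1 / sqrt(1 - 1/n**2)."""
    beta_th = 1.0 / n
    gamma_th = 1.0 / math.sqrt(1.0 - beta_th**2)
    return gamma_th * rest_mass_mev

# In ice, light travels at ~75% of c, i.e. n ≈ 1/0.75 ≈ 1.33.
n_ice = 1.0 / 0.75
m_e = 0.511  # electron rest mass in MeV

E_th = cherenkov_threshold_total_energy_mev(m_e, n_ice)
print(f"Electron Cherenkov threshold in ice: {E_th:.2f} MeV total energy")
```

The result, a bit below 1 MeV for electrons, is consistent with the statement that MeV-scale neutrinos are reconstructable once the PMT density is high enough.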
The water Cherenkov technique was first used in kiloton detectors, optimized for relatively low-energy (GeV) neutrinos. The two most successful first-generation detectors were the Irvine-Michigan-Brookhaven (IMB) and Kamiokande detectors. Both consisted of tanks containing thousands of tons of purified water, monitored with thousands of PMTs on the top and sides of the tank. Although optimized for GeV energies, these detectors were also sensitive to lower energy neutrinos; IMB and Kamiokande launched neutrino astronomy by detecting some 20 low-energy (MeV) neutrino events from supernova 1987A. Their success, as well as the accumulating evidence for the “solar neutrino puzzle”, stimulated the development of two second-generation detectors. Super-Kamiokande is a 50,000-ton version of Kamiokande, and the Sudbury Neutrino Observatory (SNO) was a 1,000-ton heavy-water (D2O) based detector. Together, the two experiments clearly showed that neutrinos have mass by observing flavour oscillations in the solar and atmospheric-neutrino beams, thus providing the first evidence for physics beyond the Standard Model. Arthur B. McDonald and Takaaki Kajita were awarded the Nobel Prize in Physics (2015) for their contributions to SNO and Super-Kamiokande.

In summary, the field has already achieved spectacular success: neutrino detectors have “seen” the Sun and detected a supernova in the Large Magellanic Cloud in 1987. Both observations were of tremendous importance; the former showed that neutrinos have a tiny mass, opening the first crack in the Standard Model of Particle Physics, and the latter confirmed the theory of stellar evolution as well as the basic nuclear physics of the death of stars.

Figure 2: Solar image using neutrinos captured with the Super-Kamiokande Cherenkov detector.
IceCube, the South Pole neutrino observatory, is a cubic-kilometre particle detector made of Antarctic ice and located near the Amundsen-Scott South Pole Station. It is buried beneath the surface, extending to a depth of about 2,500 meters. As hinted before, the construction of IceCube has been largely motivated by the opening of a new astronomical window, which allows us to learn about the most extreme places in our universe.

There are two options for such a kilometre-scale neutrino detector: liquid water or clear ice. As water can have a very good optical quality, it has the benefit of very good angular resolution. On the other hand, the decay of potassium and bioluminescence create bursts of background light that result in detector dead time. Another issue is the necessity to track the exact position of the PMTs because of the water currents. Compared to water, natural deep ice has a shorter scattering length but a longer attenuation length, which leads to an absorption length of more than 100 m. Because the ice is extremely sterile, it also benefits from the absence of decays and bioluminescence [5]. Appropriate reconstruction simulations showed that the PMTs can be placed a little further apart from each other in ice than in water. Nevertheless, there is still a large-scale water Cherenkov detector planned in the Mediterranean Sea, KM3NeT.

IceCube can measure neutrinos with energies above a few dozen GeV, which allows for measuring both the atmospheric and the extraterrestrial fluxes of neutrinos. Although atmospheric neutrinos are regarded as an annoying background, in most cases it is useful to calibrate the detector with them and compare results with other, smaller Cherenkov detectors. The detector's major sensitivity is reached above the TeV scale, where the extraterrestrial flux is expected to become increasingly dominant over the atmospheric neutrinos.

The in-ice component of IceCube consists of 5,160 digital optical modules (DOMs), each with a photomultiplier tube and associated electronics. The DOMs are attached to vertical strings, frozen into 86 boreholes, and arrayed over a cubic kilometre from 1,450 meters to 2,450 meters depth. The strings are deployed on a hexagonal grid with 125 meters spacing and hold 60 DOMs each. The vertical separation of the DOMs is 17 meters. Eight of these strings at the centre of the array were deployed more compactly, with a horizontal separation of about 70 meters and a vertical DOM spacing of 7 meters. This denser configuration forms the DeepCore subdetector. IceTop consists of 81 stations located on top of the same number of IceCube strings. Each station has two tanks, each equipped with two downward-facing DOMs. IceTop, built as a veto and calibration detector for IceCube, also detects air showers from primary cosmic rays. The surface array measures the cosmic-ray arrival directions in the Southern Hemisphere as well as the flux and composition of cosmic rays. All the parts described are illustrated in figure 3.

There are several essential steps in the process of converting the messages from individual DOMs into light patterns that reveal the direction and energy of muons and neutrinos. The Cherenkov light is emitted by secondary particles after a neutrino interaction.

Figure 3: Actual design of the IceCube neutrino detector with 5,160 optical sensors viewing a kilometre cubed of natural ice. The signals detected by each sensor are transmitted to the surface over the 86 cables to which the sensors are attached. IceCube encloses its smaller predecessor, AMANDA.
Photomultipliers transform the Cherenkov light into electrical signals using the photoelectric effect. These signals are captured by computer chips that digitize the shape of the current pulses. The information is sent to the computers collecting the data, first by cable to the IceCube Lab at the surface of the ice sheet and then via magnetic tape or, more recently, to the disk array storage. More interesting events are sent by satellite to the IceCube data centre in Madison, Wisconsin.

Essentially, IceCube consists of 5,160 freely running sensors sending time-stamped, digitized waveforms of the light they detect to the surface. The local clocks in the sensors are kept calibrated with nanosecond precision. This information allows the scientists to reconstruct neutrino events and infer their arrival directions and energies. The complete IceCube detector observes several hundred neutrinos per day with energies above 100 GeV. The DeepCore array at the heart of IceCube identifies a smaller sample with energies as low as 10 GeV. It is the DeepCore part that is able to detect low-energy atmospheric neutrinos and creates the opportunity to study neutrino oscillations.

Figure 4: An LC-130 rocket-assisted take-off.
Each digital optical module is an integrated package containing a large photomultiplier, a high voltage circuit, an LED flasher calibration board and a digital data acquisition system. The DOM mainboard is the core of the IceCube data acquisition system. It contains the logic responsible for reading out, digitizing and buffering the PMT signals. The layout of the module is shown in figure 5. All internals are housed in a glass casing.

The DOM development had to overcome challenging design problems, among them the extreme reliability required of a remotely deployed system in a high-pressure, low-temperature environment. The borosilicate glass sphere encasing the DOM internals protects them from the immense pressures, up to 400 atm during ice refreeze, and is selected for high UV transmission. To be more specific, the case is able to withstand pressures up to 70 MPa and transmits light with wavelengths longer than about 350 nm. Besides that, there was a stringent limit on the allowed radioactivity of the vessel to maintain a maximal dark noise rate of 500 Hz. The electronics inside had to be operational from room temperature, for testing, down to the well-below-freezing temperatures of the deep ice. The reliability requirement had to be on the same level as for satellites, because the DOMs are totally inaccessible after deployment. To assure this, the boards and completed DOMs were subject to stringent testing. Prototype boards were subjected to HALT (Highly Accelerated Lifetime Test) cycling, including high and low temperatures, rapid temperature cycling, and high vibration levels. Thermal imaging was also used to check for hot spots. All of the production boards were subjected to HASS testing, a less-stressful version of HALT. Ninety-eight percent of the DOMs survive deployment and freeze-in completely; another 1% are impaired but usable (usually, they have lost their local coincidence connections).
Post-freeze-in reliability has been excellent; the estimated 15-year survival probability is 94%. Since all the fuel has to be flown in on LC-130s, the polar version of the C-130 cargo plane with ski-equipped landing gear, the DOMs also had to consume as little power as possible. For entertainment purposes, the plane can be found in figure 4. With the currently used configuration, each digital optical module consumes about 3.5 W.

Another requirement was a huge dynamic range: from single photon hits, caused by passing cosmic-ray muons, up to tens of thousands of photons from immense electromagnetic showers. The time tagging of events had to be of nanosecond precision. The system response to a single photo-electron (SPE) is a pulse with an average amplitude of about 10 mV and a width of 5 ns. The timing is slightly sensitive to where the photoelectrons hit the photo-cathode; photons striking the edges of the PMT are recorded, on average, 3 ns later than those reaching the centre. Their time resolution is also worse. Last but not least, the DOMs had to be cost effective and reliably producible in quantities of a few thousand units [7].

Figure 5: Schematic drawing of a digital optical module [6].

Figure 5 shows the main parts of the optical module, which has a total diameter of 35 cm. The largest part is occupied by a 25 cm photomultiplier tube from Hamamatsu and associated electronics [8]. The amplifying section has ten dynodes and runs at a gain of 10⁷ at 1500 V. A mu-metal magnetic shield reduces the magnetic field of the earth by a factor of two. The PMT is optically coupled to the pressure vessel using an optical gel and is sensitive to 350-650 nm photons. The high voltage of 1300-1500 V is generated inside the module. In normal operation, a DOM records complete data only when a neighbouring DOM also registers a hit within a ±1 µs coincidence window. The hit rate in this mode depends on a DOM's depth, through both the muon flux and the optical properties of the ice, but is typically 3 to 15 Hz. In early 2009, IceCube started taking data in “Soft Local Coincidence” mode. In addition to the complete data for coincident hits, a more selective part of the data was sent to the surface for isolated hits. These hits are recorded at the PMT's dark rate, typically 350 Hz. Although most of these hits are noise, they are useful in many analyses.

Figure 6: Maps of optical scattering (left) and absorption (right) for deep South Pole ice. The depth dependence between 1100 and 2300 m and the wavelength dependence between 300 and 600 nm for the effective scattering coefficient and for absorptivity are shown as shaded surfaces, with the bubble contribution to scattering and the pure ice contribution to absorption superimposed as (partially obscured) steeply sloping surfaces. The dashed lines at 2300 m show the wavelength dependences: a power law due to dust for scattering and a sum of two components (a power law due to dust and an exponential due to ice) for absorption. The dashed line for scattering at 1100 m shows how scattering on bubbles is independent of wavelength. The slope in the solid line for absorptivity at 600 nm is caused by the temperature dependence of intrinsic ice absorption [5].
Determining the time and amplitude of an observed light pulse requires careful calibration of the instrumentation. IceCube uses a variety of methods to ensure this. The primary timing calibration is “RapCal”: Reciprocal Active Pulsing. RapCal timing calibrations are performed automatically every few seconds. During each calibration, the surface electronics send a timing signal down to each DOM, which waits a few µs until cable reflections die out, and then sends an identical signal to the surface. The surface and DOM electronics use identical DACs and ADCs (analog-to-digital converters) to send and receive signals, so the transmission times in each direction are identical. Even though the round trip over the kilometres-long cable takes tens of µs, the transmission time is determined to less than 3 ns. Other timing calibrations measure the signal propagation delay through the PMT and electronics. Each main board includes a UV LED, which may be pulsed on command. The LED pulse current is recorded, along with the PMT signals. The difference determines the PMT transit time, plus the delay in the delay line and other electronics. Amplitude calibrations are also done with the on-board LED. It is flashed repeatedly at low intensity. A charge histogram is accumulated and sent to the surface, where it is fit to find the single photo-electron peak. This is done for a range of high voltages, and the high voltage is set to give 10⁷ PMT gain. These calibrations are extremely stable over time periods of months. Each DOM also contains a ‘flasher’ board with 12 LEDs mounted around its edges. These LEDs are used for a variety of calibrations, measuring light transmission and timing between different DOMs. The multiplicity of LEDs is particularly useful for linearity calibrations. The LEDs are flashed individually, and then together, providing a ladder of light amplitudes that can be used to determine the saturation curve. Furthermore, mounted between two DOMs on a cable, there are extra modules containing a 337 nm N2 laser.
The laser beam is shaped to emit light in the shape of a Cherenkov cone, forming a reasonable approximation to a cascade. The light output is well calibrated, and an absorber wheel allows for variable intensities. Although the 337 nm light does not propagate as far as typical Cherenkov radiation (peaked around 400 nm), it provides a reasonable simulation of cascades up to PeV energies. Why 400 nm photons propagate further can be deduced from a combination of the scattering and absorption characteristics of the ice, as can be seen in figure 6.

Figure 7: Illumination system and water tank. Photodiodes are placed as shown and used as described in the text: (PD1) relative measurement of beam intensity for bright or dim source settings; (PD2) NIST calibrated, used to calibrate PD3 when temporarily mounted at the transfer station; (PD3) direct measurement of beam flux at the DOM for bright source settings.
Depending on the energy of the neutrino and the distance from secondary particle tracks, PMTs can be hit by up to several thousand photons within a few hundred nanoseconds. The number of photons per PMT and their time distribution is used to reject background events and to determine the energy and direction of each neutrino. The detector energy scale was established from previous lab measurements of DOM optical sensitivity, then refined based on the observed light yield from stopping muons and calibration of the ice properties. A laboratory set-up has been developed to more precisely measure the DOM optical sensitivity as a function of angle and wavelength. DOMs are calibrated in water using a broad beam of light whose intensity is measured with a NIST (National Institute of Standards and Technology) calibrated photodiode. This study will refine the current knowledge of the IceCube response and lay a foundation for future precision upgrades to the detector.

The lab set-up is designed to measure the single photon detection efficiency of a DOM when illuminated in a similar way as by neutrino events in IceCube, where light typically travels 5-200 m from its source before possible detection. Because the DOM diameter is only 35 cm, at such distances it can be accurately described by its efficiency to detect photons arriving from a particular angle, averaging over all possible points of arrival at the photocathode. This situation is mimicked in the set-up by using a uniform light beam from a source 6.4 m away, as can be seen in figure 7.

In IceCube, each DOM has its PMT facing downward, and the sensitivity does not vary significantly with rotation around this vertical axis. The sensitivity does depend on the polar angle of illumination relative to the DOM axis, so the lab set-up includes a motorized mounting shaft for inclining the DOM axis relative to a fixed vertical beam. This is illustrated in figure 9.
The DOM is immersed in water in order to closely model the reflection and refraction effects occurring at the optical interface, as in the case where the DOM is embedded in ice. Effects of PMT gain, discriminator threshold and electronics calibration are accounted for by using the same software and procedures as in standard IceCube operations.

The set-up as sketched in figure 7 is made up of two major parts: the illumination system and the water tank. The corresponding parts are photographed in figure 8. There are several sources of light available. The spectral output of the LEDs and the laser is depicted in figure 11. Any of those can be directed to shine towards a diffuser box, as indicated in figure 10. The diffuser acts as an integrating sphere and produces a homogeneous beam that does not depend on the kind of source that was used. The box has two outputs. The main output goes to the water tank after being deflected by a mirror. Photodiode 1 (PD1) is mounted on a secondary port of the source diffuser and is used to precisely measure changes in beam intensity. The beam geometry is defined by a series of apertures in the tunnel.

Figure 8: Photographs of the actual set-up as sketched in figure 7. A more detailed view of the illumination system can be found in figure 10.
Figure 9:
Schematic of the cage in which photodiode 3 (PD3) and the DOM are submerged. Because the orientation of the incident light cannot be adjusted, equipment is installed to position PD3 and to change the angle of the DOM with respect to the light.

Figure 10: The currently installed light sources include: a pulsed diode laser (405 nm), pulsed LEDs (400 nm), continuous LEDs (370 nm, 400 nm, 450 nm), and a lamp with monochromator (320-700 nm). Beam intensity entering the diffuser box can be varied over a wide range by means of LED current, repetition rate of pulsed sources, neutral density optical filters, and a simple shutter.
Figure 11:
Spectral output characteristics of the LEDs and the laser.

The low water temperature avoids introducing humidity to the tunnel and source region, where it might affect electronic or optical elements. It also reduces the DOM dark noise rate and discourages bacterial growth in the water.

Surrounding the water tank, Helmholtz coils are arranged for separate control of the magnetic field in each direction. These coils cancel the ambient field in the lab and create a field relative to the DOM axis. The field can be generated to mimic that experienced at the South Pole for all possible rotation angles of the DOM in the set-up. A blackout curtain hangs from the open end of the tunnel and is secured around the water tank by a belt.

All system elements are controlled and monitored by Python scripts running on a standard IceCube data acquisition computer (DOMHub) [10].

Source setting   Photon flux     PD1      PD3       DOM
Bright           10⁷ /cm²/sec    50 nA    100 fA    saturation
Dim              10² /cm²/sec    500 fA   too low   100-1000 Hz

Table 1: Approximate response of PD1, PD3 and DOM for bright and dim beams.
With PD3 installed in the water tank and one of the light sources turned on, a PD3 current around 100 fA can be obtained if the source settings are bright enough. Such a current enables measurements with a precision of 0.5%. This gives a direct measurement of the beam flux in the tank, typically 10⁷ photons/cm²/sec at high brightness settings. However, the PD3 current signal cannot be precisely measured at the much lower brightness settings needed for DOM sensitivity calibration. For those much lower beam fluxes, PD1 is used. This photodiode is located much closer to the source. The larger PD1 signal is still proportional to the beam flux, but with a scale factor that has been calibrated against the direct PD3 measurement [9]. For this purpose, a bright PD3 measurement and a PD1 measurement were made simultaneously, yielding the beam flux scale factor in photons/cm²/sec per fA of PD1 current. Since PD1 and PD3 were both observing outputs of the source diffuser box, this scale factor applies equally when the source is operated in dim mode. The switchable-gain preamp used with PD1 facilitates measurement of both bright and dim beams with good precision (Table 1). For example, a dim beam with 10² photons/cm²/sec at 400 nm gives DOM count rates up to 1000 Hz, with PD1 currents around 500 fA. Such dim measurements are then made for each polar angle and wavelength and used to calculate the final DOM photon sensitivity.

From the previous discussion it became clear that PD1 needs to cover a very large range of intensities while maintaining small errors. To accomplish this goal an advanced pre-amplifier circuit was added to the ADC board. The schematic can be found in figure 12. The amplifying process consists of three stages, all of which use the principle of operational amplifiers. A short introduction to operational amplifiers is found in appendix B. The important point in the first amplification stage is that the gain is proportional to the resistor that is used in the circuit.
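The flux transfer described above reduces to a single proportionality. A minimal sketch, using illustrative numbers of the same order as Table 1 rather than the actually measured scale factor (function names are mine):

```python
def flux_scale_factor(pd3_flux, pd1_current_fa):
    """Beam-flux scale factor in photons/cm^2/sec per fA of PD1 current,
    obtained from a simultaneous bright measurement: PD3 gives the flux
    directly, PD1 gives the current."""
    return pd3_flux / pd1_current_fa

def dim_flux(pd1_current_fa, scale):
    """In dim mode the beam flux is inferred from PD1 alone."""
    return scale * pd1_current_fa

# Bright: ~10^7 photons/cm^2/sec with ~50 nA (= 5e7 fA) on PD1.
scale = flux_scale_factor(pd3_flux=1e7, pd1_current_fa=5e7)
# Dim: ~500 fA on PD1 then implies ~10^2 photons/cm^2/sec.
print(dim_flux(pd1_current_fa=500.0, scale=scale))
```

Note that the Table 1 orders of magnitude are self-consistent here: the bright-to-dim ratio in PD1 current (50 nA to 500 fA) reproduces the bright-to-dim ratio in flux.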
In the first stage of the amplifying process, a gain resistor could be chosen from a set of six resistors ranging from 10 kΩ up to 1 GΩ. The second gain parameter was the channel. Two channel options were implemented in the software: ch0 represented the normal signal, while ch1 amplified the original signal by a factor 101. The third parameter in the fine tuning of the total gain was called the ADC-gain, generated by two separate low-noise, low-distortion commercial operational amplifiers (MAX4252). Each of those had 1, 2, 4 and 8 as possible gains; all of them were implemented in the code, but during the effective measurements only options 1 and 8 were used to limit the parameter space.

The goal of the project was to determine relative errors on the different gain combinations. Those ratios will then be used as correction factors in the further experiment, as described in section 4.
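Assuming the three stages simply multiply (an idealisation; the point of the project is precisely to measure the deviations from this), the nominal overall gain of a readout setting can be sketched as:

```python
# Nominal gain model for the three-stage PD1 preamp chain.
# Names and structure are illustrative, not taken from the actual software.
RESISTOR_GAINS = [1e4, 1e5, 1e6, 1e7, 1e8, 1e9]  # 10 kOhm .. 1 GOhm
CHANNEL_FACTOR = {"ch0": 1, "ch1": 101}           # ch1 adds a factor 101
ADC_GAINS = [1, 2, 4, 8]

def total_gain(resistor_ohm, channel, adc_gain):
    """Nominal relative gain of one readout setting: the transimpedance
    stage scales with the feedback resistor, the other stages multiply."""
    assert resistor_ohm in RESISTOR_GAINS and adc_gain in ADC_GAINS
    return resistor_ohm * CHANNEL_FACTOR[channel] * adc_gain

lowest = total_gain(1e4, "ch0", 1)
highest = total_gain(1e9, "ch1", 8)
print(highest / lowest)  # dynamic range spanned by the readout settings
```

This already shows why a wide span of source brightnesses is needed: nominally, the extreme settings differ by almost eight orders of magnitude.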
The incoming light can be regulated by adjusting the following parameters:
Source type
The installed sources were described in figure 10. For this project only LEDs were used, because a steady source was required and there was no need for a specific wavelength.

Figure 12: The preamp circuit that connects the output of PD1 to the controller board.

filterwheel1 position   transmission coefficient
1                       1
4                       0.002599
5                       0.000392
6                       0.000247

Table 2: Attenuation measurements of the different filterwheel1 settings.
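The Table 2 coefficients act as simple multiplicative attenuations on the beam. A small sketch (the function and dictionary names are mine; the transmission values are taken from Table 2):

```python
# Transmission coefficients of filterwheel1 from Table 2; position 1 is open.
FW1_TRANSMISSION = {1: 1.0, 4: 0.002599, 5: 0.000392, 6: 0.000247}

def attenuated_flux(flux_in, fw1_position):
    """Beam flux behind filterwheel1 for a given wheel position,
    assuming a purely multiplicative neutral-density attenuation."""
    return flux_in * FW1_TRANSMISSION[fw1_position]

# e.g. a 10^7 photons/cm^2/sec beam behind the densest filter:
print(attenuated_flux(1e7, 6))
```

With the densest position, a bright-mode beam is pushed down by more than three orders of magnitude, which is what makes the very lowest brightness levels reachable.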
LED number
At the moment of the experiment, there were five LEDs installed, with slightly different characteristics (see figure 11). Number 0 was used in almost all cases; occasionally LED 3 was used because of its slightly higher brightness level. This made it possible to accumulate more data for the smallest gain settings. Number 3 was also used to test the source independence of certain behaviour of the measurements.
Brightness range
The circuit that drives the LEDs had two parameters, of which the first was the range: bright or dim. The bright range was about a factor 40 brighter than the dim range.
Brightness Digital-to-Analog Converter (DAC) setting
This second source parameter accepts values from 0 up to 65535. The effective luminosity produced by the LEDs was not linear in this setting; in practice there was a threshold around 17k and an upper bound around 60k. In the bright range the LED output became constant when values over 50k were used. The combination of LEDs 0 and 3, the dim/bright range and the DAC setting already gave good coverage of the possible luminosities, as visualised in figure 13.
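The usable DAC window described above can be captured in a small validity check, a sketch under the assumption that the thresholds quoted in the text (17k, 50k, 60k) are hard limits rather than gradual transitions:

```python
# Approximate usable DAC windows, in DAC counts, as quoted in the text.
DAC_MIN, DAC_MAX = 0, 65535
THRESHOLD = 17_000          # below this the LED output is negligible
UPPER_BOUND = 60_000        # practical upper bound of the response
BRIGHT_SATURATION = 50_000  # bright range: output constant above this

def usable_dac(setting, bright_range):
    """Return True if a DAC setting falls in the window where the LED
    output actually varies with the setting."""
    if not DAC_MIN <= setting <= DAC_MAX:
        raise ValueError("DAC setting out of hardware range")
    upper = BRIGHT_SATURATION if bright_range else UPPER_BOUND
    return THRESHOLD <= setting <= upper

print(usable_dac(30_000, bright_range=True))   # inside the useful window
print(usable_dac(55_000, bright_range=True))   # bright output already constant
```

An automated scan can use such a check to skip settings that would only duplicate the saturated output.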
Filterwheels
The last piece of hardware that makes it possible to further tune the amount of light that reaches PD1 is the so-called filterwheel1 (fw1). Positions 1, 4 and 5 were used for measurements. There is also a filterwheel2, located behind PD1; this was used on position 6 to prevent bright light from reaching the DOM PMT. Positions 2 and 3 of the filterwheels are reserved for bandpass filters and are not used. The transmission coefficients of positions 1, 4, 5 and 6 can be found in table 2. This extra parameter makes it possible to reach the very lowest brightness levels. The calibration of the highest gains, which amplify the PD1 signal at these brightness levels, is extremely important, since these settings match the DOM South Pole conditions more accurately than brighter ones.

Figure 13: Behaviour of the LED output measured with PD1 for the different settings used in the further analysis. All measurements shown in this plot were made without an extra filter in between the LED and PD (filterwheel1 on position 1). For low DAC settings, the behaviour is non-linear.

If we group the gain settings and the source settings together, one measurement can be defined by:

• Source settings:
  – source type (LED)
  – LED number
  – Brightness range
  – Brightness DAC setting
  – fw1 and fw2 position
• Readout settings:
  – Resistor gain (10k, 100k, 1M, 10M, 100M or 1G)
  – ADC gain for ch0
  – ADC gain for ch1
To keep things comprehensible, the ADC gain for both channels will always be the same during data taking; more specifically, both ch0 and ch1 will have ADC gain 1 or 8.

A main concern during the automation process is that the gain setting under investigation has to be matched with the right range of source settings. This restriction arises because of the limited counting range of the ADC controller board: the digital readout is an integer value between 0 and 32767.
Once the output parameters are known, there are still a few parameters that need to be set concerning the structure of a measurement. Figure 14 will serve as an example to clarify their meaning.

Figure 14: Intermediate result of an example of a PD1 measurement. This is one block with specific source and gain settings. White coloured blocks indicate settling time, red ones are off time used to determine the background offset, and the green part is on time. The right panel is a zoomed-in view.

• settling time: buffer period before on/off data taking (10 s by default).
• on time: time that the source is on and the data is kept for analysis (20 s by default).
• off time: time that the source is off and the data is kept for analysis (10 s by default).

These parameters were only changed in an attempt to resolve some peculiar behaviour that will be discussed in section 8. Most of the time, their default values were used. With those parameters, one measurement took exactly 70 seconds.

Automation of the measurement process
From the preceding, it is clear that there are a lot of source parameters to play with. Even more challenging is the number of different gain combinations that have to be investigated. To calculate the relative errors on the different gain combinations, one has to scan over all combinations of gain settings and find as many cases as possible where different gain settings can be used to measure the brightness of a source with identical source parameters. The ratio of the lowest and the highest gain is:
Gain_highest / Gain_lowest = (1 GΩ / 10 kΩ) × 8 × 101 ≈ 8.1 × 10^7. (1)

For the moment we neglect possible errors on the gain settings themselves, because they are believed to be small according to the manufacturer's data. As this number is far larger than the counting range of the ADC board, it is impossible to measure even half of the combinations with a constant source setting. To accumulate data and investigate source dependencies, the goal is to measure each gain setting over the whole range of brightness levels that results in an output count within the range of the ADC board. From this we can already anticipate that we will need to combine intermediate results if we want to cover the whole range of gain ratios.

Another remark is that each single measurement takes 70 seconds and has to be done at least 3 times (cycles) to be sure that the result is time independent and that the source is stable. As a result, the total scan over all the options takes a few days up to a full week.

These two arguments call for an automated programme that runs on a dedicated hub, installed next to the experimental set-up and configured exactly like the hubs used at the south pole. Another requirement was that the analysis of the data could be done on the same locally installed computer. For this last task we quickly ran into problems because of the limited analysis capacities of the standard IceCube software set-up. A workable solution was found in the form of Enthought Canopy: a comprehensive Python analysis environment that provides easy installation of the core scientific and analytic Python packages, creating a robust platform to explore, develop and visualize on.
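The number in equation (1) follows directly from the hardware ranges quoted earlier (resistor gains from 10 kΩ to 1 GΩ, ADC gain x1 or x8, and the x101 channel relative to the x1 channel); a quick numerical check:

```python
# Total gain range of the readout chain: resistor gain spans
# 10 kOhm .. 1 GOhm, the ADC gain is x1 or x8, and channel 1
# carries an extra x101 amplification relative to channel 0.
r_low, r_high = 1e4, 1e9      # ohm
adc_low, adc_high = 1, 8
ch_low, ch_high = 1, 101

gain_ratio = (r_high / r_low) * (adc_high / adc_low) * (ch_high / ch_low)
print("total gain range: %.2e" % gain_ratio)  # ~8.08e7
```

This is indeed several orders of magnitude beyond the 0-32767 counting range of the ADC board, which is why intermediate gain settings have to be chained.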
One can divide the way the program works in three parts:
1. The preparation of a data-taking session (figure 15).
2. The actual data taking and partial processing (figure 16).
3. The calculation of the gain ratios and output of supplementary graphs (figure 17).
The user can specify which part of the parameter space the session should comprise. During one run the source is kept in a constant brightness range, and the filterwheel positions are not changed. The only source parameter that is varied is the Brightness DAC setting. Furthermore, one can select the gain configurations that should be included. The script creates a number of files which are used in step 2.
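A minimal sketch of what this preparation step amounts to; the file names, keys and function below are illustrative stand-ins, not the actual AutomaticConfGenerator.py.

```python
import os
import tempfile

def write_configs(outdir, led, brightness_range, fw1, fw2,
                  dac_settings, gain_configs):
    """Write one .config file per Brightness DAC setting; the brightness
    range and filterwheel positions stay fixed within a session."""
    os.makedirs(outdir, exist_ok=True)
    paths = []
    for dac in dac_settings:
        path = os.path.join(outdir, "led%d_dac%d.config" % (led, dac))
        with open(path, "w") as f:
            f.write("ledNumber\t%d\n" % led)
            f.write("brightnessrange\t%s\n" % brightness_range)
            f.write("brightness\t%d\n" % dac)
            f.write("fw1position\t%d\nfw2position\t%d\n" % (fw1, fw2))
            for g in gain_configs:  # (resistor gain, ADC gain, channel)
                f.write("gain\t%s\t%d\t%d\n" % g)
        paths.append(path)
    return paths

# usage: two DAC settings, two gain configurations per file
outdir = tempfile.mkdtemp()
paths = write_configs(outdir, led=0, brightness_range="bright",
                      fw1=1, fw2=6, dac_settings=[20000, 30000],
                      gain_configs=[("10M", 1, 0), ("1G", 8, 1)])
```

The tab-separated key/value layout mirrors the config format visible in the WriteConfigFromDict excerpt in the appendix.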
This is the part in which raw data is taken and processed to a format that can be used for higher-level analysis. TakeData.py first initialises some electronic components and resets all the sources. After this, it calls two modules in the following order: GenerateData.py and SummerizePD.py.

Figure 15: Part one: the generation of configuration files that specify the parameters and will be used as input to start a data-taking session.
Figure 16: Part two: TakeData.py uses the files generated in the folder Configs to do the actual measurements. It uses some lower-level scripts for the handling of the readout hardware. The script produces a .txt file for each Brightness DAC setting with a summary of the measurement. Alongside that, it also creates some files for diagnostic purposes.

Figure 17: Part three:
CalculateRatios.py processes the PD currents registered by TakeData.py into ratios between gain configurations and the errors on those ratios. It is the part of the program where the actual analysis is done. The script has several higher-level parameters to investigate the output and isolate possible causes of problems; to make this possible there is a rich variety of output available.

GenerateData.py takes one .config file as input to specify the parameters. The light is turned on and off between all the measurements to determine the dark current; apart from that, the source parameters are fixed. The total session can be divided into cycles. All gain configurations that are included in a cycle are specified in the input file (see figure 15). These settings are looped over in a random order; the randomisation has been introduced to investigate and reduce time-dependent effects in later analysis. The raw output is written once per second to a file for each requested gain combination, formatted in two ways: the number of counts and the converted current value in fA. GenerateData.py is included as an appendix.
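The randomised looping over gain configurations within each cycle can be sketched as follows; run_session and take_block are our stand-ins for the actual GenerateData.py logic, which does use random shuffling of the settings.

```python
import random

def run_session(gain_configs, n_cycles, take_block, seed=None):
    """Loop n_cycles times over all gain configurations, shuffling the
    order each cycle so that slow drifts in the source are decoupled
    from any particular gain setting."""
    rng = random.Random(seed)
    results = {g: [] for g in gain_configs}
    for cycle in range(n_cycles):
        order = list(gain_configs)
        rng.shuffle(order)              # new random order per cycle
        for g in order:
            results[g].append(take_block(g))
    return results

# usage: every configuration is measured exactly once per cycle
res = run_session([("10M", 1, 0), ("1G", 8, 1)], 3,
                  take_block=lambda g: 0.0, seed=1)
```

Because each configuration appears exactly once per cycle, a later comparison of the per-cycle averages directly probes time-dependent effects.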
SummerizePD.py is launched by TakeData.py when GenerateData.py has completed all the cycles. SummerizePD.py can be regarded as a first step in the data-processing chain. For each gain combination, it calculates an average of the difference between the off and on parts of a measurement block (see figure 14). Afterwards, this average is combined over the different cycles. The resulting dark-subtracted average currents are reported for PD1 ch0 (x1) and ch1 (x101) in fA. To accomplish this, several steps are needed:

1. The averages of all the on and all the off blocks are calculated. The standard deviation in this group of points is calculated as σ = sqrt(Var/N), where Var is the variance of those points and N is the number of points. In the default case, N is 20 for both on and off.

2. If the on block average was higher than 28000 counts, the information for that configuration was not processed further, because of saturation effects.

3. The off current is subtracted from the corresponding on current. The new error is simply σ_subtracted = sqrt(σ_on² + σ_off²). If the resulting dark-subtracted average value is less than 50 counts, the configuration is skipped because the signal-to-noise level is too low.

4. The values found over the different cycles are compared. There are several ways to combine these; it is currently implemented as follows:
(a) Calculate the weighted mean of these block averages:
    µ = (Σ_i x_i/σ_i²) / (Σ_j 1/σ_j²)  and  σ_µ = 1 / sqrt(Σ_i 1/σ_i²). (2)
(b) Take the variance of these block averages; from this variance follows an extra spread
    σ_cycles = sqrt(Var / N_cycles). (3)
(c) The combined error on µ is σ_stat = sqrt(σ_cycles² + σ_µ²).
(d) Lastly, an additional error which takes the ADC count resolution into account is added: σ_final = sqrt(σ_stat² + (1/√12)²).

5. Convert the result to fA and write it to an output file.

The way the errors are calculated may seem a little heuristic. The current approach is used because more rigorous error estimates returned errors that were too small to be consistent. This can be explained by instabilities and slightly time-dependent behaviour in the source section. SummerizePD.py has several checks to monitor these unexpected non-reproducibilities.

One check looks for inconsistencies between the averaged PD current offset before and after the pulse; its goal is to eliminate possible drifts in the offset. This is illustrated for one gain configuration in figure 18.

Figure 18: PD1 currents for the gain setting (resistor gain, ADC gain, ch0 gain). The source settings were Led0-DACbright20k-fw16. There is a time gap between the double red off periods because multiple gain configurations were measured in between two cycles of the same settings. This measurement contains six cycles.

Another behaviour that is monitored, and that led to the need for enlarged errors, is the so-called sagging phenomenon, illustrated in figure 18. By sagging, the drop in PD current during a pulse is meant. The amount of sagging in the on-time region of a block is quantified by the 85th percentile minus the 15th percentile of the on-time data. A more thorough discussion of this effect is postponed until chapter 8.

The final consistency check is between the resulting dark-subtracted averages of the different cycles. In the ideal case there should be no noticeable differences between two cycles of the same measurement. But, as can be seen from figure 18, there were differences. Because we were unable to fully determine the origin of those differences, they were included as an extra error.

The program includes several options to make plots and to register the worst events in different text files for further problem-solving purposes. As can be seen in figure 17, the input for the calculation of the ratios are the text files created in the processed folder. For each file,
CalculateRatios.py calculates all the possible ratios with their errors. The ratios are stored in the following way:

highest total gain   lowest total gain   I_high/I_low   σ(I_high/I_low)
(100k, 8, 1)         (10M, 1, 0)         0.998          0.041

Here (100k, 8, 1) represents the gain combination: resistor gain of 100 kΩ, readout channel 1 (x101) and ADC gain set to 8. I_high/I_low represents the ratio of the dark-subtracted average currents of those two gain settings. Note that this ratio is averaged over many different source settings; nevertheless, the ratio is assumed to be brightness independent, so it should give a consistent value for those different source configurations.

Since the parameter space of the source is scanned over carefully, there are a lot of measurements which differ only slightly in PD current. This gives us many values for most ratios. The weighted average and weighted error are calculated in the same way as before. If a specific value of a ratio, coming from one fixed source configuration, is incompatible with the weighted mean at the level of three standard deviations, it is called a "bad point" and indicated in red on most of the following graphs. Note that those points are not thrown away for further analysis. All bad points are grouped together and written to a file.

At the end of this procedure, a list of direct ratios is found. The program first tries to isolate the causes of the bad points in two ways. First, it checks whether a certain source setting gives rise to an unusually high number of bad points. Secondly, it searches for ratios that contain a relatively large number of bad points. Those source-specific file names and those bad ratios are also written to a file. There is also an option to redo the measurements for the bad source settings to see whether the peculiarities are reproducible. The thresholds for this selection were tuned during the project.
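The weighted averaging and the three-sigma "bad point" flagging described above can be condensed into the following sketch, using the inverse-variance weights of equation (2); the function names are ours, not those of CalculateRatios.py.

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and the error on that mean."""
    w = [1.0 / s**2 for s in sigmas]
    mu = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    return mu, math.sqrt(1.0 / sum(w))

def flag_bad_points(values, sigmas, nsigma=3.0):
    """Mark points incompatible with the weighted mean at nsigma,
    combining the point error with the error on the mean in quadrature."""
    mu, smu = weighted_mean(values, sigmas)
    return [abs(x - mu) > nsigma * math.hypot(s, smu)
            for x, s in zip(values, sigmas)]
```

As in the analysis, flagged points would be reported for inspection but not removed from the averages.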
Because there were some hints that indicated a possible problem with certain gain settings, the bad ratios found up to now were excluded from the current list.

The problem with the current list, certainly after the removal of bad ratios, is that it does not contain all the possible combinations, even when compared to an intermediate reference gain configuration. Another point that needs to be addressed is that the errors grow quickly when the difference between the lowest and the highest gain setting increases. This is due to the lack of source settings in the overlap region of both gain settings. Nevertheless, both problems can be greatly reduced by combining ratios to calculate new ratios that were out of reach or only roughly known before. As stated before, the total gain range is ≈ 8 × 10^7, while the upper bound on the ratio of gains that can be measured directly, limited by the 28000-count saturation level of the ADC board, is about 250000. If we take the reference gain setting somewhere in the middle, it should in theory be possible to compare that gain setting with all the other ones and determine the relative errors.

In practice this is implemented as follows:
1. One gain is taken as reference gain, G_R; all the indirect ratios are calculated with respect to this gain only.
2. A list of the weighted averages of direct ratios that include the reference setting is created.
3. Loop over all the gain combinations in this list. To clarify the procedure, let us focus on one gain setting on this list and call it G_I.
4. The total gain of G_I is compared with the total gain of the reference setting; without loss of generality, we can assume total gain(G_R) < total gain(G_I).
The double loop structure calculates all the possible combinations that can be reached by applying this strategy, and a total list of indirect ratios is constructed.
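The chaining step at the heart of this strategy is a product of two measured ratios: an indirect ratio G_F/G_R is built as (G_F/G_I) · (G_I/G_R), with the relative errors of the two factors added in quadrature. The following is our simplified sketch of that single step, not the full double-loop machinery of the program.

```python
import math

def chain_ratio(r_fi, s_fi, r_ir, s_ir):
    """Combine a measured ratio G_F/G_I (r_fi +- s_fi) with a measured
    ratio G_I/G_R (r_ir +- s_ir) into an indirect ratio G_F/G_R.
    For a product of independent measurements the relative errors
    add in quadrature."""
    r = r_fi * r_ir
    s = r * math.hypot(s_fi / r_fi, s_ir / r_ir)
    return r, s

# usage: two well-measured factors give one otherwise unreachable ratio
r, s = chain_ratio(100.0, 0.1, 250.0, 0.5)  # relative errors 0.1% and 0.2%
```

When several intermediate settings G_I lead to the same G_F, the resulting values would then be combined with the same weighted mean used for the direct ratios.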
Because in most cases there are several ways to reach a final setting G_F, a weighted mean and weighted error are calculated.

Notice that the condition total gain(G_R) < total gain(G_I) < total gain(G_F) is not strictly necessary for the strategy. It is just an optimisation: leaving it out would not lead to a larger number of gain combinations that could be calculated, nor to a higher precision on the ones already within reach. On the contrary, leaving out this condition increases the risk of deteriorating good direct gain calculations by combining them with worse intermediate ones.

At this stage there are two lists of ratios compared to the G_R reference setting: the direct ones and the indirect ones. These can be merged into one super combined list using the following set of rules:
• Case 1: the gain setting under consideration is included in both lists. Both values are checked for compatibility; if they are compatible, the weighted average and weighted error are included in the super combined list.
• Case 2: the gain setting only occurs in one of the lists. As it is not possible to check this value, it is simply added to the super combined list.

All the lists mentioned are kept for further analysis. They are written to files in the folder Results: Ratios.txt and Comparedto(R,ADC-gain,Ch).txt (see figure 17). Several intermediate gain settings, G_I, can be used to reach a final gain setting, G_F.

                                            Board 1   Board 2
G_R                                          14        15
Number of intermediate ratios for G_R        67        71
Indirect gain settings compared with G_R     20        21

Table 3: Some numbers used to fine-tune the analysis parameters; they serve as a quick way to see where things could be improved. The retrieved values for board 1 and board 2 are quite similar. A point is the division of two dark-subtracted, cycle-averaged currents of different gain settings for fixed source parameters. Table 3 gives a quick overview of what can be expected from the results.
It groups all the data of several weeks of data taking and only makes a hard distinction in the ADC hardware that was used. The full process was done with two different boards to further isolate the cause of some anomalies (see also section 8).

The following parameters can be adjusted:
• sigmasingleratio = 3: if we combine the error of an individual point with the error on the mean of a certain ratio, and the point lies further than 3 times that combined error from the mean, the point is marked as a red/bad point.
• sigmacombinedratios = 1.5: if a ratio can be calculated in both a direct and an indirect way, the two weighted averages are checked for consistency before being combined. They are consistent if they lie within 1.5 times their combined error.
• ratiobadpoints = 0.2: some ratios show strange behaviour. If 20% of the directly measured points are not compatible, all the direct data of that ratio is excluded from further analysis.
• numberofbadpointsinfile = 8: it is also suspicious if one fixed source-settings run leads to a lot of bad data; this could mean, for example, that there are problems with the LED. Files which lead to more than 8 bad points are listed for analysis purposes.

One of the most important output files of CalculateRatios.py is OverviewRatios.pdf. It gives a visual overview of all the direct ratio combinations and is a good tool to spot unexpected behaviour. An example can be seen in figure 19. Since the different intermediate ways to calculate a gain were not automatically compared with each other before being combined into a weighted indirect value, it was important to have a graphical representation of this process. An example is shown in figure 20.

Figure 19: Example of one of the 160 ratios that could be calculated in a direct manner. The figure uses data for the two different gain settings indicated in the title and evaluates their ratio under many different source conditions: filterwheel position, LED brightness range and LED brightness DAC setting are varied. The dotted lines indicate the 0.5% error zone to guide the eye. The blue line is the weighted average of the points, including the red ones, and the narrower, lighter blue zone around it is the weighted error on the mean. Bad points are indicated in red.

Figure 20: Example of one of the ratios that could be calculated in both a direct and an indirect manner. This kind of plot enables a more graphical approach to check consistency. On the right side the indirect calculations are shown separately with their errors; in this case there were four possible gain configurations that could be used as intermediate gain. On the left side, the weighted average is given and compared with the direct measurement. The shaded box indicates 1.5 times the combined error. In this case the two are compatible and are combined.

Figure 21: The direct ratio graph corresponding to the empty line of gain setting (1M,8,0) in table 4. Because more than 20% of the points are inconsistent with the weighted average, all the data in this plot was automatically removed from the indirect ratio calculations.

If the built-in checks at all stages of the program are reassuring and the produced graphs show no peculiarities, the final textual output can be trusted. After one gain setting has been designated as the reference gain, ratios towards the other settings are calculated. In table 4 the output is given for ADC board 2 and reference gain setting (10M,1,0). Notice that one of the empty rows is the reference gain itself; this row is trivial of course. The reason that the ratio with respect to the setting (1M,8,0) could not be calculated is more concerning.
This is the setting with the total gain closest below the reference one; for that reason it could not be calculated indirectly either. The ratio was skipped in the final analysis because more than 20% of the points that were used were incompatible with the weighted average. The overview of those points for this ratio is shown in figure 21. A more careful analysis of this problem is postponed to the next section.

                 Direct              Indirect            Combined
Gain             ratio     σ         ratio     σ         ratio     σ
(10k, 1, 0)      /         /         0.99869   0.00056   0.99869   0.00056
(10k, 8, 0)      1.00404   0.00666   0.99547   0.00021   0.99548   0.00021
(10k, 8, 1)      1.00005   0.00008   /         /         1.00005   0.00008
(10k, 1, 1)      0.99990   0.00030   1.00015   0.00011   1.00012   0.00010
(100k, 1, 0)     0.99958   0.00370   0.99893   0.00017   0.99893   0.00017
(100k, 8, 0)     0.99532   0.00033   0.99400   0.00012   0.99532   0.00033
(100k, 1, 1)     0.99985   0.00004   /         /         0.99985   0.00004
(100k, 8, 1)     0.99975   0.00008   0.99979   0.00009   0.99977   0.00006
(1M, 1, 0)       0.99973   0.00026   0.99997   0.00010   0.99994   0.00009
(1M, 8, 0)       /         /         /         /         /         /
(1M, 1, 1)       0.99931   0.00009   0.99931   0.00010   0.99931   0.00007
(1M, 8, 1)       0.99954   0.00086   1.00076   0.00008   1.00075   0.00008
(10M, 1, 0)      /         /         /         /         /         /
(10M, 8, 0)      0.99891   0.00007   0.99896   0.00008   0.99893   0.00005
(10M, 1, 1)      0.99967   0.00099   1.00068   0.00008   1.00067   0.00008
(10M, 8, 1)      /         /         0.99892   0.00047   0.99892   0.00047
(100M, 1, 0)     0.99905   0.00009   0.99901   0.00006   0.99903   0.00005
(100M, 8, 0)     0.99779   0.00086   0.99775   0.00009   0.99775   0.00009
(100M, 1, 1)     /         /         0.99780   0.00042   0.99780   0.00042
(100M, 8, 1)     /         /         0.99147   0.00062   0.99147   0.00062
(1G, 1, 0)       1.00005   0.00099   1.00107   0.00008   1.00106   0.00008
(1G, 8, 0)       /         /         0.99777   0.00047   0.99777   0.00047
(1G, 1, 1)       /         /         0.99137   0.00064   0.99137   0.00064
(1G, 8, 1)       /         /         /         /         /         /

Table 4: Information contained in Comparedto(10M,1,0).txt. The reference gain setting used was: resistor gain 10 MΩ, ADC gain x1 and channel 0 (x1). Of the 24 combinations that were tested, G_R could be compared with 21 others; only the highest gain setting (1G, 8, 1) was out of reach.

Figure 22: The absolute amount of sagging, as defined in the text, plotted as a function of PD1 current. The panels on the left show data taken with the first board, while the data on the right was taken with the second one. A separate plot is made for the two different filterwheel positions. Colours indicate the ADC gain settings used for the particular data points.

At first sight the textual results of the program look very promising. In most cases this is indeed believed to be the case, but there were some hints that not everything was consistent and that the set-up showed some peculiar behaviour when pushed to the limits. A large proportion of time was invested in the development of methods to identify, and where possible eliminate, these peculiarities. Despite the effort, not all problems could be completely resolved. The three most important ones are discussed in this section.

The sagging phenomenon was already briefly discussed in the context of figure 18. Sagging refers to the drop in PD current during a pulse. The minimal amount of sagging in the on-time region of a block is quantified by the 85th percentile minus the 15th percentile of the on-time data, averaged over the cycles:

    Σ_cycles [p_85(I_on) − p_15(I_on)] / N_cycles. (5)

Figure 23: An example of the anomaly at low counts in the low brightness range. The effect is independent of the filterwheel position and the DAC brightness setting of the source.

This quantity is plotted in figure 22. The amount of sagging of the PD current in each on period is linearly correlated with the brightness and, according to this first analysis, does not depend on the gain settings. For each brightness one can put a lower limit on the amount of sagging that will occur. The independence of the ADC board used for the analysis is striking.
This strongly suggests that the problem is caused by instabilities in the source sector. This should be tested in the future by repeating the analysis with LED 3 or the laser. Besides this, a structure of many branches appears towards larger sagging; this has yet to be explained by a more thorough study.

Below a PD current of approximately 20000 fA the ratio tends to drop. This means that the lower gain setting registers too many counts or the higher setting too few. The current suggestion is an anomaly of the hardware in cases where the gain settings used are relatively high in combination with measurements where the number of counts is relatively low. In this situation the count rate for the lowest gain setting is thought to be slightly overestimated, resulting in an error on the gain ratio of up to 4%. The effect appears with both ADC board 1 and ADC board 2. An example of the situation is depicted in figure 23. The ratios where this effect showed up, and those where it did not, are listed in the manual. This division into two groups is completely identical for both ADC boards.

The importance of this peculiarity arises because of the long-term goal of the set-up: calibrating the actual DOMs more precisely than ever done before. For this purpose, the dimmest source regions and the highest gain settings are indispensable.

A lot of bad ratios have something to do with the combination of ADC x8 and ch0. In a certain brightness range, dependent on the resistor gain, this gives strange behaviour. The range corresponds to a fixed number of ADC counts, around 2000-8000, so brightness × total gain is a constant for this problem. By looking for overlap in this zone for different source settings, we were able to exclude the source as a cause. The problem is reproducible with different LEDs, filterwheel positions and even ADC boards.

Figure 24: An example of the ADC x8 channel 0 problem. From this example one can immediately see that the problem is independent of the LED that was used.
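Since this anomaly tracks a fixed window of raw ADC counts rather than any source parameter, affected configurations can be flagged with a simple guard on the expected count level; the 2000-8000 window is taken from the observation above, while the function itself is our illustration, not part of the analysis code.

```python
def in_adc8_ch0_problem_zone(expected_counts, adc_gain, channel):
    """The anomaly appears for ADC gain x8 on channel 0 whenever the raw
    reading falls in a fixed window of roughly 2000-8000 counts, i.e.
    brightness x total gain is approximately constant for affected points."""
    return adc_gain == 8 and channel == 0 and 2000 <= expected_counts <= 8000
```

A check like this makes it easy to exclude, or at least mark, the suspect region when selecting gain configurations for the final DOM calibration.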
An example is given in figure 24. Notice that the magnitude of the deviations is around 0.5% for these anomalies. Currently this effect is thought to be caused by non-linearities in the amplifying circuit.

Conclusion and outlook

A laboratory set-up has been developed to more precisely measure the DOM optical sensitivity as a function of angle and wavelength. DOMs are calibrated in water using a broad beam of light whose intensity is measured with a NIST-calibrated photodiode. This study will refine the current knowledge of the IceCube response and lay a foundation for future precision upgrades to the detector. A good understanding of the PD readout is indispensable for DOM calibration. The main goal of the project was to investigate corrections on the photodiode measurements due to the amplifier circuit. To accomplish this, a general software structure has been added to the already existing framework of the laboratory set-up. Since the set of parameters in the source sector is still growing, modularity and a high level of automation were important objectives. The software features a large array of graphical tools to intercept problems at a low level, while the analysis can be easily adapted to the needs of foreseeable situations. A manual has been written that will guide the further development of the PD software.

At the current stage, the errors in the different amplification chains are almost everywhere determined to the level of 0.1%. The brightness-averaged ratios of the different gain configurations are almost always within a 1% deviation, in accordance with expectations and demands for the experiment.

Nevertheless, some ratios showed unexpected brightness dependence. The peculiarities could be isolated but not fully resolved. Further studies should be able to diminish these effects by making hardware improvements to both the source and the amplification sector.
Already with the current outcome, it is perfectly possible to select certain gain configurations and brightness levels for the final calibration of the optical IceCube modules. Those settings can be trusted over a large range of brightnesses, with errors of less than 0.1% on their ratio compared with the ideal gain setting. The ratios found in this study are then implemented in the overall calculations concerning the DOM calibration. This study has carefully documented those settings and their corresponding brightness ranges.

A Greisen-Zatsepin-Kuzmin (GZK) cut-off

Consider a highly energetic proton as a cosmic-ray particle. As space is permeated with cosmic background radiation, we can expect the proton to scatter off such a photon. Throughout the calculation we will assume that the initial energy of the proton is high enough to treat it as ultra-relativistic; we will check this afterwards. During a scattering process, new particles can be created without violating conservation of energy and momentum. As we are interested in high-energy neutrinos, consider the following reaction:

    p⁺(CR) + γ(CMB) → ∆⁺ → n + π⁺ → n + µ⁺ + ν_µ (6)

The created neutron is free and will decay to p⁺ + e⁻ + ν̄_e in almost all cases; the mean lifetime of this decay is 881.5 ± 1.5 s. The muon decays further as µ⁺ → e⁺ + ν_e + ν̄_µ. The neutrino of interest for the IceCube observatory is the first one, as it has the chance to be very energetic if the initial proton was very energetic. So the question of interest is: what is the minimum proton energy required for this reaction to take place? As this is only an estimate, we will use a fixed energy corresponding to 3 K for the CMB photon. In reality this energy follows a black-body distribution at a temperature of around 2.725 K; this fit is illustrated in figure 25.
Conservation of four-momentum in the scattering process leads to:

    (p_p + p_γ)² = (p_n + p_π)² (7)
    m_p² + 2 p_p · p_γ = (m_n + m_π)² (8)

The right-hand side simplifies because, at the lowest proton energy that can yield these two particles, both are produced at rest in the centre-of-mass frame; and as the invariant mass is Lorentz invariant, we can choose the frame in which we want to do the calculation. To maximise the energy available from the collision, we take the momenta of the two particles in opposite directions. With the 4-momentum of the proton (E_p, E_p) and that of the photon (E_γ, −E_γ):

    2 p_p · p_γ = (m_n + m_π)² − m_p² (9)
    4 E_p E_γ = (m_n + m_π)² − m_p² (10)

which finally leads to:

    E_p > [(m_n + m_π)² − m_p²] / (4 E_γ) (11)

With
• E_γ = k_B · 3 K ≈ 2.6 × 10⁻¹⁰ MeV
• m_p = 938.27 MeV
• m_π⁺ = 139.57 MeV
• m_n = 939.57 MeV

this gives an estimate for the lower-bound proton energy of ≈ 3 × 10²⁰ eV. As anticipated, this value is only a rough estimate: the statistical spread of the photon energy, especially the large tail towards smaller wavelengths, means that there are CMB photons with a much higher energy. This lowers the bound on the proton energy. Also, the process that we have discussed, p + γ → π⁺ + n, is not the only microwave-background scattering process for high-energy protons. In particular, a second process, p + γ → p + π⁰, also takes place, and is energetically preferred because the final-state particles are lighter. Taking these and other details into account, the energy at which one begins to see suppression of GZK protons is in fact around 3 × 10¹⁹ eV.

One can now proceed to calculate the mean propagation length that such a proton would be able to travel in space. For this, one needs an estimate of the cross section for pion production; the Breit-Wigner formula with the lowest-lying nucleon resonance ∆⁺ as intermediate state can be used. The final result is of the order of ten megaparsec.
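The threshold estimate in equation (11) can be checked numerically with the mass values quoted above and a photon energy of k_B · 3 K:

```python
# Threshold proton energy for p + gamma_CMB -> n + pi+,
# E_p > ((m_n + m_pi)^2 - m_p^2) / (4 E_gamma), everything in MeV.
k_B = 8.617e-11                              # Boltzmann constant, MeV per kelvin
m_p, m_n, m_pi = 938.27, 939.57, 139.57      # particle masses, MeV
E_gamma = k_B * 3.0                          # fixed 3 K photon, ~2.6e-10 MeV

E_p = ((m_n + m_pi)**2 - m_p**2) / (4.0 * E_gamma)  # MeV
E_p_eV = E_p * 1e6
print("threshold proton energy: %.1e eV" % E_p_eV)
```

The result is a few times 10²⁰ eV, in agreement with the ≈ 3 × 10²⁰ eV quoted above.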
This propagation length leads us to the tentative conclusion that we cannot use UHE cosmic rays to look for extreme sources beyond our own cluster.

Figure 25: Graph of the cosmic microwave background spectrum, measured by the FIRAS instrument on COBE. The spectrum can be perfectly fitted by a black-body spectrum at a temperature of around 2.7 K.

Figure 26: A current-to-voltage converter [11].

B Operational Amplifiers

Operational amplifiers (or op-amps) are among the most widely used building blocks for the construction of electronic circuits. A comprehensive introduction to the various facets of these integrated circuits can be found in any basic electronics textbook [11]. A specific type of op-amp circuit that should be mentioned in this work is the current-to-voltage converter. Some sensors, like photodiodes, operate such that the physical quantity being measured is represented by the magnitude of the current produced at the output, rather than by the magnitude of a voltage. This illustrates one of many situations in which we may wish to convert a varying current into a corresponding varying voltage. A circuit that performs this transformation is shown in figure 26. For an idealised op-amp, the input current is related to the output voltage by

    V_out = − I_in R.

Thus, the output voltage is directly proportional to the input current, and the gain is proportional to the resistance used.

C Code excerpts and manual

The tree of Python files in the project directory on the local hub computer takes the following form:

• Configs
  – AutomaticConfGenerator.py
• Modules
  – GenerateData.py
  – summarizePD.py
  – PlotResults.py
• TakeData.py
• CalculateRatios.py
• SpecialCases
  – summarizePD_extraplot.py
  – Plot_RawData.py
  – Plot_RawData_OneGain.py
  – Datadivider.py
  – SoftwareVersion.py
  – callSummPD.py

Also the lower-level file Sources.py was adapted to make later applications and extensions more practical.
The Python code of two important parts of the program is given in this appendix: GenerateData.py and CalculateRatios.py. To support further development of the program, an extensive manual was written with guidelines and technical details of the code. The manual, the full data and the code are available on request.

GenerateData.py

from time import gmtime, strftime
from Modules import summarizePD
import os, time, sys, string
from random import shuffle
from copy import deepcopy

def getGlobalChannel(pd, ch):
    if pd == 1:
        globalChannel = ch
    elif pd == 2:
        globalChannel = ch + 2
    elif pd == 3:
        globalChannel = ch + 4
    return globalChannel

def convertPreampconfig(a):
    b = []
    for i in range(0, len(a)):
        b.append(("B", a[i]))
    return b

def logData(source_state, count, pd, fAPerCount, d, t0):
    nch = 2
    while count > 0:
        start_time = time.time()
        sys.stdout.write("%f\t%d" % (start_time - t0, source_state))
        avgList = d.getADCAveragesReset().split(" ")
        for ch in range(nch):
            globalCh = getGlobalChannel(pd, ch)
            avg = float(avgList[globalCh*3+1]) * fAPerCount[pd, ch]
            sys.stdout.write(string.ljust("\t%.8g" % avg, 9))
            sigma = float(avgList[globalCh*3+2]) * fAPerCount[pd, ch]
            sys.stdout.write(string.ljust("\t%.8g" % sigma, 9))
        for ch in range(nch):
            globalCh = getGlobalChannel(pd, ch)
            avg = float(avgList[globalCh*3+1])
            sys.stdout.write(string.ljust("\t%.8g" % avg, 9))
            sigma = float(avgList[globalCh*3+2])
            sys.stdout.write(string.ljust("\t%.8g" % sigma, 9))
        sys.stdout.write("\n")
        sys.stdout.flush()
        delay = start_time + 1 - time.time()
        if delay > 0:
            time.sleep(delay)
        count -= 1

def setupConfigs(preampConfigs, pd, adcGain, d):
    nch = 2
    fAPerCount = {}
    preampType, transimpedance = preampConfigs
    d.setPreampType(pd-1, preampType)
    d.setTransimpedance(pd-1, transimpedance)
    for ch in range(nch):
        globalCh = getGlobalChannel(pd, ch)
        d.setADCGain(globalCh, adcGain)
        fAPerCount[pd, ch] = d.fAPerCount(globalCh)
    return fAPerCount

def WriteConfigFromDict(pd, date, startTemp, endTemp, **dict):
    string = "comment\t%s" % dict['RunName'] + "\n"
    string += "config\tstart\t%s" % str(date) + "\n"
    string += "config\tledNumber\t%i" % dict['led'] + "\n"
    string += "config\tbrightnessrange\t%s" % dict['brightnessrange'] + "\n"
    string += "config\tbrightness\t%i" % dict['brightness'] + "\n"
    string += "config\tfw1position\t%i" % dict['fw1position'] + "\n"
    string += "config\tfw2position\t%i" % dict['fw2position'] + "\n"
    string += "config\tsettling_time\t%d" % dict["settling_time"] + "\n"
    string += "config\ton_time\t%d" % dict["on_time"] + "\n"
    string += "config\toff_time\t%d" % dict["off_time"] + "\n"
    string += "config\tncycles\t%i" % dict["ncycles"] + "\n"
    string += "config\tstartTemp\t%d\tendTemp\t%d\n" % (startTemp, endTemp)
    string += "config\tscanAxes\t%s\t%s" % ("rG", "adcGain") + "\n"
    string += "config\tscanNPoints\t%d\t%d" % (len(dict['preampConfigs']), len(dict['adcGains'])) + "\n"
    string += "config\tPD\t%i" % pd + "\n"
    return string

def generatedata(d, m, source, **dict):
    startTemp = d.getPreampTemp(0)
    pd = 2
    RunName = dict['RunName']
    settling_time = dict["settling_time"]
    off_time = dict["off_time"]
    on_time = dict["on_time"]

    fAPerCount = {}
    preampConfigs = {}
    os.mkdir(os.path.join("Runs", RunName))

    sourceParameters = {"lednumber": dict['led'],
                        "brightnessrange": dict['brightnessrange'],
                        "brightness": dict['brightness'],
                        "fw1position": dict['fw1position'],
                        "fw2position": dict['fw2position']}
    source.setParameters(**sourceParameters)
    print "source:\t%s" % '\t'.join(str(s) for s in source.getParameters())

    illum = open(os.path.join("Runs", RunName, 'illum.log'), 'w')
    oldstdout = sys.stdout

    date = strftime("%Y-%m-%d %H:%M:%S", gmtime())
    preampConfigs = convertPreampconfig(dict['preampConfigs'])

    for i in range(0, dict["ncycles"]):
        tempreslist = deepcopy(dict['preampConfigs'])
        shuffle(tempreslist)
        for R in tempreslist:
            tempadclist = deepcopy(dict['adcGains'])
            shuffle(tempadclist)
            for x in tempadclist:
                fAPerCount[dict['preampConfigs'].index(R)] = setupConfigs(
                    preampConfigs[dict['preampConfigs'].index(R)], pd, x, d)
                sys.stdout = oldstdout
                print "rg%s_adc(ch0)%s_adc(ch1)%s:\t cycle %i of %i" % (R, x, x, i + 1, dict["ncycles"])
                illum.write("start\t%s\t%i%i\t%i\t%i" % (R, x, x,
                    dict['preampConfigs'].index(R), dict['adcGains'].index(x)))
                illum.write("\n")
                t0 = time.time()

                sys.stdout = illum
                # Three logData(...) calls for the settling, on and off phases;
                # their leading arguments were garbled in the extraction.
                logData(0, settling_time, pd, fAPerCount[dict['preampConfigs'].index(R)], d, t0)
                logData(1, on_time, pd, fAPerCount[dict['preampConfigs'].index(R)], d, t0)
                logData(0, off_time, pd, fAPerCount[dict['preampConfigs'].index(R)], d, t0)
    print "stop"

    sys.stdout = oldstdout
    illum.close()
    config = open(os.path.join("Runs", RunName, 'config.log'), 'w')
    endTemp = d.getPreampTemp(0)
    config.write(WriteConfigFromDict(pd, date, startTemp, endTemp, **dict))
    for step, R in enumerate(dict['preampConfigs']):
        for x in dict['adcGains']:
            fAPerCount[step] = setupConfigs(preampConfigs[step], pd, x, d)
            config.write("config\trG=%s\t: counts for adcGain %i ch0(1x) and adcGain %i ch1(x101)\n" % (R, x, x))
            config.write("config\tfAPerCount\t%.8g\t%.8g\n" % (fAPerCount[step][pd, 0], fAPerCount[step][pd, 1]))
    config.close()
    print "Data successfully taken. Starting to process..."
    # (call into summarizePD on os.path.join('Runs', dict["RunName"]) garbled in the extraction)
    print "The config file was successfully processed!"
CalculateRatios.py

import os
from itertools import combinations
from numpy import sqrt
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from copy import deepcopy
from Modules import PlotResults

def get_immediate_subdirectories(a_dir):
    return [name for name in os.listdir(a_dir)
            if os.path.isdir(os.path.join(a_dir, name))]

def gain(key):
    val = key[0] * key[1]
    # The else branch was garbled in the extraction; channel 1 carries an
    # extra x101 amplification stage (cf. the config strings in GenerateData.py).
    return val if key[2] == 0 else val * 101

def convertToRatio(tocompare):
    ratiosToCompare = {}
    for key1, key2 in combinations(tocompare.keys(), r=2):
        pdcurr = (tocompare[key1][0] / tocompare[key1][1]**2
                  + tocompare[key2][0] / tocompare[key2][1]**2) \
                 / (1 / tocompare[key1][1]**2 + 1 / tocompare[key2][1]**2)
        ratio = tocompare[key1][0] / tocompare[key2][0]
        error = sqrt(tocompare[key1][1]**2 + tocompare[key2][1]**2)
        if gain(key1) > gain(key2):
            ratiosToCompare[(key1, key2)] = [ratio, ratio * error, pdcurr]
        else:
            ratiosToCompare[(key2, key1)] = [1. / ratio, error / ratio, pdcurr]
    return ratiosToCompare

def convertFileToRatio(subdirectory, currentname):
    with open(os.path.join("Processed", subdirectory, currentname)) as current:
        lines = current.readlines()
    lines.pop(0)
    tocompare = {}
    for line in lines:
        lineparts = line.split('\t')
        if lineparts[4] != "nan":
            val0 = float(lineparts[4])
            relerror0 = float(lineparts[5]) / val0
            tocompare[int(float(lineparts[2])), int(lineparts[3][4]), 0] = [val0, relerror0]
        if lineparts[6] != "nan":
            val1 = float(lineparts[6])
            relerror1 = float(lineparts[7]) / val1
            tocompare[int(float(lineparts[2])), int(lineparts[3][-2]), 1] = [val1, relerror1]
    return convertToRatio(tocompare)

def mergeRatios(total, current):
    for key in current:
        valmean = current[key][0] / (current[key][1]**2)
        valsigma = 1. / (current[key][1]**2)
        if key in total:
            total[key][0] += valmean
            total[key][1] += valsigma
        else:
            total[key] = [valmean, valsigma]
    return total

def updateRatios(ConfigRatios):
    totalRatios = {}
    for k in ConfigRatios:
        totalRatios = mergeRatios(totalRatios, ConfigRatios[k])
    for key in totalRatios:
        error = totalRatios[key][1]
        totalRatios[key][0] = totalRatios[key][0] / error
        totalRatios[key][1] = sqrt(1. / error)
    return totalRatios

def dictOfDicts(subdirectory, ConfigRatios):
    filelist = os.listdir(os.path.join("Processed", subdirectory))
    for file in filelist:
        ConfigRatios[file] = convertFileToRatio(subdirectory, file)
    return ConfigRatios

def consistencyCheck(totalratio, totaldict):
    i = 0
    length = 0
    badpoints = {}
    badratios = {}
    for key in totaldict:
        errorlist = []
        length += len(totaldict[key])
        for subkey in totaldict[key]:
            if subkey in badratios:
                badratios[subkey][0] += 1
            else:
                badratios[subkey] = [1, 0, []]
            abserror = sigmasingleratio * sqrt(totalratio[subkey][1]**2
                                               + totaldict[key][subkey][1]**2)
            dif = abs(totalratio[subkey][0] - totaldict[key][subkey][0])
            if dif > abserror:
                i += 1
                errorlist.append(subkey)
                badratios[subkey][1] += 1
                badratios[subkey][2].append(key)
        # (lines collecting errorlist into badpoints were garbled in the extraction)
    print str(i) + " of the " + str(length) + " ratios were not compatible with the averaged mean."
    return badpoints, badratios

def upgradeRelativeRatio(exact, relative, total):
    l = 0
    combined = {}
    ConsCheck = {}
    for key in relative:
        if gain(key) > gain(exact):
            for key2 in total:
                if key2[1] == key:
                    comratio = relative[key][0] * total[key2][0]
                    comsigma = comratio * sqrt((relative[key][1] / relative[key][0])**2
                                               + (total[key2][1] / total[key2][0])**2)
                    tempval = comratio / comsigma**2
                    temperror = 1. / comsigma**2
                    l += 1
                    if key2[0] in combined:
                        pass  # (accumulation into combined garbled in the extraction)
                    else:
                        pass  # (garbled in the extraction)
        else:
            for key2 in total:
                if key2[0] == key:
                    comratio = relative[key][0] / total[key2][0]
                    comsigma = comratio * sqrt((relative[key][1] / relative[key][0])**2
                                               + (total[key2][1] / total[key2][0])**2)
                    tempval = comratio / comsigma**2
                    temperror = 1. / comsigma**2
                    l += 1
                    if key2[1] in combined:
                        pass  # (garbled in the extraction)
                    else:
                        pass  # (garbled in the extraction)
    print "There are " + str(l) + " ratios combined with one intermediate step."
    supercombined = {}
    for k in combined:
        if k in relative:
            abserror = sqrt(combined[k][1]**2 + relative[k][1]**2)
            dif = abs(combined[k][0] - relative[k][0])
            if dif > sigmacombinedratios * abserror:
                print "there was a problem with the ratio between " + str(exact) + " and " + str(k)
            else:
                mean = combined[k][0] / (combined[k][1]**2) + relative[k][0] / (relative[k][1]**2)
                err = 1. / (relative[k][1] * relative[k][1]) + 1. / (combined[k][1] * combined[k][1])
                supercombined[k] = [mean / err, sqrt(1. / err)]
        else:
            supercombined[k] = combined[k]
    return supercombined, combined, ConsCheck

# Main script
ratio = tuple(float(x) for x in raw_input("The exact gain combination is: ").split(','))  # reconstructed
check_brightness = raw_input("Make brightness plots? (y/n): ")
check_indirect = raw_input("Make indirect measurement check? (y/n): ")
check_badrawdata = raw_input("Make raw data plots of runs with bad points? (y/n): ")

global sigmasingleratio
# (value assigned to sigmasingleratio garbled in the extraction)
global sigmacombinedratios
sigmacombinedratios = 1.5
global ratiobadpoints
ratiobadpoints = 0.2
global numberofbadpointsinfile
numberofbadpointsinfile = 8

print "----------------------------------"
dirnames = get_immediate_subdirectories("Processed")  # reconstructed from fragment
ConfigRatios = {}
for subdirectory in dirnames:
    totaldict = dictOfDicts(subdirectory, ConfigRatios)

totalRatios = updateRatios(totaldict)

if not os.path.exists('Results/Bad'):
    os.makedirs('Results/Bad')
# (surrounding bookkeeping lines garbled in the extraction)
badpoints, badratios = consistencyCheck(totalRatios, totaldict)  # reconstructed

badfiles = []
with open(os.path.join("Results", "Bad", "BadPoints.txt"), "w") as f:
    total, l, m = 0, 0, 0
    f.write('Relative number of bad points for each ratio:\n')
    for key in badratios:
        if float(badratios[key][1]) / badratios[key][0] > ratiobadpoints or badratios[key][0] < 2:
            f.write("(%s,%s,%s), (%s,%s,%s):%s/%s\n" % (
                "{:.0e}".format(key[0][0]), key[0][1], key[0][2],
                "{:.0e}".format(key[1][0]), key[1][1], key[1][2],
                badratios[key][1], badratios[key][0]))
            for i in range(len(badratios[key][2])):
                f.write("\t%s\n" % (sorted(badratios[key][2],
                        key=lambda x: totaldict[x][key][2])[i]))
            total += badratios[key][1]
            if badratios[key][0] < 2:
                m += 1
            else:
                l += 1
    print str(total - m) + " bad points were associated with the " + str(l) \
        + " ratios, which are excluded because more than " \
        + str(ratiobadpoints * 100) + "% of the total number of points were bad/red."
    print str(m) + " ratios, which each contained 1 point, were excluded."
    print "----------------------------------"
    f.write("\n\n\n\n")
    for (key, value) in badpoints.iteritems():
        f.write("%s\n" % (key))
        count = 0
        for i in range(len(value)):
            count += 1
            f.write("(%s,%s,%s), (%s,%s,%s)\n" % (
                "{:.0e}".format(value[i][0][0]), value[i][0][1], value[i][0][2],
                "{:.0e}".format(value[i][1][0]), value[i][1][1], value[i][1][2]))
        f.write("\n")
        if count > numberofbadpointsinfile:
            badfiles.append(key)

with open(os.path.join("Results", "Ratios.txt"), "w") as f:
    f.write("config1 \t\t\t\t config2 \t\t\t\t\t\t ratio \t\t sigma\n")
    for (key, val) in totalRatios.iteritems():
        f.write("%-*s%-*s" % (25, key[0], 20, key[1]))
        f.write("\t\t\t")
        f.write("%-*.4f\t%-*.4f" % (10, val[0], 10, val[1]))
        f.write("\n")

print "There are " + str(len(totalRatios)) + " calculated ratios."

with open(os.path.join("Results", "Bad", "BadToDo.mcr"), "w") as todo:
    for i in range(len(badfiles)):
        todo.write("%s\n" % ('_'.join(badfiles[i].split('_')[0:4]) + '.config'))

j = 0
for (key, val) in totalRatios.iteritems():
    if val[1] < 0.0011:
        j += 1
print "There are " + str(j) + " calculated ratios with 0.1% precision."

i = 0
relativeRatio = {}  # reconstructed; the initialisation lines were garbled
for key in totalRatios:
    if key[0] == ratio:
        i += 1
        relativeRatio[key[1]] = [1. / totalRatios[key][0],
                                 totalRatios[key][1] / (totalRatios[key][0]**2)]
    elif key[1] == ratio:
        i += 1
        relativeRatio[key[0]] = totalRatios[key]

print "There are " + str(i) + " direct ratios found that compare with " + str(ratio) + "."
"Results" , "Comparedto%s.txt" %(str(ratio))), "w" ) as f:278: f.write( " )279: f.write( "\n" )280: f.write( "configuration \t\t ratio \t sigma\n\n" )281: for (key, val) in superCombinedRatio.iteritems():282: f.write( "%-*s" %(25,key))283: f.write( "%-*.5f\t%-*.5f" %(10,val[0],10,val[1]))284: f.write( "\n" )285: k+=1286: f.write( "\n" )287: f.write( "\n" )288:289: f.write( " )290: f.write( "\n" )291: f.write( "configuration \t\t ratio \t sigma\n\n" )292: for (key, val) in combinedRatio.iteritems():293: f.write( "%-*s" %(25,key))294: f.write( "%-*.5f\t%-*.5f" %(10,val[0],10,val[1]))295: f.write( "\n" )296: 297: f.write( "\n" )298: f.write( "\n" )299: f.write( " )300: f.write( "\n" )301: f.write( "configuration \t\t ratio \t sigma\n\n" )302: for (key, val) in relativeRatio.iteritems():303: f.write( "%-*s" %(25,key))304: f.write( "%-*.5f\t%-*.5f" %(10,val[0],10,val[1])) alculateRatios.py "\n" )306:307: print "There are " +str(k)+ " ratios found that compare with " + str(ratio) + "." print "----------------------------------" if check_brightness== ’y’ : PlotResults.PlotRatios(totRatCopy, totaldict, sigmasingleratio)315:316: if check_indirect== ’y’ :317: print "Indirect Measurement plotting:" if check_badrawdata== ’y’ :321: print "Bad raw data plotting:" for i in range(len(badfiles)):323: print "Plot: " +str(m)+ "/" +str(len(badfiles))324: PlotResults.PlotRawData( ’.’ .join(badfiles[i].split( ’.’ )[0:len(badfiles[i].split( ’.’ ))-1]))325: m+=1326:327:328:329: Acknowledgement I would like to thank my colleague Jeroen Van Houtte for developing most of the graphical outputof the program and for his valuable contributions to all parts of the end result. Further I thank mysupervisor Chris Wendt who came up with illuminating ideas when they were needed the most. Finally,this Honours Award in Sciences would never looked the same without professor Dirk Ryckbosch. 
He gave me the opportunities to improve my skills while collaborating in research of the highest level and was there with advice whenever obstacles appeared.

References

[1] C. L. Cowan, Jr., F. Reines, F. B. Harrison, H. W. Kruse, and A. D. McGuire. Detection of the free neutrino: a confirmation. Science, 124(3212):103–104, July 1956.
[2] F. Halzen and S. R. Klein. Invited Review Article: IceCube: An instrument for neutrino astronomy. Review of Scientific Instruments, 81(8), August 2010.
[3] C. Spiering. Towards High-Energy Neutrino Astronomy. A Historical Review. Eur. Phys. J. H, 37:515–565, 2012.
[4] K. Greisen. End to the cosmic-ray spectrum? Phys. Rev. Lett., 16:748–750, April 1966.
[5] M. Ackermann, J. Ahrens, X. Bai, M. Bartelt, S. W. Barwick, R. C. Bay, et al. Optical properties of deep glacial ice at the South Pole. Journal of Geophysical Research: Atmospheres, 111(D13), 2006.
[6] A. Achterberg, M. Ackermann, J. Adams, J. Ahrens, K. Andeen, D. W. Atlee, J. Baccus, J. N. Bahcall, X. Bai, et al. First year performance of the IceCube neutrino telescope. Astroparticle Physics, 26:155–173, October 2006.
[7] K. Hanson and O. Tarasova. Design and production of the IceCube digital optical module. Nuclear Instruments and Methods in Physics Research A, 567:214–217, November 2006.
[8] R. Abbasi, Y. Abdou, T. Abu-Zayyad, J. Adams, J. A. Aguilar, M. Ahlers, K. Andeen, J. Auffenberg, X. Bai, M. Baker, et al. Calibration and characterization of the IceCube photomultiplier tube. Nuclear Instruments and Methods in Physics Research A, 618:139–152, June 2010.
[9] D. Tosi and C. Wendt for the IceCube Collaboration. Calibrating the photon detection efficiency in IceCube. ArXiv e-prints, February 2015.
[10] The IceCube Collaboration. The IceCube Data Acquisition System: Signal Capture, Digitization, and Timestamping. ArXiv e-prints, October 2008.
[11] N. Storey.