A NECTAr-based upgrade for the Cherenkov cameras of the H.E.S.S. 12-meter telescopes
Terry Ashton, Michael Backes, Arnim Balzer, David Berge, Julien Bolmont, Simon Bonnefoy, Francois Brun, Thomas Chaminade, Eric Delagnes, Gerard Fontaine, Matthias Füßling, Gianluca Giavitto, Berrie Giebels, Jean-Francois Glicenstein, Tobias Gräber, Jim Hinton, Albert Jahnke, Stefan Klepser, Marko Kossatz, Axel Kretzschmann, Valentin Lefranc, Holger Leich, Jean-Philippe Lenain, Hartmut Lüdecke, Iryna Lypova, Pascal Manigot, Vincent Marandon, Emmanuel Moulin, Thomas Murach, Mathieu de Naurois, Patrick Nayman, Stefan Ohm, Marek Penno, Duncan Ross, David Salek, Markus Schade, Thomas Schwab, Kleopas Shiningayamwe, Christian Stegmann, Constantin Steppa, Jean-Paul Tavernet, Julian Thornhill, Francois Toussenel, Pascal Vincent
T. Ashton (b), M. Backes (h,i), A. Balzer (c), D. Berge (c,a), J. Bolmont (e), S. Bonnefoy (a), F. Brun (d), T. Chaminade (d), E. Delagnes (d), G. Fontaine (f), M. Füßling (a), G. Giavitto (a,∗), B. Giebels (f), J.-F. Glicenstein (d), T. Gräber (a), J.A. Hinton (b,g), A. Jahnke (j), S. Klepser (a,∗), M. Kossatz (a), A. Kretzschmann (a), V. Lefranc (a,d), H. Leich (a), J.-P. Lenain (e), H. Lüdecke (a), I. Lypova (a), P. Manigot (f), V. Marandon (g), E. Moulin (d), T. Murach (a), M. de Naurois (f), P. Nayman (e), S. Ohm (a), M. Penno (a), D. Ross (b), D. Salek (c), M. Schade (a), T. Schwab (g), K. Shiningayamwe (h), C. Stegmann (a), C. Steppa (a), J.-P. Tavernet (e), J. Thornhill (b), F. Toussenel (e), P. Vincent (e)

(a) DESY, D-15738 Zeuthen, Germany
(b) Department of Physics and Astronomy, The University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
(c) GRAPPA, Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
(d) IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette Cedex, France
(e) Sorbonne Universités, Université Paris Diderot, Sorbonne Paris Cité, CNRS/IN2P3, Laboratoire de Physique Nucléaire et de Hautes Energies, LPNHE, 4 Place Jussieu, F-75252 Paris, France
(f) Laboratoire Leprince-Ringuet, École Polytechnique, CNRS/IN2P3, F-91128 Palaiseau, France
(g) Max-Planck-Institut für Kernphysik, P.O. Box 103980, D-69029 Heidelberg, Germany
(h) University of Namibia, Department of Physics, Private Bag 13301, Windhoek, Namibia
(i) Centre for Space Research, North-West University, Potchefstroom 2520, South Africa
(j) JA consulting, St Michael Park 23, Avis, Windhoek, Namibia
Abstract
The High Energy Stereoscopic System (H.E.S.S.) is one of the three arrays of imaging atmospheric Cherenkov telescopes (IACTs) currently in operation. It is composed of four 12-meter telescopes and a 28-meter one, and is sensitive to gamma rays in the energy range ∼30 GeV – 100 TeV. The cameras of the 12-m telescopes recently underwent a substantial upgrade, with the goal of improving their performance and robustness. The upgrade involved replacing all camera components except for the photomultiplier tubes (PMTs). This meant developing new hardware for the trigger, readout, power, cooling and mechanical systems, and new software for camera control and data acquisition. Several novel technologies were employed in the cameras: the readout is built around the new NECTAr digitizer chip, developed for the next generation of IACTs; the camera electronics is fully controlled and read out via Ethernet using a combination of FPGAs and embedded ARM computers; the software uses modern libraries such as Apache Thrift, ØMQ and Protocol Buffers. This work describes in detail the design and the performance of the upgraded cameras.

∗ Corresponding authors.
Email addresses: [email protected] (G. Giavitto), [email protected] (S. Klepser)

Preprint submitted to Astroparticle Physics, January 15, 2020
Keywords:
Gamma-ray astronomy, Cherenkov camera, High-energy instrumentation upgrade, PMT cameras, NECTAr, H.E.S.S.
1. Introduction
The first Cherenkov telescopes of the H.E.S.S. array were the four 12-meter diameter CT1–4, built and commissioned between 2002 and 2004 at the H.E.S.S. site in the Khomas highlands in Namibia (see e.g. [1]). CT1–4 are also known as the "H.E.S.S. I array". A fifth, 28-meter diameter telescope was built in 2012 in the centre of the square H.E.S.S. I array. The main goal of this new telescope, called CT5, was lowering the minimum gamma-ray energy threshold of H.E.S.S. from ∼100 GeV down to ∼30 GeV. To reach that goal, CT5 has a very large mirror area (614 m²), photosensors with higher quantum efficiency, and a camera [2, 3] with a much lower dead-time than the original CT1–4 ones. CT5 can trigger on low-energy air showers at high rates; the original CT1–4 cameras, by contrast, suffered a dead-time of several hundred µs per event: lowering their trigger threshold by e.g. 30% would have increased the fraction of events lost due to dead-time considerably.

Besides reducing this dead-time, the upgrade was intended to prevent the inevitable increase of failures due to the ageing of the electronics, connectors and other critical parts that had been exposed for 14 years to the harsh conditions of the Namibian site. Furthermore, many electronic components had become obsolete and could not be procured anymore, making the cameras increasingly difficult to maintain.

This work is structured as follows: a general description of the design and architecture (§2) is followed by the test facilities and procedures (§3) and by the sections on calibration and performance.

Figure 1: Left: A picture of the first upgraded H.E.S.S. I camera, mounted on CT1. Right: Rear 3D view of the camera. The backplane rack is visible; the ventilation is contained inside the back door (in light blue). The mechanical structure of the camera was built at the LLR laboratory.
2. Architecture of the new cameras
Upgrading the H.E.S.S. I cameras meant replacing or refurbishing essentially every component inside them. Only the photomultiplier tubes (PMTs) and their high voltage power supplies (HV bases) were kept, due to their cost and relative robustness. This can also be seen in the schematic diagram of the camera subsystems (Fig. 2). When possible, commercial off-the-shelf (COTS) solutions were employed. A shared design feature of all custom electronic subsystems developed for the cameras is the usage of an FPGA coupled to a single-board computer, controlled via Ethernet.

Most of the development, production and testing of the cameras has been done at the DESY site in Zeuthen. A picture of one of the upgraded cameras on the telescope can be seen in Fig. 1, left.
Figure 2: This diagram illustrates how the various mechanical and electronic subsystems of the camera interact. Original, custom-made, and commercial off-the-shelf components are marked in green, blue and orange, respectively. Red lines represent the power distribution and the arrows its direction; if labeled "Optical Fiber", they represent bidirectional optical fiber links. Green, circle-terminated lines represent copper Ethernet links. Black lines and arrows represent electrical signals. Physical locations of the subsystems are marked with light grey boxes: "HESS1U Camera" is the camera body; the "Telescope Shelter" is the camera daytime parking shelter; the "Telescope Hut" is a service container tied to the telescope structure; the "Farm" is the server room inside the array control building.

2.1. Front-end electronics

Cherenkov light from particle showers in the atmosphere is detected and digitized in the front-end of the camera. The light sensors are 960 PMTs, organized into 60 modules, called "drawers" (Fig. 3). The drawers are arranged in a 9 × … grid at the camera focal plane.

The analogue signal from one PMT is sent to the analogue board via a 15 cm long coaxial cable. The PMTs produce negative polarity, single-ended voltage pulses of 2–3 ns duration (FWHM) with an amplitude varying from 1 mV to a few V, depending on the number of photons detected. Upon reaching the analogue board, the PMT signals are AC coupled, pre-amplified by a factor 9.8, split into three branches and further amplified by low noise single-ended to differential amplifiers, which also invert their polarity. Two of the branches are routed to the two inputs of the NECTAr readout chip, for sampling and digitization. Their overall amplification factors are 15.1 (high gain, HG) and 0.68 (low gain, LG). The NECTAr chip inputs have a nominal range of 2 V, so high gain signals are clipped to 3.3 V, the most convenient voltage present on the board within the NECTAr chip tolerance range, to avoid affecting the low gain.
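The two-gain scheme can be sketched as follows. This is a deliberately simplified, illustrative transfer model, not the actual analogue circuit; only the gain factors (×15.1 and ×0.68) and the 3.3 V clipping level are taken from the text.

```python
def readout_voltages(pmt_mv):
    """Model the per-branch amplification of one PMT pulse (illustrative only).

    pmt_mv: PMT pulse amplitude in mV (1 mV to a few V per the text).
    Returns (high_gain_mv, low_gain_mv) at the NECTAr chip inputs.
    """
    HG_GAIN, LG_GAIN = 15.1, 0.68
    CLIP_MV = 3300.0                  # HG branch clipped at 3.3 V
    hg = min(pmt_mv * HG_GAIN, CLIP_MV)
    lg = pmt_mv * LG_GAIN             # LG branch is never clipped
    return hg, lg

# A small pulse stays in the HG linear range; a large one saturates HG
# but remains measurable in the LG branch:
print(readout_voltages(10))    # ≈ (151.0, 6.8)
print(readout_voltages(1000))  # ≈ (3300.0, 680.0)
```

The point of the clipping is visible in the second call: the high-gain branch saturates gracefully while the low-gain branch still covers the upper end of the dynamic range.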
An adjustable constant common-mode offset of about 0.2 V is added to the electrical signal to keep it within the input range even in the case of undershoot (this corresponds to a pedestal offset of around 420 ADC counts). The signal in the third branch is amplified by a factor 45 and sent to a high-speed comparator, whose digital output is directly routed to the FPGA on the slow control board. This signal is referred to as the level 0 (L0) trigger signal.

Figure 3: Top: Annotated inside view of a partially assembled drawer. Bottom: A fully assembled drawer.

Figure 4: Left: Part of the analogue board showing the analogue amplification stages (light green) and the NECTAr chips (black with a nectar logo). Right: Microphotograph of the NECTAr chip.

Most of the performance improvements of the upgraded cameras are due to their readout electronics, based on the NECTAr analogue memory chip, designed at CEA/IRFU [5] (Fig. 4, right). The NECTAr chip has two channels (one per gain), each equipped with a switched capacitor array of 1024 cells, acting as an analogue ring memory buffer. There are two modes of operation: writing and reading. During the writing phase, the input amplitude is stored sequentially on the array capacitors, with a switching frequency of 1 GHz. The writing process is circular over the whole array, so the charge stored in the cells is overwritten every 1024 ns by the new input. A trigger signal stops the writing and initiates the reading: the charges in the capacitor cells of a small region of interest (ROI) are read out and digitized by the on-chip 12-bit 21 Msamples/s ADC. The digital data is then transmitted to an FPGA by means of a serializer. For regular observations the ROI is currently set to 16 cells, and the data in the ROI is summed by the FPGA and sent to the camera server as one integrated charge value per pixel and per gain. The choice of ROI length and simple summing charge integrator is inherited from the old cameras for compatibility with the existing H.E.S.S.
analysis and simulation frameworks (see e.g. [6]). It is a sufficiently adequate choice for most applications, since Cherenkov light from atmospheric particle showers reaching the camera has a typical temporal spread of less than 10 ns, except for the most inclined and energetic showers. The performance of the new camera readout and data acquisition systems, however, allows full waveform sampling with an ROI length of up to 48 samples, which is expected to increase the sensitivity of the array to high energy showers. This mode of operation is currently being tested on selected targets, along with more sophisticated charge integration algorithms (see also Sect. 5.3).
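The ring-memory write/read cycle described above can be illustrated with a toy model. The 1024-cell depth, 1 GHz write clock and 16-cell ROI sum are from the text; the digital representation of the stored charges and the read-out addressing are simplifications (the real chip stores analogue voltages and digitizes them on-chip).

```python
class NectarRingBuffer:
    """Toy model of one NECTAr channel: a 1024-cell analogue ring memory
    written at 1 GHz and read out over a small region of interest (ROI)."""

    def __init__(self, n_cells=1024):
        self.cells = [0.0] * n_cells
        self.pos = 0                      # next cell to be (over)written

    def write(self, sample):
        """Store one 1 ns sample; old data is overwritten after n_cells ns."""
        self.cells[self.pos] = sample
        self.pos = (self.pos + 1) % len(self.cells)

    def read_roi(self, delay, roi=16):
        """On a trigger, read `roi` cells ending `delay` cells before the
        write pointer and return their sum -- the integrated charge that
        the drawer FPGA would ship to the camera server."""
        start = (self.pos - delay - roi) % len(self.cells)
        return sum(self.cells[(start + i) % len(self.cells)] for i in range(roi))

buf = NectarRingBuffer()
for t in range(2000):                             # write for 2 µs: buffer wraps
    buf.write(1.0 if 1990 <= t < 1994 else 0.0)   # a 4 ns "pulse" near the end
print(buf.read_roi(delay=0))                      # ROI covering the pulse → 4.0
```

Note how samples older than 1024 ns have been silently overwritten by the time the trigger stops the writing, which is exactly why the trigger latency must stay well below the buffer depth.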
The FPGA and ARM computer of each front-end drawer are located on the slow control board. They are connected via a 100 Mbit/s memory bus with a 16 bit word width; the ARM computer has a 100 Mbit/s Ethernet interface and acts as a device node of the distributed camera control software. The FPGA reads out sampling data from the NECTAr chip, collects other monitoring data such as PMT currents and L0 trigger counters, and directly controls all the electronics inside the drawer. The ARM computer runs a slow control server accessing the FPGA registers, reads out all FPGA data, buffers it and sends it over the network to a central camera server via TCP/IP using the ØMQ library [7]. The central camera server controls the drawer by means of remote procedure calls implemented using the Apache Thrift library [8].

The drawer slow control board also houses several point-of-load regulators and DC line filters, providing the required voltage supplies for all the drawer components. Also, the sockets for the PMT HV bases and the corresponding control and readout electronics are located at the front-facing end of the board.

The connection board has 2 RJ45 sockets, one for standard Ethernet and one for four Low Voltage Differential Signaling (LVDS) signals: two trigger outputs, a clock and a readout control input. A 4-pin M8 socket provides 24 V DC (see Sect. 2.2.4) to the main step-down (24 V to 12 V) DC-DC converter, which is also hosted on the connection board. This arrangement assures galvanic isolation of the electronics inside each drawer, preventing ground loops and current surges. It also isolates the rather noisy switching-mode DC-DC converter from the sensitive analog front-end part of the drawer, and allows it to be efficiently cooled. The 12 V output of the DC-DC converter is routed to the regulators on the slow control board.
2.2. Back-end electronics

The back-end electronics is deployed inside one 19-inch rack located in the back side of the camera (see Fig. 1, right, and Fig. 5). New components developed specifically for this upgrade are described in the following.

Figure 5: Left: Photograph of the inside of the installed CT1 camera. Right: Photograph of the camera cabling solution, using cable spines. The cables carry Ethernet data (red); trigger, clock and control signals (blue); and power (black). The cables are connected to the drawer connection boards.
2.2.1. Drawer interface box

The drawer interface box (DIB) is the central hub of the camera. As such, its functions include: trigger and readout control interface and clock distribution to the drawers; camera-level trigger generation; interface to the array central trigger and to the auxiliary camera components, such as the front position LEDs, the pneumatics control and the ambient light sensor (see Fig. 2); GPS timestamping of events; and a safety interlock logic to ensure the protection of people, PMTs and camera electronics.

The DIB is composed of three interconnected boards: front panel board, main board and analogue trigger board (see Fig. 6, left). The front panel board houses connectors for the drawer trigger, clock and control signals, the central trigger fiber interface, the front position LED lightguides and the other camera sensors and actuators; the main board is where the FPGA and ARM computer are located and all signals are routed; finally, the analogue trigger board is a mezzanine of the main board, whose purpose is to generate the level-1 (L1) camera trigger (see Sect. 2.2.2).

Furthermore, the DIB is equipped with a GPS module that delivers a pulse-per-second (PPS) signal, to which the main 10 MHz clock, provided by a high precision temperature stabilized quartz oscillator, is disciplined. This clock is also distributed to the drawers. The GPS module also sends a timestamp to the DIB via a serial interface. This is used to timestamp events at the camera level. The precision of the camera GPS timestamp is better than a few µs, of the order of the signal transit time between the camera and the central trigger.

Figure 6: Left: Drawer interface box, with top lid open to show the analogue trigger board on top of the main board. The GPS module is at the back. Right: Power distribution box.

2.2.2. Trigger

The camera trigger architecture is the same as in the old camera electronics: an N-majority over "trigger sectors" of 64 contiguous pixels [9]. Therefore, an N-fold coincidence within a sector is sufficient for the camera to trigger. Usually N is set to 3. There are 38 sectors in the camera, which overlap horizontally by one half drawer and vertically by one full drawer. This trigger architecture is implemented as follows: the signal from each PMT is amplified and compared to a threshold P to produce the L0 signal, which is then routed to the FPGA on the slow control board and sampled there at 800 MHz. The sampled L0 signal can thus be delayed or stretched in steps of 1.25 ns. Then, the FPGA counts the number of pixels with an active L0 in each half of the drawer separately. These two numbers are continuously sent as two LVDS pulse-amplitude modulated trigger signals to the DIB. The amplitude modulation has 8 discrete levels with an amplitude of 33 mV each.

In the DIB these amplitude-modulated signals are made single-ended and isochronally routed to 38 analogue summators, one per sector, located on the analogue trigger board. Due to the overlapping geometry, each signal is distributed to up to 4 sector summators. The amplitude of the output of each summator is proportional to the number of active L0 signals in each sector. This sector sum signal is then routed to a comparator, where a sector threshold Q corresponding to N active pixels is applied. All comparator outputs are subsequently routed to the FPGA of the DIB, where they are combined in an OR to form the camera L1 trigger. When an L1 trigger is present, a length-encoded "stop" signal is broadcast to all drawers via the LVDS readout control lines, and an "active" signal is sent to the central array trigger in the control building via an optical fibre.
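The sector-majority logic can be condensed into a few lines. The 38 sectors of 64 pixels and the typical N = 3 are from the text; the miniature two-sector layout below is purely illustrative.

```python
def camera_l1(l0_active, sectors, n_majority=3):
    """Toy model of the sector-majority camera trigger.

    l0_active: set of pixel ids whose L0 comparator is currently high.
    sectors:   list of pixel-id lists, one per (overlapping) trigger sector.
    Returns True if any sector holds >= n_majority active pixels, i.e.
    the OR over all sector comparators that forms the L1 trigger.
    """
    return any(sum(p in l0_active for p in sector) >= n_majority
               for sector in sectors)

# Two illustrative overlapping sectors of 8 pixels each:
sectors = [list(range(0, 8)), list(range(4, 12))]
print(camera_l1({1, 5, 6}, sectors))   # True: pixels 1, 5, 6 all in sector 0
print(camera_l1({0, 9, 11}, sectors))  # False: no sector has 3 active pixels
```

In the real camera the per-sector sums are formed in analogue hardware from the amplitude-modulated half-drawer counts and only the final comparator outputs reach the DIB FPGA; the sketch collapses that chain into one function.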
Upon receiving the "stop" signal, the drawer FPGA stops the NECTAr writing, and immediately performs the readout and digitization of the region of interest, storing the data in a front-end buffer. During regular observations, the H.E.S.S. central trigger [10] sends back an "accept" signal to the CT1–4 cameras only if a coincidence of at least two telescopes' "active" signals occurs within an 80 ns window (after correcting for their pointing-dependent light propagation delay). This signal is received by the DIB and forwarded to the drawers, initiating there the storage of the data held in the front-end buffer. Should no "accept" signal arrive, the front-end buffer is discarded after a hold-off time t_b slightly longer than the readout dead-time and the maximum latency of the signal response from the central trigger. If another L1 trigger is issued by the camera before the hold-off has expired, a "busy" signal is sent to the central trigger instead, but no signal is sent to the drawers. "Active", "accept", and "busy" triggers share the same fibre connection, so they are pulse-length coded.

A design choice different from the original H.E.S.S. camera trigger, and inspired by the digital camera trigger design for CTA [11], is the 800 MHz sampling of the pixel trigger comparator output (the old logic was asynchronous). One advantage of using a synchronous logic is that the L0 signal can be delayed and stretched; another is the possibility to implement alternative L1 trigger logic architectures. Indeed, two of them have been implemented: a compact next-neighbour (NN, [12]) logic, and a pseudo-analogue sum trigger logic [13]. In both cases no changes in the analogue part of the trigger are made; only the FPGA firmware is different.

In the NN logic, the L1 signal is issued only when a cluster of neighbouring pixels inside a drawer is simultaneously active. In the FPGA this is implemented with a simple look-up table.
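A next-neighbour look-up table of this kind can be precomputed offline and then indexed by the drawer's L0 bit pattern in a single step, which is what makes it cheap in an FPGA. In this sketch the 4 × 4 in-drawer pixel layout and the 2-pixel cluster condition are assumptions for illustration, not the actual firmware parameters.

```python
from itertools import product

def build_nn_lut(rows=4, cols=4):
    """Precompute, for every L0 bit pattern of a drawer, whether it contains
    a pair of neighbouring active pixels -- a minimal stand-in for the
    FPGA next-neighbour look-up table."""
    def neighbours(i, j):
        # Right and down suffice: adjacency is symmetric.
        return [(i + di, j + dj) for di, dj in ((1, 0), (0, 1))
                if 0 <= i + di < rows and 0 <= j + dj < cols]
    lut = [False] * (1 << (rows * cols))
    for pattern in range(1 << (rows * cols)):
        active = {(i, j) for i, j in product(range(rows), range(cols))
                  if pattern >> (i * cols + j) & 1}
        lut[pattern] = any(n in active for p in active for n in neighbours(*p))
    return lut

lut = build_nn_lut()
print(lut[0b0000000000000011])  # True: pixels (0,0) and (0,1) are neighbours
print(lut[0b1000000000000001])  # False: opposite corners, no NN pair
```

The full 2^16-entry table fits comfortably in FPGA block RAM, so the trigger decision reduces to one memory lookup per drawer per clock cycle.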
The implementation however does not take into account NN groups overlapping two drawers.

The pseudo-sum trigger algorithm works by measuring the duration of the L0 signals, instead of just counting the active ones. The idea behind this is that the duration of the L0 signal is proportional to the total charge deposited within the corresponding pixel, because for PMT-like pulses the duration above a certain threshold is proportional to their amplitude (the pulses are roughly triangular, see Fig. 9, left). This measurement is performed in the FPGA for each half-drawer separately, in units of 1.25 ns, within a 5 ns window. The windowing limits the maximum contribution of any L0 signal to 4 counts, and is meant to avoid problems due to PMT after-pulsing, similarly to an amplitude clipping. The sum of the durations of the L0 signals of a half-drawer in the preceding 5 ns is transmitted to the analogue trigger board, so the output of any sector summator is proportional to the total charge deposited within the corresponding sector, with an individual pixel clipping given by the windowing.

2.2.3. Ventilation and pneumatics

The ventilation system consists of a single 250 mm Helios KVW 250/4/50/30 centrifugal fan, two filters in series (coarse and fine) and a 6 kW air heater. The whole system is attached to the back door. When operating, it forces a ∼
360 l/s airflow from the back to the front of the camera, where the outlets are located. The filters ensure that very little dust enters the camera. The heater is turned on automatically if the external humidity is higher than 75%, to prevent condensation, or the external temperature is below 5 °C, to minimize temperature gradients across the camera. In operation, the drawer temperature is kept stable at ∼ … °C, with a gradient of ± … °C along the top-bottom direction. Both the absolute temperature and the temperature gradient have no measurable effect on the data and on the trigger efficiency. The internal temperature of the camera is stable for the typical range of external night temperatures, between 0 and 25 °C.

The pneumatic system consists of two cylinders for the back door, and one cylinder and five clamps for the front lid. Compressed air is provided by an industrial compressor located in the camera shelter. A custom-built pneumatics control box implements a simple control logic using air valves. There are two modes of operation: local or remote. In local mode, all remote operations are inhibited and the front lid and back door can be opened manually using switches on the outside of the camera body. In the default remote mode, only the front lid can be opened and closed, using a relay controlled by the DIB. In this mode, a power failure or safety alarm causes the front lid to close automatically. The status of the pneumatic system is monitored by four sensors: a contact sensor for the back door, two end switches for the front lid, and the remote/local switch. An air horn is blown for a few seconds as a warning before any movement happens.

2.2.4. Power

The camera power is supplied via standard industrial 400 V three-phase AC mains. Care was taken to ensure that the load was balanced over all three phases. The ventilation system is directly powered by the mains, while a distribution board provides 230 V single-phase AC to the network switches, the front-end power supply, and the DIB.
The DIB AC power is remotely controlled by a commercial network power switch. The 24 V DC to the drawers is generated by the main front-end power supply, a commercial TDK-Lambda FPS-S1U unit equipped with 3 load-sharing FPS1000-24 modules. It is distributed to the drawers by a custom-built power switch called the Power Distribution Box (PDB), see Fig. 6, right. The PDB monitors the current drawn by each drawer, samples the current and voltage ramps at power-up, and can shut the drawers off autonomously if it detects an over-current. This device also employs the FPGA + ARM computer design found elsewhere in the camera. The power consumption of the whole camera is between 3 and 9 kW, depending on whether the air heater is used or not.
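The PDB's autonomous over-current protection amounts to a per-channel comparison against a configured limit. The sketch below is illustrative only; the 2 A limit and the nominal currents are assumed values, not the real configuration.

```python
def pdb_scan(currents_ma, limit_ma=2000):
    """Toy model of the Power Distribution Box over-current protection:
    given the measured 24 V supply current of each drawer (in mA), return
    the ids of the drawers that would be shut off autonomously.
    The 2 A default limit is an assumed, illustrative threshold."""
    return [drawer for drawer, i in enumerate(currents_ma) if i > limit_ma]

currents = [850] * 60          # 60 drawers drawing a nominal current
currents[17] = 2600            # one drawer develops an over-current
print(pdb_scan(currents))      # [17]
```

Because the decision is taken locally in the PDB firmware, a faulty drawer is isolated even if the camera server or network is unreachable.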
The cabling uses industry solutions such as standard Ethernet twisted-pair cables wherever possible, to ensure ease of procurement and replacement. The data (both readout and slow control) between drawers and backplane is transmitted via TCP/IP over Ethernet by means of standard Cat. 6 cables. The LVDS pulse-amplitude modulated trigger, readout control and 10 MHz clock signals are transmitted on standard Cat. 6A Ethernet cables of equal length (with a tolerance of ±40 mm, corresponding to ± … ps).

Several sensors are deployed inside and outside the camera to monitor door position, temperature, humidity, ambient light and smoke presence. Their signals are fed to a safety interlock system that ensures safe camera operations for both shift crew and hardware. The interlock logic is implemented in the firmware of the DIB FPGA, so it cannot be disabled and is independent of the software implementation.

For the calibration of the gain of the individual PMTs, a device called the single photo-electron (SPE) unit is used. It is located in the shelter, facing the front of the camera. Due to its position, it can be used only when the telescope is fully parked. The SPE unit uses an LED to emit pulses of blue (370 nm) light with pulse frequencies ranging from 38 Hz to 156 kHz. The intensity of the pulses ranges from ∼ … to ∼
200 photo-electrons, and their duration is less than a nanosecond. It was designed at the LPNHE laboratory in Paris for the original H.E.S.S. array. A plastic diffuser in front of the LED ensures complete camera illumination, with a uniformity of about 50% [14]. The pulse frequency and intensity are controlled by the camera server via UDP. An adapter board was added to the SPE unit, allowing it to send its trigger signal to the camera via an optical fibre connection. This additional trigger signal is synchronous to the light pulses and is required for calibration purposes (see Sect. 4.2).

To perform the pixel-wise calibration of the light collection efficiencies of the PMT photo-cathodes and funnels [15], another device called the flat-fielding unit is used [16, 14]. It is located in the centre of the telescope mirror dish and, similarly to the SPE unit, it has an LED that emits short (< … ns) light pulses (with an intensity of ∼
100 p.e. at the PMTs). A holographic diffuser is placed in front of it. The high quality diffuser and the small angle subtended by the camera assure a uniform illumination. The stability of the flat-fielding intensity and its non-uniformity across the camera are within 5% RMS.
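The purpose of such flat-fielding runs can be illustrated with the generic textbook recipe: under uniform illumination, each pixel's deviation from the camera-wide mean response yields a multiplicative correction coefficient. This is a standard illustration, not necessarily the exact H.E.S.S. calibration procedure.

```python
def flatfield_coefficients(pixel_means):
    """Illustrative flat-field calibration: given the mean recorded
    intensity of each pixel under uniform flat-fielding illumination,
    return multiplicative coefficients that equalize the camera response."""
    camera_mean = sum(pixel_means) / len(pixel_means)
    return [camera_mean / m for m in pixel_means]

means = [95.0, 100.0, 105.0]           # p.e. per pixel under the FF unit
coeffs = flatfield_coefficients(means)
corrected = [m * c for m, c in zip(means, coeffs)]
print(corrected)                       # each entry ≈ 100.0
```

The 5% RMS non-uniformity quoted above sets the scale of the residual systematic that such coefficients have to absorb.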
All main subsystems (drawers, DIB, PDB and ventilation system) are connected via 100 Mbit/s Ethernet links to two interconnected 48-port switches located inside the camera. Their uplink to the main camera server is a 10 Gbit/s optical fibre connection. The camera server is a commercial 1-unit rack server with a 4-core Intel Xeon E3-1246v3 processor clocked at 3.5 GHz and 16 GB of DDR3 RAM. It is housed in the computer "farm" (see Fig. 2), an air-conditioned server room inside the main control building. The topology of the internal camera network is star-like: slow-control commands are issued only by the central camera server, which is also the endpoint of the monitoring, logging and event data streams. The devices on the network are independent from one another, and the only access point to the camera is through the camera server. This distributed design improves the flexibility and resilience of the camera: for instance, during data-taking the ARM computer memory (256 MB per module, for a total of 15.6 GB) is used for buffering the data, preventing data loss during event bursts.

The software was written from scratch; it has a distributed, multi-architecture nature, as required by the new camera design. Its main functions are slow control and event acquisition; it also includes text-based and web-based user interfaces, extensive unit tests, integration tests and validation routines needed for the mass production, and a full commissioning and calibration suite able to take runs, analyze them and adjust camera parameters independently of the main H.E.S.S. DAQ.

The full codebase is around 100,000 lines of code, composed of 82% C++, 11% ANSI C and 7% Python. Its implementation was one of the major efforts of the upgrade, and required around 6 person-years by a team composed of two full-time coders and four part-time contributors. This paid off with a 10- to 1,000-fold improvement in speed and reliability over the previous system (see Sect.
5.3 for some performance measurements). To maximize efficiency, extensibility and maintainability of the codebase, the development team made use of well-tested off-the-shelf open source solutions wherever possible. A single source tree was used for both ARM and x86_64 architectures; cross-compilation was handled by the CMake build system. The operating system running on the ARM computers is Yocto embedded Linux [17]. It runs a Linux kernel v3.0 patched by the manufacturer, and a custom-built DMA-enabled driver for communicating with the FPGA. The remote procedure call framework required to control the camera is implemented using the Apache Thrift library. The camera slow control software was interfaced to the already existing H.E.S.S. data acquisition software (DAQ, [18]) via the CORBA protocol. Data transfer is accomplished via the ØMQ [7] smart socket library. The raw data serialization protocol is custom, and optimized for speed; for monitoring and logging, the general-purpose Google Protocol Buffers library [19] is used instead.
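The appeal of a custom, fixed-layout serialization for the event stream is that packing and unpacking reduce to raw memory copies. The following is a hypothetical minimal packet layout, not the actual H.E.S.S. wire format: one event id, one timestamp, and the 16 high-gain plus 16 low-gain integrated charges of a drawer.

```python
import struct

# Hypothetical fixed-layout drawer event packet (NOT the real wire format):
HEADER = struct.Struct("<IQ")    # event id (u32), GPS timestamp in ns (u64)
CHARGES = struct.Struct("<32H")  # 16 HG + 16 LG integrated charges, u16 each

def pack_event(event_id, timestamp_ns, hg, lg):
    return HEADER.pack(event_id, timestamp_ns) + CHARGES.pack(*hg, *lg)

def unpack_event(buf):
    event_id, timestamp_ns = HEADER.unpack_from(buf, 0)
    charges = CHARGES.unpack_from(buf, HEADER.size)
    return event_id, timestamp_ns, list(charges[:16]), list(charges[16:])

hg = [420 + i for i in range(16)]   # pedestal-dominated HG sums
lg = [418] * 16
buf = pack_event(7, 1_579_000_000_000, hg, lg)
assert unpack_event(buf) == (7, 1_579_000_000_000, hg, lg)
print(len(buf))                     # 76 bytes per drawer event
```

A schema-driven format like Protocol Buffers trades some of this raw speed for versioned, self-describing messages, which is why it is the better fit for the lower-rate monitoring and logging streams.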
3. Test facilities and procedures
The development of a new detector generally requires planning and implementing test and verification procedures. The testing needs of the H.E.S.S. I Upgrade cameras were identified early on in the project and grouped into four main stages (prototyping, integration, quality control, commissioning), for which four distinct test facilities were built; they are described in the following.

Figure 7: Left, top: table-top drawer test bench used for the quality control of the mass-produced drawers. Note the daisy-chained pulse generators stacked one on top of the other. Left, bottom: mini-camera. Right: full copy of the camera body used for testing at DESY in Zeuthen. The camera inclination reproduces the parking position in Namibia: this helped training for the deployment. The mechanical structure was fabricated at the LLR laboratory in France.
During the prototyping stage, daily debugging and testing of the prototypes was needed to validate the design of the new electronics and the correct implementation of all required features. These mostly manual, one-time characterization tests required a versatile laboratory test setup. For this purpose, a table-top laboratory test bench was set up, equipped with an oscilloscope (LeCroy DPO 4104), an arbitrary function generator (Agilent 81160A), and several auxiliary instruments such as a variable attenuator and a digital multimeter. Many results shown here, such as the linearity shown in Fig. 13, left, or the bandwidth shown in Fig. 12, right, have been obtained using this setup.

When the mass production of 270 drawers started, each one of them had to undergo more than 300 individual tests to pass the quality control. The tests mainly checked the functionality of the drawer, but also included the calibration of the NECTAr chip and the characterization of readout noise, linearity, saturation and cross-talk. The table-top test bench was thus refitted with four purpose-built, Ethernet-controlled 8-channel pulse generators, allowing the above-mentioned tests to be performed automatically. The generators are built using the same FPGA-ARM computer combination used elsewhere and were seamlessly integrated in the test software. They deliver PMT-like pulses with fast (∼ …) rise times.

The integration of the new front-end electronics with the existing H.E.S.S. I PMTs was a critical step, and it required a setup to test a PMT-equipped drawer with a low level of background illumination, with the possibility of flashing it with Cherenkov-like light pulses. A single-drawer "black box" test bench was built for this purpose. It consists of a simple aluminium box holding a complete drawer. A H.E.S.S. I SPE unit is used to illuminate the PMTs and is attached to the side of the box facing them. The inside of the box is painted black to minimize reflections.
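One of the automated quality-control checks mentioned above, the linearity characterization, can be sketched as a least-squares fit of the readout response against the injected pulse amplitude, flagging channels whose maximum relative deviation from the fitted line exceeds a threshold. Both the check and its pass/fail threshold are simplified stand-ins for the real test suite, not its actual implementation.

```python
def linearity_residuals(amplitudes, responses):
    """Least-squares linear fit of readout response vs. injected pulse
    amplitude; returns the maximum relative deviation from the fit."""
    n = len(amplitudes)
    mx = sum(amplitudes) / n
    my = sum(responses) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(amplitudes, responses))
             / sum((x - mx) ** 2 for x in amplitudes))
    intercept = my - slope * mx
    return max(abs(y - (slope * x + intercept)) / y
               for x, y in zip(amplitudes, responses))

# A perfectly linear channel passes; a channel saturating at the top fails:
amps = [10, 20, 40, 80, 160]
linear = [15.1 * a for a in amps]
saturating = linear[:-1] + [1800.0]
print(linearity_residuals(amps, linear) < 0.01)       # True
print(linearity_residuals(amps, saturating) > 0.05)   # True: flagged
```

Driving such a scan from the Ethernet-controlled pulse generators is what makes running 300+ tests per drawer tractable during mass production.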
The black box was used extensively during the first stages of prototyping, and later on to devise the appropriate calibration routines. After prototyping was over, it was shipped to the H.E.S.S. site, where it was used during the deployment of the cameras, mostly to inspect malfunctioning drawers during the day. It is still used on site occasionally for drawer maintenance and refitting.
The verification tests needed during the integration and commissioning phases called for a fully functional camera. A 4-drawer "mini-camera" was built for this purpose (Fig. 7, bottom left), housed in a 1 m light-tight enclosure, with 64 PMTs, one DIB and a light source (an SPE unit). With it, it was possible to test the integration between front-end and back-end by recreating a minimal 1-sector trigger setup. This allowed testing the analogue trigger board and the other trigger functionalities of the DIB using realistic signals, as close to the field conditions as possible. The mini-camera was also used to develop and test the slow control and event builder software, and to test their integration into the existing H.E.S.S. DAQ control software. After the installation of the first telescope, the mini-camera served as the main commissioning test bench. In fact, it became the primary way of reproducing and troubleshooting in the laboratory the problems found during the first months of field operations.

Later in the project, the camera on-site assembly and integration had to be prepared and rehearsed as thoroughly as possible before the actual deployment. This stage required a full camera, so a copy of the camera body was fabricated at the LLR laboratory and installed at DESY Zeuthen (Fig. 7, right). Due to its size, it could not be housed in a light-tight room, so the PMTs were not used; but all other components of all four cameras were mounted and tested first on this camera body, with the purpose of verifying their functionality and training the technicians involved in the assembly. Thanks to this, the on-site physical assembly of one camera could be finished in less than 5 working days. In 2015, the total down-time of the CT1 telescope, excluding commissioning and fine-tuning, was 18 days.
In 2016, the CT2-4 telescopes had a total down-time of four weeks. Other testing and validation activities performed on the copy camera included checks of the cable mapping; full trigger chain functionality checks; assessment of the event builder performance and its integration with the H.E.S.S. software; configuration of the camera-internal network; evaluation of the capabilities of the ventilation system, slow control software, and power supply; and mechanical integration of the new back door and the pneumatic system.
4. Camera calibration
This section gives an overview of the calibration procedures needed to commission the upgraded cameras, partly updating the information found in [14].
The NECTAr switched capacitor array is arranged in 16 lines of cells, whose baseline values differ from line to line. The baselines are therefore equalized to a common nominal value using the per-line DACs, as shown in Fig. 8.

Figure 8: Equalization of the NECTAr switched capacitor array baseline to a nominal value of 420 ADC counts. The plots show the histogram of NECTAr cell readout values ordered by line before (left: mean 460.4, RMS 25.65 ADC counts) and after (right: mean 420.1, RMS 3.04 ADC counts) calibration. Both offset and RMS are adjusted using the line DACs; the final RMS is ∼3 ADC counts.

The NECTAr chips continuously store signals inside their analogue memory ring buffers until the arrival of an L1 trigger signal. When this happens, the region of interest is located L cells before the last sampled one, where L is the L1 trigger latency in nanoseconds. It is therefore necessary to measure the trigger latency L for each chip and trigger source. This is done by illuminating the whole camera with a high-intensity (∼100 p.e.) reference light pulse (see e.g. Fig. 9, left) while varying the NECTAr register N_d, which controls the start of the region of interest inside the chip buffer, until the sampled pulse signal is located at its center. Since the chip buffer is 1024 cells deep and circular, N_d is the complement of L over the buffer length: N_d = 1024 − L.

The position of the readout window needs to be adjusted individually for two trigger sources having different latencies: the SPE unit trigger and the standard camera level 1 trigger. For the former, the SPE unit itself provides the reference light pulses; for the latter, the flat-fielding unit is used. After a successful adjustment, the two sets of N_d values are stored in a MySQL database.
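The window-placement relation above is a simple modular complement; a minimal illustration follows (the function name is ours, and the 1 ns/cell correspondence is the one implied by the text; only N_d = 1024 − L comes from the paper):

```python
BUFFER_DEPTH = 1024  # NECTAr ring buffer depth in cells (1 cell ~ 1 ns)

def nd_from_latency(latency_ns: int) -> int:
    """Register value N_d placing the region of interest on the triggering pulse,
    as the complement of the measured L1 latency over the circular buffer."""
    return (BUFFER_DEPTH - latency_ns) % BUFFER_DEPTH

print(nd_from_latency(200))  # a 200 ns latency gives N_d = 824
```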
In order to reliably measure the amount of light arriving at the camera, it is necessary to equalize the gain of the electronic chain of each channel. This is done by varying the voltage applied to the PMTs and illuminating the camera with pulses from the SPE unit, at an intensity so low that the average number of photons detected by a PMT for each light pulse is less than 1. The typical charge distribution of these calibration runs can be seen in Fig. 9, right. The charge is integrated over the standard 16 ns ROI. This distribution can be fit by a linear combination of Gaussian functions, as shown in [14], equation 6. This simple fit form is quite robust over a wide range of PMT illuminations (0.1–3 p.e.), but its result is biased: the actual PMT single photo-electron charge distribution is not a Gaussian, but an asymmetric distribution skewed towards lower values. The average single photo-electron amplitude is therefore lower than the amplitude at the peak, as shown in [20]. This discrepancy is corrected later on in the analysis by a factor 0.855 derived from realistic simulation of the H.E.S.S. I PMTs [21]. After this correction, the systematic error in determining the PMT gain has been estimated with simulations to be within a few per cent. The gains are equalized to a target single photo-electron amplitude γ_ADCe of 80 ADC counts (peak value obtained from the above-mentioned fit). This particular value was chosen empirically, based on the reproducibility and robustness of the fit results. The corresponding average PMT gain is of the order of 2 × 10^5. The PMT voltages range from ∼850 V upwards. Whenever a PMT voltage is changed, N_d is readjusted, because the PMT transit time t_p depends on the applied voltage V.

After equalizing the PMT gains, to correctly estimate the amount of Cherenkov light reaching the detector plane, one needs to calibrate the differences in light collection efficiency for each pixel. As mentioned previously, this is achieved by recording light flashes generated by the flat-fielding unit. Assuming the flat-fielding light is homogeneous, one can easily calculate a correction factor C_i from the charge Q_i recorded by each pixel and its average over all camera pixels Q̄: C_i = Q̄/Q_i. This is done for the high and low gain channels separately. Flat-fielding runs are also used to calibrate the time of maximum information of each pixel, under the assumption that the flat-fielding pulses arrive isochronally at their entrance window.

Figure 9: Left: the digitized PMT pulse from the flat-fielding unit as recorded by the readout. The plot shows the distribution of over 2,000 light pulses. The red line is a spline interpolation of the average values in each sample; its FWHM (3.44 ns) is indicated with dashed lines. Right: the distribution of charges from a typical PMT gain calibration run (205,365 entries), fitted to a linear combination of Gaussian functions as described in [14], section 6.2, equation 6. In this particular case, γ_ADCe ("Gain") is 79.3 ADC counts and σ_P is 12.9 ADC counts, corresponding to a noise level of 0.16 p.e.

The flat-fielding is performed several times per observation period (one lunar month), and the coefficients obtained for the period are then averaged and stored in a MySQL database. As for the previous cameras, the distribution of the C_i coefficients is a Gaussian with an RMS of 10%; there is no discernible gradient across the camera. Trials with a new flat-fielding unit are ongoing.

As described in section 2.2.2, the camera trigger has several parameters which require dedicated calibrations. The most important ones are the pixel and sector thresholds P and Q, and the pixel L0 delay d. The L0 stretching l is set to zero to obtain a trigger response similar to that of the old cameras. Central trigger delays also have to be adjusted after the installation of a camera.

Calibrating the pixel threshold P requires finding the relationship between its value as set by the electronics, in DAC counts or mV, and its effective value in photoelectrons. This is done with special calibration runs, for which a variable-intensity pulsed light source is needed; the SPE unit is used for this purpose. The camera is flashed at a fixed frequency f with a varying intensity I of up to ∼
50 p.e. The light pulse intensity in each pixel is measured from the mean of its charge distribution, using a previously determined PMT gain coefficient. While the run is ongoing, P is varied, and the L0 pixel trigger efficiencies are measured as the ratio between the pixel trigger rate and f. The resulting graph is a sigmoid whose mid-point marks the value of P needed to discriminate I photoelectrons (see Fig. 10, left). By repeating this procedure for several intensities, it is possible to determine the offset b and slope m of the linear dependency P(I) = mI + b, and thus the effective value of P in photoelectrons.

The calibration of Q, the sector threshold, is likewise accomplished by means of special flat-fielding runs. With the flat-fielding unit activated at a frequency f, one enables N pixels in every sector, varies Q, and measures the sector trigger efficiency as the ratio between the measured sector trigger rate and f. The mid-point of the resulting sigmoid curve corresponds to the value of the threshold Q for the given N (see Fig. 10, right). Since N is discrete, this sigmoid is much steeper than the one for P. By repeating this procedure for several values of N, it is possible to determine the relationship Q(N) in a similar way as for the pixel threshold. Finally, for the nominal multiplicity of 3, Q is set to a value for which all sectors have 100% efficiency when N = 3 and 0% when N = 2.

The L0 delay calibration is much simpler: using a modified drawer FPGA firmware, it is possible to send the sampled L0 information of all pixels on the data stream. This is done while flashing the camera with the flat-fielding unit, so that all pixels are illuminated at the same time. The L0 delays d are then individually adjusted until the rising edges of all L0 signals are aligned.

After the above-mentioned calibrations, it is necessary to determine the operating point of P.
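The threshold-calibration procedure just described (scan P, fit a sigmoid to the efficiency curve, then fit the mid-points linearly against the illumination) can be sketched on synthetic data. The sigmoid shape, its width, and the constants M_TRUE and B_TRUE are illustrative assumptions, not H.E.S.S. values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative "true" threshold/illumination relation P(I) = m*I + b (DAC counts).
M_TRUE, B_TRUE = 30.0, 2000.0

def efficiency(p_dac, p0, width):
    """L0 trigger efficiency vs. threshold: a sigmoid falling from 1 to 0 around p0."""
    z = np.clip((p_dac - p0) / width, -60.0, 60.0)  # clip to avoid exp overflow
    return 1.0 / (1.0 + np.exp(z))

def midpoint(p_dac, eff):
    """Fit the efficiency curve and return the sigmoid mid-point in DAC counts."""
    guess = [p_dac[np.argmin(np.abs(eff - 0.5))], 10.0]
    popt, _ = curve_fit(efficiency, p_dac, eff, p0=guess)
    return popt[0]

# Scan the threshold for several pulse intensities, as in the calibration runs,
# then fit the mid-points linearly to recover the DAC-to-p.e. conversion.
p_scan = np.linspace(1800.0, 3500.0, 200)
intensities = np.array([5.0, 10.0, 20.0, 40.0])  # p.e.
mids = [midpoint(p_scan, efficiency(p_scan, M_TRUE * i + B_TRUE, 8.0))
        for i in intensities]
m_fit, b_fit = np.polyfit(intensities, mids, 1)
print(f"P(I) = {m_fit:.1f} * I + {b_fit:.1f}")  # recovers ~30.0 and ~2000.0
```

The same machinery applies to the sector threshold Q(N), with the number of enabled pixels N taking the role of the illumination.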
To do that, P is varied while measuring the camera L1 and coincidence trigger rates during a regular observation run. This "threshold scan" is performed under optimal observing conditions, using the whole array, including CT5. It results in "bias curves" for all four telescopes, shown in Fig. 11. In these plots, the steeply falling part of the coincidence rate at low thresholds is dominated by accidental coincidences due to noise. P is conservatively chosen so that coincident events due to noise are less than 1% of all triggers. For regular camera operation, P is 5.5 p.e., which ensures stable operation even at higher levels of NSB light, up to a ∼250 MHz photon rate. Note that at this value of P, the single-telescope L2 trigger rates are already in the noise, with rates well in excess of 1 kHz, which with the old cameras would have caused more than 36% of the events to be lost due to dead-time, a figure that becomes around 1% with the new cameras, thanks to the new NECTAr-based readout.

Figure 10: Efficiency curves from pixel (left) and sector (right) threshold calibration. The values of the pixel and sector thresholds P and Q are given in units of DAC counts (0.76 mV/count). Lower insets show the linear dependency of the sigmoid center on the illumination level (in p.e.) and on the number of actively triggering pixels, for pixel and sector thresholds respectively. The error bars on the efficiencies are calculated following approximation 1 of [24].

At the array level, it is important to measure the signal round-trip time between the central trigger and the camera, in order to adjust the fixed part of the central trigger coincidence delays, which also vary depending on the pointing direction. This is done by sending a trigger signal via optical fiber from the central trigger to the DIB, which then replies to it. The difference between the sending and receiving times is measured at the central trigger with an oscilloscope. On average, the round-trip time was reduced by ∼300 ns with respect to the original cameras.
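The dead-time losses quoted above (more than 36% with the old cameras versus around 1% with the new ones, at rates above 1 kHz) are consistent with a simple paralyzable dead-time model. A minimal numerical check, assuming indicative dead-time values of ∼446 μs for the old readout and ∼7.5 μs for the new one (assumptions for illustration, not the paper's exact figures):

```python
import math

# Paralyzable dead-time model: for Poisson-distributed triggers, an event is
# recorded only if no other event fell within the preceding tau, which happens
# with probability exp(-rate * tau); the lost fraction is the complement.
def lost_fraction(rate_hz, tau_s):
    return 1.0 - math.exp(-rate_hz * tau_s)

# Indicative dead-times (assumed values): old vs. NECTAr-based readout.
for label, tau in [("old", 446e-6), ("new", 7.5e-6)]:
    print(label, f"{lost_fraction(1e3, tau):.1%}")  # at 1 kHz: ~36% vs ~0.7%
```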
5. Performance
We report some of the most significant performance metrics for the new Cherenkov cameras in this section. Some of them were measured in the lab, prior to the installation of the cameras; others in the field in Namibia, during or after commissioning. Efforts are ongoing to fully characterize the performance of the new cameras in terms of gamma-ray sensitivity, using simulations and standard-candle data, and to exploit the several new features they offer. The results will be made available in upcoming publications by the H.E.S.S. collaboration.
Figure 11: Results of a threshold scan (3-pixel majority) for a Galactic source with a typical (medium) level of NSB light. The graphs show trigger rate versus pixel threshold in photo-electrons for CT1-4. Pixel and sector rates are shown alongside camera L1 trigger rates and coincidence trigger rates with any other telescope in the array, including CT5. The coincidence trigger is formed after applying delays dependent on the pointing direction. The green line is the result of a fit of a linear combination of two exponential functions to the coincidence rate data; the red and grey dashed lines show the median rates of pixels and sectors, respectively. The fraction of events lost due to dead-time is shown in per mille as a purple line. The cameras are operated at a nominal threshold of 5.5 p.e., shown as a vertical grey dashed line. Histograms of the rates of all pixels (red) and sectors (grey) at this nominal threshold are shown in the insets.

Figure 12: Left: normalized distributions of the time delay between two consecutive events in the original (blue, 146 Hz average trigger rate) and upgraded (green, 335 Hz) H.E.S.S. I cameras. The log-log plot in the inset shows a zoom of the first millisecond with finer binning. From this plot the dead-time can be estimated as the shortest time between two consecutive events, i.e. the position of the left-most bin with entries: several hundred microseconds before the upgrade, below 10 μs after. Right: measurement of the end-to-end Bode magnitude plot of the readout electronics. The fit function |H(jω)| is the modulus of the transfer function of a second-order underdamped system, H(s) = ω₀²/(s² + 2ζω₀s + ω₀²), with damping ratio ζ and eigenfrequency ω₀. The dashed lines show the −3 dB point of the plot, corresponding to ω/2π ∼ 330 MHz. The measurement was done by injecting a pure sine wave of varying frequency into the system and measuring the relative amplitude of the digitized waveforms.
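The second-order model used in the Fig. 12 fit can be evaluated directly. The sketch below locates the −3 dB point of such a response numerically, using illustrative parameters (f₀ = 400 MHz, ζ = 0.85) rather than the published fit values:

```python
import numpy as np

# |H(jw)| for a second-order underdamped system, as in the Fig. 12 fit:
# H(s) = w0^2 / (s^2 + 2*zeta*w0*s + w0^2), evaluated on the imaginary axis.
def magnitude_db(f_hz, f0_hz, zeta):
    w, w0 = 2 * np.pi * f_hz, 2 * np.pi * f0_hz
    h = w0**2 / ((1j * w) ** 2 + 2 * zeta * w0 * (1j * w) + w0**2)
    return 20 * np.log10(np.abs(h))

# Illustrative parameters (assumed, not the published ones): with f0 = 400 MHz
# and zeta = 0.85 (> 1/sqrt(2), so no resonance peak) the response is flat at
# low frequency and crosses -3 dB a bit above 320 MHz, in the region of the
# ~330 MHz quoted in the caption.
freqs = np.linspace(50e6, 500e6, 4501)
mags = magnitude_db(freqs, 400e6, 0.85)
f_3db = freqs[np.argmin(np.abs(mags + 3.0))]
print(f"-3 dB at ~{f_3db / 1e6:.0f} MHz")
```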
The dead-time of a NECTAr chip when reading the nominal 16-cell region of interest is of the order of 1-2 μs [25, 26]. However, the minimum safe time interval between two events is greater than the nominal dead-time of the NECTAr chip, because the trigger signal generation and the chip readout process on the FPGA take additional time. Moreover, for the version of the NECTAr chip used here, the first 16 readout cells have to be read out and discarded because of stale values, adding further dead-time. For these reasons, the hold-off time t_b is set to 4 μs plus a contribution that scales with n, the total number of NECTAr cells read out (n = 32 for regular observations). This can be appreciated in Fig. 12, left, which shows that the overall dead-time of the upgraded H.E.S.S. I cameras, measured from the distribution of the time difference between two consecutive events during a regular observation run, is below 10 μs.

The nominal analogue bandwidth of the NECTAr chip is 410 MHz [25, 26]. The design of the analogue electronics uses components matching or exceeding that bandwidth. The end-to-end −3 dB bandwidth of the readout is ∼330 MHz, more than four times higher than in the previous camera; see Fig. 12, right. One can see the benefit of such a high bandwidth in the sampled PMT pulse shape shown in the left panel of Fig. 9, where the FWHM is less than 3.5 ns. Such narrow peaks allow a better determination of shower time profiles, which can be used to improve the sensitivity of the analysis [27, 28].

The design of the analogue part of the readout was optimized for low noise. The pedestal noise, i.e. the RMS of the value of a single NECTAr cell in the absence of input signal, is on average only a few ADC counts. With the nominal gain and a charge integration window of 16 samples, the electronic noise of front-end and PMT combined has on average an RMS of ∼16 ADC counts, or ∼0.2 p.e. The channels located in the front of the drawer show larger noise (∼0.25 p.e. RMS) than those in the back of the drawer (∼0.15 p.e. RMS), forming two distinct populations. This is likely due to different noise pick-up along the routes of the traces on the analogue circuit board. It is not a problem in practice because, at the chosen gain, the single photo-electron signal is always at least 3 times higher than the noise.

The linearity and cross-talk of the readout were measured by recording a pre-calibrated, PMT-like pulse of variable intensity. The results, shown in the left panel of Fig. 13, demonstrate that non-linearities in both the high and the low gain amount to less than 2%. The linear range of the high gain is 0.3-200 p.e. and that of the low gain is 30-4,200 p.e.; the total readout dynamic range is thus greater than 80 dB. The ratio between high and low gain is ∼22 between 30 and 200 p.e. (see Fig. 13, left, bottom panel).

The same data were also used to characterize the cross-talk between two channels on the same analogue board. It is measured using the largest PMT-like pulse inside the linear range of each gain, and taking the ratio C(i, r) = Q_r/Q_i between the charge recorded in an empty channel (Q_r) and that measured in the input channel (Q_i). For the high gain channel the cross-talk is typically less than 0.5%, and never larger than 1%; for the low gain it is at most 7% (see Fig. 13, right). Similarly to the electronic noise, the cross-talk is also larger for the front channels (4-7 and 12-15) than for the back ones.
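The cross-talk definition C(i, r) = Q_r/Q_i maps directly onto a small matrix computation. The charge values below are made-up numbers for a three-channel example, not measured data:

```python
import numpy as np

# Cross-talk matrix C(i, r) = Q_r / Q_i: pulse channel i, read the charge
# induced in every other channel r of the same analogue board.
def crosstalk_matrix(charges):
    """charges[i, r]: charge read in channel r while pulsing channel i."""
    q_in = np.diag(charges).astype(float)  # Q_i: charge in the pulsed channel
    c = charges / q_in[:, None]            # C(i, r) = Q_r / Q_i
    np.fill_diagonal(c, 0.0)               # the i == r entries are not cross-talk
    return c

charges = np.array([[2000.0, 8.0, 4.0],
                    [6.0, 2000.0, 10.0],
                    [2.0, 6.0, 2000.0]])
c = crosstalk_matrix(charges)
print(f"max cross-talk: {100 * c.max():.2f}%")  # 0.50% for this fake board
```

Scanning i over all channels of a board and taking the per-pair maxima yields matrices like those shown in Fig. 13, right.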
Figure 13: Left: linearity of a typical readout channel. The top frame shows the recorded charge versus the input pulse intensity, for both high (black circles) and low (white circles) gain. Two linear functions (red lines) fit these data; their fit parameters and χ² values are displayed next to them. The fit residuals are displayed in the middle panel. The bottom panel shows the ratio between the two gains, and a fit to a constant value in the overlapping range 30-200 p.e. Right: maximum cross-talk inside one analogue board, for both high (top) and low (bottom) gain. The cross-talk is computed as the ratio C(i, r) between charges recorded in any pair of channels; the x-axis corresponds to the channel i where the test pulse is applied, the y-axis to the empty readout channel r.

The optimization of the trigger described in the previous section increased the fraction of events triggered stereoscopically with CT5 by more than a factor of two: before the upgrade it was 20%, after the upgrade it is 44%. This is a direct consequence of the reduced dead-time of the camera due to the NECTAr chip, which allows the camera pixel threshold to be lowered substantially.

In the case of observations with a low NSB light intensity in the field of view (i.e. an average pixel photon rate across the camera of less than 100 MHz), the nominal pixel trigger threshold can be lowered by 1 p.e., to 4.5 p.e. Preliminary studies on simulations showed that this simple adjustment results in only marginal improvements in terms of trigger threshold and effective area, which were not deemed sufficient to justify the manpower investment in the production and maintenance of a full new set of simulations and instrument response functions.

The next-neighbour alternative trigger architecture was also tested and simulated, but it was found not to deliver a substantially improved performance with respect to the default 3-majority scheme.
The performance of the pseudo-sum trigger alternative is still under study, due to the higher number of parameters to optimize and the difficulty of implementing a realistic simulation.
The NECTAr chip design, the modularity of the camera, the advanced driver for the FPGA-ARM memory bus exploiting direct memory access (DMA), and the ample software buffering allow for a maximum achievable data acquisition rate with default settings (i.e. readout and storage of integral charge and timing information) of around 10 kHz per telescope. This is about twenty times higher than the usual CT1-4 acquisition rate during regular observations, and was determined by field tests under realistic conditions. The bottleneck is the transfer of data to the H.E.S.S. main DAQ program, because the network bandwidth is only 1 Gb/s. Performance tests on a 10 Gb/s network showed that the cameras could sustain a constant individual data acquisition rate in excess of 50 kHz. The system can sustain short bursts of events at a much higher rate by buffering the data in the RAM of the ARM computer and of the camera server. This can be very important for some physics cases, such as transient events and especially GRBs.

The improved camera readout can also be configured so that full waveforms of up to 48 samples are stored alongside the integrated charge over 16 ns and the timing information. This is expected to be beneficial in the reconstruction of inclined or large-impact-parameter showers with energies above 1 TeV, for which the arrival time dispersion of Cherenkov light at the telescope is greater than 16 ns. This readout mode increases the amount of transmitted data by a factor of ∼17 (each drawer sends 51 data blocks instead of the usual 3). In order to keep up with the usual data acquisition rates (up to 700 Hz) in this mode, the additional waveform data must be stored on the camera server hard disks and transmitted to the H.E.S.S. DAQ off-line on the following day. This mode is only used for selected targets, due to the much greater amount of data it creates. Initial results on the performance of this readout mode are reported in [30].

Regarding the slow control software performance, stress tests performed on the Apache Thrift RPC framework operating in the busy DESY lab network showed that it is capable of sustaining rates of 10,000 single point-to-point request/replies per second for more than 12 hours with no failures. One-to-many requests, such as distributing a command or collecting information from all drawers, are handled on the camera server by spawning one thread per connection. This strategy keeps the command distribution latency low.

The upgrade of the first camera, that of CT1, was carried out in July/August 2015. This was followed by an extended integration and commissioning period of 9 months. During this period, many bugs and problems were ironed out, while the rest of the array (CT2-5) continued scientific observations with minimally degraded performance. This strategy allowed us to compare old and new cameras after the first one was completely commissioned. The other three upgraded cameras were installed in September/October 2016 and underwent a much shorter commissioning phase of four months.

Figure 14: Left: significance sky map of Mkn 421, a well-known TeV gamma-ray emitting blazar, observed during the commissioning of the H.E.S.S. upgrade cameras. Right: example 4-telescope event recorded with the upgraded cameras. Figure adapted from [32].

In January 2017, a bright flare from the well-known blazar Mkn 421 was observed by H.E.S.S.
using the new upgraded cameras, following an alert reported by the HAWC collaboration [31]. About 2 hours of data were collected during this observation. The preliminary processing of the data using two independent analysis pipelines revealed a clear detection with a significance of 16σ. This was the first detection of a TeV gamma-ray source using the NECTAr chip technology [32] (see Fig. 14 for a significance sky map of this detection and an example event). The upgraded cameras have been employed in routine observations since January 2017, and have since achieved an average weather-corrected data-taking efficiency of 98.5%.
6. Conclusion
The four upgraded cameras of the 12-meter H.E.S.S. Cherenkov telescopes were successfully deployed on site in 2015 and 2016. They are equipped with a new NECTAr-based readout technology that reduces the dead-time by a factor of 60, from several hundred microseconds in the previous system to below 10 μs in the new cameras. Furthermore, the new design allows for a more robust, versatile and efficient operation and maintenance, leading to improved performance and reliability. All components of the cameras were tested, integrated and calibrated, and their performance was validated in the field. The camera configuration was optimized, resulting in more than twice the amount of stereoscopically recorded showers by the H.E.S.S. array.

The achieved average data-taking efficiency of the cameras is 98.5%. No major problems due to ageing were found during an ordinary maintenance campaign that took place in early 2019. Thus, all the primary goals of the project have been achieved.

In addition, the new cameras offer the possibility of using more sophisticated and flexible trigger and readout algorithms. The most promising of these new possibilities is the recording of fully sampled waveforms, which is being explored intensively in current observation campaigns and will be reported on in the future. The new cameras are foreseen to be in use in the H.E.S.S. experiment for its remaining lifetime.

References

[1] F. Aharonian, et al., Observations of the Crab Nebula with H.E.S.S., Astronomy and Astrophysics 457 (2006) 899.

[2] J. Bolmont, et al., The camera of the fifth H.E.S.S. telescope. Part I: System description, Nuclear Instruments and Methods in Physics Research Section A 761 (2014) 46-57.

[3] E. Delagnes, Y. Degerli, P. Goret, P. Nayman, F. Toussenel, P.
Vincent, SAM: A new GHz sampling ASIC for the H.E.S.S.-II front-end electronics, Nuclear Instruments and Methods in Physics Research Section A 567 (2006) 21-26.

[4] P. Vincent, et al., Performance of the H.E.S.S. cameras, in: Proceedings, 28th International Cosmic Ray Conference (ICRC 2003), Tsukuba, Japan, 31 Jul - 7 Aug 2003, volume 5, Universal Academy Press, pp. 2887-2890.

[5] C. L. Naumann, et al., New electronics for the Cherenkov Telescope Array (NECTAr), Nuclear Instruments and Methods in Physics Research Section A 695 (2012) 44.

[6] M. de Naurois, L. Rolland, A high performance likelihood reconstruction of γ-rays for imaging atmospheric Cherenkov telescopes, Astroparticle Physics 32 (2009) 231-252.

[7] Code Connected - zeromq, https://zeromq.org/, 2019. Accessed: 2019-07-02.

[8] Apache Thrift - Home, https://thrift.apache.org/, 2019. Accessed: 2019-07-02.

[9] L. Rolland, Calibration of the cameras of the H.E.S.S. gamma-ray experiment and observations of the Galactic Centre above 100 GeV, Ph.D. thesis, Université Pierre et Marie Curie - Paris VI, 2005.

[10] S. Funk, et al., The trigger system of the H.E.S.S. telescope array, Astroparticle Physics 22 (2004) 285-296.

[11] R. Wischnewski, U. Schwanke, M. Shayduk, Performance study of a digital camera trigger for CTA, in: Proceedings, 32nd International Cosmic Ray Conference (ICRC 2011), Beijing, China, August 11-18, 2011, volume 9, p. 63.

[12] N. Bulian, et al., Characteristics of the multi-telescope coincidence trigger of the HEGRA IACT system, Astroparticle Physics 8 (1998) 223-233.

[13] M. Rissi, N. Otte, T. Schweizer, M. Shayduk, A New Sum Trigger to Provide a Lower Energy Threshold for the MAGIC Telescope, IEEE Transactions on Nuclear Science 56 (2009) 3840-3843.

[14] F. Aharonian, et al., Calibration of cameras of the H.E.S.S. detector, Astroparticle Physics 22 (2004) 109-125.

[15] K.
Bernlöhr, et al., The optical system of the H.E.S.S. imaging atmospheric Cherenkov telescopes. Part I: layout and components of the system, Astroparticle Physics 20 (2003) 111-128.

[16] K.-M. Aye, et al., A Novel Alternative to UV-Lasers Used in Flat-Fielding VHE γ-Ray Telescopes, in: Proceedings, 28th International Cosmic Ray Conference (ICRC 2003), Tsukuba, Japan, 31 Jul - 7 Aug 2003, volume 5, p. 2975.

[17] Yocto Project | Open Source embedded Linux build system, package metadata and SDK generator, 2019. Accessed: 2019-07-02.

[18] A. Balzer, et al., The H.E.S.S. central data acquisition system, Astroparticle Physics 54 (2014) 67.

[19] Protocol Buffers, https://developers.google.com/protocol-buffers, 2019. Accessed: 2019-07-02.

[20] K. Bernlöhr, Simulation of imaging atmospheric Cherenkov telescopes with CORSIKA and sim_telarray, Astroparticle Physics 30 (2008) 149-158.

[21] K. Bernlöhr, CORSIKA and sim_hessarray MC simulation of the imaging atmospheric Cherenkov technique for the H.E.S.S. experiment, H.E.S.S. Internal Note, 2002. 02/04.

[22] R. Saldanha, L. Grandi, Y. Guardincerri, T. Wester, Model Independent Approach to the Single Photoelectron Calibration of Photomultiplier Tubes, Nuclear Instruments and Methods in Physics Research Section A 863 (2017) 35-46.

[23] M. Takahashi, et al., A technique for estimating the absolute gain of a photomultiplier tube, Nuclear Instruments and Methods in Physics Research Section A 894 (2018) 1-7.

[24] D. Casadei, Estimating the selection efficiency, Journal of Instrumentation 7 (2012) P08021.

[25] E. Delagnes, et al., NECTAr0, a new high speed digitizer ASIC for the Cherenkov Telescope Array, in: 2011 IEEE Nuclear Science Symposium Conference Record, pp. 1457-1462.

[26] E. Delagnes, Specifications of the NECTAr0 Chip, Internal Document, Irfu CEA Saclay, 2016. V.4 2016-01-05.

[27] E.
Aliu, et al., Improving the performance of the single-dish Cherenkov telescope MAGIC through the use of signal timing, Astroparticle Physics 30 (2009) 293-305.

[28] V. Stamatescu, et al., Timing analysis techniques at large core distances for multi-TeV gamma ray astronomy, Astroparticle Physics 34 (2011) 886-896.

[29] K. P. Shiningayamwe, Investigating electronic pedestals of the analogue front-end boards of the upgraded High Energy Stereoscopic System (H.E.S.S. I) cameras, Thesis, University of Namibia, 2017.

[30] J. Zorn, Sensitivity Improvements of Very-High-Energy Gamma-Ray Detection with the Upgraded H.E.S.S. I Cameras using Full Waveform Processing, in: Proceedings, 36th International Cosmic Ray Conference (ICRC 2019), Madison, WI, U.S.A., 24 Jul - 1 Aug 2019, volume ICRC2019, Proceedings of Science, p. 834.

[31] I. Martinez, J. Wood, R. Lauer, HAWC detection of further increase in TeV gamma-ray flux from Mrk 421, The Astronomer's Telegram 9946 (2017).

[32] H.E.S.S. Collaboration - Source of the Month - March 2017, 2017.