Design and Implementation of Detector Control System for Muon Forward Tracker at ALICE
K. Yamakawa,a A. Augustinus,b G. Batigne,c P. Chochula,b M. Oya,a S. Panebianco,d O. Pinazza,b,e K. Shigaki,a R. Tieulent,f and Y. Yamaguchia

Prepared for submission to JINST

a Hiroshima University, Hiroshima, Japan
b European Organization for Nuclear Research (CERN), Geneva, Switzerland
c SUBATECH, IMT Atlantique, Université de Nantes, CNRS-IN2P3, Nantes, France
d Université Paris-Saclay, Centre d'Etudes de Saclay (CEA), IRFU, Département de Physique Nucléaire (DPhN), Saclay, France
e INFN, Sezione di Bologna, Bologna, Italy
f Université de Lyon, Université Lyon 1, CNRS/IN2P3, IPN-Lyon, Villeurbanne, Lyon, France
E-mail: [email protected]
Abstract: ALICE is the experiment at the CERN LHC devoted to the study of heavy-ion collisions. An upgrade program of the ALICE detector is ongoing toward the LHC Run 3 starting in 2022, together with the upgrade of the data acquisition system and the detector control system (DCS). One of the main projects of the current ALICE upgrade program is the addition of the muon forward tracker (MFT), a new silicon pixel detector located at forward rapidity. In this paper, we describe the DCS of the MFT detector, which is entirely controlled via a finite state machine in a hierarchical system.

Keywords: Detector control systems (detector and experiment monitoring and slow-control systems, architecture, hardware, algorithms, databases), Control and monitor systems online, Particle tracking detectors (Solid-state detectors), Heavy-ion detectors

Introduction

A Large Ion Collider Experiment (ALICE) [1] is the experiment which focuses on the heavy-ion program at the CERN Large Hadron Collider (LHC) [2]. Understanding the properties of the quark-gluon plasma (QGP), an exotic state of hadronic matter described by quantum chromodynamics, is the primary aim of ALICE.
ALICE has performed successfully during LHC Runs 1 (2009–2013) and 2 (2015–2018), harvesting a multitude of results. To take advantage of the increased luminosity of the LHC during Runs 3 (2022–2024) and 4 (2027–2030) for high precision measurements of experimental observables, and to extend its scientific goals, the ALICE collaboration defined a complete upgrade strategy [3]. The aim of the ALICE upgrade is to have the capability of recording all Pb-Pb interactions in a continuous data taking mode and to enhance the track reconstruction performance [3–8]. The implementation of this upgrade program includes, in particular, the readout and trigger for the new front-end electronics [4], a new integrated online-offline computing system (O²) [8], and the addition of the muon forward tracker (MFT) [6, 9], a silicon pixel tracker at forward rapidity.

Heavy quarks, the charm (c) and bottom (b) quarks, are known as good probes to investigate the characteristics of the QGP. The muon spectrometer of ALICE [10, 11] has performed successful measurements of the J/ψ production rate in the forward rapidity region during Runs 1 and 2 in various collision systems, from pp and p-Pb to Pb-Pb [2]. The separation of J/ψ from B hadron decays from prompt J/ψ has, however, been impossible due to the presence of a hadron absorber of 60 radiation lengths placed between the interaction point and the muon tracking system. It induces large multiple scattering and limits the spatial resolution around the interaction point, preventing the determination of the origin of muons. In order to overcome this limitation, a new silicon pixel detector, the MFT, is installed between the interaction point and the hadron absorber. It covers the forward pseudo-rapidity range of −3.6 < η < −2.5 to match most of the muon spectrometer acceptance. The pointing accuracy of the muon production point is consequently improved by matching the tracks measured by the MFT and by the muon spectrometer.
The CMOS monolithic active pixel sensor (CMOS-MAPS) technology has been chosen for the sensors. The adopted sensor is developed for both the new ALICE inner tracking system (ITS) and the MFT, and is called the ALICE pixel detector (ALPIDE) [12, 13]. The dimension of ALPIDE is 1.5 × 3.0 cm² with a pixel pitch of 27 × 29 µm². It has a spatial resolution of about 5 µm and a charge integration time of 30 µs. A total of 936 ALPIDE chips are used for the MFT, covering about 0.4 m².

Figure 1 shows the 3-dimensional view of the full MFT. The MFT is separated into two half cones, called the top and the bottom MFTs. A half cone is composed of five half disks, numbered from 0 to 4 (e.g. half disk 0), each with two detection half planes. Each detection half plane is split into four zones, each of which is powered in common and read out together in order to reduce the number of connection lines. Figure 2 shows the definition of the zones of half disk 4 as an example. A zone corresponds to a set of three to five sensor ladders connected to a single readout unit board (RU). Each ladder, housing between two and five ALPIDE chips, is a flexible printed circuit board (PCB) on which the sensors are wire-connected. The ladders are glued on the support planes and connected to another PCB enabling the power and data connection. A total of 280 ladders composes the full MFT.

Continuous readout of raw data without any trigger and subsequent simultaneous data processing are a challenge of the ALICE upgrade program. A typical data volume produced by the ALICE sub-detectors will be 3.4 TB/s in Pb-Pb collisions, processed by the O² system in which the online and offline systems are merged [8]. The data links between the acquisition system and the front-end electronics (FEEs) use the gigabit transceiver (GBT) [14] technology developed at CERN. In the O² farm, the common readout unit (CRU) on the first level processor (FLP) splits the raw data from the sub-detectors into physics data and slow control data.
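The readout granularity described above can be illustrated with a short enumeration (a Python sketch; the half-plane and zone naming is hypothetical, only the counts come from the text):

```python
# Enumerate the MFT readout hierarchy: 2 half cones x 5 half disks
# x 2 detection half planes x 4 zones, with one readout unit (RU) per zone.
half_cones = ["top", "bottom"]
half_disks = range(5)              # half disks 0..4
half_planes = ["front", "back"]    # two detection half planes per disk (hypothetical naming)
zones_per_plane = 4

zones = [
    (cone, disk, plane, zone)
    for cone in half_cones
    for disk in half_disks
    for plane in half_planes
    for zone in range(zones_per_plane)
]

# One RU per zone gives the 80 RUs quoted later in the text.
assert len(zones) == 80
```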
Figure 1. 3-dimensional view of the MFT detector.

Figure 2. Definition of zones of half disk 4.

Figure 3 shows the diagram of the O² raw data stream. The slow control data for each event are one of the ingredients for the data reduction on the FLP. The collected physics data on the FLP are transferred to the event processor nodes (EPNs) with a data rate of 500 GB/s. The EPNs process the raw data online while performing reconstruction tasks such as clustering and tracking. The output of these tasks is transmitted to the data storage at a rate of 90 GB/s.
Figure 3. Diagram of the O² data stream.

Detector Control System

The detector control system (DCS) of ALICE is upgraded to follow the O² strategy. It is based on the component framework, guidelines, and configurations of a framework named the joint control project (JCOP) [15], which is developed at CERN and provides software tools for DCS development on WinCC Open Architecture (WinCC OA) [16]. The control of the FEE employs the GBT slow control adapter (GBT-SCA) [17], an application-specific integrated circuit (ASIC) designed for slow control in the framework of the GBT. A specific software framework is developed in order to interface the detector's FEE from the graphical user interface (GUI) based on WinCC OA. The framework includes two major elements: the ALICE low-level front-end (ALF) running on the FLP and the front-end device (FRED) [18], as shown in Fig. 4. The ALF provides low-level access to the CRU links, while the FRED translates high-level instructions from WinCC OA into low-level commands consisting of sequences of hexadecimal words to operate the GBT-SCA and access the FEE. Slow control data, including detector and environmental conditions, are carried in the same packets with the physics data in the raw data stream from the FEE. The CRU splits the raw data into physics and slow control data. The slow control data come up to WinCC OA via the ALF and the FRED, while the physics data are transferred to the O² farm as described above. The communication protocol between WinCC OA, the FRED, and the ALF is based on the CERN distributed information management system (DIM) [19].

Figure 4. Schematics of the DCS data stream. Control commands are transmitted from WinCC OA to the MFT through the FRED and the FLP. The monitoring data are collected via the same links in the other direction, as shown with the blue arrows. A part of the DCS data, namely the ALPIDE temperatures, shares the same packets with the physics data from the MFT to the CRU.
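The role of the FRED as a translation layer can be pictured with a minimal sketch (Python; the instruction names and hexadecimal word sequences are invented for illustration and do not reproduce the actual GBT-SCA protocol):

```python
# Hypothetical translation of a high-level DCS instruction into a
# sequence of low-level hexadecimal words, in the spirit of the FRED.
HIGH_LEVEL_SEQUENCES = {
    # instruction -> ordered list of 32-bit words (made-up values)
    "READ_TEMPERATURE": [0x00A1_0001, 0x00A1_0002],
    "ENABLE_CHIP":      [0x00B2_0001],
}

def translate(instruction: str) -> list[str]:
    """Translate one high-level instruction into hexadecimal command words."""
    words = HIGH_LEVEL_SEQUENCES[instruction]
    return [f"0x{w:08X}" for w in words]

print(translate("READ_TEMPERATURE"))  # ['0x00A10001', '0x00A10002']
```

In the real system the resulting word sequence would be shipped over DIM to the ALF, which drives the CRU link; here the mapping table simply stands in for that machinery.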
The physics data are sent to the O² farm after being split from the DCS data at the CRU, while the DCS data come to WinCC OA via Ethernet, as shown with the red arrows.

The MFT DCS controls and monitors three subsystems, as shown in Fig. 5: the low voltage power supplies, the detector and readout modules, and the cooling system.

Figure 5. Hardware structure of the MFT DCS.
The ALPIDE chip is powered by two voltage lines at 1.8 V, for the analog and digital parts. In addition, a reverse bias voltage of up to −3 V can be applied to the ALPIDE sensor to increase its efficiency and cope with the performance degradation induced by the radiation dose. A dedicated board, known as the power supply unit (PSU), is installed inside the detector, between half disks 3 and 4, to provide local voltage generation in order to avoid large voltage drops in the power cables from the power supplies to the detector, located about 40 m apart. The PSU board houses DC-DC converters providing the 1.8 V outputs to the detector as well as the GBT-SCA chips to control and monitor the PSU via the CRU.

Power supply modules manufactured by CAEN [20] are used to supply the low voltage (LV). Figure 6 shows the structure of the LV system of the MFT. WinCC OA connects to an SY4527 mainframe using open platform communications (OPC) via Ethernet. Two A1676A branch controllers in the SY4527 communicate with two power supply systems, one powering the PSUs and the other powering the FEE cards, named the readout units (RUs). These systems are based on the CAEN embedded assembly system (EASY), which is tolerant to radiation and magnetic fields. Twelve A3009 power supply boards and two A3006 boards are installed in four EASY3000 crates, which are powered by four A3486 modules converting 3-phase AC to 48 V DC.
The RU, an FPGA-based system, is employed as the FEE card of the MFT to read raw data from the ALPIDE chips and send configurations to them. One RU board reads out the raw data from one zone of a detection half plane. A total of 80 RUs composes the MFT FEE system. The FPGA on the RU is used for the configuration and operation of the ALPIDE chips. The RU and the CRU communicate through GBT links.

Figure 6. Structure of the power supply system.
A leakless water system is used to ensure proper cooling of the detector and the RUs. The structure of the system is shown in Fig. 7. The nominal pressure of the cooling water is set at 0.3 bar, which is below atmospheric pressure, to prevent water leaks. The temperature ranges of the inlet water to the detector and to the RUs are 18–20 °C and 18–23 °C, respectively.
Figure 7. Structure of the water cooling system.
An air ventilation system provides a dry and cool airflow which guarantees the temperature uniformity and humidity control inside the detector volume. The nominal values of temperature and humidity are 20 °C and 35%, respectively.
A logical tree structure, which describes all devices in operation and monitoring, is designed based on the hardware structure of the MFT DCS and configured using the JCOP device models. On the one hand, all hardware devices are referred to as elements of the tree, and control commands are published from the elements to the devices. On the other hand, the hardware tree on WinCC OA describes how all hardware devices are wired up. Figure 8 shows the relation between the logical tree and the hardware tree.

The logical representation of the detector is built by using aliases on the JCOP device instances. An alias name is assigned to each device used in operation, and can be reassigned to point to a different hardware channel, for example when the originally corresponding low voltage power supply channel has a problem.
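The alias mechanism can be mimicked with a simple mapping (a Python sketch; the alias and channel names are hypothetical, not the real MFT naming scheme):

```python
# Logical aliases point to hardware channels; on a channel failure the
# alias is reassigned without touching the rest of the logical tree.
aliases = {
    "MFT/H0/D0/Z0/LV_analog": "CAEN/board00/channel000",
}

def reassign(alias: str, new_channel: str) -> None:
    """Repoint a logical alias to a different hardware channel."""
    aliases[alias] = new_channel

# The failing LV channel is swapped for a spare; the logical name is unchanged,
# so the FSM and the operator panels keep working without modification.
reassign("MFT/H0/D0/Z0/LV_analog", "CAEN/board00/channel001")
```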
Figure 8. Relation between the logical tree and the hardware tree of the MFT DCS. Names written in italics are aliases of the hardware devices for the logical tree.
The finite state machine (FSM) of the MFT detector is a hierarchical control application, based on its tree-like logical representation and on state diagrams implemented in the JCOP framework on WinCC OA. Figure 9 shows the FSM structure of the MFT DCS. A control unit is a conceptual part of the detector system, while a device unit corresponds to a real device controlled by the FSM. The FSM allows modeling the behavior of the elements with state diagrams.

The FSM as a software artifact propagates commands across the devices and sums up the overall state of the detector. State diagrams are defined for all the FSM nodes. Figure 10 shows the diagram for the top node, named MFT in Fig. 9, as an example. A series of actions permits switching on/off and configuring the different parts of the detector system according to a given sequence. Figure 11 is the synchronization table which shows how the top node state is defined based on the states of the two daughter nodes, corresponding to the related subsystems. The only state which allows physics data acquisition is the READY state, where all subsystems are on and configured. The other states include intermediate states used to bring the detector up or down between the OFF and READY states, and secured states for special conditions of the LHC beam (e.g. magnets ramping, beam tuning, beam injection, and beam dumping) or of the ALICE experiment (e.g. changing magnet conditions). The state of the top node moves from any state to the ERROR state whenever a problem arises.

Figure 9. FSM structure of the MFT DCS. It is designed based on the tree-like logical representation.
Figure 10. State diagram of the MFT top node. The MFT detector is available for physics data taking only in the green READY state. The states with blue boxes indicate that the status is okay, but not ready for data taking.
Figure 11. Synchronization table of the MFT top node.

Operation
FSM commands are transferred down the hierarchy and converted into slow control commands, which are in turn transferred to the ALPIDE chips, the PSU, and the RU through DIM. The commands are also sent from the FSM to the CAEN power supply modules via the EASY bus. The upper nodes of the FSM send operational commands to their daughter nodes; the lowest control units then transmit the operational commands to the end devices, which the device units correspond to. In the other direction, condition data from the ALPIDE chips and the RU pass through the DCS data line to the FSM.
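The downward command propagation and the upward state summarizing can be pictured with a minimal sketch (Python; the node names, commands, and the state-combination rule are simplified illustrations of the FSM hierarchy and of the synchronization table, not the actual JCOP implementation):

```python
# Minimal hierarchical FSM: commands propagate down the tree, and each
# control unit derives its own state from the states of its children.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "OFF"

    def send(self, command):
        """Propagate a command down to the leaves (device units)."""
        if self.children:
            for child in self.children:
                child.send(command)
        else:
            # A device unit would forward the command to the real hardware here.
            self.state = {"GO_READY": "READY", "GO_OFF": "OFF"}.get(command, "ERROR")

    def summarize(self):
        """Combine daughter states, as in the synchronization table."""
        if not self.children:
            return self.state
        states = {child.summarize() for child in self.children}
        if len(states) == 1:          # all daughters agree
            return states.pop()
        if "ERROR" in states:         # any problem propagates to the top
            return "ERROR"
        return "MIXED"                # an intermediate state while daughters disagree

mft = Node("MFT", [Node("PSU"), Node("RU")])
mft.send("GO_READY")
assert mft.summarize() == "READY"     # all subsystems on and configured
```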
Software Interlock
The FSM implements an automatic software interlock mechanism. The FSM monitors the temperatures of the GBT-SCA, the half planes, the RU, the FPGA on the RU, the mezzanine boards of the PSU, and the ALPIDE chips. When one of the monitored temperatures exceeds a given threshold, the FSM turns off the corresponding channel of the CAEN LV module. The flow and humidity of the cooling air and the temperatures of the half disks are also monitored by the FSM. All the LV power supply channels of the entire MFT detector are turned off if the FSM detects an abnormal condition of the cooling air or an excessive temperature of a half disk.

The FSM also takes care of the communication control to the CAEN mainframe. It refreshes the OPC server for the CAEN modules when the communication between the CAEN mainframe and WinCC OA is lost. A communication loss between WinCC OA and the FRED and/or the FLP is another case that triggers the software interlock.
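The per-channel part of this interlock logic reduces to a threshold scan (a Python sketch; the thresholds and sensor names are illustrative assumptions, not the real MFT settings):

```python
# If any monitored temperature exceeds its threshold, switch off the
# corresponding LV channel. Thresholds are illustrative values only.
THRESHOLDS_C = {"RU": 60.0, "FPGA": 80.0, "ALPIDE": 40.0}

def check_interlock(readings: dict, switch_off) -> list:
    """Return the sensors that tripped, acting on the related LV channel."""
    tripped = []
    for sensor, temperature in readings.items():
        kind = sensor.split("/")[0]                    # e.g. "RU/03" -> "RU"
        if temperature > THRESHOLDS_C.get(kind, float("inf")):
            switch_off(sensor)                         # e.g. turn off the CAEN LV channel
            tripped.append(sensor)
    return tripped

off = []
tripped = check_interlock({"RU/03": 65.2, "ALPIDE/h0d0z1": 32.0}, off.append)
assert tripped == ["RU/03"]       # only the overheating RU channel is cut
```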
The detector safety system (DSS) is the hardwired interlock system based on programmable logic controllers (PLCs), used commonly by all sub-detectors in ALICE. Actions on the DSS are implemented detector by detector. Figure 12 shows the general layout of the MFT DSS. It turns off all channels of the CAEN power supply modules for the detector if a crucial problem occurs.

In this multi-layered scheme combining the DSS and the preventive software interlock by the DCS FSM, minor issues, e.g. an excessive temperature in the MFT or a communication loss with the main power supply, are normally handled by raising an alarm shown in the FSM. The DSS represents the ultimate safety system in these cases, being operational even if the communication between WinCC OA and the CAEN mainframes is lost. Any issue in the cooling system is handled directly by the DSS. The sensors on the cooling system hence give a hardwired trigger to the DSS in case the cooling does not work correctly.
Figure 12. MFT Detector Safety System (DSS) layout.
Implementation and Tests with Detector Hardware
A quality assurance system for the ladders has been set up during the ladder production phase of the MFT project. This system also serves as a basis to develop the first elements of the global MFT DCS on WinCC OA. The first step of the MFT ladder qualification is to power it; this test is named the smoke test. The voltages provided to the analog and digital parts of the chips are ramped up from 0 V to the nominal value of 1.8 V in steps of 0.1 V. The current consumption is recorded, and any abnormal power consumption triggers a voltage shutdown. The smoke test detects defects of the ladders, both with and without the back-bias voltage.
Setup
Figure 13 shows the setup of the smoke test bench. It consists of a WinCC OA instance including the JCOP framework, the CAEN power supply system, and intermediate boards to adapt the cable connection to the ladder. An A1516B low voltage power supply board is installed in an SY4527 crate. It supplies voltages in the required ranges of 0.0–1.8 V for the analog and digital lines and of −3.0–0.0 V for the back-bias. An application running under WinCC OA controls the power supply system and records the test results. A GUI has been designed and implemented on WinCC OA. Operators can set the demanded values of the output voltages, the number of steps between 0.0 V and the demanded values, and the ramping speed for each of the steps. The test results are recorded in comma separated values (CSV) files, and screenshots of the GUI are saved in portable network graphics (PNG) files as logs. A safety system is implemented to protect the ladder: the test is stopped when the current exceeds a given threshold or if the GUI is closed.
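The ramping procedure with its safety cut can be sketched as follows (Python; `set_voltage` and `read_current` stand in for the CAEN OPC interface, and the 40 mA limit is the analog/digital threshold quoted later for the smoke test):

```python
# Smoke-test ramp: raise the supply voltage in fixed steps while
# monitoring the current; any excess triggers a safety shutdown.
def ramp(set_voltage, read_current, target_v=1.8, step_v=0.1, max_current_a=0.040):
    """Ramp from 0 V to target_v; return (ok, log of (voltage, current))."""
    log = []
    v = 0.0
    while v < target_v - 1e-9:
        v = round(v + step_v, 3)       # next step, rounded to avoid float drift
        set_voltage(v)
        current = read_current()
        log.append((v, current))
        if current > max_current_a:
            set_voltage(0.0)           # safety shutdown on abnormal consumption
            return False, log
    return True, log

# A healthy ladder drawing a constant 20 mA passes all 18 steps of 0.1 V.
ok, log = ramp(lambda v: None, lambda: 0.020)
assert ok and len(log) == 18
```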
Figure 13. Smoke test bench setup.
Achievements
The smoke test needs to be performed for about 500 ladders, including spares. Figure 14 shows the test results of one of the ladders, numbered 2010. A test is performed first without the back-bias voltage. The current consumption of the analog and digital lines reaches a constant value at about 20 mA each, which is below the maximum allowed value of 40 mA. The negative back-bias voltage is then slowly ramped from 0.0 to −3.0 V, keeping the analog and digital voltages at their nominal values. The power consumption of the analog and digital lines should stay constant and the back-bias current should not exceed 20 mA. In this example, the smoke test is successful and the ladder is made available for the next steps of the qualification procedure before being used for the MFT detector assembly.

Figure 14. Result of the smoke test with back-bias voltage.
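The acceptance criteria quoted above reduce to a simple check on the recorded currents (a Python sketch; the function name and list-based interface are assumptions for illustration):

```python
# Pass/fail thresholds from the text: analog and digital currents must
# stay below 40 mA, and the back-bias current below 20 mA.
def smoke_test_passed(analog_ma, digital_ma, backbias_ma) -> bool:
    """Apply the acceptance thresholds to the recorded currents (in mA)."""
    return (max(analog_ma) < 40.0
            and max(digital_ma) < 40.0
            and max(backbias_ma) < 20.0)

# Currents like those of ladder 2010: about 20 mA on each supply line
# and a small back-bias current, so the ladder is accepted.
assert smoke_test_passed([20.1, 20.3], [19.8, 20.0], [0.5, 0.6])
```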
The MFT detector is commissioned at CERN, following the production and assembly of its components. The DCS is integrated into the detector system in this commissioning phase in order to test all its functionalities. This stage is called the surface commissioning, i.e. before the MFT installation in the cavern where the ALICE detector is located. Extensive readout tests are conducted during this stage. The MFT DCS is crucial in this phase to ensure safe and easy operation.

The MFT DCS consists of three separate subsystems, as described in Sec. 2. The entire MFT detector system, except for the cooling part, was assembled during the commissioning. A WinCC OA project is dedicated to each of the subsystems, and they are integrated by the main instance, known as the MFT DCS, through Ethernet. The relevant data items in terms of the FSM nodes are put together in a single DCS GUI panel. The panels have been refined to allow more user-friendly operation throughout the MFT surface commissioning stage.

Separate DCS panels are developed for standard users and for experts. The DCS panels for the control of a single RU for standard users and for experts are shown as examples in Figs. 15 and 16, respectively. Standard users are basically restricted to monitoring the RU conditions. Only experts are allowed to turn the power of the RU on and off and to set the voltage values using the advanced setting functions. The FPGA configuration is easily accessible via a dedicated button. The temperature values of the RU board and of the chips on it are used to activate a software interlock if any temperature value exceeds the threshold. The GUI is organized in a tree-like structure, and the panels at a higher hierarchical stage can control all RUs simultaneously.

Figure 15. DCS panel of a single RU control for standard users.
Figure 16. DCS panel of a single RU control for experts.
Summary

The MFT is a new silicon pixel detector installed in the ALICE experiment for Runs 3 and 4 of the LHC at CERN, starting in 2022, in order to improve the muon tracking capability at forward rapidity. The MFT DCS has been developed within the frameworks of CERN and of the ALICE DCS. It controls and monitors the low voltage power supplies, the detector, the readout modules, and the cooling system of the MFT. The FSM is a key element both for the operation of the detector and for the software interlock. In addition, the DSS serves as the ultimate hardwired interlock. The DCS has been implemented, used, and tested in the quality assurance of the MFT ladders; it worked flawlessly in the smoke test of about 500 ladders. It is also integrated in the surface commissioning of the MFT, where all its functionalities are tested. The MFT DCS system is operational, constantly improving and adding needed functionalities, and ready for the real detector operation.

Interlock scenarios to be implemented in the FSM and/or the DSS will be defined for specific alert cases. The DCS with full functionalities will be deployed with the ALICE detector installed in the cavern for the physics runs.
Acknowledgments
We appreciate the ALICE MFT collaboration for their support in the design and implementation of the detector control system. We are grateful to the DCS and O² teams of the ALICE collaboration for their technical help, and to the entire ALICE collaboration. This work was in part supported by JSPS KAKENHI grant numbers JP15H03664 and JP18H05401, and by the Toshiko Yuasa France Japan Particle Physics Laboratory (TYL-FJPPL) project HAD_02.

References

[1] ALICE collaboration, The ALICE experiment at the CERN LHC, JINST 3 (2008) S08002.
[2] L. Evans and P. Bryant (eds.), LHC Machine, JINST 3 (2008) S08001.
[3] ALICE collaboration, Upgrade of the ALICE experiment: Letter of Intent, J. Phys. G: Nucl. Part. Phys. 41 (2014) 087001.
[4] ALICE collaboration, Upgrade of the ALICE readout and trigger system, CERN-LHCC-2013-019 (2013).
[5] ALICE collaboration, Technical Design Report for the upgrade of the ALICE Inner Tracking System, J. Phys. G: Nucl. Part. Phys. 41 (2014) 087002.
[6] ALICE collaboration, Technical Design Report for the Muon Forward Tracker, CERN-LHCC-2015-001; ALICE-TDR-018 (2015).
[7] ALICE collaboration, Upgrade of the ALICE Time Projection Chamber, CERN-LHCC-2013-020; ALICE-TDR-016 (2013).
[8] ALICE collaboration, Technical Design Report for the upgrade of the Online-Offline computing system, CERN-LHCC-2015-006; ALICE-TDR-019 (2015).
[9] ALICE collaboration, Addendum of the Letter of Intent for the upgrade program of the ALICE experiment: The Muon Forward Tracker, CERN-LHCC-2013-014; LHCC-I-022-ADD-1 (2013).
[10] ALICE collaboration, ALICE dimuon forward spectrometer: Technical Design Report, CERN-LHCC-99-022; ALICE-TDR-5 (1999).
[11] ALICE collaboration, Addendum to the Technical Design Report of the dimuon forward spectrometer, CERN-LHCC-2000-046; ALICE-TDR-5-add-1 (2000).
[12] ALICE collaboration, ALPIDE, the monolithic active pixel sensor for the ALICE ITS upgrade, Nucl. Instrum. Meth. A 824 (2016) 434.
[13] ALICE collaboration, The ALPIDE pixel sensor chip for the upgrade of the ALICE Inner Tracking System, Nucl. Instrum. Meth. A 845 (2017) 583.
[14] P. Moreira et al., The GBT, a proposed architecture for multi-Gb/s data transmission in high energy physics, TWEPP-07, CERN-2007-07 (2007) 332.
[15] O. Holme et al., The JCOP framework, CERN-OPEN-2005-027 (2005).
[16] ETM professional control GmbH, WinCC Open Architecture.
[17] A. Caratelli et al., The GBT-SCA, a radiation tolerant ASIC for detector control and monitoring applications in HEP experiments, JINST 10 (2015) C03034.
[18] P. Chochula et al., Challenges of the ALICE Detector Control System for the LHC RUN3, ICALEPCS2017, TUMPL09 (2018) 323.
[19] C. Gaspar and M. Dönszelmann, DIM: a distributed information management system for the DELPHI experiment at CERN.
[20] CAEN S.p.A., https://www.caen.it/.