CAN field bus for industrial-controls applications in high-energy physics experiments.

W.P.J. Heubers, H. Boterenbrood, J.T. van Es, R.G.K. Hart

NIKHEF

National Institute for Nuclear and High-Energy Physics

Amsterdam, The Netherlands

Abstract

CAN bus, a popular field bus choice in industry, has gained momentum in high-energy physics research for industrial-controls-like applications in experiments and accelerators. This paper first describes the controls problems that next-generation particle-physics experiments face. It then discusses the hardware and software development of prototype systems for analogue sensor readout over the CERN-recommended CAN field bus.

Introduction

The Large Hadron Collider (LHC) at CERN is a particle accelerator that will collide beams of protons with an energy of 7 TeV. Four high-energy physics particle detectors will be installed at well-defined locations in the 27-km long accelerator ring. These detectors will be designed, built and operated under the responsibility of large collaborations with participants from all over the world. Because of the extremely long time scale (the start of LHC operation is foreseen in 2005) and the complexity of the instrumentation, one has to take care that, where possible, widely accepted industrial standards and solutions are implemented. Reliability and maintainability have to be ensured during the many years of operation.

A typical area where one cannot select standard industrial solutions is the inner part of the detector, where space is limited and environmental conditions are hostile. In particular, the radiation levels and the constraints on power dissipation and space require custom-designed and radiation-hard electronics for the readout of the large number of data channels. In the outer parts of the detectors the environmental conditions are somewhat better, and one can consider the use of commercially available electronics, such as microcontrollers acting as intelligent nodes connected to a field bus.

Physicists and engineers cannot enter the caverns where the detectors are installed to inspect the functioning of the instrumentation while the LHC is operating. A reliable and redundant control system is required to set and monitor the estimated several hundred thousand I/O points remotely from the control rooms. Mixing the control data with the physics data on the high-bandwidth data channels from the detector to the counting rooms must be avoided, to prevent the control signals from being blocked in case of congestion in the data channels.

The ATLAS Detector

The largest detector to be installed at the LHC is the ATLAS detector, measuring 20 by 40 meters. The ATLAS detector consists of three main detection systems (muon detector, calorimeters and tracking detectors) and of a number of subsystems, such as the magnet system, the cooling system and the data acquisition system. The outermost detection system is the Muon Spectrometer, which occupies the largest volume of the ATLAS detector (approximately 16,000 m3) and extends to a radius of 11 meters. The Muon Spectrometer consists of a barrel and two end-caps with a total of 1194 precision chambers and 800 trigger chambers, generating physics data on more than 1,000,000 readout channels. One has to take into account that the instrumentation for the Muon Spectrometer has to operate in a magnetic field of typically 0.5 to 2 Tesla and with a radiation background of mainly low-energy neutrons and photons.

This paper describes the ideas for the readout of analogue sensors to continuously monitor the temperature and the magnetic field values in the precision chambers of the Muon Spectrometer. The proposal is to install on each of these chambers a field bus node where analogue signals can be digitized and transmitted to the detector control system.

Detector Control System

The ATLAS Detector Control System (DCS) [1] will be divided into subsystems. A requirement of the DCS is that these subsystems can be operated independently of each other, in order to test and commission parts of the detector during installation and maintenance periods. The division of the DCS into subsystems can be either functional (e.g. the magnet system) or topological (e.g. the precision chambers in the barrel). The supervisory control has to interconnect all these subsystems and provide the operators with a global and consistent interface to the detector. Local Control Stations will interface to the hardware, typically modular systems (most likely VME) with connections to field bus networks and PLC systems.

Selection of a Field Bus

Given the time scale and the complexity of the instrumentation projects for the LHC accelerator and particle detectors, a policy has been defined to apply industrial solutions and standards wherever appropriate. Field buses are typical examples of industrial developments, and they will be implemented on a large scale to control and monitor a wide range of equipment in the accelerator and the experiments. As many different kinds of field buses are available from industry, it was felt necessary to restrict the choice of field buses to be used at CERN [2]. The CAN field bus has been selected as the solution for relatively simple control tasks and sensor readout. Reliability, the availability of inexpensive controller chips from different suppliers, ease of use, wide acceptance by industry and the expectation that CAN will remain available for a long period of time were strong arguments in favour of this choice. Both the data link layer and the physical layer (not including the transmission medium) of the ISO/OSI reference model are defined for the CAN bus in the open standard ISO/DIS 11898 [3].

CAN Control for Muon Precision Chambers

As mentioned earlier, one of the main detection systems in the ATLAS detector is the Muon Spectrometer. The barrel of this spectrometer contains about 600 Monitored Drift Tube (MDT) chambers. These precision chambers are arranged in three concentric cylinders around the beam axis at radial distances of about 5, 7.5 and 10.5 meters. Each chamber is an assembly of six parallel layers of drift tubes on a support frame, three layers on each side.

From a control system point of view, each of these chambers can be considered a separate entity controlled by one CAN node mounted on the frame of the chamber. Since the barrel of the spectrometer consists of about 600 precision chambers, a field bus network configuration of (at least) 600 CAN nodes, interfaced to the detector control system by VME modules, has to be designed. The temperature at different spots of the supporting frame of a precision chamber must be monitored with a resolution of 0.5 °C and a repetition rate of the order of ten seconds. Collecting this information regularly is required because the resolution of the drift tubes and the deformation of the chambers are functions of the temperature. With the temperature measured at an average of 16 locations on each chamber, close to 10,000 sensors are needed for the 600 chambers in the barrel of the Muon Spectrometer.

Hall probes will be mounted on many chambers to continuously monitor the three components of the magnetic field. The output values of these probes will be compared with the results of calculations based on the three-dimensional field model.

Apart from the temperature sensors and Hall probes, output from other sensors, such as strain gauges or gas pressure sensors, can be digitized and collected by the CAN nodes as well.

Prototype Muon Detector

The Demonstration of Atlas Chamber Alignment (DATCHA) is a prototype of a barrel section of the Muon Spectrometer. This prototype consists of three precision chambers and was built specifically to test the accuracy of the alignment system. DATCHA is 12 meters high and has been installed in one of the underground caverns at CERN. To gain experience with the CAN field bus for chamber control, we considered DATCHA an excellent opportunity to work out our ideas and to demonstrate them in a real application to the ATLAS collaboration. CAN nodes have been designed and implemented to control and monitor the readout electronics, to set and monitor the high-voltage channels and to monitor temperatures. The field bus network is interfaced to a Sun Unix system by a VME-CAN interface from MicroSys [4].

General-Purpose CAN Nodes

The heart of the CAN nodes that control the DATCHA prototype detector is a general-purpose credit-card-sized module. This module contains a CAN controller, a microcontroller, memory and I/O, and is used in combination with dedicated electronics that add more specific functionality. These general-purpose modules can be programmed for different applications and combine local intelligence, extensive I/O capabilities and the CAN-bus interface on a small board.

Two different implementations of this general-purpose CAN module are used for the DATCHA detector: one is the General-Purpose CAN module (GPCAN), an in-house development of NIKHEF, and the other is commercially available (MicroKey [5]). Both have a Philips microcontroller of the 8051 family with an on-chip CAN bus controller and extensive I/O functions.

Three dedicated CAN nodes have been designed with these general-purpose CAN modules for the control and monitoring of the precision chambers in DATCHA:

1.  Detector Control Card (DCC)

2.  Quad High-Voltage Card (Quad-HV)

3.  Crystal CAN system

After a short description of the DCC and the Quad-HV, we will give a more detailed description of the Crystal CAN system for analogue sensor readout in the following sections.

Detector Control Card

A Detector Control Card (DCC) with a CAN node and additional electronics has been installed on each of the three precision chambers of the prototype detector DATCHA. The DCC module adjusts the analogue threshold values, disables noisy channels, generates test signals and monitors the temperature of the front-end electronics. An important feature is the ability to disable noisy channels in the chambers. Each front-end module connects to 32 tubes in the precision chamber and has a 32-bit register to disable individual channels. The microcontroller writes and reads this register through a JTAG interface, as defined by the Boundary-Scan Architecture standard [6]. The Detector Control Card is implemented with a GPCAN from NIKHEF with the Philips 87C592 microcontroller and integrated CAN-bus interface.
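
To illustrate this mechanism, the fragment below sketches in C how a 32-bit disable mask could be shifted into a front-end module over a bit-banged JTAG port. It is a minimal sketch: the pin macros (SET_TCK, SET_TMS, SET_TDI) and the assumption that the TAP controller has already been navigated to the Shift-DR state are ours for illustration, not the actual DCC firmware.

    #include <stdint.h>

    /* Hypothetical port-pin macros; on the 87C592 these would map
       to bits of an I/O port. */
    #define SET_TCK(v)  /* drive the JTAG test clock pin  */
    #define SET_TMS(v)  /* drive the JTAG mode-select pin */
    #define SET_TDI(v)  /* drive the JTAG data-in pin     */

    static void tck_pulse(void) { SET_TCK(1); SET_TCK(0); }

    /* Shift a 32-bit channel-disable mask (one bit per drift tube)
       into the front-end data register, LSB first; assumes the TAP
       controller is already in the Shift-DR state. */
    static void shift_disable_mask(uint32_t mask)
    {
        uint8_t i;
        for (i = 0; i < 32; i++) {
            SET_TDI(mask & 1u);
            SET_TMS(i == 31);  /* TMS high on the last bit: Exit1-DR */
            tck_pulse();
            mask >>= 1;
        }
    }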

High-Voltage Control Card

The Quad High-Voltage Card (Quad-HV) is meant for the control and monitoring of the high-voltage channels of the precision chambers. It consists of a CAN node that controls four independent high-voltage generators. The node is able to switch the power supply on and off, to set the voltage between 0 and the maximum of 4096 Volts, to monitor the actual voltage and current values, and to trip the supply when preset limits are exceeded. The CAN node is based on the same GPCAN module as used for the Detector Control Card described above.
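
The trip function amounts to a periodic limit check per channel. The sketch below is illustrative only: the structure layout and names are assumptions, not the actual Quad-HV firmware.

    #include <stdint.h>

    struct hv_channel {
        uint16_t v_set;    /* requested voltage setting            */
        uint16_t v_meas;   /* monitored voltage reading            */
        uint16_t i_meas;   /* monitored current reading            */
        uint16_t v_limit;  /* preset voltage trip threshold        */
        uint16_t i_limit;  /* preset current trip threshold        */
        uint8_t  on;       /* generator switched on                */
    };

    /* Called periodically for each of the four generators: switches
       the supply off when a preset limit is exceeded, leaving it to
       the application to report the trip over the CAN-bus. */
    static void hv_check(struct hv_channel *ch)
    {
        if (ch->on && (ch->v_meas > ch->v_limit ||
                       ch->i_meas > ch->i_limit)) {
            ch->on = 0;    /* trip: switch the generator off */
        }
    }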

Crystal CAN System

One of the requirements of the future ATLAS DCS is that it should be able to measure the magnetic field and temperature inside and around different parts of the detector. The Crystal CAN system has been developed as a prototype for distributed and multiplexed analogue sensor readout.

The hardware is made up of two module types: the Controller Module and the Sensor Module. The controller module hardware is the same in all applications; the software running on the module provides the application-specific features. The sensor module is built around the same ADC in all applications, but the signal-conditioning electronics around it depend on the application (the type of sensors to be read out).

The Controller Module is based on a commercially available universal microcontroller module (MicroKey 20CN592), designed around the Philips 80C592 8-bit microcontroller running at a clock frequency of 16 MHz. The microcontroller has an integrated CAN controller, and the on-board CAN transceiver (Philips 82C250) provides an ISO/DIS-11898 standard CAN-bus interface. The module offers 48 KB of user application ROM (flash) and 63.5 KB of RAM, enough memory to build quite large applications. User application code can be downloaded through a serial port (standard) or optionally via the CAN-bus, an option that could be very useful once the modules are integrated into an experiment. These features are provided by the on-board firmware delivered with the module. The controller module also contains a multiplexer chip to enable connections to several sensor modules: one controller module may control up to eight sensor modules. The digital multiplexer chip switches the controller's serial-interface I/O port to any of the connected sensor modules, as illustrated in the sketch below.
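
A minimal sketch of this multiplexer handling follows; the port name MUX_SELECT and the 3-bit select encoding are assumptions chosen for illustration.

    #include <stdint.h>

    #define MAX_SENSOR_MODULES 8   /* one controller serves up to eight */

    /* Hypothetical 3-bit select lines of the digital multiplexer,
       mapped onto a microcontroller I/O port. */
    extern volatile uint8_t MUX_SELECT;

    /* Route the controller's serial-interface port to one of the
       connected sensor modules; returns 0 on success. */
    static int8_t mux_select_module(uint8_t module)
    {
        if (module >= MAX_SENSOR_MODULES)
            return -1;             /* only modules 0..7 exist */
        MUX_SELECT = module;       /* serial lines now reach this module */
        return 0;
    }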

The CRYSTAL CS5525 ADC [7] is the heart of the Sensor Module and digitizes the analogue quantities to be measured. This 16-bit ADC contains an instrumentation amplifier, a programmable-gain amplifier, a digital filter and calibration circuitry. It also has four output latch pins, which are used to control an external analogue multiplexer to directly select any of up to 16 analogue inputs. In our case we use the CPD output as a fifth bit, so that up to 32 analogue inputs can be selected. The CS5525 can perform conversions at rates of 3.76, 7.5, 15, 30, 60, 123, 169 and 202 Hz with voltage input ranges of 25 mV, 55 mV, 100 mV, 1 V and 5 V, unipolar as well as bipolar. The CS5525 is controlled through a three-wire serial interface that is compatible with the SPI and MicroWire standards. The interface signal lines are connected to I/O ports of the microcontroller in the controller module, which runs software implementing the protocol across these lines [8].
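
The software protocol amounts to shifting bytes over the three lines under program control. The fragment below is a hedged sketch of such a bit-banged transfer in C: the pin macros (SET_SCLK, SET_SDI, GET_SDO) are placeholders for I/O-port accesses, and the actual CS5525 command and register codes, not reproduced here, are defined in the datasheet [7].

    #include <stdint.h>

    #define SET_SCLK(v)  /* drive the serial clock line        */
    #define SET_SDI(v)   /* drive the data line to the ADC     */
    #define GET_SDO()    0u /* sample the data line from the ADC */

    /* Shift one byte out to the CS5525, MSB first; the ADC samples
       SDI on the rising edge of SCLK. */
    static void ser_write_byte(uint8_t b)
    {
        uint8_t i;
        for (i = 0; i < 8; i++) {
            SET_SDI((b & 0x80u) != 0u);
            SET_SCLK(1);
            SET_SCLK(0);
            b <<= 1;
        }
    }

    /* Shift one byte in from the CS5525, MSB first. */
    static uint8_t ser_read_byte(void)
    {
        uint8_t i, b = 0;
        for (i = 0; i < 8; i++) {
            SET_SCLK(1);
            b = (uint8_t)((b << 1) | GET_SDO());
            SET_SCLK(0);
        }
        return b;
    }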

System Configuration

Figure 1 shows a schematic of a general configuration of a Controller Module and a number of Sensor Modules. Typically the sensor modules are located at a variable but relatively small distance from the controller module and are connected by a cable carrying the control signals and power. The individual sensors are either part of the sensor module or connected to it by a cable. The controller module is connected to the outside world through a CAN field bus. With the 125 kbit/s transfer rate foreseen in this application, the CAN-bus can extend several hundred meters and can connect up to about 64 controller modules, thus providing the required distributed control capability. Figure 2 shows a possible configuration of a local control station as part of the DCS of the ATLAS detector. It is a hierarchical system: the controller modules monitor the sensors, the bus-to-CAN interfaces (which may possess some intelligence) monitor the controller modules on 'their' CAN-bus, and the host controller monitors the various CAN networks through the system-bus interfaces and may provide a local (graphical) user interface. The whole system is remotely monitored and controlled by the central DCS.
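
To get a feeling for the bus load implied by these numbers, consider a bus fully populated with 64 controller modules, each reporting the 16 chamber temperatures every ten seconds. The sketch below, in which the frame layout and identifier coding are our own illustrative assumptions rather than the message set used at DATCHA, packs four 16-bit readings into one 8-byte CAN data field; the accompanying arithmetic shows that this monitoring traffic occupies only a few percent of the 125 kbit/s bandwidth.

    #include <stdint.h>

    /* Hypothetical monitoring frame: an 11-bit identifier encoding
       the controller-module number and message type, plus up to 8
       data bytes, i.e. four 16-bit ADC readings per frame. */
    struct mon_frame {
        uint16_t id;      /* 11-bit CAN identifier         */
        uint8_t  dlc;     /* data length code (0..8 bytes) */
        uint8_t  data[8]; /* e.g. four 16-bit temperatures */
    };

    /* Rough load estimate:
       64 modules x 16 sensors / 10 s   = ~102 readings/s per bus;
       packed four per frame            = ~26 frames/s;
       at ~130 bits per frame (overhead and stuffing included)
       => ~3.4 kbit/s, i.e. under 3% of the 125 kbit/s bus. */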