A first discussion of the beam backgrounds affecting the detection of the proton tags at 420 m is necessary.

See attached document.

I believe that the document should discuss the impact of the presence of multiple proton-proton interactions per beam crossing on the detection of exclusive and low-mass final states in the central detectors. Some of the proposed physics measurements require high statistics. Therefore, the authors should demonstrate the feasibility of the physics program in a high instantaneous luminosity scenario.

Our current FP420 design includes timing detectors which can measure the proton arrival time with a resolution of ~10 ps. If we assume that the two detected protons come from the same pp interaction, we can determine the z of their origin with a resolution of ~2 mm (c × 10 ps / √2). This can be matched with the observed central (jet, W, Z, ...) vertex.

Even when there are 20 interactions in a crossing, they are distributed in z as a Gaussian with σ(z) = 75.5 mm. For our good events the two protons are timed to come from a point with σ(z) ≈ 2-3 mm. The central massive state (H --> JJ, WW, ZZ, t-tbar) has a vertex with σ(z) ~ 50 μm and can be matched. Using this timing technique to match the vertex with the candidate tagged protons, we believe we can analyse data efficiently even at L = 10^34 cm^-2 s^-1 for those channels that can be centrally triggered (all the above except H --> JJ; see the trigger discussion below).
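
As a rough numerical illustration of the argument above (the numbers 10 ps, 75.5 mm and 20 interactions are those quoted in the text; the ±2σ matching window and the sketch itself are ours for illustration and are not analysis cuts):

```python
import math

# Back-of-the-envelope sketch of the timing-vertex matching argument.
# Inputs (10 ps, 75.5 mm, 20 interactions) are the quoted numbers; the
# +/- 2 sigma matching window is an illustrative assumption.
c = 299.792458                                  # speed of light, mm/ns

sigma_t = 0.010                                 # proton arrival-time resolution, ns
sigma_z_protons = c * sigma_t / math.sqrt(2)    # z resolution on the two-proton vertex
print(f"sigma_z from proton timing ~ {sigma_z_protons:.1f} mm")   # ~2.1 mm

sigma_z_beam = 75.5                             # Gaussian spread of pile-up vertices in z, mm
window = 2 * sigma_z_protons                    # assumed matching half-window

# Fraction of unrelated pile-up vertices that would fall inside the window by
# chance, i.e. the per-vertex rejection the timing match buys.
p_accidental = math.erf(window / (math.sqrt(2) * sigma_z_beam))
print(f"accidental match probability per pile-up vertex ~ {p_accidental:.3f}")
print(f"corresponding rejection factor ~ {1 / p_accidental:.0f}")
```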

The trigger bandwidth remains a concern. If possible, it would be helpful to show that a sizable fraction of the events are already selected by the standard triggers at ATLAS and CMS. The referee wonders whether the ATLAS and CMS trigger groups would accept dedicating some bandwidth to low-E_T triggers by 2008, given the current limitations of the L1 trigger bandwidth at both experiments. Perhaps a more quantitative discussion of trigger rates would clarify the situation. Following the previous point, the pile-up effects should also be quantified to some extent.

Triggering on a Higgs Boson with mass around 120 GeV poses a special challenge at the LHC. The relatively low transverse momenta of its decay products necessitate L1 jet E_T thresholds as low as 40 GeV. Thresholds that low would result in an L1 trigger rate of more than 50 kHz, due to the QCD background, essentially saturating the available output bandwidth.

The following strategy has been devised to tackle the problem (studies performed in the CMS/TOTEM environment):

o) H(120 GeV) --> bbbar --> 2 jets

The output rate of a 2-jet L1 trigger condition with thresholds of 40 GeV per jet can be kept at an acceptable level of order 1 kHz, in the absence of pile-up, by either using the TOTEM T1 and T2 detectors (or the ATLAS forward detectors) as vetoes or by requiring that a proton be seen in the TOTEM (or ATLAS) Roman Pot (RP) detectors at 220 m on one side of the IP (single-sided 220 m condition). This gives a sufficient reduction of the QCD background event rate. At higher luminosities, up to 2 × 10^33 cm^-2 s^-1, where pile-up is present, it is necessary to combine the single-sided 220 m condition with conditions based on event topology and on H_T, the scalar sum of all L1 jet E_T values. These L1 trigger conditions result in signal efficiencies between 15% and 20%. A further 10% of the Higgs events can be retained by exploiting the muon-rich final state in the H --> bbbar mode, with no requirements on the forward detectors.

Going to even higher luminosities, up to 1 × 10^34 cm^-2 s^-1, will necessitate additional L1 trigger conditions, such as inclusion of RP detectors at 420 m from the IP. This would require an increase in the L1 trigger latency.
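
As a purely illustrative sketch of how the L1 conditions described above might be combined in the pile-up regime (this is not the actual CMS/TOTEM trigger menu: the event fields, the 150 GeV H_T threshold and the topology flag are placeholders we have invented for readability):

```python
# Toy L1 decision for the H --> bbbar channel with pile-up present.
# Placeholder fields and thresholds only; not the real trigger menu.
def l1_hbb_accept(event):
    two_jets_40 = sum(et >= 40.0 for et in event["jet_et"]) >= 2       # 2-jet condition, 40 GeV each
    single_sided_220 = event["rp220_minus"] or event["rp220_plus"]     # proton seen at 220 m on one side
    h_t = sum(event["jet_et"])                                         # scalar sum of L1 jet E_T
    extra = event["topology_ok"] or h_t >= 150.0                       # placeholder topology / H_T requirement
    return two_jets_40 and single_sided_220 and extra

# example: two 80 GeV jets plus a 220 m proton tag on the minus side
print(l1_hbb_accept({"jet_et": [80.0, 80.0], "rp220_minus": True,
                     "rp220_plus": False, "topology_ok": False}))      # True
```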

o) H (140 GeV) --> WW

There are no trigger problems for final states rich in high p_T leptons, such as the semi-leptonic WW decay modes of the Standard Model Higgs Boson. Efficiencies are around 20% (including the branching ratio) if the standard single-lepton (and di-lepton) trigger thresholds are applied. This is close to maximal. Small improvements will be possible by, for example, including a dedicated tau-decay trigger. A full study was performed in hep-ph/0505240.

A 10 μm position resolution sounds very challenging given the movement of the beam positions store-by-store and the limitations in the alignment of the detectors w.r.t. the beam-pipe. A more elaborate discussion of the feasibility of such a position resolution is desirable.

The crucial measurement is to know the displacements x and y between the un-deflected beam centre and the track, at the front and back of our 10 m arm. A single-layer pixel resolution of 10 μm is currently achieved in 3D Si detectors. We plan to have 8-12 layers precision-mounted together in a group, and should achieve ~5 μm resolution in x and y from the group, relative to a reference plate. The standard button Beam Position Monitors (BPMs), of which there are over 1000 in the LHC, achieve 10 μm resolution but are not normally calibrated absolutely to better than about 100 μm. We are having discussions with LHC instrumentation experts, who state that with special care a 5 μm resolution is possible. We plan to have two such BPMs, at the front and back of the 10 m arm. With precision mechanics (probably either mounting the BPMs and the tracker stations on a 10 m optical bench, in the Microstation solution, or together in the precision pipe, in the Hamburg Pipe solution) we can know the relative positions of the tracker and the BPM electrodes. The BPMs can be calibrated “on the bench” (we expect that the full 10 m arm will be an integrated unit, assembled and tested before installation) using a precision pulsed wire. The balance of the L-R/U-D electronic channels can be much improved over the standard BPMs. There is a question about the bunch-to-bunch beam position stability. We will measure this (integrated over many orbits) and, if it is limiting (i.e. more than a few μm), we can correct for it. In some ways the BPMs are as important as the tracking detectors, and we will work closely with the LHC Instrumentation Group on this. [In fact they are interested in this development for LHC diagnostic reasons.]
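
As a minimal numerical sketch of how the two quoted precision figures could combine (assuming they are independent and add in quadrature, and that the precision mechanics linking tracker and BPM contributes a negligible extra term; these assumptions are ours):

```python
import math

# Combine only the two figures quoted above, in quadrature.
sigma_tracker = 5.0   # um: tracker group relative to its reference plate
sigma_bpm = 5.0       # um: BPM resolution achievable "with special care"

sigma_track_to_beam = math.sqrt(sigma_tracker**2 + sigma_bpm**2)
print(f"track position w.r.t. the undeflected beam ~ {sigma_track_to_beam:.1f} um")  # ~7 um
```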

It is also desirable to align the detectors with respect to the central apparatus and with respect to the beam by using known, high-rate physics processes which produce protons whose momentum is known a priori from a measurement made using the central detector alone. The trajectory of these protons along the beam-line can then be predicted and compared with the coordinates of the hits on the individual detector planes in order to align them. In addition, one can exploit the fact that the transverse momentum of the protons must balance the transverse momentum of the rest of the event. If the latter is measured in the central detector, momentum conservation can be used to determine the incoming beam direction. This method was successfully exploited at HERA by H1 and ZEUS to align their forward spectrometers; the process used there was diffractive photoproduction of rho0 mesons (see e.g. ZEUS Collab., Z. Phys. C73 (1997) 253, hep-ex/9609003). Two processes are currently being investigated:

i) Diffractive photoproduction of Upsilon mesons, in which one of the incoming protons radiates a quasi-real photon which interacts with the other proton: gamma p --> Upsilon p. This is the exact analogue of the reaction used at HERA.

ii) In addition, we are considering the production of continuum dileptons by photon-photon fusion, where again the two photons are radiated by the incoming protons.

Finally, we are studying an in situ calibration which looks promising, using pp --> p + μ+μ- + p events produced by two-photon exchange. The cross section is about 100 fb in a useful kinematic region. Because the muons are very nearly back-to-back, with equal pT, they can be found even in the presence of multi-event pile-up. The muons (pT >~ 6 GeV) are triggered on and well measured, and the proton momenta are then very well known (to ~10^-4). This gives us a ~1% missing-mass calibration over a range of central masses.
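
The kinematics behind this calibration can be sketched as follows (the dimuon mass and rapidity below are illustrative example values, not measurements): for an exclusive two-photon event the proton fractional momentum losses follow directly from the centrally measured dimuon system.

```python
import math

# Sketch of the calibration kinematics for pp --> p + mu+mu- + p:
# xi_1,2 = (M / sqrt(s)) * exp(+/- y) from the dimuon mass and rapidity.
# Example numbers only.
sqrt_s = 14000.0     # GeV, pp centre-of-mass energy
m_mumu = 20.0        # GeV, measured dimuon invariant mass (example)
y_mumu = 0.5         # dimuon rapidity (example)

xi1 = (m_mumu / sqrt_s) * math.exp(+y_mumu)
xi2 = (m_mumu / sqrt_s) * math.exp(-y_mumu)
print(f"xi_1 ~ {xi1:.2e}, xi_2 ~ {xi2:.2e}")

# Predicted outgoing proton energies, to be compared with the 420 m measurement.
e_beam = sqrt_s / 2.0
print(f"predicted proton energies ~ {(1 - xi1) * e_beam:.1f} and {(1 - xi2) * e_beam:.1f} GeV")
```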

It is not clear in the document whether a 1% resolution in the invariant mass is mandatory or not. In view of the previous comment, it would help to show how the physics program is affected by a mass resolution worse than 1%.

The quoted mass resolution of 1% is critical only for the signal-to-background ratio in the b-jet decay channels of the (light) Standard Model Higgs. In this case, the signal-to-background ratio scales as S/B ~ M_H^3 / ΔM, where ΔM is the mass window within which the search is performed. As the mass increases, therefore, the resolution can be allowed to deteriorate. The S/B ~ 1 number quoted for the 120 GeV Standard Model Higgs assumes a Gaussian resolution of σ = 1 GeV, as quoted in the LOI. The resolution is important because the exclusive b-jet background is essentially an irreducible continuum background; a worse resolution therefore allows more background events into the mass window. Of course, an S/B ratio of order 1 is not a disaster, and is indeed exceptionally good relative to any other channel.
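
As a small numerical illustration of this scaling, anchored to the quoted point S/B ~ 1 at M_H = 120 GeV with σ = 1 GeV (taking the search window ΔM to be proportional to the mass resolution is our assumption here):

```python
# S/B ~ M_H^3 / dM, normalised to the quoted S/B ~ 1 at 120 GeV, sigma = 1 GeV,
# with the mass window assumed proportional to the resolution.
def s_over_b(m_higgs, sigma, m_ref=120.0, sigma_ref=1.0, sb_ref=1.0):
    return sb_ref * (m_higgs / m_ref) ** 3 * (sigma_ref / sigma)

for sigma in (1.0, 2.0, 3.0):
    print(f"M_H = 120 GeV, sigma = {sigma:.0f} GeV: S/B ~ {s_over_b(120.0, sigma):.2f}")
```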

For the more favourable MSSM scenarios, the cross section for the production of the Higgs Bosons is (relatively) much larger than in the SM case, and therefore a worse mass resolution (within reason) does not adversely affect the discovery potential.

More details can be found in hep-ph/0409144.

The WW decay channels are much less sensitive to the mass resolution. In the recent paper hep-ph/0505240 a more conservative resolution of 2 GeV was assumed, and the backgrounds were found (in most circumstances) to be negligible, although there is still a small irreducible continuum background from γγ --> WW. The caveat applies to the semi-leptonic WW decay channel of a SM Higgs with a mass below the 2-W threshold. The potentially problematic QCD background in this case is not, however, reduced significantly by a better mass resolution on the taggers. It is worth noting that even a 2 or 3 GeV resolution on the Higgs mass in the WW decay channel (in which there is at least one neutrino) is MUCH better than in any other method.

In summary, the physics program does not rely upon the missing-mass resolution being as low as (or even close to) σ = 1 GeV. The S/B gets progressively worse as the resolution deteriorates, which is only a matter of concern for light Standard Model Higgs Bosons in the b-jet channel. Measuring the Higgs mass with high precision is an important part of the FP420 program, however, and so, whilst the S/B is not adversely affected in most channels by a worse resolution, we would of course strive to achieve σ = 1 GeV. It is also desirable to push the mass resolution as low as possible in certain MSSM scenarios, where the Higgs width can be (much) greater than 1 GeV - in which case a direct Higgs width measurement becomes possible if the resolution is sufficiently good.

The referee and the LHCC committee understand that the LHCC is being asked to support the proposal for starting an R&D activity in which most of the details of the detector hardware and of the required modifications to the LHC cryogenic system must of course still be explored. Nevertheless, we would like to understand how the schedule for the installation of the detectors would fit into the master LHC schedule for 2008.

We invite the authors of the proposal to interact with beam-division representatives on this matter and provide an indication of possible scenarios.

FP420 is in constant contact with the LHC groups. Indeed, the plan is to place at least one engineer into the AT-CRI group, as stated in the LOI. To be clear, we are aiming to be ready to install the new cryostats, with detectors installed, in the 2008 / 2009 shutdown (no earlier). If the LHC schedule moves, we would aim for the first shutdown of sufficient duration after the start of 2009.

NOTE TO ALBERT – WE SHOULD CHECK KEITH IS HAPPY TO BE QUOTED HERE

Below is an extract from Keith Potter's reply to the referee's questions above.

The new cryostat must be designed to be exchangeable with the existing connection cryostat within a three-month shutdown. This is because it is assumed that during LHC running the traditional Winter shutdown will be maintained. Nobody seriously doubts that it will be needed by both the machine and the experiments. The exchange of two/four 13 m long cryostats in this time is expected to be rather straightforward. The time left for installing detectors is perhaps more questionable, but a lot can probably be done in parallel, and the overall aim of installing both cryostats and detectors in a traditional annual shutdown must be possible.

The completion work on the RF, beam dump and collimators shouldn't give rise to any problem, beyond the transport logistics which will be a feature of all LHC shutdown work.

NOTE TO ALBERT – WE SHOULD ASK EMMANUEL WHAT WE CAN SAY REGARDING LIAISON WITH THE MACHINE.