Factors Affecting Bunch Spacing in the VLHC
February 16, 1999
We begin the discussion by building on the conclusion from the Multiple Interactions Working Group at the Very Large Hadron Collider Physics and Detector Workshop held at Fermilab in March 1997. (1)
The problem with multiple interactions at the VLHC will be worse than, but comparable to, the situation at the LHC. The design luminosities are identical, 10^34 cm^-2 s^-1, and the luminous region for a given bunch crossing will be a few cm long for both machines. While the time between bunch crossings is 25 nsec at the LHC, with bunches separated by 7.5 m, these numbers are 17 nsec and 5 m, respectively, for the VLHC. The figure shows the expected number of interactions per bunch crossing at the VLHC as a function of luminosity, assuming an inelastic proton-proton cross section of 130 mbarn. At design luminosity, each beam crossing will yield about 22 interactions. Both the LHC and the VLHC will likely come online with instantaneous luminosities at least a factor of 10 below design. At start-up luminosity (0.1 of design) there will be only a few interactions per crossing, so the multiple-interaction problem will be similar to that faced at the Tevatron.
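The ~22 interactions per crossing quoted above follow directly from the cross section, luminosity, and crossing interval. A back-of-envelope check, using only numbers already given in the text:

```python
# Mean interactions per crossing = sigma_inel * L * (time between crossings).
# Numbers are taken from the text above; this is a sketch, not an official
# parameter calculation.
SIGMA_INEL_CM2 = 130e-27   # 130 mbarn inelastic pp cross section, in cm^2

def interactions_per_crossing(lumi_cm2_s, spacing_ns, sigma_cm2=SIGMA_INEL_CM2):
    """Average number of inelastic interactions in one bunch crossing."""
    return sigma_cm2 * lumi_cm2_s * spacing_ns * 1e-9

print(interactions_per_crossing(1e34, 17.0))   # VLHC design: ~22
print(interactions_per_crossing(1e33, 17.0))   # start-up (0.1 design): ~2
```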
The much higher center-of-mass energy of the VLHC, however, will make the underlying event problem more difficult than at the LHC, since the particle multiplicity and average minimum-bias ET will be higher. Still, an average ET density of tens of GeV per unit of η-φ at the VLHC design luminosity is manageable if one is searching for high-mass particles and jets at √s = 100 TeV.
The above two paragraphs and figure are copied from ref (1) and use the Snowmass parameter set (2).
Let's take a look at some of these numbers for two cases, a 50 TeV low-field machine and a 50 TeV high-field machine. The assumed parameters are listed below.
For both machines:

| Parameter | Low field | High field |
| Magnetic field (Tesla) | 2.0 | 12.5 |
| Ring packing factor | 0.95 | 0.75 |
| Total circumference (km) | 524 | 84 |
| Revolution time (millisec) | 1.84 | 0.37 |
| Damping time (hours) | 87 | 2.8 |
Varying the bunch spacing
Let's look at a range varying from having charge in every 12th bucket to having charge in every bucket.
This is shown as a band since there is uncertainty in the extrapolation of the total cross section to Ecm = 100 TeV, as well as disagreement on what fraction of the total cross section to use. Two different extrapolations (taking 3/4 of the total cross section) (3) give 102 and 113 mb, compared to the 130 mb used in the Snowmass parameter set. (2) A 1993 paper (9) using various extrapolations from CDF and E710 (also consistent with a recent result from E811) shows the same range of values for the total cross section. The plot is for inelastic cross sections of 100 and 130 mb. In this note we use the higher number.
For this same range of bunch spacing the head-on beam-beam tune shift (for 10^34 cm^-2 s^-1) is acceptable.
The stored energy in the beam rises as the inverse square root of the bunch spacing: at fixed luminosity and fixed optics, halving the spacing doubles the number of bunches while the charge per bunch need only fall by √2.
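The scaling can be made explicit. Luminosity goes as n_b N^2 (n_b bunches of N protons each), so at fixed luminosity N ∝ √(spacing) while n_b ∝ 1/spacing, and the stored energy U ∝ n_b N ∝ 1/√(spacing). An illustrative sketch:

```python
import math

def stored_energy_ratio(new_spacing, old_spacing):
    """Ratio U_new/U_old when the bunch spacing changes at fixed
    luminosity and fixed optics: U scales as 1/sqrt(spacing)."""
    return math.sqrt(old_spacing / new_spacing)

# Going from charge in every 12th bucket to charge in every bucket:
print(stored_energy_ratio(1, 12))   # ~3.5x more stored energy
```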
These are quite big numbers, especially for the large-circumference low-field 50 TeV ring, and they are probably the most serious consequence for the accelerator of reducing the bunch spacing.
Unequal bunch spacing and "microbunch" structure.
Another strategy is to have many bunches but not space them equally. The example below has 1/3 of the buckets filled with charge, so from the point of view of beam-beam tune shift, stored energy, and TMCI threshold the two cases are the same. There may be some advantage to the unequal spacing for the detector, but since extensive data "pipelining" will be needed it seems unlikely there is much difference.
One could, in principle, carry this idea further by using much higher frequency RF, thus generating a microbunch structure. (4) What are the RF requirements for very short bunches (say 3 cm) spaced 10 cm apart? In this situation, assuming zero crossing angle and some kind of "crab crossing" scheme, one would make the luminous region "lumpy," with vertices coming from well-defined locations.
Let's return to 530 MHz and look at bunch length and crossing angle.
| Parameter | VLHC | LHC | TeV33 |
| ε (rms), π mm-mrad | 1.00 | 3.75 | 2.0 |
| σ_L (bunch length), cm | 5 | 7.5 | 13.0 |
| Crossing angle (μrad) | (see below) | 200 | 200 |
| Form factor | (see below) | 0.90 | 0.89 |
| Luminous region (cm) | (see below) | 9.6 | 16.4 |
| Bunch spacing (nsec) | (see below) | 24.95 | 132 |
LHC parameters are from reference (5) and TeV33 preliminary numbers are from reference (6). The form factor (reduction in luminosity due to the crossing angle) is calculated using the formula in (7).
The next graph shows the variation in this factor as a function of crossing angle.
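The standard flat-crossing form of this reduction factor is F = 1/√(1 + (θ_c σ_z / 2σ_x)²); this is a sketch, and ref (7) should be consulted for the exact expression used in the table. With the LHC-like numbers above and an assumed IP spot size of about 16 microns, it reproduces F ≈ 0.9:

```python
import math

def form_factor(theta_c_urad, sigma_z_cm, sigma_x_um):
    """Luminosity reduction for full crossing angle theta_c, bunch length
    sigma_z, and transverse IP size sigma_x (flat-crossing approximation;
    hourglass effect neglected)."""
    x = (theta_c_urad * 1e-6) * (sigma_z_cm * 1e-2) / (2.0 * sigma_x_um * 1e-6)
    return 1.0 / math.sqrt(1.0 + x * x)

# 200 urad, sigma_z = 7.5 cm, sigma_x ~ 16 um (assumed) -> F ~ 0.9
print(round(form_factor(200.0, 7.5, 16.0), 2))
```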
Length of the luminous region
The length of the luminous region is bounded on the one hand by β* and σ_L, and on the other hand (for short bunches) by RF parameters.
RF is necessary to offset synchrotron radiation. Once we have chosen 530 MHz, or 1.9 nsec bucket spacing, the bunch length is fixed to be < 1/3 of a bucket, or 0.6 nsec, which corresponds to σ_s < 18 cm. (3) The luminous region will be roughly 1/√2 of this, or ±13 cm, so 2/3 of the interactions will lie in a region 26 cm long (or less). We can lower the RF frequency, but this will lower the TMCI threshold.
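These numbers follow from the RF frequency alone. A quick check, using the 1/3-bucket rule from the text (the exact bunch-length limit of course depends on RF voltage and momentum spread):

```python
import math

C_CM_PER_S = 2.998e10                         # speed of light, cm/s

rf_freq_hz = 530e6
bucket_ns = 1e9 / rf_freq_hz                  # ~1.9 nsec per bucket
sigma_t_ns = bucket_ns / 3.0                  # bunch < 1/3 bucket, ~0.6 nsec
sigma_s_cm = sigma_t_ns * 1e-9 * C_CM_PER_S   # ~18 cm bunch length
lum_sigma_cm = sigma_s_cm / math.sqrt(2)      # ~13 cm luminous sigma

print(round(bucket_ns, 2), round(sigma_s_cm, 1), round(lum_sigma_cm, 1))
```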
In the example above, where σ_L = 5 cm, the luminous region varies from 7 cm to 3 cm as the crossing angle is varied from 0 to 200 microradians. 68% of the interactions lie in this region. The comparison with the LHC and TeV33 (very preliminary) is shown in the above table.
The vertices are distributed in this region. The number of interactions/mm is shown for two assumptions: 10 interactions/crossing and 20 interactions/crossing.
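For a Gaussian vertex distribution, the peak density is N/(σ_z√(2π)). A sketch for the two assumptions, interpreting the quoted 68% region as the ±1σ interval (an assumption on our part):

```python
import math

def peak_density_per_mm(n_interactions, region_68_cm):
    """Peak vertex density (interactions/mm) for a Gaussian distribution
    whose central 68% spans region_68_cm (i.e. sigma = region/2)."""
    sigma_mm = region_68_cm * 10.0 / 2.0
    return n_interactions / (sigma_mm * math.sqrt(2.0 * math.pi))

# 200-microradian case above: 3 cm luminous region
for n in (10, 20):
    print(n, round(peak_density_per_mm(n, 3.0), 2))
```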
What is best for the detector?
One needs to look at both time and space resolution and consider potential improvements in both over the next two decades. One rather restricted class of detector would use a point source (short bunch length and a crossing angle), surround the interaction region with heavy shielding, and look for high-mass μ-pairs coming out. However, for this discussion assume a general-purpose "exploration for new phenomena" type detector. Tracking will be important to extract a few "gold-plated" events in the face of tremendous background. The detector must have high granularity, the ability to stand high rates, and fast, pipelined readout.
Collection times for all known detection devices are in the 10-20 nsec range. This corresponds to the intrinsic time response of silicon, scintillating fibers, phototubes, etc. This assumes one stays away from inherently slow devices such as liquid-argon (ionization) calorimetry. Of course, there might be some breakthrough in detection techniques that surmounts this limit. Probably at this point in developing VLHC parameters we should have two sets: one with bunch spacing in the 10-20 nsec range (10 nsec is already beyond the current state of the art) and another set with 2 nsec (or even less if we raise the RF frequency).
Once one gets below the resolving time of the detector, going to still smaller bunch spacing would seem to make little difference. However, the area in which dramatic progress has been made, and is likely to continue, is ultrafast digitization. So even in the face of pileup it may be possible to untangle interactions separated in time by less than the resolving time of the detector. At this point calibration may become the main issue.
Consider the following futuristic case: crossing angle 50 microradians; luminous region 6 cm; bunch spacing 1.9 nsec, every bucket filled; interactions/crossing: 2.5. Suppose one can digitize, at e.g. 8-bit accuracy, at the bucket frequency. Then during a 19 nsec detector time constant one is digitizing 10 times. The software problem (aside from the vast amounts of data) is similar, in time, to the problem currently dealt with in space: clusters of energy in η-φ bins need to be separated into individual particles and jets. The software problem now becomes one of disentangling clusters in 3-D space.
If digitizers are invented that go faster than the bucket frequency (530 MHz), then one might even contemplate separating individual interactions within the bunch crossing, since they do not all occur at the same time. A 6 cm luminous region corresponds to 200 psec.
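The 200 psec figure is just the light-travel time across the luminous region:

```python
C_CM_PER_NS = 29.98   # speed of light, cm/nsec

# A 6 cm luminous region spans 6 / 29.98 ~ 0.2 nsec = 200 psec in time,
# which sets the scale a digitizer must beat to resolve vertices in time.
spread_psec = 6.0 / C_CM_PER_NS * 1000.0
print(round(spread_psec))   # -> 200
```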
Currently, Dzero can separate in time, in its lowest-level trigger, two interactions separated by 3-4 cm (using the sum of upstream/downstream trigger counters). This would argue for a longer luminous region. Going from a 1-cm luminous region to a 1-meter one does not change the detector size dramatically, since typical calorimeter/magnet dimensions are several meters. It does mean that the vertex and inner tracking are more spread out, so the "load," i.e. occupancy, is reduced.
The Eloisatron working group came up with a recommended set of parameters for a 200 TeV cm, 10^34 cm^-2 s^-1 pp collider. The main difference seems to be that β* is larger, probably reflecting realistic IP optics designs. This needs to be looked at for our 100 TeV cm case. They also recommend a longer bunch length, thus spreading out the interactions more.
| Parameter | Value |
| Crossing angle (microradians) | 50 |
| Beam transverse size (σ, 68%), microns | 2.0 |
| Bunch length (σ, 68%), cm | 12 |
| Luminous region (68% of interactions), cm | 9 |
| Average number of vertices/crossing (range 22-30) | 26 |
| Average number/mm (68% of interactions) | 0.3 |
Discussions with Dmitri Denisov, Bill Foster, Vladimir Shiltsev, and Greg Snow have helped clarify some of the issues.