Sensing at quantum limits https://cerncourier.com/a/sensing-at-quantum-limits/ Wed, 09 Jul 2025 07:11:30 +0000 https://cerncourier.com/?p=113517 Quantum sensors have become important tools in low-energy particle physics. Michael Doser explores opportunities to exploit their unparalleled precision at higher energies.

Atomic energy levels. Spin orientations in a magnetic field. Resonant modes in cryogenic, high-quality-factor radio-frequency cavities. The transition from superconducting to normal conducting, triggered by the absorption of a single infrared photon. These are all simple yet exquisitely sensitive quantum systems with discrete energy levels. Each can serve as the foundation for a quantum sensor – an instrument that detects single photons, measures individual spins or records otherwise imperceptible energy shifts.

Over the past two decades, quantum sensors have taken on leading roles in the search for ultra-light dark matter and in precision tests of fundamental symmetries. Examples include the use of atomic clocks to probe whether Earth is sweeping through oscillating or topologically structured dark-matter fields, and cryogenic detectors to search for electric dipole moments – subtle signatures that could reveal new sources of CP violation. These areas have seen rapid progress, as challenges related to detector size, noise, sensitivity and complexity have been steadily overcome, opening new phase space in which to search for physics beyond the Standard Model. Could high-energy particle physics benefit next?

Low-energy particle physics

Most of the current applications of quantum sensors are at low energies, where their intrinsic sensitivity and characteristic energy scales align naturally with the phenomena being probed. For example, within the Project 8 experiment at the University of Washington, superconducting sensors are being developed to tackle a longstanding challenge: to distinguish the tiny mass of the neutrino from zero (see “Quantum-noise limited” image). Inward-looking phased arrays of quantum-noise-limited microwave receivers allow spectroscopy of cyclotron radiation from beta-decay electrons as they spiral in a magnetic field. The shape of the endpoint of the spectrum is sensitive to the mass of the neutrino and such sensors have the potential to be sensitive to neutrino masses as low as 40 meV.

Quantum-noise limited

Beyond the Standard Model, superconducting sensors play a central role in the search for dark matter. At the lowest mass scales (peV to meV), experiments search for ultralight bosonic dark-matter candidates such as axions and axion-like particles (ALPs) through excitations of the vacuum field inside high–quality–factor microwave and millimetre-wave cavities (see “Quantum sensitivity” image). In the meV range, light-shining-through-wall experiments aim to reveal brief oscillations into weakly coupled hidden-sector particles such as dark photons or ALPs, and may employ quantum sensors for detecting reappearing photons, depending on the detection strategy. In the MeV to sub-GeV mass range, superconducting sensors are used to detect individual photons and phonons in cryogenic scintillators, enabling sensitivity to dark-matter interactions via electron recoils. At higher masses, reaching into the GeV regime, superfluid helium detectors target nuclear recoils from heavier dark matter particles such as WIMPs.
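
For the cavity searches, the link between axion mass and the frequency to which the cavity must be tuned is direct: the converted photon has frequency f = m_a c²/h, roughly 0.24 GHz per µeV. A back-of-the-envelope conversion (not tied to any particular experiment) shows why µeV-scale axions correspond to microwave cavities:

H_PLANCK_eV_s = 4.135667696e-15   # Planck constant [eV s]

def axion_mass_to_frequency_GHz(mass_ueV):
    """Frequency of the photon produced when an axion of the given mass (micro-eV) converts in the cavity."""
    return mass_ueV * 1e-6 / H_PLANCK_eV_s / 1e9

for m in (1.0, 4.0, 40.0):   # representative axion masses in micro-eV
    print(f"m_a = {m:5.1f} ueV  ->  f = {axion_mass_to_frequency_GHz(m):6.2f} GHz")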

These technologies also find broad application beyond fundamental physics. For example, in superconducting and other cryogenic sensors, the ability to detect single quanta with high efficiency and ultra-low noise is essential. The same capabilities are the technological foundation of quantum communication.

Raising the temperature

While many superconducting quantum sensors require ultra-low temperatures of a few mK, some spin-based quantum sensors – such as nitrogen-vacancy (NV) centres in diamond and polarised rubidium atoms – can function at or near room temperature.

NV centres are defects in the diamond lattice where a missing carbon atom – the vacancy – is adjacent to a lattice site where a carbon atom has been replaced by a nitrogen atom. The electronic spin states in NV centres have unique energy levels that can be probed by laser excitation and detection of spin-dependent fluorescence.

Researchers are increasingly exploring how quantum-control techniques can be integrated into high-energy-physics detectors

Rubidium is promising for spin-based sensors because each atom has a single unpaired valence electron. In the presence of an external magnetic field, its atomic energy levels are split by the Zeeman effect. When optically pumped with laser light, spin-polarised “dark” sublevels – those not excited by the light – become increasingly populated. These aligned spins precess in magnetic fields, forming the basis of atomic magnetometers and other quantum sensors.
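
The magnetometry signal is the Larmor precession of the pumped spins, whose frequency scales linearly with the field. A minimal sketch, assuming the commonly quoted value of roughly 7 Hz per nanotesla for the 87Rb ground state (an approximation used here only to set the scale):

GAMMA_RB87_HZ_PER_NT = 7.0   # approximate gyromagnetic ratio of the 87Rb (F = 2) ground state

def larmor_frequency_hz(field_nT):
    """Spin-precession frequency for a given magnetic field in nanotesla."""
    return GAMMA_RB87_HZ_PER_NT * field_nT

print(larmor_frequency_hz(50_000))   # Earth's field, ~50 uT -> ~3.5e5 Hz (350 kHz)
print(larmor_frequency_hz(0.001))    # a 1 pT field change -> a ~7 mHz frequency shift to resolve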

Being exquisite magnetometers, both devices make promising detectors for ultralight bosonic dark-matter candidates such as axions. Fermion spins may interact with spatial or temporal gradients of the axion field, leading to tiny oscillating energy shifts. The coupling of axions to gluons could also show up as an oscillating nuclear electric dipole moment. These interactions could manifest as oscillating energy-level shifts in NV centres, or as time-varying NMR-like spin precession signals in the atomic magnetometers.

Large-scale detectors

The situation is completely different in high-energy physics detectors, which require numerous interactions between a particle and a detector. Charged particles cause many ionisation events, and when a neutral particle interacts it produces charged particles that result in similarly numerous ionisations. Even if quantum control were possible within individual units of a massive detector, the number of individual quantum sub-processes to be monitored would exceed the possibilities of any realistic device.

Increasingly, however, researchers are exploring how quantum-control techniques – such as manipulating individual atoms or spins using lasers or microwaves – can be integrated into high-energy-physics detectors. These methods could enhance detector sensitivity, tune detector response or enable entirely new ways of measuring particle properties. While these quantum-enhanced or hybrid detection approaches are still in their early stages, they hold significant promise.

Quantum dots

Quantum dots are nanoscale semiconductor crystals – typically a few nanometres in diameter – that confine charge carriers (electrons and holes) in all three spatial dimensions. This quantum confinement leads to discrete, atom-like energy levels and results in optical and electronic properties that are highly tunable with size, shape and composition. Originally studied for their potential in optoelectronics and biomedical imaging, quantum dots have more recently attracted interest in high-energy physics due to their fast scintillation response, narrow-band emission and tunability. Their emission wavelength can be precisely controlled through nanostructuring, making them promising candidates for engineered detectors with tailored response characteristics.

Chromatic calorimetry

While their radiation hardness is still under debate and needs to be resolved, engineering their composition, geometry, surface and size can yield very narrow-band (20 nm) emitters across the optical spectrum and into the infrared. Quantum dots such as these could enable the design of a “chromatic calorimeter”: a stack of quantum-dot layers, each tuned to emit at a distinct wavelength; for example red in the first layer, orange in the second and progressing through the visible spectrum to violet. Each layer would absorb higher energy photons quite broadly but emit light in a narrow spectral band. The intensity of each colour would then correspond to the energy absorbed in that layer, while the emission wavelength would encode the position of energy deposition, revealing the shower shape (see “Chromatic calorimetry” figure). Because each layer is optically distinct, hermetic isolation would be unnecessary, reducing the overall material budget.
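
As a toy illustration of the readout idea – with entirely hypothetical layer depths, emission bands and light yields, since no such calorimeter yet exists – the measured intensity in each colour band could be unfolded into a longitudinal shower profile:

# Toy chromatic-calorimeter readout: each quantum-dot layer emits in its own narrow band,
# so the intensity recorded per band maps onto the energy deposited at that depth.
# All numbers are invented for illustration only.
layers = [
    {"band": "red",    "depth_X0": 2,  "photons_per_GeV": 1.0e4},
    {"band": "orange", "depth_X0": 6,  "photons_per_GeV": 1.1e4},
    {"band": "green",  "depth_X0": 10, "photons_per_GeV": 0.9e4},
    {"band": "violet", "depth_X0": 14, "photons_per_GeV": 1.0e4},
]

measured_photons = {"red": 2.1e4, "orange": 5.5e4, "green": 3.0e4, "violet": 0.4e4}

profile = [(layer["depth_X0"], measured_photons[layer["band"]] / layer["photons_per_GeV"]) for layer in layers]
total = sum(energy for _, energy in profile)
for depth, energy in profile:
    print(f"depth {depth:2d} X0: {energy:4.1f} GeV  ({100 * energy / total:4.1f}% of the shower)")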

Rather than improving the energy resolution of existing calorimeters, quantum dots could provide additional information on the shape and development of particle showers if embedded in existing scintillators. Initial simulations and beam tests by CERN’s Quantum Technology Initiative (QTI) support the hypothesis that the spectral intensity of quantum-dot emission can carry information about the energy and species of incident particles. Ongoing work aims to explore their capabilities and limitations.

Beyond calorimetry, quantum dots could be formed within solid semiconductor matrices, such as gallium arsenide, to form a novel class of “photonic trackers”. Scintillation light from electronically tunable quantum dots could be collected by photodetectors integrated directly on top of the same thin semiconductor structure, such as in the DoTPiX concept. Thanks to a highly compact, radiation-tolerant scintillating pixel tracking system with intrinsic signal amplification and minimal material budget, photonic trackers could provide a scintillation-light-based alternative to traditional charge-based particle trackers.

Living on the edge

Low temperatures also offer opportunities at scale – and cryogenic operation is a well-established technique in both high-energy and astroparticle physics, with liquid argon (boiling point 87 K) widely used in time projection chambers and some calorimeters, and some dark-matter experiments using liquid helium (boiling point 4.2 K) to reach even lower temperatures. A range of solid-state detectors, including superconducting sensors, operate effectively at these temperatures and below, and offer significant advantages in sensitivity and energy resolution.

Single-photon phase transitions

Transition-edge sensors (TESs) operate in the narrow temperature range where a superconducting material undergoes a rapid transition from zero resistance to finite values. When a particle deposits energy in a TES, it slightly raises the temperature, causing a measurable increase in resistance. Because the transition is extremely steep, even a tiny temperature change leads to a detectable resistance change, allowing precise calorimetry. Magnetic microcalorimeters (MMCs) convert the same tiny temperature rise into a measurable signal in a different way: the magnetisation of a paramagnetic sensor changes with temperature and is read out by a SQUID.
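
An order-of-magnitude sketch shows why such a device makes a precise calorimeter. The heat capacity and transition steepness below are assumed, nominal values rather than those of any particular sensor:

# Rough TES response: deposited energy E raises the temperature by dT = E / C, and the
# steep transition turns dT into a large fractional resistance change dR/R ~ alpha * dT / T.
E_DEPOSIT_J = 6_000.0 * 1.602e-19   # a 6 keV X-ray, converted to joules
HEAT_CAPACITY_J_PER_K = 1e-12       # assumed ~1 pJ/K for a tiny sensor at ~100 mK
ALPHA = 100.0                       # assumed logarithmic sensitivity alpha = (T/R) dR/dT
T_OPERATING_K = 0.1

dT = E_DEPOSIT_J / HEAT_CAPACITY_J_PER_K
fractional_dR = ALPHA * dT / T_OPERATING_K

print(f"temperature rise:  {dT * 1e6:.0f} microkelvin")
print(f"resistance change: {100 * fractional_dR:.0f}% of the operating-point resistance")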

Functioning at millikelvin temperatures, TESs provide much higher energy resolution than solid-state detectors made from high-purity germanium crystals, which work by collecting electron–hole pairs created when ionising radiation interacts with the crystal lattice. TESs are increasingly used in high-resolution X-ray spectroscopy of pionic, muonic or antiprotonic atoms, and in photon detection for observational astronomy, despite the technical challenges associated with maintaining ultra-low operating temperatures.

By contrast, superconducting nanowire and microwire single-photon detectors (SNSPDs and SMSPDs) register only a change in state – from superconducting to normal conducting – allowing them to operate at higher temperatures than traditional low-temperature sensors. When made from high–critical-temperature (Tc) superconductors, operation at temperatures as high as 10 K is feasible, while maintaining excellent sensitivity to energy deposited by charged particles and ultrafast switching times on the order of a few picoseconds. Recent advances include the development of large-area devices with up to 400,000 micron-scale pixels (see “Single-photon phase transitions” figure), fabrication of high-Tc SNSPDs and successful beam tests of SMSPDs. These technologies are promising candidates for detecting milli-charged particles – hypothetical particles arising in “hidden sector” extensions of the Standard Model – or for high-rate beam monitoring at future colliders.

Rugged, reliable and reproducible

Quantum sensor-based experiments have vastly expanded the phase space that has been searched for new physics. This is just the beginning of the journey, as larger-scale efforts build on the initial gold rush and new quantum devices are developed, perfected and brought to bear on the many open questions of particle physics.

Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance

To fully profit from their potential, a vigorous R&D programme is needed to scale up quantum sensors for future detectors. Ruggedness, reliability and reproducibility are key – as well as establishing “proof of principle” for the numerous imaginative concepts that have already been conceived. Challenges range from access to test infrastructure to the establishment of standardised test protocols that allow fair comparisons. In many cases, the largest challenge is to foster an open exchange of ideas, given the numerous local developments that are happening worldwide. Finding a common language to discuss developments in different fields that at first glance may have little in common builds on a willingness to listen, learn and exchange.

The European Committee for Future Accelerators (ECFA) detector R&D roadmap provides a welcome framework for addressing these challenges collaboratively through the Detector R&D (DRD) collaborations established in 2023 and now coordinated at CERN. Quantum sensors and emerging technologies are covered within the DRD5 collaboration, which ties together 112 institutes worldwide, many of them leaders in their particular field. Only a third stem from the traditional high-energy physics community.

These efforts build on the widespread expertise and enthusiastic efforts at numerous institutes and tie in with the quantum programmes being spearheaded at high-energy-physics research centres, among them CERN’s QTI. Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance. The best approach may prove to be “targeted blue-sky research”: a willingness to explore completely novel concepts while keeping their ultimate usefulness for particle physics firmly in mind.

Gaseous detectors school at CERN https://cerncourier.com/a/gaseous-detectors-school-at-cern/ Fri, 16 May 2025 16:29:04 +0000 https://cerncourier.com/?p=112717 DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors.

How do wire-based detectors compare to resistive-plate chambers? How well do micropattern gaseous detectors perform? Which gas mixtures optimise operation? How will detectors face the challenges of future, more powerful accelerators?

Thirty-two students attended the first DRD1 Gaseous Detectors School at CERN last November. The EP-DT Gas Detectors Development (GDD) lab hosted academic lectures and varied hands-on laboratory exercises. Students assembled their own detectors, learnt about their operating characteristics and explored radiation-imaging methods with state-of-the-art readout approaches – all under the instruction of more than 40 distinguished lecturers and tutors, including renowned scientists, pioneers of innovative technologies and emerging experts.

DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors. The collaboration focuses on knowledge sharing and scientific exchange, in addition to the development of novel gaseous detector technologies to address the needs of future experiments. This instrumentation school, initiated in DRD1’s first year, marks the start of a series of regular training events for young researchers that will also serve to exchange ideas between research groups and encourage collaboration.

The school will take place annually, with future editions hosted at different DRD1 member institutes to reach students from a number of regions and communities.

CDF addresses W-mass doubt https://cerncourier.com/a/cdf-addresses-w-mass-doubt/ Wed, 26 Mar 2025 14:24:15 +0000 https://cerncourier.com/?p=112584 Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model prediction

The CDF II experiment

It’s tough to be a lone dissenting voice, but the CDF collaboration is sticking to its guns. Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model (SM) prediction. All other measurements are statistically compatible with the SM, though slightly higher, including the most recent by the CMS collaboration at the LHC, which almost matched CDF’s stated precision of 9.4 MeV (CERN Courier November/December 2024 p7).

With CMS’s measurement came fresh scrutiny for the CDF collaboration, which had established one of the most interesting anomalies in fundamental science – a higher-than-expected W mass might reveal the presence of undiscovered heavy virtual particles. Particular scrutiny focused on the quoted momentum resolution of the CDF detector, which the collaboration claims exceeds the precision of any other collider detector by more than a factor of two. A new analysis by CDF verifies the stated accuracy of 25 parts per million by constraining possible biases using a large sample of cosmic-ray muons.

“The publication lays out the ‘warts and all’ of the tracking aspect and explains why the CDF measurement should be taken seriously despite being in disagreement with both the SM and silicon-tracker-based LHC measurements,” says spokesperson David Toback of Texas A&M University. “The paper should be seen as required reading for anyone who truly wants to understand, without bias, the path forward for these incredibly difficult analyses.”

The 2022 W-mass measurement exclusively used information from CDF’s drift chamber – a descendant of the multiwire proportional chamber invented at CERN by Georges Charpak in 1968 – and discarded information from its inner silicon vertex detector as it offered only marginal improvements to momentum resolution. The new analysis by CDF collaborator Ashutosh Kotwal of Duke University studies possible geometrical defects in the experiment’s drift chamber that could introduce unsuspected biases in the measured momenta of the electrons and muons emitted in the decays of W bosons.

“Silicon trackers have replaced wire-based technology in many parts of modern particle detectors, but the drift chamber continues to hold its own as the technology of choice when high accuracy is required over large tracking volumes for extended time periods in harsh collider environments,” opines Kotwal. “The new analysis demonstrates the efficiency and stability of the CDF drift chamber and its insensitivity to radiation damage.”

The CDF II detector operated at Fermilab’s Tevatron collider from 1999 to 2011. Its cylindrical drift chamber was coaxial with the colliding proton and antiproton beams and immersed in an axial 1.4 T magnetic field; a helical fit to the recorded hits yielded the track parameters.
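
To put the quoted 25 parts-per-million momentum scale in context, the standard relation between transverse momentum, field and bending radius, p_T ≈ 0.3 B R (p_T in GeV, B in tesla, R in metres), can be evaluated for CDF's 1.4 T field. The muon momentum chosen below is merely representative of W-boson decays, not a number from the analysis:

B_TESLA = 1.4

def radius_of_curvature_m(pt_GeV):
    """Bending radius of a singly charged track: p_T [GeV] ~= 0.2998 * B [T] * R [m]."""
    return pt_GeV / (0.2998 * B_TESLA)

pt = 40.0               # a representative W -> mu nu decay muon, ~40 GeV
scale_accuracy = 25e-6  # the momentum-scale accuracy quoted by CDF

print(f"bending radius at {pt:.0f} GeV: {radius_of_curvature_m(pt):.0f} m")
print(f"25 ppm of {pt:.0f} GeV: {pt * scale_accuracy * 1e3:.1f} MeV")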

The triggering of tomorrow https://cerncourier.com/a/the-triggering-of-tomorrow/ Wed, 26 Mar 2025 14:14:12 +0000 https://cerncourier.com/?p=112724 The third TDHEP workshop explored how triggers can cope with high data rates.

The third edition of Triggering Discoveries in High Energy Physics (TDHEP) attracted 55 participants to Slovakia’s High Tatras mountains from 9 to 13 December 2024. The workshop is the only conference dedicated to triggering in high-energy physics, and follows previous editions in Jammu, India in 2013 and Puebla, Mexico in 2018. Given the upcoming High-Luminosity LHC (HL-LHC) upgrade, discussions focused on how trigger systems can be enhanced to manage high data rates while preserving physics sensitivity.

Triggering systems play a crucial role in filtering the vast amounts of data generated by modern collider experiments. A good trigger design selects features in the event sample that greatly enrich the proportion of the desired physics processes in the recorded data. The key considerations are timing and selectivity. Timing has long been at the core of experiment design – detectors must capture data at the appropriate time to record an event. Selectivity has been a feature of triggering for almost as long. Recording an event makes demands on running time and data-acquisition bandwidth, both of which are limited.

Evolving architecture

Thanks to detector upgrades and major changes in the cost and availability of fast data links and storage, the past 10 years have seen an evolution in LHC triggers away from hardware-based decisions using coarse-grain information.

Detector upgrades mean higher granularity and better time resolution, improving the precision of the trigger algorithms and the ability to resolve the problem of having multiple events in a single LHC bunch crossing (“pileup”). Such upgrades allow more precise initial-level hardware triggering, bringing the event rate down to a level where events can be reconstructed for further selection via high-level trigger (HLT) systems.

To take advantage of modern computer architecture more fully, HLTs use both graphics processing units (GPUs) and central processing units (CPUs) to process events. In ALICE and LHCb this leads to essentially triggerless access to all events, while in ATLAS and CMS hardware selections are still important. All HLTs now use machine learning (ML) algorithms, with the ATLAS and CMS experiments even considering their use at the first hardware level.

ATLAS and CMS are primarily designed to search for new physics. At the end of Run 3, upgrades to both experiments will significantly enhance granularity and time resolution to handle the high-luminosity environment of the HL-LHC, which will deliver up to 200 interactions per LHC bunch crossing. Both experiments achieved efficient triggering in Run 3, but higher luminosities, difficult-to-distinguish physics signatures, upgraded detectors and increasingly ambitious physics goals call for advanced new techniques. The step change will be significant. At HL-LHC, the first-level hardware trigger rate will increase from the current 100 kHz to 1 MHz in ATLAS and 760 kHz in CMS. The price to pay is increasing the latency – the time delay between input and output – to 10 µsec in ATLAS and 12.5 µsec in CMS.
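
Those rates and latencies translate directly into selectivity and on-detector buffering requirements. A simple arithmetic sketch, assuming the nominal 40 MHz LHC bunch-crossing rate (25 ns spacing):

BUNCH_CROSSING_RATE_HZ = 40e6   # nominal 25 ns bunch spacing

experiments = {
    # name: (first-level accept rate [Hz], first-level latency [s]), as quoted above
    "ATLAS": (1.0e6, 10.0e-6),
    "CMS":   (760e3, 12.5e-6),
}

for name, (accept_rate, latency) in experiments.items():
    accept_fraction = accept_rate / BUNCH_CROSSING_RATE_HZ
    crossings_buffered = latency * BUNCH_CROSSING_RATE_HZ
    print(f"{name}: keep {100 * accept_fraction:.1f}% of crossings, "
          f"buffer ~{crossings_buffered:.0f} crossings on-detector while the decision is made")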

The proposed trigger systems for ATLAS and CMS are predominantly FPGA-based, employing highly parallelised processing to crunch huge data streams efficiently in real time. Both will be two-level triggers: a hardware trigger followed by a software-based HLT. The ATLAS hardware trigger will utilise full-granularity calorimeter and muon signals in the global-trigger-event processor, using advanced ML techniques for real-time event selection. In addition to calorimeter and muon data, CMS will introduce a global track trigger, enabling real-time tracking at the first trigger level. All information will be integrated within the global-correlator trigger, which will extensively utilise ML to enhance event selection and background suppression.

Substantial upgrades

The other two big LHC experiments already implemented substantial trigger upgrades at the beginning of Run 3. The ALICE experiment is dedicated to studying the strong interactions of the quark–gluon plasma – a state of matter in which quarks and gluons are not confined in hadrons. The detector was upgraded significantly for Run 3, including the trigger and data-acquisition systems. The ALICE continuous readout can cope with 50 kHz for lead ion–lead ion (PbPb) collisions and several MHz for proton–proton (pp) collisions. In PbPb collisions the full data is continuously recorded and stored for offline analysis, while for pp collisions the data is filtered.

Unlike in Run 2, where the hardware trigger reduced the data rate to several kHz, Run 3 uses an online software trigger that is a natural part of the common online–offline computing framework. The raw data from detectors is streamed continuously and processed in real time using high-performance FPGAs and GPUs. ML plays a crucial role in the heavy-flavour software trigger, which is one of the main physics interests. Boosted decision trees are used to identify displaced vertices from heavy quark decays. The full chain from saving raw data in a 100 PB buffer to selecting events of interest and removing the original raw data takes about three weeks and was fully employed last year.

The third edition of TDHEP suggests that innovation in this field is only set to accelerate

The LHCb experiment focuses on precision measurements in heavy-flavour physics. A typical example is measuring the probability of a particle decaying into a certain decay channel. In Run 2 the hardware trigger tended to saturate in many hadronic channels when the luminosity was instantaneously increased. To solve this issue for Run 3 a high-level software trigger was developed that can handle 30 MHz event readout with 4 TB/s data flow. A GPU-based partial event reconstruction and primary selection of displaced tracks and vertices (HLT1) reduces the output data rate to 1 MHz. The calibration and detector alignment (embedded into the trigger system) are calculated during data taking just after HLT1 and feed full-event reconstruction (HLT2), which reduces the output rate to 20 kHz. This represents 10 GB/s written to disk for later analysis.
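
A quick consistency sketch using only the figures quoted above gives the average raw event size and the rejection factor of each stage:

READOUT_RATE_HZ = 30e6    # full-detector readout rate
READOUT_BW_BYTES = 4e12   # 4 TB/s into HLT1
HLT1_OUT_HZ = 1e6
HLT2_OUT_HZ = 20e3

print(f"average raw event size: {READOUT_BW_BYTES / READOUT_RATE_HZ / 1e3:.0f} kB")
print(f"HLT1 rate reduction:    {READOUT_RATE_HZ / HLT1_OUT_HZ:.0f}x")
print(f"HLT2 rate reduction:    {HLT1_OUT_HZ / HLT2_OUT_HZ:.0f}x")
print(f"overall rate reduction: {READOUT_RATE_HZ / HLT2_OUT_HZ:.0f}x")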

Away from the LHC, trigger requirements differ considerably. Contributions from other areas covered heavy-ion physics at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC), fixed-target physics at CERN and future experiments at the Facility for Antiproton and Ion Research at GSI Darmstadt and Brookhaven’s Electron–Ion Collider (EIC). NA62 at CERN and STAR at RHIC both use conventional trigger strategies to arrive at their final event samples. The forthcoming CBM experiment at FAIR and the ePIC experiment at the EIC deal with high intensities but aim for “triggerless” operation.

Requirements were reported to be even more diverse in astroparticle physics. The Pierre Auger Observatory combines local and global trigger decisions at three levels to manage the problem of trigger distribution and data collection over 3000 km² of fluorescence and Cherenkov detectors.

These diverse requirements will lead to new approaches being taken, and evolution as the experiments are finalised. The third edition of TDHEP suggests that innovation in this field is only set to accelerate.

Cosmogenic candidate lights up KM3NeT https://cerncourier.com/a/cosmogenic-candidate-lights-up-km3net/ Mon, 24 Mar 2025 08:40:44 +0000 https://cerncourier.com/?p=112563 Strings of photodetectors anchored to the seabed off the coast of Sicily have detected the most energetic neutrino ever observed, smashing previous records.

Muon neutrino

On 13 February 2023, strings of photodetectors anchored to the seabed off the coast of Sicily detected the most energetic neutrino ever observed, smashing previous records. The result was embargoed until the publication of a paper in Nature last month; the KM3NeT collaboration believes the observation may have originated in a novel cosmic accelerator, or may even be the first detection of a “cosmogenic” neutrino.

“This event certainly comes as a surprise,” says KM3NeT spokesperson Paul de Jong (Nikhef). “Our measurement converted into a flux exceeds the limits set by IceCube and the Pierre Auger Observatory. If it is a statistical fluctuation, it would correspond to an upward fluctuation at the 2.2σ level. That is unlikely, but not impossible.” With an estimated energy of a remarkable 220 PeV, the neutrino observed by KM3NeT surpasses IceCube’s record by almost a factor of 30.

The existence of ultra-high-energy cosmic neutrinos has been theorised since the 1960s, when astrophysicists began to conceive ways that extreme astrophysical environments could generate particles with very high energies. At about the same time, Arno Penzias and Robert Wilson discovered “cosmic microwave background” (CMB) photons emitted in the era of recombination, when the primordial plasma cooled down and the universe became electrically neutral. Cosmogenic neutrinos were soon hypothesised to result from ultra-high-energy cosmic rays interacting with the CMB. They are expected to have energies above 100 PeV (10¹⁷ eV); however, their abundance is uncertain as it depends on cosmic rays, whose sources are still cloaked in intrigue (CERN Courier July/August 2024 p24).

A window to extreme events

But how might they be detected? In this regard, neutrinos present a dichotomy: though outnumbered in the cosmos only by photons, they are notoriously elusive. However, it is precisely their weakly interacting nature that makes them ideal for investigating the most extreme regions of the universe. Cosmic neutrinos travel vast cosmic distances without being scattered or absorbed, providing a direct window into their origins, and enabling scientists to study phenomena such as black-hole jets and neutron-star mergers. Such extreme astrophysical sources test the limits of the Standard Model at energy scales many times higher than is possible in terrestrial particle accelerators.

Because they are so weakly interacting, studying cosmic neutrinos requires giant detectors. Today, three large-scale neutrino telescopes are in operation: IceCube, in Antarctica; KM3NeT, under construction deep in the Mediterranean Sea; and Baikal–GVD, under construction in Lake Baikal in southern Siberia. So far, IceCube, whose construction was completed over 10 years ago, has enabled significant advancements in cosmic-neutrino physics, including the first observation of the Glashow resonance, wherein a 6 PeV electron antineutrino interacts with an electron in the ice sheet to form an on-shell W boson, and the discovery of neutrinos emitted by “active galaxies” powered by a supermassive black hole accreting matter. The previous record-holder for the highest recorded neutrino energy, IceCube has also searched for cosmogenic neutrinos but has not yet observed neutrino candidates above 10 PeV.

Its new northern-hemisphere colleague, KM3NeT, consists of two subdetectors: ORCA, designed to study neutrino properties, and ARCA, which made this detection, designed to detect high-energy cosmic neutrinos and find their astronomical counterparts. Its deep-sea arrays of optical sensors detect Cherenkov light emitted by charged particles created when a neutrino interacts with a quark or electron in the water. At the time of the 2023 event, ARCA comprised 21 vertical detection units, each around 700 m in length. Its location 3.5 km deep under the sea reduces background noise, and its sparse set up over one cubic kilometre optimises the detector for neutrinos of higher energies.
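
The detection geometry follows from the Cherenkov relation cos θ = 1/(nβ). Evaluating it for a refractive index of about 1.35 – an assumed effective value for deep-sea water, which in reality varies with wavelength, temperature and salinity – gives the characteristic cone angle the optical sensors reconstruct:

import math

def cherenkov_angle_deg(n, beta=1.0):
    """Opening angle of the Cherenkov cone: cos(theta) = 1 / (n * beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

N_SEA_WATER = 1.35   # assumed effective refractive index
print(f"Cherenkov angle for beta ~ 1: {cherenkov_angle_deg(N_SEA_WATER):.0f} degrees")
print(f"velocity threshold: beta > {1.0 / N_SEA_WATER:.3f}")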

The event that KM3NeT observed in 2023 is thought to be a single muon created by the charged-current interaction of an ultra-high-energy muon neutrino. The muon then crossed horizontally through the entire ARCA detector, emitting Cherenkov light that was picked up by a third of its active sensors. “If it entered the sea as a muon, it would have travelled some 300 km water-equivalent in water or rock, which is impossible,” explains de Jong. “It is most likely the result of a muon neutrino interacting with sea water some distance from the detector.”

The network will improve the chances of detecting new neutrino sources

The best estimate for the neutrino energy of 220 PeV hides substantial uncertainties, given the unknown interaction point and the need to correct for an undetected hadronic shower. The collaboration expects the true value to lie between 110 and 790 PeV with 68% confidence. “The neutrino energy spectrum is steeply falling, so there is a tug-of-war between two effects,” explains de Jong. “Low-energy neutrinos must give a relatively large fraction of their energy to the muon and interact close to the detector, but they are numerous; high-energy neutrinos can interact further away, and give a smaller fraction of their energy to the muon, but they are rare.”

More data is needed to understand the sources of ultra-high-energy neutrinos such as that observed by KM3NeT, where construction has continued in the two years since this remarkable early detection. So far, 33 of 230 ARCA detection units and 24 of 115 ORCA detection units have been installed. Once construction is complete, likely by the end of the decade, KM3NeT will be similar in size to IceCube.

“Once KM3NeT and Baikal–GVD are fully constructed, we will have three large-scale neutrino telescopes of about the same size in operation around the world,” adds Mauricio Bustamante, theoretical astroparticle physicist at the Niels Bohr Institute of the University of Copenhagen. “This expanded network will monitor the full sky with nearly equal sensitivity in any direction, improving the chances of detecting new neutrino sources, including faint ones in new regions of the sky.”

Inside pyramids, underneath glaciers https://cerncourier.com/a/inside-pyramids-underneath-glaciers/ Wed, 20 Nov 2024 13:48:19 +0000 https://cern-courier.web.cern.ch/?p=111476 Coordinated by editors Paola Scampoli and Akitaka Ariga, Cosmic Ray Muography provides an invaluable snapshot of a booming research area.

Muon radiography – muography for short – uses cosmic-ray muons to probe and image large, dense objects. Coordinated by editors Paola Scampoli and Akitaka Ariga of the University of Bern, the authors of this book provide an invaluable snapshot of this booming research area. From muon detectors, which differ significantly from those used in fundamental physics research, to applications of muography in scientific, cultural, industrial and societal scenarios, a broad cross section of experts describe the physical principles that underpin modern muography.

Hiroyuki Tanaka of the University of Tokyo begins the book with historical developments and perspectives. He guides readers from the first documented use of cosmic-ray muons in 1955 for rock overburden estimation, to current studies of the sea-level dynamics in Tokyo Bay using muon detectors laid on the seafloor and visionary ideas to bring muography to other planets using teleguided rovers.

Scattering methods

Tanaka limits his discussion to the muon-absorption approach to muography, which images an object by comparing the muon flux before and after – or with and without – an object. The muon-scattering approach, which was invented two decades ago, instead exploits the deflection of muons as they pass through matter, caused by electromagnetic interactions with nuclei. The interested reader will find several examples of the application of muon scattering in other chapters, particularly that on civil and industrial applications by Davide Pagano (Pavia) and Altea Lorenzon (Padova). Scattering methods have an edge in these fields thanks to their sensitivity to the atomic number of the materials under investigation.
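
The sensitivity to atomic number arises because the width of the scattering-angle distribution grows with the traversed thickness in radiation lengths, and the radiation length shrinks rapidly with Z. A sketch using the standard Highland parametrisation, with a muon momentum and material thicknesses chosen purely for illustration:

import math

def theta0_mrad(p_MeV, thickness_cm, X0_cm, beta=1.0):
    """RMS multiple-scattering angle (Highland/PDG form) for a singly charged particle."""
    t = thickness_cm / X0_cm   # thickness in radiation lengths
    return 1e3 * (13.6 / (beta * p_MeV)) * math.sqrt(t) * (1 + 0.038 * math.log(t))

P_MUON_MeV = 3000.0   # a typical few-GeV cosmic-ray muon
for material, X0_cm in [("water", 36.1), ("iron", 1.76), ("lead", 0.56)]:
    print(f"10 cm of {material:5s}: theta_0 ~ {theta0_mrad(P_MUON_MeV, 10.0, X0_cm):4.1f} mrad")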

Cosmic Ray Muography

Peter Grieder (Bern), who sadly passed away shortly before the publication of the book, gives an excellent and concise introduction to the physics of cosmic rays, which Paolo Checchia (Padova) expands on, delving into the physics of interactions between muons and matter. Akira Nishio (Nagoya University) describes the history and physical principles of nuclear emulsions. These detectors played an important role in the history of particle physics, but are not very popular now as they cannot provide real-time information. Though modern detectors are a more common choice today, nuclear emulsions still find a niche in muography thanks to their portability. The large accumulation of data from muography experiments requires automatic analysis, for which dedicated scanning systems have been developed. Nishio includes a long and insightful discussion on how the nuclear-emulsions community reacted to supply-chain evolution. The transition from analogue to digital cameras meant that most film-producing firms changed their core business or simply disappeared, and researchers had to take a large part of the production process into their own hands.

Fabio Ambrosino and Giulio Saracino of INFN Napoli next take on the task of providing an overview of the much broader and more popular category of real-time detectors, such as those commonly used in experiments at particle colliders. Elaborating on the requirements set by the cosmic rate and environmental factors, their chapter explains why scintillator and gas-based tracking devices are the most popular options in muography. They also touch on more exotic detector options, including Cherenkov telescopes and cylindrical tracking detectors that fit in boreholes.

In spite of their superficial similarity, methods that are common in X-ray imaging need quite a lot of ingenuity to be adapted to the context of muography. For example, the source cannot be controlled in muography, and is not monochromatic. Both energy and direction are random and have a very broad distribution, and one cannot afford to take data from more than a few viewpoints. Shogo Nagahara and Seigo Miyamoto of the University of Tokyo provide a specialised but intriguing insight into 3D image reconstruction using filtered back-projection.
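
For readers unfamiliar with the algorithm, a textbook parallel-beam version of filtered back-projection fits in a few lines of Python. This is the generic 2D formulation used in X-ray CT, shown only to convey the idea; it is not the muography-specific treatment developed in the chapter:

import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Generic 2D parallel-beam FBP. sinogram[i, j]: projection at angle i, detector bin j."""
    n_angles, n_det = sinogram.shape
    # 1) Filter each projection with a ramp filter in frequency space.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # 2) Back-project: smear each filtered projection across the image plane and sum.
    coords = np.arange(n_det) - n_det / 2.0
    x, y = np.meshgrid(coords, coords)
    image = np.zeros((n_det, n_det))
    for projection, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(theta) + y * np.sin(theta)   # detector coordinate of each pixel
        idx = np.clip(np.round(t + n_det / 2.0).astype(int), 0, n_det - 1)
        image += projection[idx]
    return image * np.pi / len(angles_deg)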

A broad cross section of experts describe the physical principles that underpin modern muography

Geoscience is among the most mature applications of muography. While Jacques Marteau (Claude Bernard University Lyon 1) provides a broad overview of decades of activities spanning from volcano studies to the exploration of natural caves, Ryuichi Nishiyama (Tokyo) explores recent studies where muography provided unique data on the shape of the bedrock underneath two major glaciers in the Swiss Alps.

One of the greatest successes of muography is the study of pyramids, which is given ample space in the chapter on archaeology by Kunihiro Morishima (Nagoya). In 1971, Nobel-laureate Luis Alvarez’s team pioneered the use of muography in archaeology during an investigation at the pyramid of Khafre in Giza, Egypt, motivated by his hunch that an unknown large chamber could be hiding in the pyramid. Their data convincingly excluded that possibility, but the attempt can be regarded as launching modern muography (CERN Courier May/June 2023 p32). Half a century later, muography was reintroduced to the exploration of Egyptian pyramids thanks to ScanPyramids – an international project led by particle-physics teams in France and Japan under the supervision of the Heritage Innovation and Preservation Institute. ScanPyramids aims at systematically surveying all of the main pyramids in the Giza complex, and recently made headlines by finding a previously unknown corridor-shaped cavity in Khufu’s Great Pyramid, which is the second largest pyramid in the world. To support the claim, which was initially based on muography alone, the finding was cross-checked with the more traditional surveying method based on ground penetrating radar, and finally confirmed via visual inspection through an endoscope.

Pedagogical focus

This book is a precious resource for anyone approaching muography, from students to senior scientists, and potential practitioners from both academic and industrial communities. There are some other excellent books that have already been published on the same topic, and that have showcased original research, but Cosmic Ray Muography’s pedagogical focus, which prioritises the explanation of timeless first principles, will not become outdated any time soon. Given each chapter was written independently, there is a certain degree of overlap and some incoherence in terminology, but this gives the reader valuable exposure to different perspectives about what matters most in this type of research.

A rich harvest of results in Prague https://cerncourier.com/a/a-rich-harvest-of-results-in-prague/ Wed, 20 Nov 2024 13:34:58 +0000 https://cern-courier.web.cern.ch/?p=111420 The 42nd international conference on high-energy physics reported progress across all areas of high-energy physics.

The 42nd international conference on high-energy physics (ICHEP) attracted almost 1400 participants to Prague in July. Expectations were high, with the field on the threshold of a defining moment, and ICHEP did not disappoint. A wealth of new results showed significant progress across all areas of high-energy physics.

With the long shutdown on the horizon, the third run of the LHC is progressing in earnest. Its high-availability operation and mastery of operational risks were highly praised. Run 3 data is of immense importance as it will be the dataset that experiments will work with for the next decade. With the newly collected data at 13.6 TeV, the LHC experiments showed new measurements of Higgs and di-electroweak-boson production, though of course most of the LHC results were based on the Run 2 (2015 to 2018) dataset, which is by now impeccably well calibrated and understood. This also allowed ATLAS and CMS to bring in-depth improvements to reconstruction algorithms.

AI algorithms

A highlight of the conference was the improvements brought by state-of-the-art artificial-intelligence algorithms such as graph neural networks, both at the trigger and reconstruction level. A striking example of this is the ATLAS and CMS flavour-tagging algorithms, which have improved their rejection of light jets by a factor of up to four. This has important consequences. Two outstanding examples are: di-Higgs-boson production, which is fundamental for the measurement of the Higgs boson self-coupling (CERN Courier July/August 2024 p7); and the Higgs boson’s Yukawa coupling to charm quarks. Di-Higgs-boson production should be independently observable by both general-purpose experiments at the HL-LHC, and an observation of the Higgs boson’s coupling to charm quarks is getting closer to being within reach.

The LHC experiments continue to push the limits of precision at hadron colliders. CMS and LHCb presented new measurements of the weak mixing angle. The per-mille precision reached is close to that of LEP and SLD measurements (CERN Courier September/October 2024 p29). ATLAS presented the most precise measurement to date (0.8%) of the strong coupling constant extracted from the measurement of the transverse momentum differential cross section of Drell–Yan Z-boson production. LHCb provided a comprehensive analysis of the B0 → K*0 μ+μ– angular distributions, which had previously presented discrepancies at the level of 3σ. Taking into account long-distance contributions significantly weakens the tension down to 2.1σ.

Pioneering the highest luminosities ever reached at colliders (setting a record at 4.7 × 10³⁴ cm⁻² s⁻¹), SuperKEKB has been facing challenging conditions with repeated sudden beam losses. This is currently an obstacle to further progress to higher luminosities. Possible causes have been identified and are currently under investigation. Meanwhile, with the already substantial data set collected so far, the Belle II experiment has produced a host of new results. In addition to improved CKM angle measurements (alongside LHCb), in particular of the γ angle, Belle II (alongside BaBar) presented interesting new insights in the long-standing |Vcb| and |Vub| inclusive versus exclusive measurements puzzle (CERN Courier July/August 2024 p30), with new |Vcb| exclusive measurements that significantly reduce the previous 3σ tension.

Maurizio Pierini

ATLAS and CMS furthered their systematic journey in the search for new phenomena to leave no stone unturned at the energy frontier, with 20 new results presented at the conference. This landmark outcome of the LHC puts further pressure on the naturalness paradigm.

A highlight of the conference was the overall progress in neutrino physics. Accelerator-based experiments NOvA and T2K presented a first combined measurement of the mass difference, neutrino mixing and CP parameters. Neutrino telescopes IceCube with DeepCore and KM3NeT with ORCA (Oscillation Research with Cosmics in the Abyss) also presented results with impressive precision. Neutrino physics is now at the dawn of a bright new era of precision with the next-generation accelerator-based long baseline experiments DUNE and Hyper Kamiokande, the upgrade of DeepCore, the completion of ORCA and the medium baseline JUNO experiment. These experiments will bring definitive conclusions on the measurement of the CP phase in the neutrino sector and the neutrino mass hierarchy – two of the outstanding goals in the field.

The KATRIN experiment presented a new upper limit on the effective electron–anti-neutrino mass of 0.45 eV, well en route towards their ultimate sensitivity of 0.2 eV. Neutrinoless double-beta-decay search experiments KamLAND-Zen and LEGEND-200 presented limits on the effective neutrino mass of approximately 100 meV; the sensitivity of the next-generation experiments LEGEND-1T, KamLAND-Zen-1T and nEXO should reach 20 meV and either fully exclude the inverted ordering hypothesis or discover this long-sought process. Progress on the reactor neutrino anomaly was reported, with recent fission data suggesting that the fluxes are overestimated, thus weakening the significance of the anti-neutrino deficits.

Neutrinos were also a highlight for direct-dark-matter experiments as Xenon announced the observation of nuclear recoil events from ⁸B solar neutrino coherent elastic scattering on nuclei, thus signalling that experiments are now reaching the neutrino fog. The conference also highlighted the considerable progress across the board on the roadmap laid out by Kathryn Zurek at the conference to search for dark matter in an extraordinarily large range of possibilities, spanning 89 orders of magnitude in mass from 10⁻²³ eV to 10⁵⁷ GeV. The roadmap includes cosmological and astrophysical observations, broad searches at the energy and intensity frontier, direct searches at low masses to cover relic abundance motivated scenarios, building a suite of axion searches, and pursuing indirect-detection experiments.

Lia Merminga and Fabiola Gianotti

Neutrinos also made the headlines in multi-messenger astrophysics experiments with the announcement by the KM3NeT ARCA (Astroparticle Research with Cosmics in the Abyss) collaboration of a muon-neutrino event that could be the most energetic ever found. The muon produced in the neutrino interaction is compatible with an energy of approximately 100 PeV, thus opening a fascinating window on astrophysical processes at energies well beyond the reach of colliders. The conference showed that we are now well within the era of multi-messenger astrophysics, via beautiful neutrinos, gamma rays and gravitational-wave results.

The conference saw new bridges across fields being built. The birth of collider-neutrino physics with the beautiful results from FASERν and SND fill the missing gap in neutrino–nucleon cross sections between accelerator neutrinos and neutrino astronomy. ALICE and LHCb presented new results on He3 production that complement the AMS results. Astrophysical He3 could signal the annihilation of dark matter. ALICE also presented a broad, comprehensive review of the progress in understanding strongly interacting matter at extreme energy densities.

The highlight in the field of observational cosmology was the recent data from DESI, the Dark Energy Spectroscopic Instrument in operation since 2021, which bring splendid new data on baryon acoustic oscillation measurements. These precious new data agree with previous indirect measurements of the Hubble constant, keeping the tension with direct measurements in excess of 2.5σ. In combination with CMB measurements, the DESI measurements also set an upper limit on the sum of neutrino masses at 0.072 eV, in tension with the inverted ordering of neutrino masses hypothesis. This limit is dependent on the cosmological model.

In everyone’s mind at the conference, and indeed across the domain of high-energy physics, it is clear that the field is at a defining moment in its history: we will soon have to decide what new flagship project to build. To this end, the conference organised a thrilling panel discussion featuring the directors of all the major laboratories in the world. “We need to continue to be bold and ambitious and dream big,” said Fermilab’s Lia Merminga, summarising the spirit of the discussion.

“As we have seen at this conference, the field is extremely vibrant and exciting,” said CERN’s Fabiola Gianotti at the conclusion of the panel. In these defining times for the future of our field, ICHEP 2024 was an important success. The progress in all areas is remarkable and manifest through the outstanding number of beautiful new results shown at the conference.

Watch out for hybrid pixels https://cerncourier.com/a/watch-out-for-hybrid-pixels/ Mon, 16 Sep 2024 14:11:02 +0000 https://preview-courier.web.cern.ch/?p=111070 Hybrid pixel detectors are changing the face of societal applications such as X-ray imaging.

In 1895, in a darkened lab in Würzburg, Bavaria, Wilhelm Röntgen noticed that a screen coated with barium platinocyanide fluoresced, despite being shielded from the electron beam of his cathode-ray tube. Hitherto undiscovered “X”-rays were being emitted as the electrons braked sharply in the tube’s anode and glass casing. Weeks later, Röntgen imaged his wife’s hand using a photographic plate, and medicine was changed forever. X-rays would be used for non-invasive diagnosis and treatment, and would inspire countless innovations in medical imaging. Röntgen declined to patent the discovery of X-ray imaging, believing that scientific advancements should benefit all of humanity, and donated the proceeds of the first Nobel Prize for Physics to his university.

One hundred years later, medical imaging would once again be disrupted – not in a darkened lab in Bavaria, but in the heart of the Large Hadron Collider (LHC) at CERN. The innovation in question is the hybrid pixel detector, which allows remarkably clean track reconstruction. When the technology is adapted for use in a medical context, by modifying the electronics at the pixel level, X-rays can be individually detected and their energy measured, leading to spectroscopic X-ray images that distinguish between different materials in the body. In this way, black and white medical imaging is being reinvented in full colour, allowing more precise diagnoses with lower radiation doses.

The next step is to exploit precise timing in each pixel. The benefits will be broadly felt. Electron microscopy of biological samples can be clearer and more detailed. Biomolecules can be more precisely identified and quantified by imaging time-of-flight mass spectrometry. Radiation doses can be better controlled in hadron therapy, reducing damage to healthy tissue. Ultra-fast changes can be captured in detail at synchrotron light sources. Hybrid pixel detectors with fast time readout are even being used to monitor quantum-mechanical processes.

Digital-camera drawbacks

X-ray imaging has come a long way since the photographic plate. Most often, the electronics work in the same way as a cell-phone camera. A scintillating material converts X-rays into visible photons that are detected by light-sensitive diodes connected to charge-integrating electronics. The charge from high-energy and low-energy photons is simply added up within the pixel in the same way a photographic film is darkened by X-rays.

A hybrid pixel detector and Medipix3 chip

Charge integration is the technique of choice in the flat-panel detectors used in radiology as large surfaces can be covered relatively cheaply, but there are several drawbacks. It’s difficult to collect the scintillation light from an X-ray on a single pixel, as it spreads out. And information about the energy of the X-rays is lost.

By the 1990s, however, LHC detector R&D was driving the development of the hybrid pixel detector, which could solve both problems by detecting individual photons. It soon became clear that “photon counting” could be as useful in a hospital ward as it would prove to be in a high-energy-physics particle detector. In 1997 the Medipix collaboration first paired semiconductor sensors with readout chips capable of counting individual X-rays.

Nearly three decades later, hybrid pixel detectors are making their mark in hospital wards. Parallel to the meticulous process of preparing a technology for medical applications in partnership with industry, researchers have continued to push the limits of the technology, in pursuit of new innovations and applications.

Photon counting

In a hybrid pixel detector, semiconductor sensor pixels are individually fixed to readout chips by an array of bump bonds – tiny balls of solder that permit the charge signal in each sensor pixel to be passed to each readout pixel (see “Hybrid pixels” figure). In these detectors, low-noise pulse-processing electronics take advantage of the intrinsic properties of semiconductors to provide clean track reconstruction even at high rates (see “Semiconductor subtlety” panel).

Since silicon detectors are relatively transparent to the X-ray energies used in medical imaging (approximately 20 to 140 keV), denser sensor materials with higher stopping power are required to capture every photon passing through the patient. This is where hybrid pixel detectors really come into their own. For X-ray photons with an energy above about 20 keV, a highly absorbing material such as cadmium telluride can be used in place of the silicon used in the LHC experiments. Provided precautions are taken to deal with charge sharing between pixels, the number of X-rays in every energy bin can be recorded, allowing each pixel to measure the spectrum of the interacting X-rays.
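
In simplified form, the per-pixel counting logic amounts to comparing each charge pulse with a handful of programmable thresholds and incrementing the corresponding counter – a sketch of the idea only; the Medipix-family chips implement this in per-pixel hardware, together with charge-summing circuitry to handle the charge sharing mentioned above:

THRESHOLDS_keV = [20, 40, 60, 80]   # programmable energy thresholds (illustrative values)

def count_photons(pulse_energies_keV):
    """Bin each pulse into the highest energy bin whose threshold it exceeds."""
    counters = [0] * len(THRESHOLDS_keV)
    for energy in pulse_energies_keV:
        for i in reversed(range(len(THRESHOLDS_keV))):
            if energy >= THRESHOLDS_keV[i]:
                counters[i] += 1
                break
        # pulses below the lowest threshold (e.g. electronic noise) are simply ignored
    return counters

print(count_photons([25, 35, 55, 70, 90, 15, 62]))   # -> [2, 1, 2, 1]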

Semiconductor subtlety

In insulators, the conduction band is far above the energy of electrons in the valence band, making it difficult for current to flow. In conductors, the two bands overlap and current flows with little resistance. In semiconductors, the gap is just a couple of electron-volts. Passing charged particles, such as those created in the LHC experiments, promote thousands of valence electrons into the conduction band, creating positively charged “holes” in the valence band and allowing current to flow.

Hybrid pixel detector

Silicon has four valence electrons and therefore forms four covalent bonds with neighbouring atoms to fill up its outermost shell in silicon crystals. These crystals can be doped with impurities that either add additional electrons to the conduction band (n-type doping) or additional holes to the valence band (p-type doping). The silicon pixel sensors used at the LHC are made up of rectangular pixels doped with additional holes on one side coupled to a single large electrode doped with additional electrons on the rear (see “Pixel picture” figure).

In p-n junctions such as these, “depletion zones” develop at the pixel boundaries, where neighbouring electrons and holes recombine, generating a natural electric field. The depletion zones can be extended throughout the whole sensor by applying a strong “reverse-bias” field in the opposite direction. When a charged particle passes, electrons and holes are created as before, but thanks to the field a directed pulse of charge now flows across the bump bond into the readout chip. Charge collection is prompt, permitting the pixel to be ready for the next particle.
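
As a rough guide, the standard textbook approximation for an abrupt, uniformly doped junction (not a statement about any particular LHC sensor) gives the depleted depth as

W \simeq \sqrt{\frac{2\,\varepsilon_{\mathrm{Si}}\,\varepsilon_{0}\,V_{\mathrm{bias}}}{e\,N_{\mathrm{eff}}}},

where N_eff is the effective doping concentration. The sensor is “fully depleted” once W reaches the sensor thickness, at which point the whole bulk contributes to the prompt signal described above.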

In each readout pixel the detected charge pulse is compared with an externally adjustable threshold. If the pulse exceeds the threshold, its amplitude and timing can be measured. The threshold level is typically set to be many times higher than the electronic noise of the detection circuit, permitting noise-free images. Because of the intimate contact between the sensor and the readout circuit, the noise is typically less than a root-mean-square value of 100 electrons, and any signal higher than a threshold of about 500 electrons can be unambiguously detected. Pixels that are not hit remain silent.
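
A back-of-the-envelope Python check shows why these numbers translate into essentially noise-free images. It assumes purely Gaussian electronic noise, which is a simplification, and uses the figures quoted above:

import math

noise_rms = 100.0   # electrons (rms), as quoted above
threshold = 500.0   # electrons, as quoted above

# One-sided probability that noise alone crosses the threshold on a given sample.
p_noise_hit = 0.5 * math.erfc(threshold / (noise_rms * math.sqrt(2.0)))
print(f"chance of a noise-only hit: ~{p_noise_hit:.0e}")   # ~3e-07, a 5-sigma threshold

# For comparison, a minimum-ionising particle crossing ~300 um of silicon liberates
# roughly 80 electron-hole pairs per micron (a textbook figure), far above threshold.
print(f"typical LHC signal: ~{80 * 300} electrons")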

At the LHC, each passing particle liberates thousands of electrons, allowing clean images of the collisions to be taken even at very high rates. Hybrid pixels have therefore become the detector of choice in many large experiments where fast and clean images are needed, and are at the heart of the ATLAS, CMS and LHCb experiments. In cases where the event rates are lower, such as the ALICE experiment at the LHC and the Belle II experiment at SuperKEKB at KEK in Japan, it has now become possible to use “monolithic” active pixel detectors, where the sensor and readout electronics are implemented in the same substrate. In the future, as the semiconductor industry shifts to three-dimensional chip and wafer stacking, the distinction between hybrid and monolithic pixel detectors will be blurred.

Protocols regarding the treatment of patients are strictly regulated in the interest of safety, making it challenging to introduce new technologies. Therefore, in parallel with the development of successive generations of Medipix readout chips, a workshop series on the medical applications of spectroscopic X-ray detectors has been hosted at CERN. Now in its seventh edition (see “Threshold moment for medical photon counting”), the workshop gathers cross-disciplinary specialists ranging from the designers of readout chips to specialists at the large equipment suppliers, and from medical physicists all the way up to opinion-leading radiologists. Its role is to form and develop a community of practitioners from diverse fields willing to share knowledge – and, of course, reasonable doubts – in order to encourage the transition of spectroscopic photon counting from the lab to the clinic. CERN and the Medipix collaborations have played a pathfinding role in this community, exploring avenues well in advance of their introduction to medical practice.

The Medipix2 (1999–present), Medipix3 (2005–present) and Medipix4 (2016–present) collaborations are composed only of publicly funded research institutes and universities, which helps keep the development programmes driven by science. There have been hundreds of peer-reviewed publications and dozens of PhD theses written by the designers and users of the various chips. With the help of CERN’s Knowledge Transfer Office, several start-up companies have been created and commercial licences signed. This has led to many unforeseen applications and helped enormously with the dissemination of the technology. The publications of the clients of the industrial partners now represent a large share of the scientific outcome from these efforts, totalling hundreds of papers.

Spectroscopic X-ray imaging is now arriving in clinical practice. Siemens Healthineers were first to market in 2022 with the Naeotom Alpha photon counting CT scanner, and many of the first users have been making ground-breaking studies exploiting the newly available spectroscopic information in the clinical domain. CERN’s Medipix3 chip is at the heart of the MARS Bioimaging scanner, which brings unprecedented imaging performance to the point of patient care, opening up new patient pathways and saving time and money.

ASIC (application-specific integrated circuit) development is still moving forwards rapidly in the Medipix collaborations. For example, in the Medipix3 and Medipix4 chips, on-pixel circuitry mitigates the impact of X-ray fluorescence and charge diffusion in the semiconductor by summing up the charge in a localised region and allocating the hit to one pixel. The fine segmentation of the detector not only leads to unprecedented spatial resolution but also mitigates “hole trapping” – a common bugbear of the high-density sensor materials used in medical imaging, whereby photons of the same energy induce different charges according to their interaction depth in the sensor. Where the pixel size is significantly smaller than the perpendicular sensor thickness – as in the Medipix case – only one of the charge species (usually electrons) contributes to the measured charge, and no matter where the X-ray is deposited in the sensor thickness, the total charge detected is the same.
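
The charge-summing idea can be illustrated with a small Python sketch. The numbers and the 4 × 4 pixel map are made up, and the real on-pixel circuitry works with analogue sums rather than an offline array, but the allocation logic is the same: sum the charge in each 2 × 2 neighbourhood and credit the whole hit to the pixel anchoring the largest sum.

import numpy as np

# Hypothetical charge map (arbitrary units): one X-ray whose charge has
# diffused over a 2x2 group of pixels.
charge = np.zeros((4, 4))
charge[1, 1], charge[1, 2], charge[2, 1], charge[2, 2] = 40, 25, 20, 15

# Sum the charge in every 2x2 neighbourhood (a stand-in for the summing circuitry).
sums = charge[:-1, :-1] + charge[:-1, 1:] + charge[1:, :-1] + charge[1:, 1:]

# Allocate the full hit to the pixel with the largest local sum, so the photon
# is counted once, with its total charge, rather than split across four pixels.
winner = tuple(int(i) for i in np.unravel_index(np.argmax(sums), sums.shape))
print("hit assigned to pixel", winner, "with reconstructed charge", sums[winner])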

But photon counting is only half the story. Another parameter that has not yet been exploited in high-spatial-resolution medical imaging systems can also be measured at the pixel level.

A new dimension

In 2005, Dutch physicists working with gas detectors requested a modification that would permit each pixel to measure arrival times instead of counting photons. The Medipix2 collaboration agreed and designed a chip with three acquisition modes: photon counting, arrival time and time over threshold, which provides a measure of energy. The Timepix family of pixel-detector readout chips was born.

Xènia Turró using a Timepix-based thumb-drive detector

The most recent generations of Timepix chips, such as Timepix3 (released in 2016) and Timepix4 (released in 2022), stream hit information off chip as soon as it is generated – a significant departure from Medipix chips, which process hits locally, assuming them to be photons, and send only a spectroscopic image off chip. With Timepix, each time a charge exceeds the threshold, a packet of information is sent off chip that contains the coordinates of the hit pixel, the particle’s arrival time and the time over threshold (66 bits in total per hit). This allows offline reconstruction of individual clusters of hits, opening up a myriad of potential new applications.

One advantage of Timepix is that particle event reconstruction is not limited to photons. Cosmic muons leave a straight track. Low-energy X-rays interact in a point-like fashion, lighting up only a small number of pixels. Electrons interact with atomic electrons in the sensor material, leaving a curly track. Alpha particles deposit a large quantity of charge in a characteristic blob. To spark the imagination of young people, Timepix chips have been incorporated on a USB thumb drive that can be read out on a laptop computer (see “Thumb-drive detector” figure). The CERN & Society Foundation is raising funds to make these devices widely available in schools.
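
A toy Python sketch gives the flavour of this offline cluster reconstruction. It is not the analysis code used with real Timepix data, and the size and elongation cut-offs are arbitrary illustrative values, but it shows how simple shape measures separate the track types described above.

import numpy as np

def classify_cluster(pixels):
    """Rough shape-based classification of a cluster of (column, row) hits."""
    pts = np.array(pixels, dtype=float)
    if len(pts) <= 4:
        return "point-like (low-energy X-ray)"
    # Principal axes of the hit pattern: a straight track is very elongated.
    cov = np.cov((pts - pts.mean(axis=0)).T)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    elongation = evals[0] / max(evals[1], 1e-9)
    if elongation > 50:                     # arbitrary cut-off
        return "straight track (e.g. cosmic muon)"
    if len(pts) > 60:                       # arbitrary cut-off
        return "heavy blob (e.g. alpha particle)"
    return "curly track (electron)"

# Hypothetical clusters: a two-pixel photon hit and a 30-pixel straight line.
print(classify_cluster([(10, 10), (10, 11)]))
print(classify_cluster([(i, 2 * i) for i in range(30)]))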

Timepix chips have also been adapted to dose monitoring for astronauts. Following a calibration effort by the University of Houston, NASA and the Institute for Experimental and Applied Physics in Prague, a USB device identical to that used in classrooms precisely measures the doses experienced by flight crews in space. Timepix is now deployed on the International Space Station (see “Radiation monitoring” figure), the Artemis programme and several European space-weather studies, and will be deployed on the Lunar Gateway programme.

Stimulating innovation

Applications in science, industry and medicine are too numerous to mention in detail. In time-of-flight mass spectrometry, the vast number of channels allowed by Timepix promises new insights into biomolecules. Large-area time-resolved X-ray cameras are valuable at synchrotrons, where they have applications in structural biology, materials science, chemistry and environmental science. In the aerospace, manufacturing and construction industries, non-destructive X-ray testing using backscattering can probe the integrity of materials and structures while requiring access from one side only. Timepix chips also play a crucial role in X-ray diffraction for materials analysis and medical applications such as single-photon-emission computed tomography (SPECT), and beam tracking and dose-deposition monitoring in hadron therapy (see “Carbon therapy” figure). The introduction of noise-free hit streaming with timestamp precision down to 200 picoseconds has also opened up entirely new possibilities in quantum science, and early applications of Timepix3 in experiments exploring the quantum behaviour of particles are already being reported. We are just beginning to uncover the potential of these innovations.

Chris Cassidy working near the Timepix USB

It’s also important to note that applications of the Timepix chips are not limited to the readout of semiconductor pixels made of silicon or cadmium telluride. A defining feature of hybrid pixel detectors is that the same readout chip can be used with a variety of sensor materials and structures. In cases where visible photons are to be detected, an electron can be generated in a photocathode and then amplified using a micro-channel plate. The charge cloud from the micro-channel plate is then detected on a bare readout chip in much the same way as the charge cloud in a semiconductor sensor. Some gas-filled detectors are constructed using gas electron multipliers and micromegas foils, which amplify charge passing through holes in the foils. Timepix chips can be used for readout in place of the conventional pad arrays, providing much higher spatial and time resolution than would otherwise be available.

Successive generations of Timepix and Medipix chips have followed Moore’s law, permitting more and more circuitry to be fitted into a single pixel as the minimum feature size of transistors has shrunk. In the Timepix3 and Timepix4 chips, the data-driven architecture and on-pixel time stamping are the defining features. The digital circuitry of the pixel has become so complex that an entirely new approach to chip design – “digital-on-top” – was employed. These techniques were subsequently deployed in ASIC developments for the LHC upgrades.

Just as hybrid-pixel R&D at the LHC has benefitted societal applications, R&D for these applications now benefits fundamental research. Making highly optimised chips available to industry “off the shelf” can also save substantial time and effort in many applications in fundamental research, and the highly integrated R&D model whereby detector designers keep one foot in both camps generates creativity and the reciprocal sparking of ideas and sharing of expertise. Timepix3 is used as readout of the beam–gas-interaction monitors at CERN’s Proton Synchrotron and Super Proton Synchrotron, providing non-destructive images of the beams in real time for the first time. The chips are also deployed in the ATLAS and MoEDAL experiments at the LHC, and in numerous small-scale experiments, and Timepix3 know-how helped develop the VeloPix chip used in the upgraded tracking system for the LHCb experiment. Timepix4 R&D is now being applied to the development of a new generation of readout chips for future use at CERN, in applications where a time bin of 50 ps or less is desired.

Maria Martišíková and Laurent Kelleter

All these developments have relied on collaborating research organisations being willing to pool the resources needed to take strides into unexplored territory. The effort has been based on the solid technical and administrative infrastructure provided by CERN’s experimental physics department and its knowledge transfer, finance and procurement groups, and many applications have been made possible by hardware provided by the innovative companies that license the Medipix and Timepix chips.

With each new generation of chips, we have pushed the boundaries of what is possible by taking calculated risks ahead of industry. But the high-energy-physics community is under intense pressure, with overstretched resources. Can blue-sky R&D such as this be justified? We believe, in the spirit of Röntgen before us, that we have a duty to make our advancements available to a larger community than our own. Experience shows that when we collaborate across scientific disciplines and with the best in industry, the fruits lead directly back into advancements in our own community.

The post Watch out for hybrid pixels appeared first on CERN Courier.

Feature Hybrid pixel detectors are changing the face of societal applications such as X-ray imaging. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_DETECTOR_xray.jpg
Threshold moment for medical photon counting https://cerncourier.com/a/threshold-moment-for-medical-photon-counting/ Mon, 16 Sep 2024 09:01:26 +0000 https://preview-courier.web.cern.ch/?p=111160 The seventh workshop on Medical Applications of Spectroscopic X-ray Detectors was held at CERN from 15 to 18 April.

The post Threshold moment for medical photon counting appeared first on CERN Courier.

7th Workshop on Medical Applications of Spectroscopic X-ray Detectors participants

The seventh workshop on Medical Applications of Spectroscopic X-ray Detectors was held at CERN from 15 to 18 April. This year’s workshop brought together more than 100 experts in medical imaging, radiology, physics and engineering. The workshop focused on the latest advancements in spectroscopic X-ray detectors and their applications in medical diagnostics and treatment. Such detectors, whose origins are found in detector R&D for high-energy physics, are now experiencing a breakthrough moment in medical practice.

Spectroscopic X-ray detectors represent a significant advancement in medical imaging. Unlike traditional X-ray detectors that measure only the intensity of X-rays, these advanced detectors can differentiate the energies of X-ray photons. This enables enhanced tissue differentiation, improved tumour detection and advanced material characterisation, which may lead in certain cases to functional imaging without the need for radioactive tracers.

The technology has its roots in the 1980s and 1990s when the high-energy-physics community centred around CERN developed a combination of segmented silicon sensors and very large-scale integration (VLSI) readout circuits to enable precision measurements at unprecedented event rates, leading to the development of hybrid pixel detectors (see p37). In the context of the Medipix Collaborations, CERN has coordinated research on spectroscopic X-ray detectors including the development of photon-counting detectors and new semiconductor materials that offer higher sensitivity and energy resolution. By the late 1990s, several groups had proofs of concept, and by 2008, pre-clinical spectral photon-counting computed-tomography (CT) systems were under investigation.

Spectroscopic X-ray detectors offer unparalleled diagnostic capabilities, enabling more detailed imaging and earlier and more precise disease detection

In 2011, leading researchers in the field decided to bring together engineers, physicists and clinicians to help address the scientific, medical and engineering challenges associated with guiding the technology toward clinical adoption. In 2021, the FDA approval of Siemens Healthineers’ photon-counting CT scanner marked a significant milestone in the field of medical imaging, validating the clinical benefits of spectroscopic X-ray detectors. The mobile CT scanner, OmniTom Elite from NeuroLogica, approved in March 2022, also integrates photon-counting detector (PCD) technology. The 3D colour X-ray scanner developed by MARS Bioimaging, in collaboration with CERN and based on Medipix3 technology, has already shown significant promise in pre-clinical and clinical trials. Clinical trials of MARS scanners demonstrated applications in detecting acute fractures, evaluating fracture healing and assessing osseous integration at the bone–metal interface for fracture fixations and joint replacements. With more than 300 million CT scans being performed annually around the world, the potential impact for spectroscopic X-ray imaging is enormous, but technical and medical challenges remain, and the need for this highly specialised workshop continues.

The scientific presentations in the 2024 workshop covered the integration of spectroscopic CT in clinical workflows, addressed technical challenges in photon counting detector technology and explored new semiconductor materials for X-ray detectors. The technical sessions on detector physics and technology discussed new methodologies for manufacturing high-purity cadmium–zinc–tellurium semiconductor crystals and techniques to enhance the quantum efficiency of current detectors. Sessions on clinical applications and imaging techniques included case studies demonstrating the benefits of multi-energy CT in cardiology and neurology, and advances in using spectroscopic detectors for enhanced contrast agent differentiation. The sessions on computational methods and data processing covered the implementation of AI algorithms to improve image reconstruction and analysis, and efficient storage and retrieval systems for large-scale spectral imaging datasets. The sessions on regulatory and safety aspects focused on the regulatory pathway for new spectroscopic X-ray detectors, ensuring patient and operator safety with high-energy X-ray systems.

Enhancing patient outcomes

The field of spectroscopic X-ray detectors is rapidly evolving. Continued research, collaboration and innovation to enhance medical diagnostics and treatment outcomes will be essential. Spectroscopic X-ray detectors offer unparalleled diagnostic capabilities, enabling more detailed imaging and earlier and more precise disease detection, which improves patient outcomes. To stay competitive and meet the demand for precision medicine, medical institutions are increasingly adopting advanced imaging technologies. Continued collaboration among researchers, physicists and industry leaders will drive innovation, benefiting patients, healthcare providers and research institutions.

The post Threshold moment for medical photon counting appeared first on CERN Courier.

Meeting report The seventh workshop on Medical Applications of Spectroscopic X-ray Detectors was held at CERN from 15 to 18 April. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_FN_Medical_feature.jpg
Near-detector upgrade in place at T2K https://cerncourier.com/a/near-detector-upgrade-in-place-at-t2k/ Mon, 16 Sep 2024 08:51:57 +0000 https://preview-courier.web.cern.ch/?p=111103 The Tokai-to-Kamioka (T2K) collaboration has brought an upgraded near detector online.

The post Near-detector upgrade in place at T2K appeared first on CERN Courier.

Neutrino physics requires baselines both big and small, and neutrinos both artificial and astrophysical. One of the most prominent experiments of the past two decades is Tokai-to-Kamioka (T2K), which observes electron–neutrino appearance in an accelerator-produced muon–neutrino “superbeam” travelling coast to coast across Japan. To squeeze systematics in their hunt for leptonic CP violation, the collaboration recently brought online an upgraded near detector.

“The upgraded detectors are precision detectors for a precision-physics era,” says international co-spokesperson Kendall Mahn (Michigan State). “Our current systematic constraint is at the level of a few percent. To make progress we need to be able to probe regions we’ve not probed before.”

T2K studies the oscillations of 600 MeV neutrinos that have travelled 295 km from the J-PARC accelerator complex in Tokai to Super-Kamiokande – a 50 kton gadolinium-doped water-Cherenkov detector in Kamioka that has also been used to perform seminal measurements of atmospheric neutrino oscillations and constrain proton decay. Since the start of data taking in 2010, the collaboration has made the first observation of the appearance of a neutrino flavour due to quantum-mechanical oscillations and the most precise measurement of the θ23 parameter in the neutrino mixing matrix. As well as placing limits on sterile-neutrino oscillation parameters, the collaboration has constrained a wide range of the parameters that describe neutrino interactions with matter. The uncertainties of such measurements typically limit the precision of fits to the fundamental parameters of the three-neutrino paradigm, and constraining neutrino-interaction systematics is the main purpose of near detectors in superbeam experiments such as T2K and NOvA, and the future experiments Hyper-Kamiokande and DUNE.

T2K’s near-detector upgrade improves the acceptance and precision of particle reconstruction for neutrino interactions. A new fine-grained “SuperFGD” detector (see pink rectangle, left, on “New and improved” image) serves as the target for neutrino interactions in the new experimental phase. Comprising two million 1 cm³ cubes of scintillator strung with optical fibres, SuperFGD lowers the detection threshold for protons ejected from nuclei to 300 MeV/c, improving the reconstruction of neutrino energy. Two new time-projection chambers flank it above and below to more closely mimic the isotropic reconstruction of Super-Kamiokande. Finally, six new scintillator planes suppress particle backgrounds from outside the detector by measuring time of flight.

Following construction and testing at CERN’s neutrino platform, the new detectors were successfully integrated in the experiment’s global DAQ and slow-control system. The first neutrino-beam data with the fully upgraded detector was collected in June, with the collaboration also benefitting from an upgraded neutrino beam with 50% greater intensity. Beam intensity is set to increase further in the coming years, in preparation for commissioning the new 260 kton Hyper-Kamiokande water Cherenkov detector. Cavern excavation is underway in Kamioka, with first data-taking planned for 2027.

But much can already be accomplished in the new phase of the T2K experiment, says the team. As well as improving precision on θ23 and another key mixing parameter Δm²23, and refining the theoretical models used in neutrino generators, T2K will improve its fit to δCP, the fundamental parameter describing CP violation in the leptonic sector. Measuring its value could shed light on the question of why the universe is dominated by matter.

“T2K’s current best fit to δCP is –1.97,” says Mahn. “We expect to be able to observe leptonic CP violation at 3σ significance if the true value of δCP is –π/2.”

The post Near-detector upgrade in place at T2K appeared first on CERN Courier.

News The Tokai-to-Kamioka (T2K) collaboration has brought an upgraded near detector online. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_NA_T2K.jpg
Six rare decays at the energy frontier https://cerncourier.com/a/six-rare-decays-at-the-energy-frontier/ Fri, 05 Jul 2024 09:31:53 +0000 https://preview-courier.web.cern.ch/?p=110780 Andrzej Buras explains how two rare kaon decays and four rare B-meson decays will soon probe for new physics beyond the reach of direct searches at colliders.

The post Six rare decays at the energy frontier appeared first on CERN Courier.

Thanks to its 13.6 TeV collisions, the LHC directly explores distance scales as short as 5 × 10–20 m. But the energy frontier can also be probed indirectly. By studying rare decays, distance scales as small as a zeptometre (10–21 m) can be resolved, probing the existence of new particles with masses as high as 100 TeV. Such particles are out of the reach of any high-energy collider that could be built in this century.

The key concept is the quantum fluctuation. Just because a collision doesn’t have enough energy to bring a new particle into existence does not mean that a very heavy new particle cannot inform us about its existence. Thanks to Heisenberg’s uncertainty principle, new particles could be virtually exchanged between the other particles involved in the collisions, modifying the probabilities for the processes we observe in our detectors. The effect of massive new particles could be unmistakable, giving physicists a powerful tool for exploring more deeply into the unknown than accelerator technology and economic considerations allow direct searches to go.
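
The distance scales quoted above follow directly from this picture: a probe – real or virtual – of energy E resolves structure down to roughly ħc/E. A quick order-of-magnitude check in Python (ignoring numerical factors of a few):

hbar_c = 197.327e6 * 1e-15   # eV*m, from hbar*c = 197.327 MeV*fm

def probe_scale(energy_eV):
    """Rough distance scale resolved by a probe of the given energy."""
    return hbar_c / energy_eV

print(f"13.6 TeV LHC collisions:  ~{probe_scale(13.6e12):.0e} m")   # ~1e-20 m
print(f"100 TeV virtual particle: ~{probe_scale(100e12):.0e} m")    # ~2e-21 m, the zeptometre scale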

The effect of massive new particles could be unmistakable

The search for new particles and forces beyond those of the Standard Model is strongly motivated by the need to explain dark matter, the huge range of particle masses from the tiny neutrino to the massive top quark, and the asymmetry between matter and antimatter that is responsible for our very existence. As direct searches at the LHC have not yet provided any clue as to what these new particles and forces might be, indirect searches are growing in importance. Studying very rare processes could allow us to see imprints of new particles and forces acting at much shorter distance scales than it is possible to explore at current and future colliders.

Anticipating the November Revolution

The charm quark is a good example. The story of its direct discovery unfolded 50 years ago, in November 1974, when teams at SLAC and MIT simultaneously discovered a charm–anticharm meson in particle collisions. But four years earlier, Sheldon Glashow, John Iliopoulos and Luciano Maiani had already predicted the existence of the charm quark thanks to the surprising suppression of the neutral kaon’s decay into two muons.

Neutral kaons are made up of a strange quark and a down antiquark, or vice versa. In the Standard Model, their decay to two muons can proceed most simply through the virtual exchange of two W bosons, one virtual up quark and a virtual neutrino. The trouble was that the rate for the neutral kaon decay to two muons predicted in this manner turned out to be many orders of magnitude larger than observed experimentally.

NA62 experiment

Glashow, Iliopoulos and Maiani (GIM) proposed a simple solution. With visionary insight, they hypothesised a new quark, the charm quark, which would totally cancel the contribution of the up quark to this decay if their masses were equal to each other. As the rate was non-vanishing and the charm quark had not yet been observed experimentally, they concluded that the mass of the charm quark must be significantly larger than that of the up quark.

Their hunch was correct. In early 1974, months before its direct discovery, Mary K Gaillard and Benjamin Lee predicted the charm quark’s mass by analysing another highly suppressed quantity, the mass difference in K0–K̄0 mixing.

As modifications to the GIM mechanism by new heavy particles are still a hot prospect for discovering new physics in the 2020s, the details merit a closer look. Years earlier, Nicola Cabibbo had correctly guessed that weak interactions act between up quarks and a mixture (d cos θ + s sin θ) of the down and strange quarks. We now know that charm quarks interact with the mixture (–d sin θ + s cos θ). This is just a rotation of the down and strange quarks through this Cabibbo angle. The minus sign causes the destructive interference observed in the GIM mechanism.
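
Written schematically (suppressing kinematic factors), the up- and charm-quark loops therefore enter the amplitude with opposite-sign Cabibbo factors,

\mathcal{A}(K^0 \to \mu^+\mu^-) \;\propto\; \sin\theta\,\cos\theta\left[\,F\!\left(m_u^2/M_W^2\right) - F\!\left(m_c^2/M_W^2\right)\right],

which vanishes exactly for m_u = m_c. The small but non-zero observed rate therefore pointed to a charm quark much heavier than the up quark.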

With the discovery of a third generation of quarks, quark mixing is now described by the Cabibbo–Kobayashi–Maskawa (CKM) matrix – a unitary three-dimensional rotation with complex phases that parameterise CP violation. Understanding its parameters may prove central to our ability to discover new physics this decade.

On to the 1980s

The story of indirect discoveries continued in the late 1980s, when the magnitude of B0d–B̄0d mixing implied the existence of a heavy top quark, which was confirmed in 1995, completing the third generation of quarks. The W, Z and Higgs bosons were also predicted well in advance of their discoveries. It’s only natural to expect that indirect searches for new physics will be successful at even shorter distance scales.

Belle II experiment at KEK

Rare weak decays of kaons and B mesons that are strongly suppressed by the GIM mechanism are expected to play a crucial role. Many channels of interest are predicted by the Standard Model to have branching ratios as low as 10–11, often being further suppressed by small elements of the CKM matrix. If the GIM mechanism is violated by new-physics contributions, these branching ratios – the fraction of times a particle decays that way – could be much larger.

Measuring suppressed branching ratios with respectable precision this decade is therefore an exciting prospect. Correlations between different branching ratios can be particularly sensitive to new physics and could provide the first hints of physics beyond the Standard Model. A good example is the search for the violation of lepton-flavour universality (CERN Courier May/June 2019 p33). Though hints of departures from muon–electron universality seem to be receding, hints that muon–tau universality may be violated still remain, and the measured branching ratios for B → K(K*)µ+µ– differ visibly from Standard Model predictions.

The first step in this indirect strategy is to search for discrepancies between theoretical predictions and experimental observables. The main challenge for experimentalists is the low branching ratios for the rare decays in question. However, there are very good prospects for measuring many of these highly suppressed branching ratios in the coming years.

Six channels for the 2020s

Six channels stand out today for their superb potential to observe new physics this decade. If their decay rates defy expectations, the nature of any new physics could be identified by studying the correlations between these six decays and others.

The first two channels are kaon decays: the measurements of K+ → π+νν by the NA62 collaboration at CERN (see “Needle in a haystack” image), and the measurement of KL → π0νν by the KOTO collaboration at J-PARC in Japan. The branching ratios for these decays are predicted to be in the ballpark of 8 × 10–11 and 3 × 10–11, respectively.

Independent observables

The second two are measurements of B → Kνν and B → K*νν by the Belle II collaboration at KEK in Japan. Branching ratios for these decays are expected to be much higher, in the ballpark of 10–5.

The final two channels, which are only accessible at the LHC, are measurements of the dimuon decays Bs → µ+µ– and Bd → µ+µ– by the LHCb, CMS and ATLAS collaborations. Their branching ratios are about 4 × 10–9 and 10–10 in the Standard Model. Though the decays B → K(K*)µ+µ– are also promising, they are less theoretically clean than these six.

The main challenge for theorists is to control quantum-chromodynamics (QCD) effects, both below 10–16 m, where strong interactions weaken, and in the non-perturbative region at distance scales of about 10–15 m, where quarks are confined in hadrons and calculations become particularly tricky. While satisfactory precision has been achieved at short-distance scales over the past three decades, the situation for non-perturbative computations is expected to improve significantly in the coming years, thanks to lattice QCD and analytic approaches such as dual QCD and chiral perturbation theory for kaon decays, and heavy-quark effective field theory for B decays.

Another challenge is that Standard Model predictions for the branching ratios require values for four CKM parameters that are not predicted by the Standard Model, and which must be measured using kaon and B-meson decays. These are the magnitude of the up-strange (Vus) and charm-bottom (Vcb) couplings and the CP-violating phases β and γ. The current precision on measurements of Vus and β is fully satisfactory, and the error on γ = (63.8 ± 3.5)° should be reduced to 1° by LHCb and Belle II in the coming years. The stumbling block is Vcb, where measurements currently disagree. Though experimental problems have not been excluded, the tension is thought to originate in QCD calculations. While measurements of exclusive decays to specific channels yield 39.21(62) × 10–3, inclusive measurements integrated over final states yield 41.96(50) × 10–3. This discrepancy makes the predicted branching ratios differ by 16% for the four B-meson decays, and by 25% and 35% for K+ → π+νν and KL → π0νν. These discrepancies are a disaster for the theorists who had succeeded over many years of work to reduce QCD uncertainties in these decays to the level of a few percent.
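
The size of these discrepancies simply reflects how steeply each branching ratio depends on Vcb. A few lines of Python – pure arithmetic on the numbers quoted in this article, not a theory calculation – back out the effective power-law dependence in each case:

import math

vcb_exclusive = 39.21e-3
vcb_inclusive = 41.96e-3
r = vcb_inclusive / vcb_exclusive   # ~1.07, i.e. a ~7% spread in Vcb

for label, spread in [("four B-meson decays", 0.16),
                      ("K+ -> pi+ nu nu",     0.25),
                      ("KL -> pi0 nu nu",     0.35)]:
    # If a branching ratio scales as |Vcb|^n, the induced spread is r**n - 1.
    n = math.log(1.0 + spread) / math.log(r)
    print(f"{label}: {spread:.0%} spread corresponds to BR ~ |Vcb|^{n:.1f}")

The kaon modes are hit hardest because their predicted rates rise with the highest effective powers of Vcb.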

One solution is to replace the CKM dependence of the branching ratios with observables where QCD uncertainties are under good control, for example: the mass differences in B0s–B̄0s and B0d–B̄0d mixing (∆Ms and ∆Md); a parameter that measures CP violation in K0–K̄0 mixing (εK); and the CP-asymmetry that yields the angle β. Fitting these observables to the experimental data avoids us being forced to choose between inclusive and exclusive values for the charm-bottom coupling, and avoids the 3.5° uncertainty on γ, which in this strategy is reduced to 1.6°. Uncertainty on the predicted branching ratios is thereby reduced to 6% and 9% for B → Kνν and B → K*νν, to 5% for the two kaon decays, and to 4% for Bs → µ+µ– and Bd → µ+µ–.

So what is the current experimental situation for the six channels? The latest NA62 measurement of K+ → π+νν is 25% larger than the Standard Model prediction. Its 36% uncertainty signals full compatibility at present, and precludes any conclusions about the size of new physics contributing to this decay. Next year, when the full analysis has been completed, this could turn out to be possible. It is unfortunate that the HIKE proposal was not adopted (CERN Courier May/June 2024 p7), as NA62’s expected precision of 15% could have been reduced to 5%. This could turn out to be crucial for the discovery of new physics in this decay.

The present upper bound on KL → π0νν from KOTO is still two orders of magnitude above the Standard Model prediction. This bound should be lowered by at least one order of magnitude in the coming years. As this decay is fully governed by CP violation, one may expect that new physics will impact it significantly more than CP-conserving decays such as K+ → π+νν.

Branching out from Belle

At present, the most interesting result concerns a 2023 update from Belle II to the measured branching ratio for B+ → K+νν (see “Interesting excess” image). The resulting central value from Belle II and BaBar is currently a factor of 2.6 above the Standard Model prediction. This has sparked many theoretical analyses around the world, but the experimental error of 30% once again does not allow for firm conclusions. Measurements of other charge and spin configurations of this decay are pending.

Finally, both dimuon B-meson decays are at present consistent with Standard Model predictions, but significant improvements in experimental precision could still reveal new physics at work, especially in the case of Bd.

Hypothetical future measurements of branching ratios

It will take a few years to conclude if new physics contributions are evident in these six branching ratios, but the fact that all are now predicted accurately means that we can expect to observe or exclude new physics in them before the end of the decade. This would be much harder if measurements of the Vcb coupling were involved.

So far, so good. But what if the observables that replaced Vcb and γ are themselves affected by new physics? How can they be trusted to make predictions against which rare decay rates can be tested?

Here comes some surprisingly good news: new physics does not appear to be required to simultaneously fit them using our new basis of observables ΔMd, εK and ΔMs, as they intersect at a single point in the Vcb–γ plane (see “No new physics” figure). This analysis favours the inclusive determination of Vcb and yields a value for γ that is consistent with the experimental world average and a factor of two more accurate. It’s important to stress, though, that non-perturbative four-flavour lattice-QCD calculations of ∆Ms and ∆Md by the HPQCD lattice collaboration played a key role here. It is crucial that another lattice QCD collaboration repeat these calculations, as the three curves cross at different points in three-flavour calculations that exclude charm.

Interesting years are ahead in the field of indirect searches for new physics

In this context, one realises the advantages of Vcb–γ plots compared to the usual unitarity-triangle plots, where Vcb is not seen and 1° improvements in the determination of γ are difficult to appreciate. In the late 2020s, determining Vcb and γ from tree-level decays will be a central issue, and a combination of Vcb-independent and Vcb-dependent approaches will be needed to identify any concrete model of new physics.

We should therefore hope that the tension between inclusive and exclusive determinations of Vcb will soon be conclusively resolved. Forthcoming measurements of our six rare decays may then reveal new physics at the energy frontier (see “New physics” figure). With a 1° precision measurement of γ on the horizon, and many Vcb-independent ratios available, interesting years are ahead in the field of indirect searches for new physics.

In 1676 Antonie van Leeuwenhoek discovered a microuniverse populated by bacteria, which he called animalcula, or little animals. Let us hope that we will, in this decade, discover new animalcula on our flavour expedition to the zeptouniverse.

The post Six rare decays at the energy frontier appeared first on CERN Courier.

Feature Andrzej Buras explains how two rare kaon decays and four rare B-meson decays will soon probe for new physics beyond the reach of direct searches at colliders. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_FLAVOUR_frontis.jpg
How to democratise radiation therapy https://cerncourier.com/a/how-to-democratise-radiation-therapy/ Fri, 05 Jul 2024 09:27:03 +0000 https://preview-courier.web.cern.ch/?p=110863 Manjit Dosanjh and Steinar Stapnes tell the Courier about the need to disrupt the market for a technology that is indispensable when treating cancer.

The post How to democratise radiation therapy appeared first on CERN Courier.

How important is radiation therapy to clinical outcomes today?
Manjit Dosanjh

Manjit Fifty to 60% of cancer patients can benefit from radiation therapy for cure or palliation. Pain relief is also critical in low- and middle-income countries (LMICs) because by the time tumours are discovered it is often too late to cure them. Radiation therapy typically accounts for 10% of the cost of cancer treatment, but more than half of the cure, so it’s relatively inexpensive compared to chemotherapy, surgery or immunotherapy. Radiation therapy will be tremendously important for the foreseeable future.

What is the state of the art?

Manjit The most precise thing we have at the moment is hadron therapy with carbon ions, because the Bragg peak is very sharp. But there are only 14 facilities in the whole world. It’s also hugely expensive, with each machine costing around $150 million (M). Proton therapy is also attractive, with each proton delivering about a third of the radiobiological effect of a carbon ion. The first proton patient was treated at Berkeley in September 1954, in the same month CERN was founded. Seventy years later, we have about 130 machines and we’ve treated 350,000 patients. But the reality is that we have to make the machines more affordable and more widely available. Particle therapy with protons and hadrons probably accounts for less than 1% of radiation-therapy treatments whereas roughly 90 to 95% of patients are treated using electron linacs. These machines are much less expensive, costing between $1M and $5M, depending on the model and how good you are at negotiating.

Most radiation therapy in the developing world is delivered by cobalt-60 machines. How do they work?

Manjit A cobalt-60 machine treats patients using a radioactive source. Cobalt has a half-life of just over five years, so patients have to be treated for longer and longer to be given the same dose as the cobalt-60 ages, which is a hardship for them and reduces the number of patients who can be treated. Linacs are superior because you can take advantage of advanced treatment options that target the tumour using focusing, multi-beams and imaging. You come in from different directions and energies, and you can paint the tumour with precision. To the best extent possible, you can avoid damaging healthy tissue. And the other thing about linacs is that once you turn it off there’s no radiation anymore, whereas cobalt machines present a security risk. One reason we’ve got funding from the US Department of Energy (DOE) is because our work supports their goal of reducing global reliance on high-activity radioactive sources through the promotion of non-radioisotopic technologies. The problem was highlighted by the ART (access to radiotherapy technologies) study I led for the International Cancer Expert Corps (ICEC) on the state of radiation therapy in former Soviet Union countries. There, the legacy has always been cobalt. Only three of the 11 countries we studied have had the resources and knowledge to be able to go totally to linacs. Most still have more than 50% cobalt radiation therapy.
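
The consequence of that half-life is easy to quantify. A short Python illustration, using the standard cobalt-60 half-life of 5.27 years and assuming nothing else about the treatment changes:

import math

half_life_years = 5.27   # cobalt-60

def treatment_time_factor(source_age_years):
    """Factor by which irradiation time must grow to deliver the same dose."""
    return math.exp(math.log(2.0) * source_age_years / half_life_years)

for age in (0, 2, 5, 8):
    print(f"source age {age} years: treatment time x {treatment_time_factor(age):.2f}")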

The kick-off meeting for STELLA took place at CERN from 29 to 30 May. How will the project work?

Manjit STELLA stands for Smart Technology to Extend Lives with Linear Accelerators. We are an international collaboration working to increase access to radiation therapy in LMICs, and in rural regions in high-income countries. We’re working to develop a linac that is less expensive, more robust and, in time, less costly to operate, service and maintain than currently available options.

Steinar Stapnes

Steinar $1.75M funding from the DOE has launched an 18-month “pre-design” study. ICEC and CERN will collaborate with the universities of Oxford, Cambridge and Lancaster, and a network of 28 LMICs that advise and guide us, providing vital input on their needs. We’re not going to build a radiation-therapy machine, but we will specify it to such a level that we can have informed discussions with industry partners, foundations, NGOs and governments who are interested in investing in developing lower-cost and more robust solutions. The next steps, including prototype construction, will require a lot more funding.

What motivates the project?

Steinar The basic problem is that access to radiation therapy in LMICs is embarrassingly limited. Most technical developments are directed towards high-income countries, ultimately profiting the rich people in the world – in other words, ourselves. At present, only 10% of patients in LMICs have access to radiation therapy.

We’re working to develop a linac that is less expensive, more robust and less costly to operate, service and maintain than currently available options

Manjit The basic design of the linac hasn’t changed much in 70 years. Despite that, prices are going up, and the cost of service contracts and software upgrades is very high. Currently, we have around 420 machines in Africa, many of which are down for long intervals, which often impacts treatment outcomes. Often, a hospital can buy the linac but they can’t afford the service contract or repairs, or they don’t have staff with the skills to maintain them. I was born in a small village with no gas, electricity or water. I wasn’t supposed to go to school because girls didn’t. I was fortunate to have got an education that enabled me to have a better life with access to the healthcare treatments that I need. I look at this question from the perspective of how we can make radiation therapy available around the world in places such as where I’m originally from.

What’s your vision for the STELLA machine?

Steinar We want to get rid of the cobalt machines because they are not as effective as linacs for cancer treatment and they are a security risk. Hadron-therapy machines are more costly, but they are more precise, so we need to make them more affordable in the future. As Manjit said, globally 90 or 95% of radiation treatments are given by an electron linac, most often running at 6 MeV. In a modern radiation therapy facility today, such linacs are not developing so fast. Our challenge is to make them more reliable and serviceable. We want to develop a workhorse radiation therapy system that can do high-quality treatment. The other, perhaps more important, key parts are imaging and software. CERN has valuable experience here because we build and integrate a lot of detector systems including readout and data-analysis. From a certain perspective, STELLA will be an advanced detector system with an integrated linac.

Are any technical challenges common to both STELLA and to projects in fundamental physics?

Steinar The early and remote prediction of faults is one. This area is developing rapidly, and it would be very interesting for us to deploy this on a number of accelerators. On the detector and sensor side, we would like to make STELLA easily upgradeable, and some of these upgrades could be very much linked to what we want to do for our future detectors. This can increase the industrial base for developing these types of detectors as the medical market is very large. Software can also be interesting, for example for distributed monitoring and learning.

Where are the biggest challenges in bringing STELLA to market?

Steinar We must make medical linacs open in terms of hardware. Hospitals with local experts must be able to improve and repair the system. It must have a long lifetime. It needs to be upgradeable, particularly with regard to imaging, because detector R&D and imaging software are moving quickly. We want it to be open in terms of software, so that we can monitor the performance of the system, predict faults, and do treatment planning off site using artificial intelligence. Our biggest contribution will be to write a specification for a system where we “enforce” this type of open hardware and open software. Everything we do in our field relies on that open approach, which allows us to integrate the expertise of the community. That’s something we’re good at at CERN and in our community. A challenge for STELLA is to build in openness while ensuring that the machines can remain medically qualified and operational at all times.

How will STELLA disrupt the model of expensive service contracts and lower the cost of linacs?

Steinar This is quite a complex area, and we don’t know the solution yet. We need to develop a radically different service model so that developing countries can afford to maintain their machines. Deployment might also need a different approach. One of the work packages of this project is to look at different models and bring in expertise on new ideas. The challenges are not unique to radiation therapy. In the next 18 months we’ll get input from people who’ve done similar things.

A medical linac at the Genolier Clinic

Manjit Gavi, the global alliance for vaccines, was set up 24 years ago to save the millions of children who were dying every year from vaccine-preventable diseases such as measles, TB, tetanus and rubella, because the vaccines were not reaching children in poorer parts of the world, especially Africa. Before, people were dying of these diseases; now they get a vaccination and live. Vaccines and radiation therapy are totally different technologies, but we may need to think that way to really make a critical difference.

Steinar There are differences with respect to vaccine development. A vaccine is relatively cheap, whereas a linac costs millions of dollars. The diseases addressed by vaccines affect a lot of children, more so than cancer, so the patients have a different demographic. But nonetheless, the fact is that there was a group of countries and organisations who took this on as a challenge, and we can learn from their experiences.

Manjit We would like to work with the UN on their efforts to get rid of the disparities and focus on making radiation therapy available to the 70% of the world that doesn’t have access. To accomplish that, we need global buy-in, especially from the countries who are really suffering, and we need governmental, private and philanthropic support to do so.

What’s your message to policymakers reading this who say that they don’t have the resources to increase global access to radiation therapy?

Steinar Our message is that this is a solvable problem. The world needs roughly 5000 machines at $5M or less each. On a global scale this is absolutely solvable. We have to find a way to spread out the technology and make it available for the whole world. The problem is very concrete. And the solution is clear from a technical standpoint.

Manjit The International Atomic Energy Agency (IAEA) have said that the world needs one of these machines for every 200 to 250 thousand people. Globally, we have a population of 8 billion. This is therefore a huge opportunity for businesses and a huge opportunity for governments to improve the productivity of their workforces. If patients are sick they are not productive. Particularly in developing countries, patients are often of a working economic age. If you don’t have good machines and early treatment options for these people, not only are they not producing, but they’re going to have to be taken care of. That’s an economic burden on the health service and there is a knock-on effect on agriculture, food, the economy and the welfare of children. One example is cervical cancer. Nine out of 10 deaths from cervical cancer are in developing countries. For every 100 women affected, 20 to 30 children die because they don’t have family support.

How can you make STELLA attractive to investors?

Steinar Our goal is to be able to discuss the project with potential investor partners – and not only in industry but also governments and NGOs, because the next natural step will be to actually build a prototype. Ultimately, this has to be done by industry partners. We likely cannot rely on them to completely fund this out of their own pockets, because it’s a high-risk project from a business point of view. So we need to develop a good business model and find government and private partners who are willing to invest. The dream is to go into a five-year project after that.

We need to develop a good business model and find government and private partners who are willing to invest

Manjit It’s important to remember that this opportunity is not only linked to low-income countries. One in two UK citizens will get cancer in their lifetime, but according to a study that came out in February, only 25 to 28% of UK citizens have adequate access to radiation therapy. This is also an opportunity for young people to join an industrial system that could actually solve this problem. Radiation therapy is one of the most multidisciplinary fields there is, all the way from accelerators to radio-oncology and everything in between. The young generation is altruistic. This will capture their spirit and imagination.

Can STELLA help close the radiation-therapy gap?

Manjit When the IAEA first visualised radiation-therapy inequalities in 2012, it raised awareness, but it didn’t move the needle. That’s because it’s not enough to just train people. We also need more affordable and robust machines. If in 10 or 20 years people start getting treatment because they are sick, not because they’re dying, that would be a major achievement. We need to give people hope that they can recover from cancer.

The post How to democratise radiation therapy appeared first on CERN Courier.

Opinion Manjit Dosanjh and Steinar Stapnes tell the Courier about the need to disrupt the market for a technology that is indispensable when treating cancer. https://cerncourier.com/wp-content/uploads/2024/07/1106240_041-scaled.jpg
A gold mine for neutrino physics https://cerncourier.com/a/a-gold-mine-for-neutrino-physics/ Fri, 05 Jul 2024 08:58:45 +0000 https://preview-courier.web.cern.ch/?p=110791 In February this year, the DUNE experiment completed the excavation of three enormous caverns 1.5 kilometres below the surface at the new Sanford Underground Research Facility.

The post A gold mine for neutrino physics appeared first on CERN Courier.

In 1968, deep underground in the Homestake gold mine in South Dakota, Ray Davis Jr. observed too few electron neutrinos emerging from the Sun. The reason, we now know, is that many had changed flavour in flight, thanks to tiny unforeseen masses.

At the same time, Steven Weinberg and Abdus Salam were carrying out major construction work on what would become the Standard Model of particle physics, building the Higgs mechanism into Sheldon Glashow’s unification of the electromagnetic and weak interactions. The Standard Model is still bulletproof today, with one proven exception: the nonzero neutrino masses for which Davis’s observations were in hindsight the first experimental evidence.

Today, neutrinos are still one of the most promising windows into physics beyond the Standard Model, with the potential to impact many open questions in fundamental science (CERN Courier May/June 2024 p29). One of the most ambitious experiments to study them is currently taking shape in the same gold mine as Davis’s experiment more than half a century before.

Deep underground

In February this year, the international Deep Underground Neutrino Experiment (DUNE) completed the excavation of three enormous caverns 1.5 kilometres below the surface at the new Sanford Underground Research Facility (SURF) in the Homestake mine. 800,000 tonnes of rock have been excavated over two years to reveal an underground campus the size of eight soccer fields, ready to house four 17,500 tonne liquid–argon time-projection chambers (LArTPCs). As part of a diverse scientific programme, the new experiment will tightly constrain the working model of three massive neutrinos, and possibly even disprove it.

DUNE will measure the disappearance of muon neutrinos and the appearance of electron neutrinos over 1300 km and a broad spectrum of energies. Given the long journey of its accelerator-produced neutrinos from the Long Baseline Neutrino Facility (LBNF) at Fermilab in Illinois to SURF in South Dakota, DUNE will be uniquely sensitive to asymmetries between the appearance of electron neutrinos and antineutrinos. One predicted asymmetry will be caused by the presence of electrons and the absence of positrons in the Earth’s crust. This asymmetry will probe neutrino mass ordering – the still unknown ordering of narrow and broad mass splittings between the three tiny neutrino masses. In its first phase of operation, DUNE will definitively establish the neutrino mass ordering regardless of other parameters.
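
For orientation, the baseline and beam energy are tied together by the standard two-flavour oscillation formula, P ≈ sin²(2θ) sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. A minimal Python estimate – vacuum oscillations only, with a representative mass splitting; the matter effects and CP-violating terms discussed here modify the full three-flavour probability:

import math

L_km = 1300.0    # Fermilab to SURF baseline
dm2 = 2.5e-3     # eV^2, representative atmospheric mass splitting (assumed value)

def p_osc(E_GeV, sin2_2theta=1.0):
    """Two-flavour vacuum oscillation probability (illustrative only)."""
    return sin2_2theta * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# The appearance probability first peaks where 1.27 * dm2 * L / E = pi/2.
E_peak = 1.27 * dm2 * L_km / (math.pi / 2.0)
print(f"first oscillation maximum near {E_peak:.1f} GeV")   # ~2.6 GeV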

The field cage of a prototype liquid–argon time-projection chamber

If CP symmetry is violated, DUNE will then observe a second asymmetry between electron neutrinos and antineutrinos, which by experimental design is not degenerate with the first asymmetry. Potentially the first evidence for CP violation by leptons, this measurement will be an important experimental input to the fundamental question of how a matter–antimatter asymmetry developed in the early universe.

If CP violation is near maximal, DUNE will observe it at 3σ (99.7% confidence) in its first phase. In DUNE and LBNF’s recently reconceptualised second phase, which was strongly endorsed by the US Department of Energy’s Particle Physics Project Prioritization Panel (P5) in December (CERN Courier January/February 2024 p7), 3σ sensitivity to CP violation will be extended to more than 75% of possible values of δCP, the complex phase that parameterises this effect in the three-massive-neutrino paradigm.

Combining DUNE’s measurements with those by fellow next-generation experiments JUNO and Hyper-Kamiokande will test the three-flavour paradigm itself. This paradigm rotates three massive neutrinos into the mixtures that interact with charged leptons via the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix, which features three angles in addition to δCP.

As well as promising world-leading resolution on the PMNS angle θ₂₃, DUNE’s measurements of θ₁₃ and the Δm²₃₂ mass splitting will be different and complementary to those of JUNO in ways that could be sensitive to new physics. JUNO, which is currently under construction in China, will operate in the vicinity of a flux of lower-energy electron antineutrinos from nuclear reactors. DUNE and Hyper-Kamiokande, which is currently under construction in Japan, will both study accelerator-produced sources of muon neutrinos and antineutrinos, though using radically different baselines, energy spectra and detector designs.
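
To make the three-flavour formalism concrete, the short Python sketch below builds a PMNS matrix from the three mixing angles and δCP and evaluates the vacuum νμ → νe appearance probability at a DUNE-like baseline and energy. The mixing angles, mass splittings and kinematic inputs are rough, illustrative values rather than collaboration numbers, and matter effects – central to DUNE’s mass-ordering sensitivity – are deliberately omitted to keep the sketch short.

```python
import numpy as np

def pmns(th12, th13, th23, dcp):
    """PMNS matrix in the standard parameterisation U = R23 · U13(δCP) · R12."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * dcp)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * dcp), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ U13 @ R12

def p_mu_to_e(E_GeV, L_km, dcp, dm21=7.4e-5, dm31=2.5e-3):
    """Vacuum P(νμ→νe); mass splittings in eV², baseline in km, energy in GeV."""
    U = pmns(0.59, 0.15, 0.84, dcp)           # illustrative mixing angles (rad)
    m2 = np.array([0.0, dm21, dm31])          # m_i² relative to m_1² (eV²)
    phase = np.exp(-2j * 1.267 * m2 * L_km / E_GeV)   # exp(-i m² L / 2E)
    amp = np.sum(np.conj(U[1, :]) * U[0, :] * phase)  # row 0 = νe, row 1 = νμ
    return abs(amp) ** 2

# DUNE-like numbers: 1300 km baseline, 2.5 GeV neutrinos
for d in (0.0, np.pi / 2, -np.pi / 2):
    print(f"delta_CP = {d:+.2f} rad -> P(numu->nue) = {p_mu_to_e(2.5, 1300, d):.3f}")
```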

Innovative and impressive

DUNE’s detector technology is innovative and impressive, promising millimetre-scale precision in imaging the interactions of neutrinos from accelerator and astrophysical sources (see “Millimetre precision” image). The argon target provides unique sensitivity to low-energy electron neutrinos from supernova bursts, while the detectors’ imaging capabilities will be pivotal when searching for beyond-the-Standard-Model physics such as dark matter, sterile-neutrino mixing and non-standard neutrino interactions.

First proposed by Nobel laureate Carlo Rubbia in 1977, LArTPC technology demonstrated its effectiveness as a neutrino detector in the ICARUS T600 experiment at Gran Sasso more than a decade ago, and more recently in the MicroBooNE experiment at Fermilab. Fermilab’s short-baseline neutrino programme now includes ICARUS and the new Short Baseline Neutrino Detector, which is due to begin taking neutrino data this year.

A charged pion ejects a proton

The first phase of DUNE will construct one LArTPC in each of the two detector caverns, with the second phase adding an additional detector in each. A central utility cavern between the north and south caverns will house infrastructure to support the operation of the detectors.

Following excavation by Thyssen Mining, final concrete work was completed in all the underground caverns and drifts, and the installation of power, lighting, plumbing, heating, ventilation and air conditioning is underway. Ninety per cent of the subcontracts for the installation of the civil infrastructure have already been awarded, with LBNF and DUNE’s economic impact in Illinois and South Dakota estimated to be $4.3 billion through fiscal years 2022 to 2030.

Once the caverns are prepared, two large membrane cryostats will be installed to house the detectors and their liquid argon. Shipment of material for the first of the two cryostats being provided by CERN is underway, with the first of approximately 2000 components having arrived at SURF in January; the remainder of the steel for the first cryostat was due to have been shipped from its port in Spain by the end of May. The manufacture of the second cryostat by Horta Coslada is ongoing (see “Cryostat creation” image).

Procedures for lifting and manipulating the components will be tested in South Dakota in spring 2025, allowing the collaboration to ensure that it can safely and efficiently handle bulky components with challenging weight distributions in an environment where clearances can be as little as three inches (about 7.5 cm) on either side. Lowering detector components down the Homestake mine’s Ross shaft will take four months.

Two configurations

The two far-detector modules needed for phase one of the DUNE experiment will use the same LArTPC technology, though with different anode and high-voltage configurations. A “horizontal-drift” far detector will use 150 anode plane assemblies (APAs), each measuring 6 m by 2.3 m. Each will be wound with 4000 copper–beryllium wires, 150 μm in diameter, to collect ionisation signals from neutrino interactions with the argon.

A section of the second cryostat for DUNE

A second “vertical-drift” far detector will instead use charge readout planes (CRPs) – printed circuit boards perforated with an array of holes to capture the ionisation signals. Here, a horizontal cathode plane will divide the detector into two vertically stacked volumes. This highly modular design yields a slightly larger instrumented volume and is simpler and more cost-effective to construct and install. A small amount of xenon doping will significantly enhance photo detection, allowing more light to be collected beyond a drift length of 4 m.

The construction of the horizontal-drift APAs is well underway at STFC Daresbury Laboratory in the UK and at the University of Chicago in the US. Each APA takes several weeks to produce, motivating the parallelisation of production across five machines in Daresbury and one in Chicago. Each machine automates the winding of 24 km of wire onto each APA (see “Wind it up” image). Technicians then solder thousands of joints and use a laser system to ensure the wires are all wound to the required tension.
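
A quick consistency check on the quoted numbers, sketched below under the assumption that each of the 4000 wires spans roughly the 6 m dimension of an APA, reproduces the 24 km of wire per assembly and gives a feel for the total length to be wound across all 150 APAs.

```python
# Rough bookkeeping for the horizontal-drift anode plane assemblies (APAs).
# Assumes each wire spans roughly the 6 m dimension of a 6 m x 2.3 m APA.
wires_per_apa = 4000
wire_length_m = 6.0                  # assumed average wire length
n_apas = 150

per_apa_km = wires_per_apa * wire_length_m / 1000
total_km = per_apa_km * n_apas
print(f"wire per APA  ~ {per_apa_km:.0f} km")   # ~24 km, matching the text
print(f"wire in total ~ {total_km:.0f} km")     # ~3600 km across 150 APAs
```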

Two large ProtoDUNE detectors at CERN are an essential part of developing and validating DUNE’s detector design. Four APAs are currently installed in a horizontal-drift prototype that will take data this summer as a final validation of the design of the full detector. A vertical-drift prototype (see “Vertical drift” image) will then validate the production of CRP anodes and optimise their electronics. A full-scale test of vertical-drift-detector installation will take place at CERN later this year.

Phase transition

Alongside the deployment of two additional far-detector modules, phase two of the DUNE experiment will include an increase in beam power beyond 2 MW and the deployment of a more capable near detector (MCND) featuring a magnetised high-pressure gaseous-argon TPC. These enhancements pursue increased statistics, lower energy thresholds, better energy resolution and lower intrinsic backgrounds. They are key to DUNE’s measurement of the parameters governing long-baseline neutrino oscillations, and will expand the experiment’s physics scope, including searches for anomalous tau-neutrino appearance, long-lived particles, low-mass dark matter and solar neutrinos.

A winding machine producing a ProtoDUNE anode plane assembly

Phase-one vertical-drift technology is the starting point for phase-two far-detector R&D – a global programme under ECFA in Europe and CPAD in the US that seeks to reduce costs and improve performance. Charge-readout R&D includes improving charge-readout strips, 3D pixel readout and 3D readout using high-performance fast cameras. Light-readout R&D seeks to maximise light coverage by integrating bare silicon photomultipliers and photoconductors into the detector’s field-cage structure.

A water-based liquid scintillator module capable of separately measuring scintillation and Cherenkov light is currently being explored as a possible alternative technology for the fourth “module of opportunity”. This would require modifications to the near detector to include corresponding non-argon targets.

Intense work

At Fermilab, site preparation work is already underway for LBNF, and construction will begin in 2025. The project will produce the world’s most intense beam of neutrinos. Its wide-band beam will cover more than one oscillation period, allowing unique access to the shape of the oscillation pattern in a long-baseline accelerator-neutrino experiment.

LBNF will need modest upgrades to the beamline to handle the 2 MW beam power from the upgrade to the Fermilab accelerator complex, which was recently endorsed by P5. The bigger challenge to the facility will be the proton-target upgrades needed for operation at this beam power. R&D is now taking place at Fermilab and at the Rutherford Appleton Laboratory in the UK, where DUNE’s phase-one 1.2 MW target is being designed and built.

DUNE highlights the international and collaborative nature of modern particle physics, with the collaboration boasting more than 1400 scientists and engineers from 209 institutions in 37 countries. A milestone was achieved late last year when the international community came together to sign the first major multi-institutional memorandum of understanding with the US Department of Energy, affirming commitments to the construction of detector components for DUNE and pushing the project to its next stage. US contributions are expected to cover roughly half of what is needed for the far detectors and the MCND, with the international community contributing the other half, including the cryostat for the third far detector.

DUNE is now accelerating into its construction phase. Data taking is due to start towards the end of this decade, with the goal of having the first far-detector module operational before the end of 2028.

The next generation of big neutrino experiments promises to bring new insights into the nature of our universe – whether it is another step towards understanding the preponderance of matter, the nature of the supernovae explosions that produced the stardust of which we are all made, or even possible signatures of dark matter… or something wholly unexpected!

Feature In February this year, the DUNE experiment completed the excavation of three enormous caverns 1.5 kilometres below the surface at the new Sanford Underground Research Facility. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_NEUTRINO_frontis.jpg
Tabletop experiment constrains neutrino size https://cerncourier.com/a/tabletop-experiment-constrains-neutrino-size/ Fri, 05 Jul 2024 08:46:51 +0000 https://preview-courier.web.cern.ch/?p=110840 How big is a neutrino? Results from BeEST set new limits on the size of the neutrino’s wave packet, but theorists are at odds over how to interpret the data.

The BeEST experiment

How big is a neutrino? Though the answer depends on the physical process that created it, knowledge of the size of neutrino wave packets is at present so wildly unconstrained that every measurement counts. New results from the Beryllium Electron capture in Superconducting Tunnel junctions (BeEST) experiment at TRIUMF in Canada set new lower limits on the size of the neutrino’s wave packet in terrestrial experiments – though theorists are at odds over how to interpret the data.

Neutrinos are created as a mixture of mass eigenstates. Each eigenstate is a wave packet with a unique group velocity. If the wave packets are too narrow, they eventually stop overlapping as the wave evolves, and quantum interference is lost. If the wave packets are too broad, a single mass eigenstate is resolved by Heisenberg’s uncertainty principle, and quantum interference is also lost. No quantum interference means no neutrino oscillations.
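
In equation form, the two conditions described above are often summarised as follows, where σx is the wave-packet size, E the neutrino energy, Δm² the relevant mass-squared splitting and L the baseline. This is a schematic textbook statement of the localisation and coherence conditions, not part of the BeEST analysis itself.

```latex
% Localisation: the wave packet must not resolve a single mass eigenstate,
% i.e. it must be much smaller than the oscillation length.
% Coherence: the mass eigenstates must still overlap after a baseline L.
\sigma_x \ll L_{\mathrm{osc}} = \frac{4\pi E}{\Delta m^{2}},
\qquad
L \lesssim L_{\mathrm{coh}} \sim \frac{4\sqrt{2}\,E^{2}}{|\Delta m^{2}|}\,\sigma_x .
```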

“Coherence conditions constrain the lengths of neutrino wave packets both from below and above,” explains theorist Evgeny Akhmedov of MPI-K Heidelberg. “For neutrinos, these constraints are compatible, and the allowed window is very large because neutrinos are very light. This also hints at an answer to the frequently asked question of why charged leptons don’t oscillate.”

The spatial extent of the neutrino wave packet has so far been constrained only to within 13 orders of magnitude by reactor-neutrino oscillations, say the BeEST team. If wave-packet sizes were at the experimental lower limit set by the world’s oscillation data, they could impact future oscillation experiments such as the Jiangmen Underground Neutrino Observatory (JUNO), currently under construction in China.

“This could have destroyed JUNO’s ability to probe the neutrino mass ordering,” says Akhmedov, “however, we expect the actual sizes to be at least six orders of magnitude larger than the lowest limit from the world’s oscillation data. We have no hope of probing them in terrestrial oscillation experiments, in my opinion, though the situation may be different for astrophysical and cosmological neutrinos.”

BeEST uses a novel method to constrain the size of the neutrino wave packet. The group creates electron neutrinos via electron capture on unstable ⁷Be nuclei produced at the TRIUMF–ISAC facility in Vancouver. In the final state there are only two products: the electron neutrino and a newly transmuted ⁷Li daughter atom that receives a tiny energy “kick” by emitting the neutrino. By embedding the ⁷Be isotopes in superconducting quantum sensors at 0.1 K, the collaboration can measure this low-energy recoil to high precision. Via the uncertainty principle, the team infers a limit on the spatial localisation of the entire final-state system of 6.2 pm – more than 1000 times larger than the nucleus itself.
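
The logic connecting the recoil measurement to a localisation limit is, in essence, the Heisenberg uncertainty relation Δx ≥ ħ/(2Δp). The sketch below simply evaluates this relation for a few hypothetical momentum spreads of the recoiling system; the values of σp are illustrative inputs chosen to show the picometre scale, not numbers extracted by BeEST, whose actual limit comes from a full analysis of the measured recoil spectrum.

```python
# Heisenberg limit on spatial localisation: dx >= hbar / (2 * dp).
# The sigma_p values below are hypothetical momentum spreads of the recoiling
# 7Li + neutrino system, chosen only to illustrate the pm scale; they are not
# values measured by BeEST.
hbar_c_eV_pm = 197_327.0            # hbar*c in eV·pm

def min_localisation_pm(sigma_p_eV):
    """Smallest spatial extent (pm) compatible with a momentum spread sigma_p (eV/c)."""
    return hbar_c_eV_pm / (2.0 * sigma_p_eV)

for sigma_p in (5e3, 1.6e4, 5e4):   # hypothetical spreads in eV/c
    print(f"sigma_p = {sigma_p:8.0f} eV/c -> dx >= {min_localisation_pm(sigma_p):6.1f} pm")
```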

Consensus has not been reached on how to infer the new lower limit on the size of the neutrino wave packet, with the preprint quoting two lower limits in the vicinity of 10⁻¹¹ m and 10⁻⁸ m based on different theoretical assumptions. Although they differ dramatically, even the weaker limit improves upon all previous reactor oscillation data by more than an order of magnitude, and is enough to rule out decoherence effects as an explanation for sterile-neutrino anomalies, says the collaboration.

“I think the more stringent limit is correct,” says Akhmedov, who points out that this is only about 1.5 orders of magnitude lower than some theoretical predictions. “I am not an experimentalist and therefore cannot judge whether an improvement of 1.5 orders of magnitude can be achieved in the foreseeable future, but I very much hope that this is possible.”

News How big is a neutrino? Results from BeEST set new limits on the size of the neutrino’s wave packet, but theorists are at odds over how to interpret the data. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_NA_triumf_feature.jpg
Detectors in Particle Physics: A Modern Introduction https://cerncourier.com/a/detectors-in-particle-physics-a-modern-introduction/ Fri, 05 Jul 2024 07:39:16 +0000 https://preview-courier.web.cern.ch/?p=110893 In their new text, Georg Viehhauser and Tony Weidberg offer an accessible and comprehensive introduction to the intricate world of particle detectors, writes Fabio Sauli.

Detectors in Particle Physics: A Modern Introduction

Progress in elementary particle physics is driven by the development of radiation-detection technologies. From early photographic emulsions to the gargantuan modern systems that are deployed at particle accelerators and astrophysics experiments, radiation detectors use extraordinary means to disclose the nature and fundamental interactions of elementary particles.

In Detectors in Particle Physics, Georg Viehhauser and Tony Weidberg offer an accessible and comprehensive introduction to this intricate world. Addressed to graduate students in particle and nuclear physics, and more advanced researchers, this book provides the knowledge needed to understand and appreciate these indispensable tools. Building on their personal contributions to the conception, construction and operation of major detector systems for the DELPHI and ATLAS experiments at CERN, the authors review basic physics principles to enable the reader to grasp the fundamental operating mechanisms of gaseous, liquid and semiconductor detectors, as well as systems for particle identification and calorimetry.

In addition to exploring core concepts in detector physics, the book introduces the reader to case studies of applications in particle physics and astrophysics. From the Large Hadron Collider to neutrino experiments, the University of Oxford-based authors connect theoretical physics to practical applications and present real-world examples of modern detectors, bridging the gap between theory and experimentation. The book describes key practical aspects of particle detectors, including electronics, alignment, calibration and simulation. These insights clarify how detectors operate in real experiments, and each chapter includes exercises to consolidate the reader’s understanding.

Detectors in Particle Physics offers a unique blend of theoretical foundations and practical considerations. Whether you’re fascinated by the mysteries of the universe or planning a career in experimental physics, Viehhauser and Weidberg’s book will undoubtedly prove to be a valuable resource.

Review In their new text, Georg Viehhauser and Tony Weidberg offer an accessible and comprehensive introduction to the intricate world of particle detectors, writes Fabio Sauli. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_REV_Detectors_feature.jpg
New subdetectors to extend ALICE’s reach https://cerncourier.com/a/new-subdetectors-to-extend-alices-reach/ Fri, 03 May 2024 12:45:42 +0000 https://preview-courier.web.cern.ch/?p=110626 The LHC’s dedicated heavy-ion experiment is to be equipped with an upgraded inner tracker and a new forward calorimeter during the next long shutdown.

ALICE components

The LHC’s dedicated heavy-ion experiment, ALICE, is to be equipped with an upgraded inner tracking system and a new forward calorimeter to extend its physics reach. The upgrades have been approved for installation during the next long shutdown from 2026 to 2028.

With 10 m² of active silicon and nearly 13 billion pixels, the current ALICE inner tracker, which has been in place since 2021, is the largest pixel detector ever built. It is also the first detector at the LHC to use monolithic active pixel sensors (MAPS) instead of the more traditional hybrid pixels and silicon microstrips. The new inner tracking system, ITS3, uses a novel stitching technology to construct MAPS of 50 µm thickness and up to 26 × 10 cm² in area that can be bent around the beampipe in a truly cylindrical shape. The first layer will be placed just 2 mm from the beampipe and 19 mm from the interaction point, with a much lighter support structure that significantly reduces the material volume and therefore its effect on particle trajectories. Overall, the new system will boost the pointing resolution of the tracks by a factor of two compared to the present ITS detector, strongly enhancing measurements of thermal radiation emitted by the quark–gluon plasma and enabling insights into the interactions of charm and beauty quarks as they propagate through it.

The new forward calorimeter, FoCal, is optimised for photon detection in the forward direction. It consists of a highly granular electromagnetic calorimeter, composed of 18 layers of 1 × 1 cm² silicon-pad sensors paired with tungsten converter plates and two additional layers of 30 × 30 μm² pixels, and a hadronic calorimeter made of copper capillary tubes and scintillating fibres. By measuring inclusive photons and their correlations with neutral mesons, as well as the production of jets and charmonia, FoCal will add new capabilities to explore the small Bjorken-x parton structure of nucleons and nuclei.

Technical design reports for the ITS3 and FoCal projects were endorsed by the relevant CERN review committees in March. The construction phase has now started, with the detectors due to be installed in early 2028 in order to be ready for data taking in 2029. The upgrades, in particular ITS3, are also an important step on the way to ALICE 3 – a major proposed upgrade of ALICE that, if approved, would enter operation in the mid-2030s.

News The LHC’s dedicated heavy-ion experiment is to be equipped with an upgraded inner tracker and a new forward calorimeter during the next long shutdown. https://cerncourier.com/wp-content/uploads/2024/05/CCMayJun24_NA_alice_feature.jpg
Belle II back in business https://cerncourier.com/a/belle-ii-back-in-business/ Wed, 27 Mar 2024 18:58:33 +0000 https://preview-courier.web.cern.ch/?p=110344 Having emerged from a scheduled long shutdown, the upgraded Belle II detector in Japan recorded its first collisions on 20 February.

On 20 February the Belle II detector at SuperKEKB in Japan recorded its first e⁺e⁻ collisions since summer 2022, when the facility entered a scheduled long shutdown. During the shutdown, a new vertex detector incorporating a fully implemented pixel detector, together with an improved beam pipe at the collision point, was installed to better handle the expected increases in luminosity and backgrounds originating from the beams. Furthermore, the radiation shielding around the detector was enhanced, and other measures to improve the data-collection performance were implemented.

Belle II, for which first collisions were recorded in the fully instrumented detector in March 2019, aims to uncover new phenomena through precise analysis of the properties of B mesons and other particles produced by the SuperKEKB accelerator. Its long-term goal is to accumulate a dataset 50 times larger than that of the former Belle experiment.

News Having emerged from a scheduled long shutdown, the upgraded Belle II detector in Japan recorded its first collisions on 20 February. https://cerncourier.com/wp-content/uploads/2024/03/CCMarApr24_NA_belle.jpg
First TIPP in Africa a roaring success https://cerncourier.com/a/first-tipp-in-africa-a-roaring-success/ Wed, 17 Jan 2024 09:44:28 +0000 https://preview-courier.web.cern.ch/?p=110085 The 6th conference on Technology and Instrumentation in Particle Physics highlighted strong knowledge-transfer opportunities.

The conference on Technology and Instrumentation in Particle Physics (TIPP) is the largest of its kind. The sixth edition, which took place in Cape Town from 4 to 8 September 2023 and attracted 250 participants, was the first in Africa. More than 200 presentations covered state-of-the-art developments in detectors and instrumentation for particle physics, astroparticle physics and closely related fields.

“As South Africa, we regard this opportunity as a great privilege for us to host this year’s edition of the TIPP conference,” said minister of higher education, science and innovation Blade Nzimande during an opening address. He was followed by speeches from Angus Paterson, deputy CEO of the National Research Foundation, and Makondelele Victor Tshivhase, director of the national research facility iThemba LABS.

The South African CERN (SA–CERN) programme within the National Research Foundation and iThemba LABS supports more than 120 physicists, engineers and students who contribute to the ALICE, ATLAS and ISOLDE experiments, and to theoretical particle physics. The SA–CERN programme identifies technology transfer in particle physics as key to South African society. This aligns naturally with the technology-innovation platform of iThemba LABS, creating a base for innovation, incubation, industry collaboration and growth. For the first time, TIPP 2023 included a dedicated parallel session on technology transfer, which was chaired by Massimo Caccia (University of Insubria), Paolo Giacomelli (INFN Bologna) and Christophe De La Taille (CNRS/IN2P3).

The scientific programme kicked off with a plenary presentation on the implementation of the ECFA detector R&D roadmap in Europe by Thomas Bergauer (HEPHY). Other plenary presentations included overviews on bolometers for neutrinos, the Square Kilometre Array (SKA), technological advances by the LHC experiments, NaI experiments, advances in instrumentation at iThemba LABS, micro-pattern gaseous detectors, inorganic and liquid scintillator detectors, noble liquid experiments, axion detection, water Cherenkov detectors for neutrinos, superconducting technology for future colliders and detectors, and the PAUL facility in South Africa.

A panel discussion between former CERN Director-General Rolf Heuer (DESY), Michel Spiro (IRFU) and Manfred Krammer (CERN), Imraan Patel (deputy director general of the Department of Science and Innovation), Angus Paterson and Rob Adam (SKA) triggered an exchange of insights about international research infrastructures such as CERN and SESAME for particle physics and science diplomacy.

Prior to TIPP2023, 25 graduate students from Botswana, Cameroon, Ghana, South Africa and Zambia participated in a school of instrumentation in particle, nuclear and medical physics held at iThemba LABS, comprising lectures, hands-on demonstrations and insightful presentations by researchers from CERN, DESY and IJCLab, which provided a global perspective on instrumentation.

Meeting report The 6th conference on Technology and Instrumentation in Particle Physics highlighted strong knowledge-transfer opportunities. https://cerncourier.com/wp-content/uploads/2024/01/CCJanFeb24_FN_CapeTown1.jpg
3D-printing milestone at CERN https://cerncourier.com/a/3d-printing-milestone-at-cern/ Thu, 11 Jan 2024 16:36:16 +0000 https://preview-courier.web.cern.ch/?p=109880 The first fully monolithic 3D-printed detector sets a new milestone for fast and cost-efficient detector construction.

3D printed detectors

Plastic scintillator detectors are used extensively in high-energy physics experiments because they are cost-effective and enable sub-ns particle tracking and calorimetry. The next generation of plastic-scintillator detectors aims to instrument large active volumes with a fine 3D segmentation, raising major challenges for both production and assembly. One example is the two-tonne “super fine-granularity detector”, an active target made of two million 1 × 1 × 1 cm³ scintillating cubes at the T2K neutrino experiment in Japan. Scaling up this intricate workflow or aiming for more precise segmentation calls for technological innovation.

Enter the 3DET (3D printed detector) R&D collaboration at CERN. Also involving ETH Zurich, the School of Management and Engineering Vaud in Yverdon-les-Bains and the Institute for Scintillation Materials in Ukraine, 3DET is advancing additive-manufacturing methods to create plastic scintillator detectors that do not require post-processing and machining, thereby significantly streamlining the assembly process.

The 3DET collaboration has now passed a major milestone with a completely 3D-printed monolithic detector comprising active plastic scintillator cubes, the reflective coating to make the cubes optically independent, and the holes to insert wavelength-shifting optical fibres through the whole structure. Without the need for additional production steps, the prototype can be instrumented with fibres, photocounters and readout electronics right after the printing process to produce a working particle-physics detector. The team used the device to image cosmic rays with a scintillation light yield and cube-to-cube optical separation of the same quality as state-of-the-art detectors, and the results were confirmed with beam tests at the T9 area.

“This achievement represents a substantial advance in facilitating the creation of intricate, monolithic geometries in just one step. Moreover, it demonstrates that upscaling to larger volumes should be easy, cheaper and may be produced fast,” write authors Davide Sgalaberna and Tim Weber of ETH Zurich. “Applications that can profit from sub-ns particle tracking and calorimetry in large volumes will be massive neutrino detectors, hadronic and electromagnetic calorimeters or high-efficiency neutron detectors.”

News The first fully monolithic 3D-printed detector sets a new milestone for fast and cost-efficient detector construction. https://cerncourier.com/wp-content/uploads/2024/01/CCJanFeb24_NA_3D_feature.jpg
Looking forward at the LHC https://cerncourier.com/a/looking-forward-at-the-lhc/ Fri, 01 Sep 2023 12:55:49 +0000 https://preview-courier.web.cern.ch/?p=109206 The proposed Forward Physics Facility at CERN offers a broad programme ranging from neutrino, QCD and hadron-structure studies to beyond-the-Standard Model searches.

Proposed Forward Physics Facility

The Forward Physics Facility (FPF) is a proposed new facility to operate concurrently with the High-Luminosity LHC, housing several new experiments on the ATLAS collision axis. The FPF offers a broad, far-reaching physics programme ranging from neutrino, QCD and hadron-structure studies to beyond-the-Standard Model (BSM) searches. The project, which is being studied within the Physics Beyond Colliders initiative, would exploit the pre-existing HL-LHC beams and thus have minimal energy-consumption requirements.

On 8 and 9 June, the 6th workshop on the Forward Physics Facility was held at CERN and online. Attracting about 160 participants, the workshop was organised in sessions focusing on the facility design, the proposed experiments and physics studies, leaving plenty of time for discussion about the next steps.

Groundbreaking

Regarding the facility itself, CERN civil-engineering experts presented its overall design: a 65 m-long, 10 m-high/wide cavern connected to the surface via an 88 m-deep shaft. The facility is located 600 m from the ATLAS collision point, in the SM18 area of CERN. A workshop highlight was the first results from a site investigation study, whereby a 20 cm-diameter core was taken at the proposed location of the FPF shaft to a depth of 100 m. The initial analysis of the core showed that the geological conditions are positive for work in this area. Other encouraging studies towards confirming the FPF feasibility were FLUKA simulations of the expected muon flux in the cavern (the main background for the experiments), the expected radiation level (shown to allow people to enter the cavern during LHC operations with various restrictions), and the possible effect on beam operations of the excavation works. One area where more work is required concerns the possible need to install a sweeper magnet in the LHC tunnel between ATLAS and the FPF to reduce the muon backgrounds.

Currently there are five proposed experiments to be installed in the FPF: FASER2 (to search for decaying long-lived particles); FASERν2 and AdvSND (dedicated neutrino detectors covering complementary rapidity regions); FLArE (a liquid-argon time projection chamber for neutrino physics and light dark-matter searches); and FORMOSA (a scintillator-based detector to search for millicharged particles). The three neutrino detectors offer complementary designs to exploit the huge number of TeV energy neutrinos of all flavours that would be produced in such a forward-physics configuration. Four of these have smaller pathfinder detectors – FASER(ν), SND@LHC and milliQan – that are already operating during LHC Run 3. First results from these pathfinder experiments were presented at the CERN workshop, including the first ever direct observation of collider neutrinos by FASER and SND@LHC, which provide a key proof of principle for the FPF. The latest conceptual design and expected performance of the FPF experiments were presented. Furthermore, first ideas on models to fund these experiments are in place and were discussed at the workshop.

In the past year, much progress has been made in quantifying the physics case of the FPF. It effectively extends the LHC with a “neutrino–ion collider” with complementary reach to the Electron–Ion Collider under construction in the US. The large number of high-energy neutrino interactions that will be observed at the FPF allows detailed studies of deep inelastic scattering to constrain proton and nuclear parton distribution functions (PDFs). Dedicated projections of the FPF reveal that uncertainties in light-quark PDFs could be reduced by up to a factor of two or even more compared to current models, leading to improved HL-LHC predictions for key measurements such as the W-boson mass.

High-energy electrons and tau neutrinos at the FPF predominantly arise from forward charm production. This is initiated by gluon–gluon scattering involving very low and high momentum fractions, with the former reaching down to Bjorken-x values of 10⁻⁷ – beyond the range of any other experiment. The same FPF measurements of forward charm production are relevant for testing different models of QCD at small-x, which would be instrumental for Higgs production at the proposed Future Circular Collider (FCC-hh). This improved modelling of forward charm production is also essential for understanding the backgrounds to diffuse astrophysical neutrinos at telescopes such as IceCube and KM3NeT. In addition, measurements of the ratio of electron-to-muon neutrinos at the FPF probe forward kaon-to-pion production ratios that could explain the so-called muon puzzle (a deficit in muons in simulations compared to measurements), affecting cosmic-ray experiments.
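
A rough kinematic estimate shows why forward charm production reaches such small momentum fractions. In the sketch below, a system of mass M produced at rapidity y probes x₂ ≈ (M/√s) exp(–y); the charm-pair mass scale and the rapidity range are assumed, illustrative values rather than FPF projections.

```python
import math

# Leading-order estimate of the smaller momentum fraction probed by a system
# of mass M produced at rapidity y: x2 ~ (M / sqrt(s)) * exp(-y).
# Numbers are illustrative, not an FPF analysis.
sqrt_s = 14e3        # LHC centre-of-mass energy in GeV
M = 4.0              # assumed charm-pair mass scale in GeV
for y in (6, 7, 8):
    x2 = (M / sqrt_s) * math.exp(-y)
    print(f"y = {y}: x2 ~ {x2:.1e}")   # reaches ~1e-7 at the largest rapidities
```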

The FPF experiments would also be able to probe a host of BSM scenarios in uncharted regions of parameter space, such as dark-matter portals, dark Higgs bosons and heavy neutral leptons. Furthermore, experiments at the FPF will be sensitive to the scattering of light dark-matter particles produced in LHC collisions, and the large centre-of-mass energy enables probes of models such as quirks (long-lived particles that are charged under a hidden-sector gauge interaction) and some inelastic dark-matter candidates, which are inaccessible at fixed-target experiments. On top of that, the FPF experiments will significantly improve the sensitivity of the LHC to probe millicharged particles.

The June workshop confirmed both the unique physics motivation for the FPF and the excellent progress in technical and feasibility studies towards realising it. Motivated by these exciting prospects, the FPF community is now working on a Letter of Intent to submit to the LHC experiments committee as the next step.

Meeting report The proposed Forward Physics Facility at CERN offers a broad programme ranging from neutrino, QCD and hadron-structure studies to beyond-the-Standard Model searches. https://cerncourier.com/wp-content/uploads/2023/08/CCSepOct23_FN_forward_feature.jpg
A new TPC for T2K upgrade https://cerncourier.com/a/a-new-tpc-for-t2k-upgrade/ Wed, 05 Jul 2023 08:56:29 +0000 https://preview-courier.web.cern.ch/?p=108802 In the latest milestone for the CERN Neutrino Platform, a state-of-the-art time projection chamber for the near detector of the upgraded T2K experiment in Japan is now fully operational.

In the latest milestone for the CERN Neutrino Platform, a key element of the near detector for the T2K (Tokai to Kamioka) neutrino experiment in Japan – a state-of-the-art time projection chamber (TPC) – is now fully operational and taking cosmic data at CERN. T2K detects a neutrino beam at two sites: a near-detector complex close to the neutrino production point and Super-Kamiokande 300 km away. The ND280 detector is one of the near detectors necessary to characterise the beam before the neutrinos oscillate and to measure interaction cross sections, both of which are crucial to reduce systematic uncertainties. 

To reduce these uncertainties further, the T2K collaboration decided in 2016 to upgrade ND280 with a novel scintillator tracker, two TPCs and a time-of-flight system. This upgrade, in combination with an increase in neutrino beam power from the current 500 kW to 1.3 MW, will increase the statistics by a factor of about four and reduce the systematic uncertainties from 6% to 4%. The upgraded ND280 is also expected to serve as a near detector of the next-generation long-baseline neutrino oscillation experiment Hyper-Kamiokande.

Meanwhile, R&D and testing for the prototype detectors for the DUNE experiment at the Long Baseline Neutrino Facility at Fermilab/SURF in the US is entering its final stages. 

News In the latest milestone for the CERN Neutrino Platform, a state-of-the-art time projection chamber for the near detector of the upgraded T2K experiment in Japan is now fully operational. https://cerncourier.com/wp-content/uploads/2023/07/CCJulAug23_NA_tpc.jpg
Extreme detector design for a future circular collider https://cerncourier.com/a/extreme-detector-design-for-a-future-circular-collider/ Mon, 03 Jul 2023 13:31:46 +0000 https://preview-courier.web.cern.ch/?p=108725 A pileup of 1000 proton–proton collisions per bunch-crossing is just one of the challenges in extracting physics from a next-generation hadron collider to follow the LHC.

FCC-hh reference detector

The Future Circular Collider (FCC) is the most powerful post-LHC experimental infrastructure proposed to address key open questions in particle physics. Under study for almost a decade, it envisions an electron–positron collider phase, FCC-ee, followed by a proton–proton collider in the same 91 km-circumference tunnel at CERN. The hadron collider, FCC-hh, would operate at a centre-of-mass energy of 100 TeV, extending the energy frontier by almost an order of magnitude compared to the LHC, and provide an integrated luminosity a factor of 5–10 larger. The mass reach for direct discovery at FCC-hh will extend to several tens of TeV and allow, for example, the production of new particles whose existence could be indirectly exposed by precision measurements at FCC-ee.

At the time of the kickoff meeting for the FCC study in 2014, the physics potential and the requirements for detectors at a 100 TeV collider were already heavily debated. These discussions were eventually channelled into a working group that provided the input to the 2020 update of the European strategy for particle physics and recently concluded with a detailed write-up in a 300-page CERN Yellow Report. To focus the effort, it was decided to study one reference detector that is capable of fully exploiting the FCC-hh physics potential. At first glance it resembles a super CMS detector with two LHCb detectors attached (see “Grand designs” image). A detailed detector performance study followed, allowing an efficient assessment of the key physics capabilities.

The first detector challenge at FCC-hh is related to the luminosity, which is expected to reach 3 × 10³⁵ cm⁻²s⁻¹. This is six times larger than the HL-LHC luminosity and 30 times larger than the nominal LHC luminosity. Because the FCC will operate beams with a 25 ns bunch spacing, the so-called pile-up (the number of pp collisions per bunch crossing) scales by approximately the same factor. This results in almost 1000 simultaneous pp collisions, requiring a highly granular detector. Evidently, the assignment of tracks to their respective vertices in this environment is a formidable task.
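
The quoted pile-up follows from a simple rate estimate: the average number of collisions per crossing is the luminosity times the inelastic cross-section divided by the crossing rate. In the sketch below, the inelastic pp cross-section at 100 TeV (around 100 mb) and the fraction of 25 ns slots actually filled are assumed values, not FCC-hh specifications.

```python
# Average pile-up: mu = L * sigma_inel / f_crossing.
L = 3.0e35                 # peak luminosity in cm^-2 s^-1
sigma_inel = 100e-27       # assumed inelastic pp cross-section, ~100 mb in cm^2
f_bucket = 1.0 / 25e-9     # 25 ns bunch spacing -> 40 MHz
fill_fraction = 0.75       # assumed fraction of slots actually filled with protons

mu = L * sigma_inel / (f_bucket * fill_fraction)
print(f"average pile-up ~ {mu:.0f} collisions per crossing")   # order 1000
```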

Longitudinal cross-section of the FCC-hh reference detector

The plan to collect an integrated pp luminosity of 30 ab⁻¹ brings the radiation hardness requirements for the first layers of the tracking detector close to 10¹⁸ hadrons/cm², which is around 100 times more than the requirement for the HL-LHC. Still, the tracker volume with such high radiation load is not excessively large. From a radial distance of around 30 cm outwards, radiation levels are already close to those expected for the HL-LHC, thus the silicon technology for these detector regions is already available.

The high radiation levels also need very radiation-hard calorimetry, making a liquid-argon calorimeter the first choice for the electromagnetic calorimeter and forward regions of the hadron calorimeter. The energy deposit in the very forward regions will be 4 kW per unit of rapidity and it will be an interesting task to keep cryogenic liquids cold in such an environment. Thanks to the large shielding effect of the calorimeters, which have to be quite thick to contain the highest energy particles, the radiation levels in the muon system are not too different from those at the HL-LHC. So the technology needed for this system is available. 

Looking forward 

At an energy of 100 TeV, important SM particles such as the Higgs boson are abundantly produced in the very forward region. The forward acceptance of FCC-hh detectors therefore has to be much larger than at the LHC detectors. ATLAS and CMS enable momentum measurements up to pseudorapidities (a measure of the angle between the track and beamline) of around η = 2.5, whereas at FCC-hh this will have to be extended to η = 4 (see “Far reaching” figure). Since this is not achievable with a central solenoid alone, a forward magnet system is assumed on either side of the detector. Whether the optimum forward magnets are solenoids or dipoles still has to be studied and will depend on the requirements for momentum resolution in the very forward region. Forward solenoids have been considered that extend the precision of momentum measurements by one additional unit of rapidity. 
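
To put these pseudorapidity values in perspective, the polar angle to the beamline is θ = 2 arctan(exp(–η)). The short sketch below is plain kinematics with no FCC-specific assumptions, and shows how close to the beam axis η = 4 and η = 6 actually lie.

```python
import math

def theta_deg(eta):
    """Polar angle (degrees from the beamline) for a given pseudorapidity."""
    return math.degrees(2.0 * math.atan(math.exp(-eta)))

for eta in (2.5, 4.0, 6.0):
    print(f"eta = {eta}: theta ~ {theta_deg(eta):.2f} deg")
# eta = 2.5 -> ~9.4 deg, eta = 4 -> ~2.1 deg, eta = 6 -> ~0.28 deg
```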

Momentum resolution versus pseudorapidity

A silicon tracking system with a radius of 1.6 m and a total length of 30 m provides a momentum resolution of around 0.6% for low-momentum particles, 2% at 1 TeV and 20% at 10 TeV (see “Forward momentum” figure). To detect at least 90% of the very forward jets that accompany a Higgs boson in vector-boson-fusion production, the tracker acceptance has to be extended up to η = 6. At the LHC such an acceptance is already achieved up to η = 4. The total tracker surface of around 400 m² at FCC-hh is “just” a factor two larger than the HL-LHC trackers, and the total number of channels (16.5 billion) is around eight times larger.

It is evident that the FCC-hh reference detector is more challenging than the LHC detectors, but not at all out of reach. The diameter and length are similar to those of the ATLAS detector. The tracker and calorimeters are housed inside a large superconducting solenoid 10 m in diameter, providing a magnetic field of 4 T. For comparison, CMS uses a solenoid with the same field and an inner diameter of 6 m. This difference does not seem large at first sight, but of course the stored energy (13 GJ) is about five times larger than that of the CMS coil, which requires very careful design of the quench protection system.
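
The stored-energy figure can be cross-checked with a back-of-envelope estimate of the magnetic energy density B²/2μ₀ integrated over the bore. In the sketch below the field is taken as uniform and the coil length of about 20 m is an assumed, illustrative number rather than a design parameter quoted here; the estimate lands at the right order of magnitude.

```python
import math

# Magnetic energy density u = B^2 / (2 * mu0), integrated over the bore volume.
mu0 = 4 * math.pi * 1e-7    # vacuum permeability (T·m/A)
B = 4.0                     # solenoid field in tesla
radius = 5.0                # 10 m-diameter bore
length = 20.0               # assumed coil length in metres (illustrative)

u = B**2 / (2 * mu0)                    # ~6.4 MJ/m^3 at 4 T
volume = math.pi * radius**2 * length
print(f"stored energy ~ {u * volume / 1e9:.0f} GJ")   # order 10 GJ, cf. the 13 GJ quoted
```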

For the FCC-hh calorimeters, the major challenge, besides the high radiation dose, is the required energy resolution and particle identification in the high pile-up environment. The key to achieve the required performance is therefore a highly segmented calorimeter. The need for longitudinal segmentation calls for a solution different from the “accordion” geometry employed by ATLAS. Flat lead/steel absorbers that are inclined by 50 degrees with respect to the radial direction are interleaved with liquid-argon gaps and straight electrodes with high-voltage and signal pads (see “Liquid argon” figure). The readout of these pads on the back of the calorimeter is then possible thanks to the use of multi-layer electrodes fabricated as straight printed circuit boards. This idea has already been successfully prototyped within the CERN EP detector R&D programme.

The considerations for a muon system for the reference detector are quite different compared to the LHC experiments. When the detectors for the LHC were originally conceived in the late 1980s, it was not clear whether precise tracking in the vicinity of the collision point was possible in this unprecedented radiation environment. Silicon detectors were excessively expensive and gas detectors were at the limit of applicability. For the LHC detectors, a very large emphasis was therefore put on muon systems with good stand-alone performance, specifically for the ATLAS detector, which is able to provide a robust measurement of, for example, the decay of a Higgs particle into four muons, with the muon system alone. 

Liquid argon

Thanks to the formidable advancement of silicon-sensor technology, which has led to full silicon trackers capable of dealing with around 140 simultaneous pp collisions every 25 ns at the HL-LHC, standalone performance is no longer a stringent requirement. The muon systems for FCC-hh can therefore fully rely on the silicon trackers, assuming just two muon stations outside the coil that measure the exit point and the angle of the muons. The muon track provides muon identification, the muon angle provides a coarse momentum measurement for triggering and the track position provides improved muon momentum measurement when combined with the inner tracker. 

The major difference between an FCC-hh detector and CMS is that there is no yoke for the return flux of the solenoid, as the cost would be excessive and its only purpose would be to shield the cavern from the magnetic field. The baseline design assumes the cavern infrastructure can be built to be compatible with this stray field. Infrastructure that is sensitive to the magnetic field will be placed in the service cavern 50 m from the solenoid, where the stray field is sufficiently low.

Higgs self-coupling

The high granularity and acceptance of the FCC-hh reference detector will result in about 250 TB/s of data for calorimetry and the muon system, about 10 times more than the ATLAS and CMS HL-LHC scenarios. There is no doubt that it will be possible to digitise and read this data volume at the full bunch-crossing rate for these detector systems. The question remains whether the data rate of almost 2500 TB/s from the tracker can also be read out at the full bunch-crossing rate or whether calorimeter, muon and possible coarse tracker information need to be used for a first-level trigger decision, reducing the tracker readout rate to the few MHz level, without the loss of important physics. Even if the optical link technology for full tracker readout were available and affordable, sufficient radiation hardness of devices and infrastructure constraints from power and cooling services are prohibitive with current technology, calling for R&D on low-power radiation-hard optical links. 
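
A simple division puts these data rates in perspective. The sketch below assumes a 40 MHz bunch-crossing rate and an illustrative first-level accept rate of a few MHz; the latter is an assumption used only to show the scale of the reduction, since the trigger architecture is still an open question.

```python
# Event sizes and triggered bandwidth implied by the quoted continuous rates.
crossing_rate = 40e6          # bunch crossings per second (25 ns spacing)
calo_muon_rate_TBps = 250.0   # calorimetry + muon system, read at every crossing
tracker_rate_TBps = 2500.0    # tracker, if also read at every crossing
l1_accept_rate = 4e6          # illustrative first-level trigger rate (assumption)

print(f"calo+muon per crossing : {calo_muon_rate_TBps * 1e12 / crossing_rate / 1e6:.1f} MB")
print(f"tracker per crossing   : {tracker_rate_TBps * 1e12 / crossing_rate / 1e6:.1f} MB")
print(f"tracker after L1       : {tracker_rate_TBps * l1_accept_rate / crossing_rate:.0f} TB/s")
```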

Benchmark physics

The potential of FCC-hh in the realms of precision Higgs and electroweak physics, high mass reach and dark-matter searches offers an unprecedented opportunity to address fundamental unknowns about our universe. The performance requirements for the FCC-hh baseline detector have been defined through a set of benchmark physics processes, selected among the key ingredients of the physics programme. The detector’s increased acceptance compared to the LHC detectors, and the higher energy of FCC-hh collisions, will allow physicists to uniquely improve the precision of measurements of Higgs-boson properties for a whole spectrum of production and decay processes complementary to those accessible at the FCC-ee. This includes measurements of rare processes such as Higgs pair-production, which provides a direct measure of the Higgs self-coupling – a crucial parameter for understanding the stability of the vacuum and the nature of the electroweak phase transition in the early universe – with a precision of 3 to 7% (see “Higgs self-coupling” figure).

Dark matters

Moreover, thanks to the extremely large Higgs-production rates, FCC-hh offers the potential to measure rare decay modes in a novel boosted kinematic regime well beyond what is currently studied at the LHC. These include the decay to second-generation fermions – muons – which can be measured to a precision of 1%. The Higgs branching fraction to invisible states can be probed to a value of 10⁻⁴, allowing the parameter space for dark matter to be further constrained. The much higher centre-of-mass energy of FCC-hh, meanwhile, significantly extends the mass reach for discovering new particles. The potential for detecting heavy resonances decaying into di-muons and di-electrons extends to 40 TeV, while for coloured resonances like excited quarks the reach extends to 45 TeV, thus extending the current limit by almost an order of magnitude. In the context of supersymmetry, FCC-hh will be capable of probing stop squarks with masses up to 10 TeV, also well beyond the reach of the LHC.

In terms of dark-matter searches, FCC-hh has immense potential – particularly for probing scenarios of weakly interacting massive particles such as higgsinos and winos (see “Dark matters” figure). Electroweak multiplets are typically elusive, especially in hadron collisions, due to their weak interactions and large masses (needed to explain the relic abundance of dark matter in our universe). Their nearly degenerate mass spectrum produces an elusive final state in the form of so-called “disappearing tracks”. Thanks to the dense coverage of the FCC-hh detector tracking system, a general-purpose FCC-hh experiment could detect these particle decays directly, covering the full mass range expected for this type of dark matter. 

A detector at a 100 TeV hadron collider is clearly a challenging project. But detailed studies have shown that it should be possible to build a detector that can fully exploit the physics potential of such a machine, provided we invest in the necessary detector R&D. Experience with the Phase-II upgrades of the LHC detectors for the HL-LHC, developments for further exploitation of the LHC and detector R&D for future Higgs factories will be important stepping stones in this endeavour.

Feature A pileup of 1000 proton–proton collisions per bunch-crossing is just one of the challenges in extracting physics from a next-generation hadron collider to follow the LHC. https://cerncourier.com/wp-content/uploads/2023/06/CCJulAug23_FCChh_feature.jpg
LHCb looks forward to the 2030s https://cerncourier.com/a/lhcb-looks-forward-to-the-2030s/ Wed, 01 Mar 2023 13:19:43 +0000 https://preview-courier.web.cern.ch/?p=107865 The challenges of performing precision flavour physics in the very harsh conditions of the HL-LHC are triggering a vast R&D programme at the forefront of technology.

LHCb Upgrade II detector

The LHCb collaboration is never idle. While building and commissioning its brand new Upgrade I detector, which entered operation last year with the start of LHC Run 3, planning for Upgrade II was already under way. This proposed new detector, envisioned to be installed during Long Shutdown 4 in time for High-Luminosity LHC (HL-LHC) operation in Run 5, scheduled to begin in 2034/2035, would operate at a peak luminosity of 1.5 × 10³⁴ cm⁻²s⁻¹. This is 7.5 times higher than at Run 3 and would generate data samples of heavy-flavoured hadron decays six times larger than those obtainable at the LHC, allowing the collaboration to explore a wide range of flavour-physics observables with extreme precision. Unprecedented tests of the CP-violation paradigm (see “On point” figure) and searches for new physics at double the mass scales possible during Run 3 are among the physics goals on offer.

Attaining the same excellent performance as the original detector has been a pivotal constraint in the design of LHCb Upgrade I. While achieving the same in the much harsher collision environments at the HL-LHC remains the guiding principle for Upgrade II, the LHCb collaboration is investigating the possibilities to go even further. And these challenges need to be met while keeping the existing footprint and arrangement of the detector (see “Looking forward” figure). Radiation-hard and fast 3D silicon pixels, a new generation of extremely fast and efficient photodetectors, and front-end electronics chips based on 28 nm semiconductor technology are just a few examples of the innovations foreseen for LHCb Upgrade II, and will also set the direction of R&D for future experiments.

LHCb constraints

Rethinking the data acquisition, trigger and data processing, along with intense use of hardware accelerators such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), will be fundamental to manage the expected five-times higher average data rate than in Upgrade I. The Upgrade II “framework technical design report”, completed in 2022, is also the first to consider the experiment’s energy consumption and greenhouse-gas emissions, as part of a close collaboration with CERN to define an effective environmental protection strategy.

Extreme tracking 

At the maximum expected luminosity of the HL-LHC, around 2000 charged particles will be produced per bunch crossing within the LHCb apparatus. Efficiently reconstructing these particles and their associated decay vertices in real time represents a significant challenge. It requires the existing detector components to be modified to increase the granularity, reduce the amount of material and benefit from the use of precision timing.

The new Vertex Locator (VELO) will be based, as it was for Upgrade I (CERN Courier May/June 2022 p38), on high-granularity pixels operated in vacuum in close proximity to the LHC beams. For Upgrade II, the trigger and online reconstruction will rely on the selection of events, or parts of events, with displaced tracks at the early stage of the event. The VELO must therefore be capable of independently reconstructing primary vertices and identifying displaced tracks, while coping with a dramatic increase in event rate and radiation dose. Excellent spatial resolution will not be sufficient, given the large density of primary interactions along the beam axis expected under HL-LHC conditions. A new coordinate – time – must be introduced. The future VELO will be a true 4D-tracking detector that includes timing information with a precision of better than 50 ps per hit, leading to a track time-stamp resolution of about 20 ps (see “Precision timing” figure). 
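
The step from a 50 ps per-hit resolution to a roughly 20 ps track time-stamp is essentially statistical: combining N independent hit times improves the resolution roughly as 1/√N. The sketch below illustrates this scaling; the assumed numbers of VELO hits per track are typical illustrative values, not design figures quoted here.

```python
import math

def track_time_resolution(sigma_hit_ps, n_hits):
    """Naive combination of n independent, equally precise hit times."""
    return sigma_hit_ps / math.sqrt(n_hits)

sigma_hit = 50.0                  # per-hit resolution in ps (from the text)
for n in (4, 6, 8):               # illustrative numbers of VELO hits per track
    print(f"{n} hits: sigma_track ~ {track_time_resolution(sigma_hit, n):.0f} ps")
# ~25, ~20 and ~18 ps respectively, consistent with the ~20 ps quoted above
```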

Precision timing

The new VELO sensors, which include 28 nm technology application-specific integrated circuits (ASICs), will need to achieve this time resolution while being radiation-hard. The important goal of a 10 ps time resolution has recently been achieved with irradiated prototype 3D-trench silicon sensors. Depending on the rate-capability of the new detectors, the pitch may have to be reduced and the material budget significantly decreased to reach comparable spatial resolution to the current Run 3 detector. The VELO mechanics have to be redesigned, in particular to reduce the material of the radio-frequency foil that separates the secondary vacuum – where the sensors are located – from the machine vacuum. The detector must be built with micron-level precision to control systematic uncertainties.

The tracking system will take advantage of a detector located upstream of the dipole magnet, the Upstream Tracker (UT), and of a detector made of three tracking stations, the Mighty Tracker (MT), located downstream of the magnet. In conjunction with the VELO, the tracking system ensures the ability to reconstruct the trajectory of charged particles bending through the detector due to the magnetic field, and provides a high-precision momentum measurement for each particle. The track direction is a necessary input to the photon-ring searches in Ring Imaging Cherenkov (RICH) detectors, which identify the particle species. Efficient real-time charged-particle reconstruction in a very high particle-density environment requires not only good detector efficiency and granularity, but also the ability to quickly reject combinations of hits not produced by the same particle. 

LHCb-dedicated high-voltage CMOS sensor

The UT and the inner region of the MT will be instrumented with high-granularity silicon pixels. The emerging radiation-hard monolithic active pixel sensor (MAPS) technology is a strong candidate for these detectors. LHCb Upgrade II would represent the first large-scale implementation of MAPS in a high-radiation environment, with the first prototypes currently being tested (see “Mighty pixels” figure). The outer region of the MT will be covered by scintillating fibres, as in Run 3, with significant developments foreseen to cope with the radiation damage. The availability of high-precision vertical-coordinate hit information in the tracking, provided for the first time in LHCb by pixels in the high-occupancy regions of the tracker, will be crucial to reject combinations of track segments or hits not produced by the same particle. To substantially extend the coverage of the tracking system to lower momenta, with consequent gains for physics measurements, the internal surfaces of the magnet side walls will be instrumented with scintillating bar detectors, the so-called magnet stations (MS). 

Extreme particle identification 

A key factor in the success of the LHCb experiment has been its excellent particle identification (PID) capabilities. PID is crucial to distinguish different decays with final-state topologies that are backgrounds to each other, and to tag the flavour of beauty mesons at production, which is a vital ingredient to many mixing and CP-violation measurements. For particle momenta from a few GeV/c up to 100 GeV/c, efficient hadron identification at LHCb is provided by two RICH detectors. Cherenkov light emitted by particles traversing the gaseous radiators of the RICHes is projected by mirrors onto a plane of photodetectors. To maintain Upgrade I performances, the maximum occupancy over the photodetector plane must be kept below 30%, the single-photon Cherenkov-angle resolution must be below 0.5 mrad, and the time resolution on single-photon hits should be well below 100 ps (see “RICH rewards” figure). 
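
To see why a single-photon resolution of 0.5 mrad is the relevant benchmark, the sketch below evaluates the Cherenkov relation cos θc = 1/(nβ) for pions and kaons; the refractive index n = 1.0005 is an assumed, CF4-like value chosen only for illustration and is not an LHCb specification.

```python
import math

MASS = {"pi": 0.1396, "K": 0.4937}        # GeV/c^2

def cherenkov_angle_mrad(p_gev, m_gev, n):
    """Cherenkov angle in mrad from cos(theta) = 1/(n*beta); None below threshold."""
    beta = p_gev / math.hypot(p_gev, m_gev)
    cos_theta = 1.0 / (n * beta)
    return None if cos_theta > 1.0 else 1e3 * math.acos(cos_theta)

n_gas = 1.0005                             # assumed CF4-like refractive index (illustrative)
for p in (40.0, 80.0):
    th_pi = cherenkov_angle_mrad(p, MASS["pi"], n_gas)
    th_k = cherenkov_angle_mrad(p, MASS["K"], n_gas)
    print(f"p = {p:5.1f} GeV/c: theta(pi) = {th_pi:.2f} mrad, theta(K) = {th_k:.2f} mrad, "
          f"difference = {th_pi - th_k:.2f} mrad")
```

With a few tens of detected photons per ring, the per-track angular resolution improves roughly as 1/√N, which is how hadron separation can be pushed towards and beyond 80 GeV/c.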

Photon hits on the RICH photodetector plane

Next-generation silicon photomultipliers (SiPMs) with improved timing and a pixel size of 1 × 1 mm², together with re-optimised optics, are deemed capable of delivering these specifications. The high “dark” rates of SiPMs, especially after elevated radiation doses, would be controlled with cryogenic cooling and neutron shielding. Vacuum tubes based on micro-channel plates (MCPs) are a potential alternative due to their excellent time resolution (30 ps) for single-photon hits and lower dark rate, but suffer in high-rate environments. New eco-friendly gaseous radiators with a lower refractive index can improve the PID performance at higher momenta (above 80 GeV/c), but meta-materials such as photonic crystals are also being studied. In the momentum region below 10 GeV/c, PID will profit from TORCH – an innovative 30 m² time-of-flight detector consisting of quartz plates where charged particles produce Cherenkov light. The light propagates by internal reflection to arrays of high-granularity MCP–PMTs optimised to operate at high rates, with a prototype already showing performances close to the target of 70 ps per photon.
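
The reach of a time-of-flight detector follows from simple kinematics. The sketch below estimates the π–K flight-time difference over an assumed 9.5 m path; both the path length and the photon count in the final comment are illustrative assumptions rather than TORCH design values.

```python
import math

C_MM_PER_PS = 0.2998                       # speed of light in mm/ps

def flight_time_ps(p_gev, m_gev, path_m):
    """Time of flight in ps for momentum p (GeV/c) and mass m (GeV/c^2)."""
    beta = p_gev / math.hypot(p_gev, m_gev)
    return path_m * 1e3 / (beta * C_MM_PER_PS)

path = 9.5                                 # assumed flight path in metres
m_pi, m_k = 0.1396, 0.4937
for p in (2.0, 5.0, 10.0):
    dt = flight_time_ps(p, m_k, path) - flight_time_ps(p, m_pi, path)
    print(f"p = {p:4.1f} GeV/c: pi-K time difference ~ {dt:.0f} ps")

# With ~70 ps per detected photon and, say, 30 photons per track, the per-track
# resolution would be roughly 70/sqrt(30) ~ 13 ps (both numbers assumed).
```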

Excellent photon and π⁰ reconstruction and e–π separation are provided by LHCb’s electromagnetic calorimeter (ECAL). But the harsh occupancy conditions of the HL-LHC demand the development of 5D calorimetry, which complements precise position and energy measurements of electromagnetic clusters with a time resolution of about 20 ps. The most crowded inner regions will be equipped with so-called spaghetti calorimeter (SPACAL) technology, which consists of arrays of scintillating fibres, made either of plastic or of garnet crystals, arranged along the beam direction and embedded in a lead or tungsten matrix. The less-crowded outer regions of the calorimeter will continue to be instrumented with the current “Shashlik” technology with refurbished modules and increased granularity. A timing layer, either based on MCPs or on alternated tungsten and silicon-sensor layers placed within the front and back ECAL sections, is also a possibility to achieve the ultimate time resolution. Several SPACAL prototypes have already demonstrated that time resolutions down to an impressive 15 ps are feasible (see “Spaghetti calorimetry” image).

A SPACAL prototype being prepared for beam tests

The final main LHCb subdetector is the muon system, based on four stations of multiwire proportional chambers (MWPCs) interleaved with iron absorbers. For Upgrade II, it is proposed that MWPCs in the inner regions, where the rate will be as high as a few MHz/cm², are replaced with new-generation micro-pattern gaseous detectors, the micro-RWELL, a prototype of which has proved able to reach a detection efficiency of approximately 97% and a rate-capability of around 10 MHz/cm². The outer regions, characterised by lower rates, will be instrumented either by reusing a large fraction (95%) of the current MWPCs or by implementing other solutions based on resistive plate chambers or scintillating-tile-based detectors. As with all Upgrade II subdetectors, dedicated ASICs in the front-end electronics, which integrate fast time-to-digital converters or high-frequency waveform samplers, will be necessary to measure time with the required precision.

Trigger and computing 

The detectors for LHCb Upgrade II will produce data at a rate of up to 200 Tbit/s (see “On the up” figure), which for practical reasons needs to be reduced by four orders of magnitude before being written to permanent storage. The data acquisition therefore needs to be reliable, scalable and cost-efficient. It will consist of a single type of custom-made readout board combined with readily available data-centre hardware. The readout boards collect the data from the various sub-detectors using the radiation-hard, low-power GBit transceiver links developed at CERN and transfer the data to a farm of readout servers via next-generation “PCI Express” connections or Ethernet. For every collision, the information from the subdetectors is merged by passing through a local area network to the builder server farm.
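
A little arithmetic puts these data volumes in context; the 30 MHz rate used below is the crossing rate at which the trigger reconstructs events (quoted in the next paragraph), and the rest follows directly from the numbers above.

```python
raw_rate_bit_s = 200e12          # raw detector output: 200 Tbit/s
reduction = 1e4                  # "four orders of magnitude" before permanent storage
crossing_rate_hz = 30e6          # assumed 30 MHz of reconstructed bunch crossings

event_size_mb = raw_rate_bit_s / crossing_rate_hz / 8 / 1e6
storage_gb_s = raw_rate_bit_s / reduction / 8 / 1e9
print(f"average event size        ~ {event_size_mb:.1f} MB")
print(f"rate to permanent storage ~ {storage_gb_s:.1f} GB/s")
```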

With up to 40 proton–proton interactions, every bunch crossing at the HL-LHC will contain multiple heavy-flavour hadrons within the LHCb acceptance. For efficient event selection, hits not associated with the proton–proton collision of interest need to be discarded as early as possible in the data-processing chain. The real-time analysis system performs reconstruction and data reduction in two high-level-trigger (HLT) stages. HLT1 performs track reconstruction and partial PID to apply inclusive selections, after which the data is stored in a large disk buffer while alignment and calibration tasks run in semi real-time. The final data reduction occurs at the HLT2 level, with exclusive selections based on full offline-quality event reconstruction. Starting from Upgrade I, all HLT1 algorithms are running on a farm of GPUs, which enabled, for the first time at the LHC, track reconstruction to be performed at a rate of 30 MHz. The HLT2 sequence, on the other hand, is run on a farm of CPU servers – a model that would be prohibitively costly for Upgrade II. Given the current evolution of processor performance, the baseline approach for Upgrade II is to perform the reconstruction algorithms of both HLT1 and HLT2 on GPUs. A strong R&D activity is also foreseen to explore alternative co-processors such as FPGAs and new emerging architectures.

Real-time versus the start date of various high-energy physics experiments

The second computing challenge for LHCb Upgrade II derives from detector simulations. A naive extrapolation from the computing needs of the current detector implies that 2.5 million cores will be needed for simulation in Run 5, which is one order of magnitude above what is available with a flat budget assuming a 10% performance increase of processors per year. All experiments in high-energy physics face this challenge, motivating a vigorous R&D programme across the community to improve the processing time of simulation tools such as GEANT4, both by exploiting co-processors and by parametrising the detector response with machine-learning algorithms.
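
A toy extrapolation illustrates why a flat budget cannot absorb the projected simulation load: capacity grows only geometrically with the assumed 10% annual processor improvement. The starting capacity below is an assumption chosen to represent the order-of-magnitude gap, not an official LHCb figure.

```python
demand_cores = 2.5e6       # naive extrapolation of the Run-5 simulation needs
capacity = 2.5e5           # assumed starting capacity, one order of magnitude lower
gain_per_year = 1.10       # 10% processor performance improvement per year

years = 0
while capacity < demand_cores:
    capacity *= gain_per_year
    years += 1
print(f"With a flat budget, ~{years} years of 10%/year gains would be needed "
      "to close a factor-10 gap.")
```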

Intimately linked with digital technologies today are energy consumption and efficiency. Already in Run 3, the GPU-based HLT1 is up to 30% more energy-efficient than the originally planned CPU-based version. The data centre is designed for the highest energy-efficiency, resulting in a power usage that compares favourably with other large computing centres. Also for Upgrade II, special focus will be placed on designing efficient code and fully exploiting efficient technologies, as well as designing a compact data acquisition system and optimally using the data centre.

A flavour of the future 

The LHC is a remarkable machine that has already made a paradigm-shifting discovery with the observation of the Higgs boson. Exploration of the flavour-physics domain, which is a complementary but equally powerful way to search for new particles in high-energy collisions, is essential to pursue the next major milestone. The proposed LHCb Upgrade II detector will be able to accomplish this by exploring energy scales well beyond those reachable by direct searches. The proposal has received strong support from the 2020 update of the European strategy for particle physics, and the framework technical design report was positively reviewed by the LHC experiments committee. The challenges of performing precision flavour physics in the very harsh conditions of the HL-LHC are daunting, triggering a vast R&D programme at the forefront of technology. The goal of the LHCb teams is to begin construction of all detector components in the next few years, ready to install the new detector at the time of Long Shutdown 4.

ALICE 3: a heavy-ion detector for the 2030s https://cerncourier.com/a/alice-3-a-heavy-ion-detector-for-the-2030s/ Wed, 01 Mar 2023 12:56:52 +0000 https://preview-courier.web.cern.ch/?p=107852 The ALICE collaboration is charting a course to an exciting heavy-ion physics programme for Runs 5 and 6 at the High-Luminosity LHC.

ALICE 3

The ALICE experiment at the LHC was conceived to study the properties of the quark–gluon plasma (QGP), the state of matter prevailing a few microseconds after the Big Bang. Collisions between large nuclei in the LHC produce matter at temperatures of about 3 × 10¹² K, sufficiently high to liberate quarks and gluons, and thus to study the deconfined QGP state in the laboratory. The heavy-ion programme at LHC Runs 1 and 2 has already enabled the ALICE collaboration to study the formation of the QGP, its collective expansion and its properties, using for example the interactions of heavy quarks and high-energy partons with the QGP. ALICE 3 builds on these discoveries to reach the next level of understanding. 

One of the most striking discoveries at the LHC is that J/ψ mesons not only “melt” in the QGP but can also be regenerated from charm quarks produced in independent hard scatterings. The LHC programme has also shown that the energy loss of partons propagating through the plasma depends on their mass. Furthermore, collective behaviour and enhanced strange-baryon production have been observed in selected proton–proton collisions in which large numbers of particles are produced, signalling that high densities may be reached in such collisions. 

During Long Shutdown 2, a major upgrade of the ALICE detector (ALICE 2) was completed on budget and in time for the start of Run 3 in 2022. Together with improvements in the LHC itself, the experiment will profit from a factor-50 higher Pb–Pb collision rate and also provide a better pointing resolution. This will bring qualitative improvements for the entire physics programme, in particular for the detection of heavy-flavour hadrons and thermal di-electron radiation. However, several important questions – for example concerning the mechanisms leading to thermal equilibrium and the formation of hadrons in the QGP – will remain open even after Runs 3 and 4. To address these, the collaboration is pursuing next-generation technologies to build a new detector with a significantly larger rapidity coverage and excellent pointing resolution and particle identification (see “Brand new” figure). A letter of intent for ALICE 3, to be installed in 2033/2034 (Long Shutdown 4) and operated during Runs 5 and 6 (starting in 2035), was submitted to the LHC experiments committee in 2021 and led to a positive evaluation by the extended review panel in March 2022. 

Behind the curtain of hadronisation

In heavy-ion collisions at the LHC, a large amount of energy is deposited in a small volume, forming a QGP. The plasma immediately starts expanding and cooling down, eventually reaching a temperature at which hadrons are formed. Although hadrons formed at the boundary of this phase transition carry information about the expansion of the plasma, they do not inform us directly about the temperature and other properties of the hot plasma phase of the collision before hadronisation takes place. Photons and di-lepton pairs, which are produced as thermal radiation in electromagnetic processes and do not participate in the strong interaction, allow us to look behind the curtain of hadronisation. However, measurements of photon and dilepton emission are challenging due to the large background from electromagnetic decays of light hadrons and weak decays of heavy-flavour hadrons. 

Distribution of electron–positron pairs in Pb–Pb collisions at the LHC

One of the goals of the current ALICE 2 upgrades is to enable the first measurements of the thermal emission of electron–positron pairs (from virtual photons), and thus to determine the average temperature of the system before the formation of hadrons, during Runs 3 and 4. To further understand the evolution of temperature with time, larger data samples and excellent background rejection are needed. The early-stage temperature is determined from the exponential slope of the mass distribution above the ρ resonance, i.e. pair masses larger than 1.2 GeV/c² (see “Taking the temperature” figure, upper panel). ALICE 3 would be able to explore the time dependence of the temperature before hadronisation using more differential measurements, e.g. of the azimuthal asymmetry of di-electron emission and of the slope of the mass spectrum as a function of transverse momentum. 
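
Schematically, if the thermal di-electron yield above the ρ falls as dN/dM ∝ exp(−M/T), the temperature can be read off from the slope between two mass points. The yields in the sketch below are invented numbers, used only to show the arithmetic.

```python
import math

def temperature_from_slope(m1, y1, m2, y2):
    """Effective temperature (GeV) assuming dN/dM ~ exp(-M/T) between two mass points."""
    return (m2 - m1) / math.log(y1 / y2)

# Invented yields at two masses above the rho region (GeV/c^2), generated with T = 300 MeV.
m1, y1 = 1.4, 1000.0
m2, y2 = 2.0, 1000.0 * math.exp(-(2.0 - 1.4) / 0.300)
print(f"extracted T ~ {1e3 * temperature_from_slope(m1, y1, m2, y2):.0f} MeV")
```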

The di-electron mass spectrum also carries unique information about the mechanism of chiral symmetry breaking – a fundamental quantum-chromodynamics (QCD) effect that generates most of the hadron mass. At the phase transition to the QGP, chiral symmetry is restored and quarks and gluons are deconfined. One of the predicted signals of this transition is mixing between the ρ and a₁ vector-meson states, which gives the di-electron invariant mass spectrum a characteristic exponential shape in the mass range above the ρ meson peak (0.8–1.1 GeV/c²). Only the excellent electron identification and rejection of electrons from heavy-flavour decays possible with ALICE 3 can give physicists experimental access to this effect (see “Taking the temperature” figure, lower panel).

Multi-charm production

Another important goal of the ALICE physics programme is to understand how energetic quarks and gluons interact with the QGP and eventually thermalise and form a plasma that behaves as a fluid with very low internal friction. The thermalisation process and the properties of the QGP are governed by low-momentum interactions between quarks and gluons, which cannot be calculated using perturbative techniques. Experimental input is therefore important to understand these phenomena and to link them to fundamental QCD.

Heavy quarks  

The heavy charm and beauty quarks are of particular interest because their interactions with the plasma can be calculated using lattice-QCD techniques with good theoretical control. Heavy quarks and antiquarks are mostly produced as back-to-back pairs in hard scatterings in the early phase of the collision. Subsequent interactions between the quarks and the plasma change the angle between the quark and antiquark. In addition, the “drag” from the plasma leads to an asymmetry in the overall azimuthal distributions of heavy quarks (elliptic flow) with respect to the reaction plane. The size of these effects is a measure of the strength of the interactions with the plasma. Since quark flavour is conserved in interactions in the plasma, measurements of hadrons containing heavy quarks, such as the D meson and Λc baryon, are directly sensitive to the interactions between heavy quarks and the plasma. While the increase in statistics and the improved spatial resolution of ALICE 2 will already allow us to measure the production of charm baryons, measurements of azimuthal correlations of charm–hadron pairs are needed to directly address how they interact with the plasma. These will only become possible with the precision, statistics and acceptance of ALICE 3. 
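
The azimuthal asymmetry mentioned here is conventionally quantified by the elliptic-flow coefficient v₂ in dN/dφ ∝ 1 + 2v₂ cos 2(φ − Ψ). A minimal toy estimator, with generated angles standing in for measured charm hadrons, could look as follows.

```python
import math
import random

def v2(phis, psi_rp=0.0):
    """Event-plane estimate of elliptic flow: v2 = <cos 2(phi - Psi)>."""
    return sum(math.cos(2.0 * (phi - psi_rp)) for phi in phis) / len(phis)

# Toy sample drawn from dN/dphi ~ 1 + 2*v2_true*cos(2*phi) by accept-reject.
random.seed(1)
v2_true, sample = 0.10, []
while len(sample) < 20000:
    phi = random.uniform(-math.pi, math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v2_true) < 1.0 + 2.0 * v2_true * math.cos(2.0 * phi):
        sample.append(phi)
print(f"reconstructed v2 ~ {v2(sample):.3f} (true value {v2_true})")
```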

Heavier beauty quarks are expected to take longer to thermalise and therefore lose less information through their interactions with the QGP. Therefore, systematic measurements of transverse-momentum distributions and azimuthal asymmetries of beauty mesons and baryons in heavy-ion collisions are essential to map out the interactions of heavy-flavour quarks with the QGP and to understand the mechanisms that drive the system towards thermal equilibrium.

To understand how hadrons emerge from the QGP, those containing multiple heavy quarks are of particular interest because they can only be formed from quarks that were produced in separate hard-scattering processes. If full thermal equilibrium is reached in Pb–Pb collisions, the production rates of such states are expected to be enhanced by up to three orders of magnitude with respect to pp collisions. This implies enormous sensitivity to the probability for combining independently produced quarks during hadronisation and to the degree of thermalisation. At ALICE 3, the precision with which multi-charm baryon yields can be measured is enhanced (see “Multi-charm production” figure). 

Model of a novel design for a retractable tracker

In addition to precision measurements of di-electrons and heavy-flavour hadrons, ALICE 3 will allow us to investigate many more aspects of the QGP. These include fluctuations of conserved quantum numbers, such as flavour and baryon number, which are sensitive to the nature of the deconfinement phase transition of QCD. ALICE 3 will also aim to answer questions in hadron physics, for example by searching for the existence of nuclei containing charm baryons (analogous to strange baryons in hypernuclei) and by studying the interaction potentials between unstable hadrons, which may elucidate the structure of exotic hadronic states that have recently been discovered in electron–positron collisions and in hadronic collisions at the LHC. In addition, ALICE 3 will use ultra-peripheral collisions to study the structure of resonances such as the ρ′ and to look for new fundamental particles, such as axion-like particles and dark photons. A dedicated detector system is foreseen to study very low-energy photon production, which can be used to test “soft theorems” that link the production of very soft photons in a collision to the hadronic final state.

Pushing the experimental limits 

To pursue this ambitious physics programme, ALICE 3 is designed to be a compact, large-acceptance tracking and particle-identification detector with excellent pointing resolution as well as high readout rates. The main tracking information is provided by an all-silicon tracker in a magnetic field provided by a superconducting magnet system, complemented by a dedicated vertex detector that will have to be retractable to provide the required aperture for the LHC at injection energy. To achieve the ultimate pointing resolution, the first hits must be detected as close as possible to the interaction point (5 mm at the highest energy) and the amount of material in front of it be kept to a minimum. The inner tracking layers will also enable so-called strangeness tracking – the direct detection of strange baryons before they decay – to improve the pointing resolution and suppress combinatorial background, for example in the measurement of multi-charm baryon decays.

ALICE 3 is a compact, large-acceptance tracking and particle-identification detector with excellent pointing resolution as well as high readout rates

First feasibility studies of the mechanical design and the integration with the LHC for the vertex tracker have been conducted and engineering models have been produced to demonstrate the concept and explore production techniques for the components (see “Close encounters” image). The detection layers are to be constructed from bent, wafer-scale pixel sensors. The development of the next generation of CMOS pixel sensors in 65 nm technology with higher radiation tolerance and improved spatial resolution has already started in the context of the ITS 3 project in ALICE, which will be an important milestone on the way to ALICE 3 (see “Next-gen tracking” image). The outer tracker, which has to cover the cylindrical volume to a radius of 80 cm over a total length of ±4 m, will also use CMOS pixel sensors. These will be integrated into larger modules for an effective instrumentation of about 60 m² while minimising the material used for mechanical support and services. The foreseen material budget for the tracker is 1% of a radiation length per layer for the outer tracker, and only 0.05% per layer for the vertex tracker.

An engineering model of ITS 3

For particle identification, five different detector systems are foreseen: a silicon-based time-of-flight system and a ring-imaging Cherenkov (RICH) detector that provide hadron and electron identification over a broad momentum range, a muon identifier starting from a transverse momentum of about 1.5 GeV/c, an electromagnetic calorimeter for photon detection and identification, and a forward tracker to reconstruct photons at very low momentum from their conversions to electron–positron pairs. For the time-of-flight system, the main R&D line aims at the integration of a gain layer in monolithic CMOS sensors to achieve the required time resolution of at least 20 ps (alternatively, low-gain avalanche diodes with external readout circuitry can be used). The calorimeter is based on a combination of lead-sampling and lead-tungstate segments, both of which would be read out by commercially available silicon photomultipliers (SiPMs). For the detection layers of the muon identifier, both resistive plate chambers and scintillating bars are being considered. Finally, for the RICH design, the R&D goal is to integrate the digital readout circuitry in SiPMs to enable efficient detection of photons in the visible range. 

ALICE 3 provides a roadmap for an exciting heavy-ion physics programme, along with the other three large LHC experiments, in Runs 5 and 6. An R&D programme for the coming years is being set up to establish the technologies and enable the preparation of technical design reports in 2026/2027. These developments not only constitute an important contribution to the full physics exploitation of the LHC, but are of strategic interest for future particle detectors and will benefit the particle and nuclear physics community at large.

CMS looks forward to new physics with PPS https://cerncourier.com/a/cms-looks-forward-to-new-physics-with-pps/ Mon, 05 Sep 2022 09:09:12 +0000 https://preview-courier.web.cern.ch/?p=105821 A new CMS subdetector – the Precision Proton Spectrometer (PPS) – allows the electroweak sector of the Standard Model to be probed in regions so far unexplored.

PPS timing detector

Colliding particles at high energies is a tried and tested route to uncover the secrets of the universe. In a collider, charged particles are packed in bunches, accelerated and smashed into each other to create new forms of matter. Whether accelerating elementary electrons or composite hadrons, past and existing colliders all deal with matter constituents. Colliding force-carrying particles such as photons is more ambitious, but can be done, even at the Large Hadron Collider (LHC). 

The LHC, as its name implies, collides hadrons (protons or ions) into one another. In most cases of interest, projectile protons break up in the collision and a large number of energetic particles are produced. Occasionally, however, protons interact through a different mechanism, whereby they remain intact and exchange photons that fuse to create new particles (see “Photon fusion” figure). Photon–photon fusion has a unique signature: the particles originating from this kind of interaction are produced exclusively, i.e. they are the only ones in the final state along with the protons, which often do not disintegrate. Despite this clear imprint, when the LHC operates at nominal instantaneous luminosities, with a few dozen proton–proton interactions in a single bunch crossing, the exclusive fingerprint is contaminated by extra particles from different interactions. This makes the identification of photon–photon fusion challenging.

The sensitivity in many channels is expected to increase by a factor of four or five compared to that in Run 2

Protons that survive the collision, having lost a small fraction of their momentum, leave the interaction point still packed within the proton bunch, but gradually drift away as they travel further along the beamline. During LHC Run 2, the CMS collaboration installed a set of forward proton detectors, the Precision Proton Spectrometer (PPS), at a distance of about 200 m from the interaction point on both sides of the CMS apparatus. The PPS detectors can get as close to the beam as a few millimetres and detect protons that have lost between 2% and 15% of their initial kinetic energy (see “Precision Proton Spectrometer up close” panel). They are the CMS detectors located the farthest from the interaction point and the closest to the beam pipe, opening the door to a new physics domain, represented by central-exclusive-production processes in standard LHC running conditions.

Testing the Standard Model

Central exclusive production (CEP) processes at the LHC allow novel tests of the Standard Model (SM) and searches for new phenomena by potentially granting access to some of the rarest SM reactions so far unexplored. The identification of such exclusive processes relies on the correlation between the proton momentum loss measured by PPS and the kinematics of the central system, allowing the mass and rapidity of the central system in the interaction to be inferred very accurately (see “Tagging exclusive events” and “Exclusive identification” figures). Furthermore, the rules for exclusive photon–photon interactions only allow states with certain quantum numbers (in particular, spin and parity) to be produced. 
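
The kinematic correlation exploited here is simple: neglecting the protons’ transverse momenta, the mass and rapidity of the central system follow from the fractional momentum losses ξ₁ and ξ₂ of the two protons as M ≈ √(sξ₁ξ₂) and y ≈ ½ ln(ξ₁/ξ₂). A minimal sketch at √s = 13 TeV:

```python
import math

SQRT_S = 13000.0                 # GeV, proton-proton collisions in Run 2

def central_system(xi1, xi2):
    """Mass and rapidity of the central system from the two proton momentum losses."""
    mass = SQRT_S * math.sqrt(xi1 * xi2)
    rapidity = 0.5 * math.log(xi1 / xi2)
    return mass, rapidity

# Smallest reachable mass if both protons lose the minimum detectable 2% of momentum:
m_min, _ = central_system(0.02, 0.02)
print(f"minimum central mass ~ {m_min:.0f} GeV")         # ~260 GeV
# An asymmetric example:
m, y = central_system(0.10, 0.03)
print(f"xi = (0.10, 0.03): M ~ {m:.0f} GeV, y ~ {y:+.2f}")
```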

Precision Proton Spectrometer up close

Tracking station

PPS was born in 2014 as a joint project between the CMS and TOTEM collaborations (CERN Courier April 2017 p23), and in 2018 became a subsystem of CMS following an MoU between CERN, CMS and TOTEM. For the specialised PPS setup to work as designed, its detectors must be located within a few millimetres of the LHC proton beam. The Roman Pots technique – moveable steel “pockets” enclosing the detectors under moderate vacuum conditions with a thin wall facing the beam – is perfectly suited for this task. This technique has been successfully exploited by the TOTEM and ATLAS collaborations at the LHC and was used in the past by experiments at the ISR, the SPS, the Tevatron and HERA. The challenge for PPS is the requirement that the detectors operate continuously during standard LHC running conditions, as opposed to dedicated special runs with a very low interaction rate.

The PPS design for LHC Run 2 incorporated tracking and timing detectors on both sides of CMS. The tracking detector comprises two stations located 10 m apart, capable of reconstructing the position and angle of the incoming proton. Precise timing is needed to associate the production vertex of two protons to the primary interaction vertex reconstructed by the CMS tracker. The first tracking stations of the proton spectrometer were equipped with silicon-strip trackers from TOTEM – a precise and reliable system used since the start of the LHC. In parallel, a suitable detector technology for efficient operation during standard LHC runs was developed, and in 2017 half of the tracking stations (one per side) were replaced by new silicon pixel trackers designed to cope with the higher hit rate. The x, y coordinates provided by the pixels resolve multiple proton tracks in the same bunch crossing, while the “3D” technology used for sensor fabrication greatly enhances resistance against radiation damage. The transition from strips was completed in 2018, when the fully pixel-based tracker was employed.

In parallel, the timing system was set up. It is based on diamond pad sensors initially developed for a new TOTEM detector. The signal collection is segmented in relatively large pads, read out individually by custom, high-speed electronics. Each plane contributes to the time measurement of the proton hit with a resolution of about 100 ps. The design of the detector evolved during Run 2 with different geometries and set-ups, improving the performance in terms of efficiency and overall time resolution.
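
The purpose of the timing planes can be made concrete with a little arithmetic: several planes are averaged to improve the per-proton time, and the difference of the two proton arrival times fixes the longitudinal position of their common vertex through z = c(t₂ − t₁)/2. The four planes per arm assumed below are illustrative, not a statement of the final PPS configuration.

```python
import math

C_MM_PER_PS = 0.2998                      # speed of light in mm/ps

def per_arm_resolution_ps(sigma_plane_ps, n_planes):
    """Combined proton time resolution from n independent planes (ps)."""
    return sigma_plane_ps / math.sqrt(n_planes)

def vertex_z_resolution_mm(sigma_arm_ps):
    """Resolution on z = c*(t2 - t1)/2, for equal timing resolution in both arms."""
    return C_MM_PER_PS * math.sqrt(2.0) * sigma_arm_ps / 2.0

sigma_arm = per_arm_resolution_ps(100.0, 4)    # assumed: four 100 ps planes per arm
print(f"per-arm time resolution ~ {sigma_arm:.0f} ps")
print(f"vertex z resolution     ~ {vertex_z_resolution_mm(sigma_arm):.0f} mm")
```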

The most common and cleanest process in photon–photon collisions is the exclusive production of a pair of leptons. Theoretical calculations of such processes date back almost a century to the well-known Breit–Wheeler process. The first result obtained by PPS after commissioning in 2016 was the measurement of (semi-)exclusive production of e⁺e⁻ and μ⁺μ⁻ pairs using about 10 fb⁻¹ of CMS data: 20 candidate events were identified with a di-lepton mass greater than 110 GeV. This process is now used as a “standard candle” to calibrate PPS and validate its performance. The cross section of this process has been measured by the ATLAS collaboration with their forward proton spectrometer, AFP (CERN Courier September/October 2020 p15). 

An interesting process to study is the exclusive production of W-boson pairs. In the SM, electroweak gauge bosons are allowed to interact with each other through point-like triple and quartic couplings. Most extensions of the SM modify the strength of these couplings. At the LHC, electroweak self-couplings are probed via gauge-boson scattering, and specifically photon–photon scattering. A notable advantage of exclusive processes is the excellent mass resolution obtained from PPS, allowing the study of self-couplings at different scales with very high precision. 

During Run 2, PPS reconstructed intact protons that lost down to 2% of their kinetic energy, which for proton–proton collisions at 13 TeV translates to sensitivity for central mass values above 260 GeV. In the production of electroweak boson pairs, WW or ZZ, the quartic self-coupling mainly contributes to the high invariant-mass tail of the di-boson system. The analysis searched for anomalously large values of the quartic gauge coupling and the results provide the first constraint on γγZZ in an exclusive channel and a competitive constraint on γγWW compared to other vector-boson-scattering searches.

Final states produced via photon–photon fusion

Many SM processes proceeding via photon fusion have a relatively low cross section. For example, the predicted cross section for CEP of top quark–antiquark pairs is of the order of 0.1 fb. A search for this process was performed early this year using about 30 fb⁻¹ of CMS data recorded in 2017, with protons tagged by PPS. While the sensitivity of the analysis is not sufficient to test the SM prediction, it can probe possible enhancements due to additional contributions from new physics. Also, the analysis established tools with which to search for exclusive production processes in a multi-jet environment using machine-learning techniques. 

Uncharted domains 

The SM provides very accurate predictions for processes occurring at the LHC. Yet, it cannot explain the origin of several observations such as the existence of dark matter, the matter–antimatter asymmetry in the universe and neutrino masses. So far, the LHC experiments have been unable to provide answers to those questions, but the search is ongoing. Since physics with PPS mostly targets photon collisions, the only assumption is that the new physics is coupled to the electroweak sector, opening a plethora of opportunities for new searches. 

Tagging exclusive events

Photon–photon scattering has already been observed in heavy-ion collisions by the LHC experiments, for example by ATLAS (CERN Courier December 2016 p9). But new physics would be expected to enter at higher di-photon masses, which is where PPS comes into play. Recently, a search for di-photon exclusive events was performed using about 100 fb⁻¹ of CMS data at a di-photon mass greater than 350 GeV, where SM contributions are negligible. In the absence of an unexpected signal, a new best limit was set on anomalous four-photon coupling parameters. In addition, a limit on the coupling of axion-like particles to photons was set in the mass region 500–2000 GeV. These are the most restrictive limits to date.

A new, interesting possibility to look for unknown particles is represented by the “missing mass” technique. The exclusivity of CEP makes it possible, in two-particle final states, to infer the four-momentum of one particle if the other is measured. This is done by exploiting the fact that, if the protons are measured and the beam energy is known, the kinematics of the centrally produced final state can be determined: no direct measurements of the second particle are required, allowing us to “see the unseen”. This technique was demonstrated for the first time at the LHC this year, using around 40 and 2 fb⁻¹ of Run 2 data in a search for pp → pZXp and pp → pγXp, respectively, where X represents a neutral, integer-spin particle with an unspecified decay mode. In the absence of an observed signal, the analysis sets the first upper limits for the production of an unspecified particle in the mass range 600–1600 GeV.
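
The missing-mass technique is four-momentum conservation applied to the whole event. In the simplified sketch below, which neglects the proton mass and all transverse momenta, the beam energies, the two measured momentum losses ξ₁ and ξ₂ and the reconstructed Z determine the mass of the unseen particle X; the numerical example is invented purely for illustration.

```python
import math

E_BEAM = 6500.0                  # GeV per beam in Run 2

def missing_mass(xi1, xi2, e_z, pz_z):
    """Mass of X in pp -> p Z X p, neglecting proton mass and transverse momenta."""
    e_central = E_BEAM * (xi1 + xi2)       # energy carried by the two exchanged photons
    pz_central = E_BEAM * (xi1 - xi2)      # net longitudinal momentum of the central system
    e_x, pz_x = e_central - e_z, pz_central - pz_z
    return math.sqrt(max(e_x**2 - pz_x**2, 0.0))

# Invented example: protons losing 8% and 5%, Z measured with E = 200 GeV, pz = 178 GeV.
print(f"M_X ~ {missing_mass(0.08, 0.05, 200.0, 178.0):.0f} GeV")
```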

Looking forward with PPS

di-photon exclusive production

For LHC Run 3, which began in earnest on 5 July, the PPS team has implemented several upgrades to maximise the physics output from the expected increase in integrated luminosity. The mechanics and readout electronics of the pixel tracker have been redesigned to allow remote shifting of the sensors in several small steps, which better distributes the radiation damage caused by the highly non-uniform irradiation. All timing stations are now equipped with “double diamond” sensors, and from 2023 an additional, second station will be added to each PPS arm. This will improve the resolution of the measured arrival time of protons, which is crucial for reconstructing the z coordinate of a possible common vertex, by at least a factor of two. Finally, a new software trigger has been developed that requires the presence of tagged protons in both PPS arms, thus allowing the use of lower energy thresholds for the selection of events with two particle jets in CMS.

The sensitivity in many channels is expected to increase by a factor of four or five compared to that in Run 2, despite only a doubling of the integrated luminosity. This significant increase is due to the upgrade of the detectors, especially of the timing stations, thus placing PPS in the spotlight of the Run 3 research programme. Timing detectors also play a crucial role in the planning for the high-luminosity LHC (HL-LHC) phase. The CMS collaboration has released an expression of interest to pursue studies of CEP at the HL-LHC with the ambitious plan of installing near-beam proton spectrometers at 196, 220, 234, and 420 m from the interaction point. This would extend the accessible mass range to the region between 50 GeV and 2.7 TeV. The main challenge here is to mitigate high “pileup” effects using the timing information, for which new detector technologies, including synergies with the future CMS timing detectors, are being considered.

PPS significantly extends the LHC physics programme, and is a tribute to the ingenuity of the CMS collaboration in the ongoing search for new physics.

Limbering up for the Einstein Telescope https://cerncourier.com/a/limbering-up-for-the-einstein-telescope/ Thu, 30 Jun 2022 13:44:18 +0000 https://preview-courier.web.cern.ch/?p=102001 Preparations for a next-generation gravitational-wave observatory in Europe gather pace, with a conditional allocation of €42 million from the Dutch government.

Einstein Telescope

On 14 April the government of the Netherlands announced that it intends to conditionally allocate €42 million to the development of the Einstein Telescope – a proposed next-generation gravitational-wave observatory in Europe. It also pledged a further €870 million for a potential future Dutch contribution to the construction. The decision was taken by the Dutch government based on the advice of the Advisory Committee of the National Growth Fund, stated a press release from Nikhef and the regional development agency for Limburg. 

The Einstein Telescope (ET) is a triangular laser interferometer with 10 km-long sides that would be at least 10 times more sensitive than the Advanced LIGO and Virgo observatories, extending its scope for detections and enabling physicists to look back much further in cosmological time. To reach the required sensitivities, the interferometer has to be built at least 200 m underground in a geologically stable area. Its mirrors will have to operate in cryogenic conditions to reduce thermal disturbance, and be larger and heavier than those currently employed to allow for a larger and more powerful laser beam. 

Activities have been taking place at two potential sites in Europe: the border region of South Limburg (the Euregio Meuse-Rhine) in the Netherlands; and the Sar-Grav laboratory in the Sos Enattos mine in Sardinia, Italy. For the Sardinia site, a similar proposal has been submitted to the Italian government and feedback is expected in July.

The Netherlands’ intended €42 million investment will go towards preparatory work such as innovation of the necessary technology, location research, building up a high-tech ecosystem and organisation, stated the press release, while the reservation of €870 million is intended to put the Netherlands in a strong position to apply in the future – together with Belgium and Germany – to host and build the ET. 

It is fantastic that the cabinet embraces the ambition to make the Netherlands a world leader in research into gravity waves

“It is fantastic that the cabinet embraces the ambition to make the Netherlands a world leader in research into gravity waves,” said Nikhef director Stan Bentvelsen, who has been involved with the ET for several years. “These growth-fund resources form the basis for further cooperation with our partners in Germany and Belgium, and for research into the geological subsurface in the border region of South Limburg. A major project requires a careful process, and I am confident that we will meet the additional conditions.”

Housing the ET in the region could have a major positive impact on science, the economy and society in the Netherlands, said provincial executive member for Limburg Stephan Satijn. “With today’s decision, the cabinet places our country at the global forefront of high-tech and science. Limburg is the logical place to help shape this leading position. Not only because of the suitability of our soil, but also because we are accustomed to working together internationally and to connecting science and business.”

At the 12th ET symposium in Budapest on 7–8 June, the ET scientific collaboration was officially born – a crucial step in the project’s journey, said ad interim spokesperson Michele Punturo of the INFN: “We were a scientific community, today we are a scientific collaboration, that is, a structured and organised system that works following shared rules to achieve the common goal: the realisation of a large European research infrastructure that will allow us to maintain scientific and technological leadership in this promising field of fundamental physics research.”

In January, the ET was granted status as a CERN recognised experiment (RE43), with a collaboration agreement on vacuum technology already in place and a further agreement concerning cryogenics at an advanced stage.

Flying high with silicon photomultipliers https://cerncourier.com/a/flying-high-with-silicon-photomultipliers/ Wed, 25 May 2022 07:45:59 +0000 https://preview-courier.web.cern.ch/?p=100404 Silicon photomultipliers offer many advantages over traditional tube devices, but further R&D is needed to understand their performance under radiation damage.

sipm_2

The ever maturing technology of silicon photomultipliers (SiPMs) has a range of advantages over traditional photomultiplier tubes (PMTs). As such, SiPMs are quickly replacing PMTs in a range of physics experiments. The technology is already included in the LHCb SciFi tracker and is foreseen to be used in CMS’ HGCAL, as well as in detectors at proposed future colliders. For these applications the important advantages of SiPMs over PMTs are their higher photo-detection efficiencies (by roughly a factor of two), their lower operating voltage (30–70 V compared to the kilovolts needed by PMTs) and their small size, which allows them to be integrated in compact calorimeters. For space-based instruments — such as the POLAR-2 gamma-ray mission, which aims to use 6400 SiPM channels (see image) — a further advantage is the lack of a glass window, which gives SiPMs the mechanical robustness required during launch. There is, however, a disadvantage with SiPMs: dark current, which flows when the device is not illuminated and is greatly aggravated after exposure to radiation.

In order to strengthen the community and make progress on this technological issue, a dedicated workshop was held at CERN in a hybrid format from 25 to 29 April. Organised by the University of Geneva and funded by the Swiss National Science Foundation, the event attracted around 100 experts from academia and industry. The participants included experts in silicon radiation damage from the University of Hamburg, who showed both the complexity of the problem and the need for further studies. Whereas the non-ionising energy loss (NIEL) concept predicts that the degradation of semiconductor devices in a radiation field scales linearly with the non-ionising energy deposited, this linear scaling appears to be violated for SiPMs. Instead, dedicated measurements for different types of SiPMs in a variety of radiation fields are required to understand the types of damage and their consequences on the SiPMs’ performance. Several such measurements, performed using both proton and neutron beams, were presented at the April workshop, while plans were made to coordinate such efforts in the future, for example by performing tests of one type of SiPM at different facilities followed by identical analysis of the irradiated samples. In addition, an online platform to discuss upcoming results was established.

The lack of a glass window gives SiPMs the mechanical robustness required during launch

The damage sustained by radiation manifests itself mainly in the form of an increased dark current. As presented at the workshop, this increase can cause a vicious cycle because the increased current can cause self-heating, which further increases the highly temperature-dependent dark current. These issues are of great importance for future space missions as they influence the power budget, causing the scientific performance to degrade over time. Data from the first SiPM-based in-orbit detectors, such as the SIRI mission by the US Naval Research Lab, the Chinese-led GECAM and GRID detectors and the Japanese-Czech GRBAlpha payload, were presented. It is clear that although SiPMs have advantages over PMTs, the radiation, which is highly dependent on the satellite’s orbit, can cause a significant degradation in performance that limits low-Earth-orbit missions to several years in space. Based on these results, a future Moon mission has decided against the use of SiPMs and reverted to PMTs.

Solutions to radiation damage in SiPMs were also discussed at length. These mainly involve speeding up the annealing of the damage by exposing the SiPMs to elevated temperatures for short periods. Additionally, cooling of the SiPM during data taking will not only decrease the dark current directly, but could also reduce the radiation damage itself, although further research on this topic is required.
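
The leverage that cooling provides can be illustrated with the textbook temperature dependence of thermally generated dark current in silicon, I ∝ T² exp(−E/2kT), which corresponds to roughly a factor of two for every 7–8 °C. The numbers below are a generic silicon estimate under that assumption, not a measurement of any particular SiPM.

```python
import math

K_B = 8.617e-5        # Boltzmann constant, eV/K
E_EFF = 1.21          # assumed effective activation energy for bulk generation current, eV

def dark_current_scale(t_c, t_ref_c=20.0):
    """Ratio I(T)/I(T_ref) for thermally generated dark current, I ~ T^2 exp(-E/2kT)."""
    t, t_ref = t_c + 273.15, t_ref_c + 273.15
    return (t / t_ref) ** 2 * math.exp(-E_EFF / (2.0 * K_B) * (1.0 / t - 1.0 / t_ref))

for t in (0.0, -20.0, -40.0):
    print(f"{t:+5.0f} C: dark current x {dark_current_scale(t):.3f} of its value at +20 C")
```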

Overall, the workshop indicated that significant further studies are required to predict the impact of radiation damage on future experiments.

VELO’s voyage into the unknown https://cerncourier.com/a/velos-voyage-into-the-unknown/ Mon, 02 May 2022 08:33:49 +0000 https://preview-courier.web.cern.ch/?p=99229 LHCb's all-new VELO detector will extend the collaboration's capabilities to search for new physics at Run 3.

Marvellous modules

The first 10 years of the LHC have cemented the Standard Model (SM) as the correct theory of known fundamental particle interactions. But unexplained phenomena such as the cosmological matter–antimatter asymmetry, neutrino masses and dark matter strongly suggest the existence of new physics beyond the current direct reach of the LHC. As a dedicated heavy-flavour physics experiment, LHCb is ideally placed to allow physicists to look beyond this horizon. 

Measurements of the subtle effects that new particles can have on SM processes are fully complementary to searches for the direct production of new particles in high-energy collisions. As-yet unknown particles could contribute to the mixing and decay of beauty and charm hadrons, for example, leading to departures from the SM in decay rates, CP-violating asymmetries and other measurements. Rare processes for which the SM contribution occurs through loop diagrams are particularly promising for potential discoveries. Several anomalies recently reported by LHCb in such processes suggest that the cherished SM principle of lepton-flavour universality is under strain, leading to speculation that the discovery of new physics may not be far off.

Unique precision

In addition to precise theoretical predictions, flavour-physics measurements demand vast datasets and specialised detector and data-processing technology. To this end, the LHCb collaboration is soon to start taking data with an almost entirely new detector that will allow at least 50 fb⁻¹ of data to be accumulated during Run 3 and Run 4, compared to 10 fb⁻¹ from Run 1 and Run 2. This will enable many observables, in particular the flavour anomalies, to be measured with a precision unattainable at competing experiments. 

To allow LHCb to run at an instantaneous luminosity 10 times higher than during Run 2, much of the detector system and its readout electronics have been replaced, while a flexible full-software trigger system running at 40 MHz allows the experiment to maintain or even improve trigger efficiencies despite the larger interaction rate. During Long Shutdown 2, upgraded ring-imaging Cherenkov detectors and a brand new “SciFi” (scintillating fibre) tracker have been installed. A major part of LHCb’s metamorphosis – in process at the time of writing – is the installation of a new Vertex Locator (VELO) at the heart of the experiment. 

The VELO encircles the LHCb interaction point, where it contributes to triggering, tracking and vertexing. Its principal task is to pick out short-lived charm and beauty hadrons from the multitude of other particles produced by the colliding proton beams. Thanks to its close position to the interaction point and high granularity, the VELO can measure the decay time of B mesons with a precision of about 50 fs. 
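
For orientation, the proper decay time is obtained from the measured flight distance and momentum as t = mL/(pc), so a 50 fs precision corresponds to resolving the decay vertex at the level of a few hundred microns for a typical high-momentum B meson; the momentum used below is an illustrative value, not an LHCb average.

```python
M_B = 5.28                 # B-meson mass, GeV/c^2
C = 0.2998                 # speed of light, mm/ps

def proper_time_ps(flight_mm, p_gev):
    """Proper decay time from flight distance and momentum: t = m*L/(p*c)."""
    return M_B * flight_mm / (p_gev * C)

p = 100.0                                  # assumed B momentum in GeV/c (illustrative)
mean_flight = (p / M_B) * C * 1.5          # flight distance for one ~1.5 ps lifetime
print(f"mean flight distance ~ {mean_flight:.1f} mm")
print(f"its proper time      ~ {proper_time_ps(mean_flight, p):.2f} ps")
# Vertex separation corresponding to a 50 fs decay-time resolution:
sigma_l_mm = 0.050 * (p / M_B) * C
print(f"50 fs corresponds to ~{1e3 * sigma_l_mm:.0f} microns along the flight direction")
```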

Microcooling

The original VELO was based on silicon-strip detectors. Its upgraded version employs silicon pixel detectors to cope with the increased occupancies at higher luminosities and to stream complete events at 40 MHz, with an expected torrent of up to 3 Tb/s flowing from the VELO at full luminosity. A total of 52 silicon pixel detector modules, each with a sensitive surface of about 25 cm², are mounted in two detector halves located on either side of the LHC beams and perpendicular to the beam direction (see “Marvellous modules” image). An important feature of the LHCb VELO is that it moves. During injection of LHC protons, the detectors are parked at a safe distance of 3 cm from the beams. But once stable beams are declared, the two halves are moved inward such that the detector sensors effectively enclose the beam. At that point the sensitive elements will be as close as 5.1 mm to the beams (compared to 8.2 mm previously), which is much closer than any of the other large LHC detectors and vital for the identification and reconstruction of charm- and beauty-hadron decays. 

The VELO’s close proximity to the interaction point requires a high radiation tolerance. This led the collaboration to opt for silicon-hybrid pixel detectors, which consist of a 200 μm-thick “p-on-n” pixel sensor bump-bonded to a 200 μm-thick readout chip with binary pixel readout. The CERN/Nikhef-designed “VeloPix” ASIC stems from the Medipix family and was specially developed for LHCb. It is capable of handling up to 900 million hits per second per chip, while withstanding the intense radiation environment. The data are routed through the vacuum via low-mass flex cables engineered by the University of Santiago de Compostela, then make the jump to atmosphere through a high-speed vacuum interface designed by Moscow State University engineers, which is connected to an optical board developed by the University of Glasgow. The data are then carried by optical fibres with the rest of the LHCb data to the event builder, trigger farm and disk buffers contained in modular containers in the LHCb experimental area.

The VELO modules were constructed at two production sites: Nikhef and the University of Manchester, where all the building blocks were delivered from the many institutes involved and assembled together over a period of about 1.5 years. After an extensive quality-assurance programme to assess the mechanical, electrical and thermal performance of each module, they were shipped in batches to the University of Liverpool to be mounted into the VELO halves. Finally, after population with modules, each half of the VELO detector was transported to CERN for installation in the LHCb experiment. The first half was installed on 2 March, and the second is being assembled.

Microchannel cooling

Keeping the VELO cool to prevent thermal runaway and minimise the effects of radiation damage was a major design challenge. The active elements in a VELO module consist of 12 front-end ASICs (VeloPix) and two control ASICs (GBTX), with a nominal power consumption of about 1.56 kW for each VELO half. The large radiation dose experienced by the silicon sensors is distributed highly non-uniformly and concentrated in the region closest to the beams, with a peak dose 60% higher than that experienced by the other LHC tracking detectors. Since the sensors are bump-bonded to the VeloPix chips, they are in direct contact with the ASICs, which are the main source of heat. The detector is also operated under vacuum, making heat removal especially difficult. These challenging requirements led LHCb to adopt microchannel cooling with evaporative CO2 as the coolant (see “Microcooling” image). 

Keeping the VELO cool to prevent thermal runaway and minimise the effects of radiation damage was a major design challenge

The circulation of coolant in microscopic channels embedded within a silicon wafer is an emergent technology, first implemented at CERN by the NA62 experiment. The VELO upgrade combines this with the use of bi-phase (liquid-to-gas) CO2, as used by LHCb in previous runs, in a single innovative system. The LHCb microchannel cooling plates were produced at CERN in collaboration with the University of Oxford. The bare plates were fabricated by CEA-Leti (Grenoble, France) by atomic-bonding two silicon wafers together, one with 120 × 200 μm trenches etched into it, for an overall thickness of 500 μm. This approach allows the design of a channel pattern to ensure a very homogeneous flow directly under the heat sources. The coolant is circulated inside the channels through exit and entry slits that are etched directly into the silicon after the bonding step. The cooling is so effective that it is possible to sustain an overhang of 5 mm closest to the beam, thus reducing the amount of material before the first measured points on each track. The use of microchannels to cool electronics is being investigated both for future LHCb upgrades and several other future detectors.

Module assembly and support

The microchannel plate serves as the core of the mechanical support for all the active components. The silicon sensors, already bump-bonded to their ASICs to form a tile, are precisely positioned with respect to the base and glued to the microchannel plate with a precision of 30 μm. The thickness of the glue layer is around 80 µm to produce low thermal gradients across the sensor. The front-end ASICs are then wire-bonded to custom-designed kapton–copper circuit boards, which are also attached to the microchannel substrate. The ASICs’ placement requires a precision of about 100 µm, such that the length and shape of the 420 wire-bonds are consistent along the tile. High-voltage, ultra-high-speed data links and all electrical services are designed and attached in such a way to produce a precise and lightweight detector (a VELO module weighs only 300 g) and therefore minimise the material in the LHCb acceptance.

Every step in the assembly of a module was followed by checks to ensure that the quality met the requirements. These included: metrology to assess the placement and attachment precision of the active components; mechanical tests to verify the effects of the thermal stress induced by temperature gradients; characterisation of the current-voltage behaviour of the silicon sensors; thermal performance measurements; and electrical tests to check the response of the pixel matrix. The results were then uploaded to a database, both to keep a record of all the measurements carried out and to run tests that assign a grade for each module. This allowed for continuous cross-checks between the two assembly sites. To quantify the effectiveness of the cooling design, the change in temperature on each ASIC as a function of the power consumption was measured. The LHCb modules have demonstrated thermal-figure-of-merit values as low as 2–3 K cm² W⁻¹. This performance surpasses what is possible with, for example, mono-phase microchannel cooling or integrated-pipe solutions. 
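
The thermal figure of merit translates directly into the temperature drop between coolant and silicon, ΔT ≈ TFM × q, where q is the areal power density of the heat source. The power density assumed below is a round, illustrative number for the front-end ASICs, not a measured LHCb value.

```python
def delta_t_kelvin(tfm_k_cm2_per_w, power_density_w_per_cm2):
    """Temperature drop between coolant and sensor: Delta T = TFM * power density."""
    return tfm_k_cm2_per_w * power_density_w_per_cm2

q = 1.5                           # assumed ASIC power density in W/cm^2 (illustrative)
for tfm in (2.0, 3.0, 8.0):       # measured LHCb range plus a larger value for comparison
    print(f"TFM = {tfm:.0f} K cm^2/W -> Delta T ~ {delta_t_kelvin(tfm, q):.1f} K")
```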

The delicate VELO modules are mounted onto two precision-machined bases, each housed within a hood (one for each side) that provides isolation from the atmosphere. The complex monolithic hoods were machined from one-tonne billets of aluminium to provide the vacuum tightness and the mechanical performance required. The hood and base system is also articulated to allow the detector to be retracted during injection and to be centred accurately around the collision point during stable beams. Pipes and cables for the electrical and cooling services are designed to absorb the approximately 3 cm motion of each VELO half without transferring any force to the modules, to be radiation tolerant, and to survive flexing thousands of times. 

Following the completion of each detector half, performance measurements of each module were compared with those taken at the production sites. Further tests ensured there are no leaks in the high-pressure cooling system or the vacuum volumes, in addition to safety checks that guarantee the long-term performance of the detector. A final set of measurements checks the alignment of the detector along the beam direction, which is extremely difficult once the VELO is installed. Before installation, the detectors are cooled close to their –30°C operating temperature and the position of the tips of the modules measured with a precision of 5 µm. Once complete, each half-tonne detector half is packed for transport into a frame designed to damp-out and monitor vibrations during its 1400 km journey by road from Liverpool to CERN.

RF boxes

One of the most intriguing technological challenges of the VELO upgrade was the design and manufacture of the RF boxes that separate the two detector halves from the primary beam vacuum, shielding the sensitive detectors from RF radiation generated by the beams and guiding the beam mirror currents to minimise wake-fields. The sides of the boxes facing the beams need to be as thin as possible to minimise the impact of particle scattering, yet at the same time they must be vacuum-tight. A further challenge was to design the structures such that they do not touch the silicon sensors even under pressure differences. Whereas the RF boxes of LHCb’s previous VELO were made from 300 μm-thick hot-pressed deformed sheets of aluminium foils welded together, the more complicated layout of the new VELO required them to be machined from solid blocks of small grain-sized forged aluminium. This highly specialised procedure was developed and carried out at Nikhef using a precision five-axis milling machine (see “RF boxes” image).

The VELO upgrade reflects the dedication and work of more than 150 people at 13 institutes over many years

In early prototypes, microscopic enclosed voids in the aluminium led to small vacuum leaks when the thin layers were machined. A 3D forging technique, performed by block manufacturer Loire Industrie (France), reduced the porosity of the casts sufficiently to eliminate this problem. To form the very thin sides of a box, the inside of the block was milled first. It was then positioned on an aluminium mould. The 1 mm space between box and mould was filled with heated liquid wax, which forms a strong and stable bond at room temperature. The remaining material was then machined until a sturdy flange and box with a wall about 250 μm thick remained, or just over 1% of the original 325 kg block. To further minimise the thickness in the region closest to the beams, a procedure was developed at CERN to remove more material with a chemical agent, leaving a final wall with a thickness between 150 and 200 μm. The final step was the application of a Torlon coating on the inside to provide electrical insulation towards the sensors, and a non-evaporable getter coating on the outside to improve the beam vacuum. The two boxes were installed in the vacuum tank in spring 2021, in advance of the insertion of the VELO modules.

Let collisions commence 

LHCb’s original VELO played a pivotal role in the experiment’s flavour-physics programme. This includes the 2019 discovery of CP violation in the charm sector, numerous matter–antimatter asymmetry measurements and rare-decay searches, and the recent hints of lepton non-universality in B decays. The upgraded VELO detector – in conjunction with the new software trigger, the RICH and SciFi detectors, and other upgrades – will extend LHCb’s capabilities to search for physics beyond the SM. It will remain in place for the start of High-Luminosity LHC operations in Run 4, contributing to the full exploitation of the LHC’s physics potential.

Proposed 15 years ago, with a technical design report published in 2013 and full approval the following year, the VELO upgrade reflects the dedication and work of more than 150 people at 13 institutes over many years. The device is now in final construction. One half is installed and is undergoing commissioning in LHCb, while the other is being assembled, and will be delivered to CERN for installation during a dedicated machine stop in May. The assembly and installation have been made considerably more challenging by COVID-19-related travel and working restrictions, with final efforts taking place around the clock to meet the tight LHC schedule. Everyone in the LHCb collaboration is therefore looking forward to seeing the first data from the new detectors and continuing the success of the LHC’s world-leading flavour-physics programme.

The post VELO’s voyage into the unknown appeared first on CERN Courier.

]]>
Feature LHCb's all-new VELO detector will extend the collaboration's capabilities to search for new physics at Run 3. https://cerncourier.com/wp-content/uploads/2022/04/CCMayJun22_VELO_feature.jpg
Spotlight on FCC physics https://cerncourier.com/a/spotlight-on-fcc-physics/ Mon, 14 Mar 2022 13:11:44 +0000 https://preview-courier.web.cern.ch/?p=97952 The 5th FCC Physics Workshop saw advances in the physics capabilities and detector R&D for the proposed Future Circular Collider.

The post Spotlight on FCC physics appeared first on CERN Courier.

]]>
Ten years after the discovery of a Standard Model-like Higgs boson at the LHC, particle physicists face profound questions lying at the intersection of particle physics, cosmology and astrophysics. A visionary new research infrastructure at CERN, the proposed Future Circular Collider (FCC), would create opportunities to either answer them or refine our present understanding. The latest activities towards the ambitious FCC physics programme were the focus of the 5th FCC Physics Workshop, co-organised with the University of Liverpool as an online event from 7 to 11 February. It was the largest such workshop to date, with more than 650 registrants, and welcomed a wide community geographically and thematically, including members of other “Higgs factory” and future projects.

The overall FCC programme – comprising an electron-positron Higgs and electroweak factory (FCC-ee) as a first stage followed by a high-energy proton-proton collider (FCC-hh) – combines the two key strategies of high-energy physics. FCC-ee offers a unique set of precision measurements to be confronted with testable predictions and opens the possibility for exploration at the intensity frontier, while FCC-hh would enable further precision and the continuation of open exploration at the energy frontier. The February workshop saw advances in our understanding of the physics potential of FCC-ee, and discussions of the possibilities provided at FCC-hh and at a possible FCC-eh facility.

The overall FCC programme combines the two key strategies of high-energy physics: precision measurements at the intensity frontier and the open exploration at the energy frontier

The proposed R&D efforts for the FCC align with the requests of the 2020 update of the European strategy for particle physics and the recently published accelerator and detector R&D roadmaps established by the Laboratory Directors Group and ECFA. Key activities of the FCC feasibility study, including the development of a regional implementation scenario in collaboration with the CERN host states, were presented.

Over the past several months, a new baseline scenario for a 91 km-circumference layout has been established, balancing the optimisation of the machine performance, physics output and territorial constraints. In addition, work is ongoing to develop a sustainable operational model for FCC taking into account human and financial resources and striving to minimise its environmental impact. Ongoing testing and prototyping work on key FCC-ee technologies will demonstrate the technical feasibility of this machine, while parallel R&D developments on high-field magnets pave the way to FCC-hh.

Physics programme
A central element of the overall FCC physics programme is the precise study of the Higgs sector. FCC-ee would provide model-independent measurements of the Higgs width and its coupling to Standard Model particles, in many cases with sub-percent precision and qualitatively different to the measurements possible at the LHC and HL-LHC. The FCC-hh stage has unique capabilities for measuring the Higgs-boson self-interactions, profiting from previous measurements at FCC-ee. The full FCC programme thus allows the reconstruction of the Higgs potential, which could give unique insights into some of the most fundamental puzzles in modern cosmology, including the breaking of electroweak symmetry and the evolution of the universe in the first picoseconds after the Big Bang.

Presentations and discussions throughout the week showed the impressive breadth of the FCC programme, extending far beyond the Higgs factory alone. The large integrated luminosity to be accumulated by FCC-ee at the Z-pole enables high-precision electroweak measurements and an ambitious flavour-physics programme. While the latter is still in the early phase of development, it is clear that the number of B mesons and tau-lepton pairs produced at FCC-ee significantly surpasses those at Belle II, making FCC-ee the flavour factory of the 2040s. Ongoing studies are also revealing its potential for studying interactions and decays of heavy-flavour hadrons and tau leptons, which may provide access to new phenomena including lepton-flavour universality-violating processes. Similarly, the capabilities of FCC-ee to study beyond-the-Standard Model signatures such as heavy neutral leptons have come into further focus. Interleaved presentations on FCC-ee, FCC-hh and FCC-eh physics also further intensified the connections between the lepton- and hadron-collider communities.

The impressive potential of the full FCC programme is also inspiring theoretical work. This ranges from overarching studies on our understanding of naturalness, to concrete strategies to improve the precision of calculations to match the precision of the experimental programme.

The physics thrusts of the FCC-ee programme inform an evaluation of the run plan, which will be influenced by technical considerations on the accelerator side as well as by physics needs and the overall attractiveness and timeliness of the different energy stages (ranging from the Z pole at 91 GeV to the tt threshold at 365 GeV). In particular, the possibility for a direct measurement of the electron Yukawa coupling by extensive operation at the Higgs pole (125 GeV) raises unrivalled challenges, which will be further explored within the FCC feasibility study. The main challenge here is to reduce the spread in the centre-of-mass energy by a factor of around ten while maintaining the high luminosity, requiring a monochromatisation scheme long theorised but never applied in practice.
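To put that factor of ten in context (the figures below are representative estimates quoted for illustration, not results presented at the workshop): a resonant s-channel measurement of e+e– → H is only viable if the collision-energy spread approaches the natural Higgs width, about 4.1 MeV in the Standard Model, whereas the intrinsic spread of the colliding beams at √s = 125 GeV is of order 50–100 MeV,

$$\frac{50\text{–}100\ \mathrm{MeV}}{10} \approx 5\text{–}10\ \mathrm{MeV} \;\gtrsim\; \Gamma_H \approx 4.1\ \mathrm{MeV},$$

which is why a roughly tenfold reduction of the spread is needed before the electron Yukawa measurement becomes conceivable.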

Isometric view of the CLD detector concept

Detectors status and plan
Designing detectors to meet the physics requirements of FCC-ee physics calls for a strong R&D programme. Concrete detector concepts for FCC-ee were discussed, helping to establish a coherent set of requirements to fully benefit from the statistics and the broad variety of physics channels available.

The primary experimental challenge at FCC-ee is how to deal with the extremely high instantaneous luminosities. Conditions are the most demanding at the Z pole, with the luminosity surpassing 10³⁶ cm⁻²s⁻¹ and the rate of physics events exceeding 100 kHz. Since collisions are continuous, it is not possible to employ “power pulsing” of the front-end electronics as has been developed for detector concepts at linear colliders. Instead, there is a focus on the development of fast, low-power detector components and electronics, and on efficient and lightweight solutions for powering and cooling. With the enormous data samples expected at FCC-ee, statistical uncertainties will in general be tiny (about a factor of 500 smaller than at LEP). The experimental challenge will be to minimise systematic effects towards the same level.

The mind-boggling integrated luminosities delivered by FCC-ee would allow Standard Model particles – in particular the W, Z and Higgs bosons and the top quark, but also the b and c quarks and the tau lepton – to be studied with unprecedented precision. The expected number of Z bosons produced (5 × 10¹²) is more than five orders of magnitude larger than the number collected at LEP, and more than three orders of magnitude larger than that envisioned at a linear collider. The high-precision measurements and the observation of rare processes made possible by these large data samples will open opportunities for new-physics discoveries, including the direct observation of very weakly-coupled particles such as heavy neutral leptons, which are promising candidates to explain the baryon asymmetry of the universe.
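The quoted factor of roughly 500 is a simple consequence of Poisson statistics (a back-of-the-envelope cross-check, taking the LEP sample to be of order 2 × 10⁷ Z bosons, as implied by the five-orders-of-magnitude comparison): statistical uncertainties scale as 1/√N, so

$$\sqrt{\frac{N_Z^{\mathrm{FCC\text{-}ee}}}{N_Z^{\mathrm{LEP}}}} \approx \sqrt{\frac{5\times10^{12}}{2\times10^{7}}} \approx 500.$$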

With overlapping requirements, designs for FCC-ee can follow the example of detectors proposed for linear colliders.

The detectors that will be located at two (possibly four) FCC-ee interaction points must be designed to fully profit from the extraordinary statistics. Detector concepts under study feature: a 2 T solenoidal magnetic field (limited in strength to avoid blow-up of the low-emittance beams crossing at 30 mrad); a small-pitch, thin-layers vertex detector providing an excellent impact-parameter resolution for lifetime measurements; a highly transparent tracking system providing a superior momentum resolution; a finely segmented calorimeter system with excellent energy resolution for electrons and photons, isolated hadrons and jets; and a muon system. To fully exploit the heavy-flavour possibilities, at least one of the detector systems will need efficient particle-identification capabilities allowing π/K separation over a wide momentum range, for which there are ongoing R&D efforts on compact, light RICH detectors.

With overlapping requirements, designs for FCC-ee can follow the example of detectors proposed for linear colliders. The CLIC-inspired CLD concept – featuring a silicon-pixel vertex detector and a silicon tracker followed by a 3D-imaging, highly granular calorimeter system (a silicon-tungsten ECAL and a scintillator-steel HCAL) surrounded by a superconducting solenoid and muon chambers interleaved with a steel return yoke – is being adapted to the FCC-ee experimental environment. Further engineering effort is needed to make it compatible with the continuous-beam operation at FCC-ee. Detector optimisation studies are being facilitated by the robust existing software framework which has been recently integrated into the FCC study.

FCC Curved silicon

The IDEA (International Detector for Electron-positron Accelerator) concept, specifically developed for a circular electron-positron collider, brings in alternative technological solutions. It includes a five-layer vertex detector surrounded by a drift chamber, enclosed in a single-layer silicon “wrapper”. The distinctive element of the He-based drift chamber is its high transparency. Indeed, the material budget of the full tracking system, including the vertex detector and the wrapper, amounts to only about 5% (10%) of a radiation length in the barrel (forward) direction. The drift chamber promises superior particle-identification capabilities via the use of a cluster-counting technique that is currently under test-beam study. In the baseline design, a thin low-mass solenoid is placed inside a monolithic, 2 m-deep, dual-readout fibre calorimeter. An alternative (more expensive) design also features a finely segmented crystal ECAL placed immediately inside the solenoid, providing an excellent energy resolution for electrons and photons.

FCC feedthrough test setup

Recently, work has started on a third FCC-ee detector concept comprising: a silicon vertex detector; a light tracker (drift chamber or full-silicon device); a thin, low-mass solenoid; a highly-granular noble liquid-based ECAL; a scintillator-iron HCAL; and a muon system. The current baseline ECAL design is based on lead/steel absorbers and active liquid argon, but a more compact variant based on tungsten absorbers and liquid krypton is also being considered. The concept design is currently being implemented inside the FCC software framework.

All detector concepts are still evolving and there is ample room for further innovative concepts and ideas.

Closing remarks
Circular colliders reach higher luminosities than linear machines because the same particle bunches are used over many turns, while detectors can be installed at several interaction points. The FCC-ee programme greatly benefits from the possibility of having four interaction points to allow the collection of more data, systematic robustness and better physics coverage — especially for very rare processes that could offer hints as to where new physics could lie. In addition, the same tunnel can be used for an energy-frontier hadron collider at a later stage.

The FCC feasibility study will be submitted by 2025, informing the next update of the European strategy for particle physics. Such a machine could start operation at CERN within a few years after the full exploitation of the HL-LHC in around 2040. CERN, together with its international partners, therefore has the opportunity to lead the way for a post-LHC research infrastructure that will provide a multi-decade research programme exploring some of the most fundamental questions in physics. The geographical distribution of participants in the 5th FCC physics workshop testifies to the global attractiveness of the project. In addition, the ongoing physics and engineering efforts, the cooperation with the host states, the support from the European physics community and the global cooperation to tackle the open challenges of this endeavour, are reassuring for the next steps of the FCC feasibility study.

The post Spotlight on FCC physics appeared first on CERN Courier.

]]>
Meeting report The 5th FCC Physics Workshop saw advances in the physics capabilities and detector R&D for the proposed Future Circular Collider. https://cerncourier.com/wp-content/uploads/2022/03/FCCee-CERNCourier.jpg
Celebrating 20 years of n_TOF https://cerncourier.com/a/celebrating-20-years-of-n_tof/ Mon, 07 Feb 2022 14:08:48 +0000 https://preview-courier.web.cern.ch/?p=97260 The hybrid event highlighted the ongoing achievements of CERN's n_TOF facility and its nuclear science and applications.

The post Celebrating 20 years of n_TOF appeared first on CERN Courier.

]]>
n_TOF

The Neutron Time Of Flight (n_TOF) facility at CERN, a project proposed by former Director General Carlo Rubbia in the late 1990s, started operations in 2001. Its many achievements during the past two decades, and future plans in neutron science worldwide, were the subject of a one-day hybrid event, NSTAPP – Neutrons in Science, Technology and Applications – organised by the n_TOF collaboration at CERN on 22 November.

At n_TOF, a 20 GeV/c proton beam from the Proton Synchrotron (PS) strikes an actively cooled pure-lead neutron spallation target. The generated neutrons are water-moderated to produce a spectrum that covers 11 orders of magnitude in energy from GeV down to meV. At the beginning, n_TOF was equipped with a single experimental station, located 185 m downstream from the spallation target. In 2014, a major upgrade saw the construction and operation of a new experimental test area located 20 m above the production target to allow measurements of very low-mass samples. Last year, during Long Shutdown 2, a new third-generation, nitrogen-cooled spallation target was installed and successfully commissioned to prolong the experiment’s lifetime by ten years. At the same time, a new close-to-target irradiation and experimental station called NEAR was added to perform activation measurements relevant to nuclear astrophysics, as well as measurements in collaboration with the R2E (Radiation to Electronics) project that are difficult at other facilities.

Advancing technology

During 20 years of activities, the n_TOF collaboration has carried out more than 100 experiments with considerable impact on nuclear astrophysics, advanced nuclear technologies and applied nuclear sciences, including novel medical applications. Understanding the origin of the chemical elements through the slow-neutron-capture process has been a particular highlight. The high instantaneous neutron flux, which is only available at n_TOF thanks to the short proton pulse delivered by the PS, provided key reaction rates relevant to big-bang nucleosynthesis and stellar evolution (the former attempting to explain the discrepancy between the predicted and observed amount of lithium by investigating ⁷Be creation and destruction, and the latter determining the chemical history of our galaxy).

Basic nuclear data are also essential for the development of nuclear-energy technology. It was this consideration that motivated Rubbia to propose a spallation neutron source at CERN in the first place, prompting a series of accurate neutron cross-section measurements on minor actinides and fission products. Neutron reaction processes on thorium, neptunium, americium and curium, in addition to minor isotopes of uranium and plutonium, have all been measured at n_TOF. These measurements provide the nuclear data necessary for the development of advanced nuclear systems: increasing safety margins in existing nuclear plants, enabling generation-IV reactors and accelerator-driven systems, and even enabling new fuel cycles that reduce the amount of long-lived nuclear species.

Basic nuclear data are also essential for the development of nuclear-energy technology

Contributions from external laboratories, such as J-PARC (Japan), the Chinese Spallation Neutron Source (China), SARAF (Israel), GELINA (Belgium), GANIL (France) and Los Alamos (US), highlighted synergies in the measurement of neutron-induced capture, fission and light-charged-particle reactions for nuclear astrophysics, advanced nuclear technologies, and medical applications. Moreover, technologies developed at CERN have also influenced the creation of two startups, Transmutex and Newcleo. The former focuses on accelerator-driven systems for energy production, for which the first physics validation was carried out at the FEAT and TARC experiments at the CERN PS in 1999, while the latter plans to develop critical reactors based on liquid lead.

With the recent technical upgrades, the continuation of its core experimental activities and an exciting physics programme in new areas – such as experiments probing the breaking of isospin symmetry in neutron–neutron scattering – the n_TOF facility has a bright future ahead.

The post Celebrating 20 years of n_TOF appeared first on CERN Courier.

]]>
Meeting report The hybrid event highlighted the ongoing achievements of CERN's n_TOF facility and its nuclear science and applications. https://cerncourier.com/wp-content/uploads/2022/02/n_TOF-feature-image.jpg
Plotting a course to ALICE 3 https://cerncourier.com/a/plotting-a-course-to-alice-3-4/ Tue, 11 Jan 2022 10:53:20 +0000 https://preview-courier.web.cern.ch/?p=96960 Preparations are under way for a next-generation heavy-ion experiment for Run 5 of the LHC and beyond.

The post Plotting a course to ALICE 3 appeared first on CERN Courier.

]]>
ALICE 3 layout

The ALICE detector has undergone significant overhauls during Long Shutdown 2 to prepare for the higher luminosities expected during Run 3 and 4 of the LHC, starting this year. Further upgrades of the inner tracking system and the addition of a new forward calorimeter are being planned for the next long shutdown, ahead of Run 4. A series of physics questions will nevertheless remain inaccessible with the Run 3 and 4 data, and major improvements in detector performance, together with the ability to collect an even greater integrated luminosity, are needed to address them in Run 5 and beyond. The ideas for a heavy-ion programme for Run 5 and 6 are part of the European strategy for particle physics. At the beginning of 2020, the ALICE collaboration formed dedicated working groups to work out the physics case, the physics performance and a detector concept for a next-generation heavy-ion experiment called “ALICE 3”.

To advance the project further, the ALICE collaboration organised a hybrid workshop on October 18 and 19, attracting more than 300 participants. Invited speakers on theory and experimental topics reviewed relevant physics questions for the 2030s, and members of the ALICE collaboration presented detector plans and physics performance studies for ALICE 3. Two key areas are understanding how thermal equilibrium is approached in the quark-gluon plasma (QGP) and precisely measuring its temperature evolution.

Restoring chiral symmetry

Heavy charm and beauty quarks are ideal probes to understand how thermal equilibrium is approached in the QGP, since they are produced early in the collision and are traceable throughout the evolution of the system. Measurements of azimuthal distributions of charm and beauty hadrons, as well as charm-hadron pairs, are particularly sensitive to the interactions between heavy quarks and the QGP. In heavy-ion collisions, heavy charm quarks are abundantly produced and can hadronise into rare multi-charm baryons. The production yield of such particles is expected to be strongly enhanced compared to proton-proton collisions because the free propagation of charm quarks in the deconfined plasma allows the combination of quarks from different initial scatterings.

Electromagnetic radiation is a powerful probe of the temperature evolution of the QGP. Since real and virtual photons emitted throughout the evolution of the system are not affected by the strong interaction, differential measurements of dielectron pairs produced from virtual photons allow physicists to determine the temperature evolution in the plasma phase. Given the high temperature and density of the quark-gluon plasma, chiral symmetry is expected to be restored. ALICE 3 will allow us to study the mechanisms of chiral symmetry restoration from the imprint on the dielectron spectrum.

New specialised detectors are being considered to further extend the physics reach

To achieve the performance required for these measurements and the broader proposed ALICE 3 physics programme, a novel detector concept has been envisioned. At its core is a tracker based on silicon pixel sensors, covering a large pseudo-rapidity range and installed within a new superconducting magnet system. To achieve the ultimate pointing resolution, a retractable high-resolution vertex detector is to be placed in the beampipe. The tracking is complemented by particle identification over the full acceptance, realised with different technologies, including silicon-based time-of-flight sensors. Additional specialised detectors are being considered to further extend the physics reach.

ALICE 3 will employ completely new detector components to significantly extend the detector capabilities and to fully exploit the physics potential of the LHC. The October workshop marked the start of the discussion of ALICE 3 with the community at large and of the review process with the LHC experiments committee.

The post Plotting a course to ALICE 3 appeared first on CERN Courier.

]]>
Meeting report Preparations are under way for a next-generation heavy-ion experiment for Run 5 of the LHC and beyond. https://cerncourier.com/wp-content/uploads/2022/01/Workshop-main-auditorium.png
Counting collisions precisely at CMS https://cerncourier.com/a/counting-collisions-precisely-at-cms/ Wed, 03 Nov 2021 13:06:56 +0000 https://preview-courier.web.cern.ch/?p=95700 Beyond the setting of new records, precise knowledge of the luminosity at particle colliders is vital for physics analyses.

The post Counting collisions precisely at CMS appeared first on CERN Courier.

]]>
The start of Run-2 physics

Year after year, particle physicists celebrate the luminosity records established at accelerators around the world. On 15 June 2020, for example, a new world record for the highest luminosity at a particle collider was claimed by SuperKEKB at the KEK laboratory in Tsukuba, Japan. Electron–positron collisions at the 3 km-circumference machine had reached an instantaneous luminosity of 2.22 × 10³⁴ cm⁻²s⁻¹ – surpassing the 27 km-circumference LHC’s record of 2.14 × 10³⁴ cm⁻²s⁻¹ set with proton–proton collisions in 2018. Within a year, SuperKEKB had celebrated a new record of 3.1 × 10³⁴ cm⁻²s⁻¹ (CERN Courier September/October 2021 p8).

Integrated proton–proton luminosity

Beyond the setting of new records, precise knowledge of the luminosity at particle colliders is vital for physics analyses. Luminosity is our “standard candle” in determining how many particles can be squeezed through a given space (per square centimetre) at a given time (per second); the more particles we can squeeze into a given space, the more likely they are to collide, and the quicker the experiments fill up their tapes with data. Multiplied by the cross section, the luminosity gives the rate at which physicists can expect a given process to happen, which is vital for searches for new phenomena and precision measurements alike. Luminosity milestones therefore mark the dawn of new eras, like the B-hadron or top-quark factories at SuperKEKB and LHC (see “High-energy data” figure). But what ensures we didn’t make an accidental blunder in calculating these luminosity record values?
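In symbols (a standard relation; the Higgs cross-section below is an approximate value quoted from memory for illustration, not a number from this article): the event rate for a process of cross section σ is R = Lσ, so the expected yield is the cross section multiplied by the integrated luminosity,

$$N = \sigma \int L\,\mathrm{d}t.$$

Taking σ ≈ 55 pb for inclusive Higgs-boson production at 13 TeV, the roughly 140 fb⁻¹ of proton–proton data collected in Run 2 corresponds to of order 55,000 fb × 140 fb⁻¹ ≈ 8 × 10⁶ Higgs bosons produced per experiment.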

Physics focus

Physicists working at the precision frontier need to know, with percent-level or better accuracy, how many collisions have been delivered in order to translate observed event counts into rates. Even though particles are produced at an unprecedented rate at the LHC, their cross sections are either too small (as in the case of Higgs-boson production processes) or affected too much by theoretical uncertainty (for example in the case of Z-boson and top-quark production processes) to establish the primary event rate with a high level of confidence. The solution comes down to extracting one universal number: the absolute luminosity.

Schematic view of the CMS detector

The fundamental difference between quantum electrodynamics (QED) and chromodynamics (QCD) influences how luminosity is measured at different types of colliders. On the one hand, QED provides a straightforward path to high precision because the absolute rate of simple final states is calculable to very high accuracy. On the other, the complexity of QCD calculations shapes the luminosity determination at hadron colliders. In principle, the luminosity can be inferred by measuring the total number of interactions occurring in the experiment (i.e. the rate corresponding to the inelastic cross section) and normalising to the theoretical QCD prediction. This technique was used at the Spp̄S and Tevatron colliders. A second technique, proposed by Simon van der Meer at the ISR (and generalised by Carlo Rubbia for the pp̄ case), could not be applied to such single-ring colliders. However, this van der Meer-scan method is a natural choice at the double-ring RHIC and LHC colliders, and is described in the following.

Beam-separation-dependent event rate

Absolute calibration

The LHC-experiment collaborations perform a precise luminosity inference from data (“absolute calibration”) by relating the collision rate recorded by the subdetectors to the luminosity of the beams. With the implementation of multiple collisions per bunch crossing (“pileup”) and intense collision-induced radiation, which acts as a background source, dedicated luminosity-sensitive detector systems called luminometers also had to be developed (see “Luminometers” figure). To maximise the precision of the absolute calibration, beams with large transverse dimensions and relatively low intensities are delivered by the LHC operators during a dedicated machine preparatory session, usually held once a year and lasting for several hours. During these unconventional sessions, called van der Meer beam-separation scans, the beams are carefully displaced with respect to each other in discrete steps, horizontally and vertically, while observing the collision rate in the luminometers (see “Closing in” figure). This allows the effective width and height of the two-dimensional interaction region, and thus the beam’s transverse size, to be measured. Sources of systematic uncertainty are either common to all experiments and are estimated in situ, for example residual differences between the measured beam positions and those provided by the operational settings of the LHC magnets, or depend on the scatter between luminometers. A major challenge with this technique is therefore to ensure that the obtained absolute calibration as extracted under the specialised van der Meer conditions is still valid when the LHC operates at nominal pileup (see “Stability shines” figure).
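The principle can be captured in a single formula (the standard van der Meer relation, written here in generic notation rather than that of CMS): for one colliding pair of bunches with populations N₁ and N₂ circulating at revolution frequency f, the scans yield the effective convolved beam widths Σx and Σy, and the absolute luminosity follows as

$$L = \frac{f\,N_1 N_2}{2\pi\,\Sigma_x \Sigma_y},$$

so the calibration reduces to measuring the bunch currents and the two scan widths.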

Stepwise approach

Using such a stepwise approach, the CMS collaboration obtained a total systematic uncertainty of 1.2% in the luminosity estimate (36.3 fb⁻¹) of proton–proton collisions in 2016 – one of the most precise luminosity measurements ever made at bunched-beam hadron colliders. Recently, taking into account correlations between the years 2015–2018, CMS further improved on its preliminary estimate for the proton–proton luminosity at the higher collision energy of 13 TeV. The full Run-2 data sample corresponds to a cumulative (“integrated”) luminosity of 140 fb⁻¹ with a total uncertainty of 1.6%, which is comparable to the preliminary estimate from the ATLAS experiment.

Ratio of luminosities between luminometers

In the coming years, in particular when the High-Luminosity LHC (HL-LHC) comes online, a similarly precise luminosity calibration will become increasingly important as the LHC pushes the precision frontier further. Under those conditions, which are expected to produce 3000 fb⁻¹ of proton–proton data by the end of LHC operations in the late 2030s (see “Precision frontier” figure), the impact from at least some of the sources of uncertainty will be larger due to the high pileup. However, they can be mitigated using techniques already established in Run 2 or currently being deployed. Overall, the strategy for the HL-LHC should combine three different elements: maintenance and upgrades of existing detectors; development of new detectors; and adding dedicated readouts to other planned subdetectors for luminosity and beam-monitoring data. This will allow us to meet the tight luminosity performance target of about 1% while maintaining a good diversity of luminometers.

Given that accurate knowledge of luminosity is a key ingredient of most physics analyses, experiments also release precision estimates for specialised data sets, for example using either proton–proton collisions at lower centre-of-mass energies or nuclear collisions at different per-nucleon centre-of-mass energies, as needed not only by ALICE but also by the ATLAS, CMS and LHCb experiments. On top of the van der Meer method, the LHCb collaboration uniquely employs a “beam-gas imaging” technique in which vertices of interactions between beam particles and gas nuclei in the beam vacuum are used to measure the transverse size of the beams without the need to displace them. In all cases, and despite the fact that the experiments are located at different interaction points, their luminosity-related data are used in combination with input from the LHC beam instrumentation. Close collaboration among the experiments and LHC operators is therefore a key prerequisite for precise luminosity determination.

Protons versus electrons

Contrary to the approach at hadron colliders, the operation of the SuperKEKB accelerator with electron–positron collisions allows for an even more precise luminosity determination. Using well-known QED processes, the Belle II experiment recently reported a precision of 0.7% for data collected during April–July 2018. Though electrons and positrons conceptually give the SuperKEKB team a slightly easier task, its new record for the highest luminosity set at a collider is thus well established.

Expected uncertainties

SuperKEKB’s record is achieved thanks to a novel “crabbed waist” scheme, originally proposed by accelerator physicist Pantaleo Raimondi. In the coming years this will enable the luminosity of SuperKEKB to be increased by a factor of almost 30 to reach its design target of 8 × 10³⁵ cm⁻²s⁻¹. The crabbed waist scheme, which works by squeezing the vertical height of the beams at the interaction point, is also envisaged for the proposed Future Circular Collider (FCC-ee) at CERN. It also differs from the “crab-crossing” technology, based on special radiofrequency cavities, which is now being implemented at CERN for the high-luminosity phase of the LHC. While the LHC has passed the luminosity crown to SuperKEKB, taken together, novel techniques and the precise evaluation of their outcome continue to push forward both the accelerator and related physics frontiers.

The post Counting collisions precisely at CMS appeared first on CERN Courier.

]]>
Feature Beyond the setting of new records, precise knowledge of the luminosity at particle colliders is vital for physics analyses. https://cerncourier.com/wp-content/uploads/2021/10/CCNovDec21_CMS_frontis.jpg
Wheels in motion for ATLAS upgrade https://cerncourier.com/a/wheels-in-motion-for-atlas-upgrade/ Wed, 20 Oct 2021 10:13:17 +0000 https://preview-courier.web.cern.ch/?p=95675 New muon end-cap wheels currently being installed in the ATLAS detector will provide precision tracking and triggering at high rates for Run 3 and beyond.

The post Wheels in motion for ATLAS upgrade appeared first on CERN Courier.

]]>
The first of the ATLAS New Small Wheels

The Large Hadron Collider (LHC) complex is being upgraded to significantly extend its scientific reach. Following the ongoing 2019–2022 long shutdown, the LHC is expected to operate during Run 3 at close to its design energy of 7 TeV per beam and at luminosities more than double the original design value. After the next shutdown, currently foreseen in 2025–2027, the High-Luminosity LHC (HL-LHC) will run at luminosities of 5–7 × 10³⁴ cm⁻²s⁻¹. This corresponds to 140–200 simultaneous interactions per LHC bunch crossing (“pileup”), which is three to four times the Run-3 expectation and up to eight times above the original LHC design value. The ATLAS experiment, like others at the LHC, is undergoing major upgrades for the new LHC era.

Coping with very high interaction rates while maintaining low transverse-momentum (pT) thresholds for triggering on electrons and muons from the targeted physics processes will be extremely challenging at the HL-LHC. Another issue for the ATLAS experiment is that the performance of its muon tracking chambers, particularly in the end-cap regions of the detector, degrades with increasing particle rates. If the original chambers were used for the HL-LHC, it would lead to a loss in the efficiency and resolution of muon reconstruction.

Pseudorapidity distribution of muon candidates

Muons are vital for efficiently triggering on, and thus precisely studying, processes in the electroweak sector such as Higgs, W and Z physics. It is therefore essential that the ATLAS detector cover as large a range as possible in pseudorapidity η = –ln tan(θ/2), where θ is the angle with respect to the proton beam axis. In the central region of the detector, corresponding to a pseudorapidity |η| < 1, there is a good purity of muons originating from the proton collision point (see “Good muons” figure). In the end caps, |η| > 1.3, significant contributions, the so-called “fake” muon signals (see “Real or fake?” figure), arise from other sources. These include cavern backgrounds and muons produced in the halo of the LHC proton beams, both of which increase with larger instantaneous luminosities. Without modifications to the detector, the fake-muon trigger rates in the end caps would become unsustainable at the HL-LHC, requiring the muon pT thresholds in the Level-1 trigger to be raised substantially.
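As a quick orientation on the variable (simple numerical examples, not taken from the article): η = –ln tan(θ/2) maps the polar angle into a quantity that grows rapidly towards the beam line,

$$\theta = 90^\circ \Rightarrow \eta = 0,\qquad \theta = 45^\circ \Rightarrow \eta \approx 0.88,\qquad \theta = 10^\circ \Rightarrow \eta \approx 2.44,$$

so the end-cap region |η| > 1.3 corresponds to tracks within about 30° of the beam axis.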

Sketch of a quarter section of ATLAS

To resolve these issues, the ATLAS collaboration decided, as part of its major Phase-I upgrade, to replace the existing ATLAS muon small wheels with the “New Small Wheels” (NSW), capable of reconstructing muon track segments locally with 1 mrad resolution for both the Level-1 trigger and for offline reconstruction. The NSW will allow low-pT thresholds to be maintained for the end-cap muon triggers even at the ultimate HL-LHC luminosity.

The low-pT region for leptons is of critical importance to the ATLAS physics programme. As an example, Higgs-boson production via vector-boson fusion (VBF) is a powerful channel for precision Higgs studies, and low-pT end-cap lepton triggers are crucial for selecting H → ττ events used to study Higgs-boson Yukawa couplings. Within the current tracking detector acceptance of |η| < 2.5, the fraction of VBF H → ττ events with the leading muon having pT above 25 GeV (the typical Run-2 threshold) is 60%, while this fraction drops to 28% for a pT threshold of 40 GeV (the expected typical HL-LHC threshold if no changes to the detectors are made). Maintaining, or even reducing, the muon pT threshold is critical for extending the ATLAS physics programme in higher luminosity LHC operation.

Frontier technologies

The ATLAS NSW is a set of precision tracking and trigger detectors able to work at high rates with excellent spatial and time resolution using two innovative technologies: MicroMegas (MM) and small-strip thin-gap chambers (sTGC). These detectors will provide the muon Level-1 trigger system with online track segments with good angular resolution to confirm that they originate from the interaction point, reducing triggers from fake muons. They will also have timing resolutions below the 25 ns interbunch time, enabling bunch-crossing identification. With the NSW, ATLAS will keep the full acceptance of its muon tracking system at the HL-LHC while maintaining a low Level-1 pT threshold of around 20 GeV.

MicroMegas detectors and small-strip thin-gap chambers

The ATLAS collaboration chose MM and sTGC technologies for the NSW after a detailed scrutiny of several available options. The idea was to build a robust and redundant system, using research-frontier and cost-effective technologies. Each NSW wheel has 16 sectors, with each sector containing four MM chambers and six sTGC chambers. Each sector, with a total surface area ranging from about 4 to 6 m², has eight sensitive planes of MM and eight of sTGC along the muon track direction. The 16 overall measurement planes allow for redundancy in the track reconstruction.

MM detectors were proposed in the 1990s in the framework of the Micro-Pattern Gaseous Detectors (MPGD) R&D programme including the RD51 project at CERN (see “Robust and redundant” figure, top). They profit from the development of photolithographic techniques for the design of high-granularity readout patterns and, in parallel, from the development of specialised front-end electronics with an increased number of channels. A dedicated R&D programme introduced, developed and realised the concept of resistive MM detectors. The main challenge for ATLAS was to scale the detectors from a few tens of cm in size to chambers of 2–3 m² with a geometry under control at the level of tens of μm. This required additional R&D together with a very detailed mechanical design of the detectors. The resulting detectors represent the largest and most complex MPGD system ever built.

Thin-gap chambers have been used for triggering and to provide the azimuthal coordinate of muons in the ATLAS muon spectrometer end caps since the beginning of LHC operations, and were used previously in the OPAL experiment at LEP. The sTGC is an extension of established TGC technology to allow for precise online tracking that can be used both in the trigger and in offline muon tracking, with a strip pitch of 3.2 mm (see “Robust and redundant” figure, bottom).

A common readout front-end chip, named VMM, was developed for the readout of the MM strips and of the active elements of the sTGC (strips, pads and wires). This chip is a novel “amplifier-shaper-discriminator” front-end ASIC able to perform amplification and shaping, peak finding and digitisation of the detector signals. The overall system has about 2 million MM and 350,000 sTGC readout channels. The ATLAS trigger, using information from both detectors, will identify track segments pointing to the interaction region and share this information with the muon trigger.

International enterprise

The construction of the 128 MM and 192 sTGC chambers has been a truly international enterprise shared among several laboratories. The construction of the MM was shared among five construction consortia in France, Germany, Greece, Italy and Russia, with infrastructure and technical expertise inherited from the construction of the ATLAS Muon Spectrometer Monitored Drift Tube chambers. The construction of the sTGC was shared among five consortia located in Canada, Chile, China, Israel and Russia, including both institutes from the original TGC construction and new ones.

A key challenge in realising both technologies was the use of large-area circuit boards produced by industry. For the case of the MM, high-voltage instabilities observed since the construction of the first large-size prototypes were mostly due to the quality of the printed circuit boards. Two aspects in particular were investigated: the cleanliness of the surfaces, and the actual measured values of the board resistivity that were in many cases not large enough to prevent electrical discharges in the detector. For both problems, detailed mitigation protocols were developed and shared among the consortia: a cleaning protocol including polishing and washing of all the surfaces and a “passivation” procedure designed to mask detector regions with lower resistance where most of the discharges were observed to take place.

MicroMegas double-wedges and small-strip thin-gap chamber wedges

For the sTGC, the principal difficulty in the circuit-board production was maintaining mechanical tolerances and electrical integrity over the large areas. Considerable R&D and quality control were required before and during the board production, and when combined with X-ray measurements at CERN the sTGC layers are aligned to better than 100 μm.

Along with the chamber construction, several tests were carried out at the construction sites to evaluate the chamber quality. Some of the first full-size prototypes together with the first production chambers were exposed to test beams. All the sTGC chambers and a large fraction of the MM chambers were also tested at CERN’s GIF++ irradiation facility to evaluate their behaviour under a particle rate comparable to the one expected at the HL-LHC.

The integration of both MM and sTGC chambers to form the wheel sectors took place at CERN from 2018 to 2021. Four MM chambers form a double-wedge, assembled accounting for the severe alignment requirements, which is then equipped with all the necessary services and the final front-end electronics (see “Taking stock” image). The systems were fully tested in a dedicated cosmic-ray test stand to verify the functionality of the detector and to evaluate the detector efficiency. For the sTGCs, three chambers were glued to fibreglass frames using precision inserts on a granite table to form a wedge. After long-term high-voltage tests, the sTGC wedges were equipped with front-end electronics, cooling, and readout cables and fibres. All the sTGC chambers were tested with cosmic rays at the construction sites, and a few were also tested at CERN.

The first New Small Wheel

To form each sector, two sTGC wedges and one MM double-wedge were sandwiched together. The sectors were then precisely mounted on “spokes” installed on the large shielding disks that form the NSW wheels, along with a precision optical alignment system that allows the chamber positions to be tracked by ATLAS in real time (see “Revolutions” image). After completing final electrical, cooling and gas connections during 2020 and 2021, all sectors were commissioned and tested on the wheel. One unexpected problem encountered on the first sectors on wheel A was the presence of a noise level in the front-end electronics that was significantly higher than observed during integration. A large and ultimately successful effort was put in place to mitigate this new challenge, for example by improving the grounding and shielding, and adding filtering to the power supplies.

This final success follows more than a decade of research, design and construction by the ATLAS collaboration. The NSW initiative dates to early LHC operation, around 2010, and the technical design report was approved in 2013, with construction preparation starting soon afterwards. The impact of the COVID-19 pandemic on the NSW construction schedule was significant, mostly at the construction sites, where delays of up to a few months were accrued, but the project is now on schedule for completion during the current LHC shutdown.

The endgame

Prior to lowering the NSW into the ATLAS experimental cavern, other infrastructure was installed to prepare for detector operation. The service caverns were equipped with electronics racks, high-voltage and low-voltage power supplies, gas distribution systems, cooling infrastructure for electronics, as well as control and safety systems. Where possible, existing infrastructure from the previous ATLAS small wheels was repurposed for the NSW.

ATLAS is now close to the completion of its Phase-I upgrade goal of having both NSW-A and NSW-C installed for the start of Run 3

 

On 6 July, the first wheel, NSW-A, was shipped from Building 191 on the CERN site to LHC Point 1 and then, less than a week later, lowered into its position in ATLAS (see “In place” image). With the first NSW in its final position, the extensive campaign of connecting low voltage, high voltage, gas, readout fibres and electronics cooling was the next step. These connections were completed for NSW-A in July and August 2021, and an extensive commissioning programme is ongoing. In addition to powering both the chambers and the readout electronics, the integration of the NSW into the ATLAS controls and data-acquisition system is occurring at Point 1. NSW-A is planned to be fully integrated into ATLAS for the LHC pilot-beam run in October 2021, and then NSW-C will be lowered and installed.

Despite a tight schedule, ATLAS is now close to the completion of its Phase-I upgrade goal of having both NSW-A and NSW-C installed for the start of Run 3. The period up to February 2022 will be needed to complete commissioning and testing. Starting from March 2022, a very important “commissioning with beam” phase will be carried out to ensure stable collisions in Run 3. Even with the challenges of developing new technologies while working across a dozen countries during the COVID-19 pandemic, the ATLAS New Small Wheel upgrade will be ready for the exciting, new higher luminosities that will open up a novel era of LHC physics.

The post Wheels in motion for ATLAS upgrade appeared first on CERN Courier.

]]>
Feature New muon end-cap wheels currently being installed in the ATLAS detector will provide precision tracking and triggering at high rates for Run 3 and beyond. https://cerncourier.com/wp-content/uploads/2021/10/CCNovDec21_ATLAS_feature.jpg
Building the future of LHCb https://cerncourier.com/a/building-the-future-of-lhcb/ Thu, 02 Sep 2021 09:51:27 +0000 https://preview-courier.web.cern.ch/?p=93639 LHCb's brand-new “SciFi” tracker and upgraded ring-imaging Cherenkov detectors are vital for the higher LHC luminosities ahead.

The post Building the future of LHCb appeared first on CERN Courier.

]]>
Planes of LHCb’s SciFi tracker

It was once questioned whether it would be possible to successfully operate an asymmetric “forward” detector at a hadron collider. In such a high-occupancy environment, it is much harder to reconstruct decay vertices and tracks than it is at a lepton collider. Following its successes during LHC Run 1 and Run 2, however, LHCb has rewritten the forward-physics rulebook, and is now preparing to take on bigger challenges.

During Long Shutdown 2, which comes to an end early next year, the LHCb detector is being almost entirely rebuilt to allow data to be collected at a rate up to 10 times higher during Run 3 and Run 4. This will improve the precision of numerous world-best results, such as constraints on the angles of the CKM triangle, while further scrutinising intriguing results in B-meson decays, which hint at departures from the Standard Model. 

LHCb’s successive detector layers

At the core of the LHCb upgrade project are new detectors capable of sustaining an instantaneous luminosity up to five times that seen at Run 2, and which enable a pioneering software-only trigger allowing LHCb to process signal data in an upgraded computing farm at the frenetic rate of 40 MHz. The vertex locator (VELO) will be replaced with a pixel version, the upstream silicon-strip tracker will be replaced with a lighter version (the UT) located closer to the beamline, and the electronics for LHCb’s muon stations and calorimeters are being upgraded for 40 MHz readout.

Recently, three further detector systems key to dealing with the higher occupancies ahead were lowered into the LHCb cavern for installation: the upgraded ring-imaging Cherenkov detectors RICH1 and RICH2 for sharper particle identification, and the brand new “SciFi” (scintillating fibre) tracker. 

SciFi tracking

The components of LHCb’s SciFi tracker may not seem futuristic at first glance. Its core elements are constructed from what is essentially paper, plastic, some carbon fibre and glue. However, these simple materials conceal advanced technologies which, when coupled together, produce a very light, uniform and high-performance detector that is needed to cope with the higher number of particle tracks expected during Run 3.

Located behind the LHCb magnet (see “Asymmetric anatomy” image), the SciFi represents a challenge, not only due to its complexity, but also because the technology – plastic scintillating fibres and silicon photomultiplier arrays – has never been used for such a large area in such a harsh radiation environment. Many of the underlying technologies have been pushed to the extreme during the past decade to allow the SciFi to successfully operate under LHC conditions in an affordable and effective way. 

Scintillating-fibre mat production

More than 11,000 km of 0.25 mm-diameter polystyrene fibre was delivered to CERN before undergoing meticulous quality checks. Excessive diameter variations were removed to prevent disruptions of the closely packed fibre matrix produced during the winding procedure, and clear improvements from the early batches to the production phase were made by working closely with the industrial manufacturer. From the raw fibres, nearly 1400 multi-layered fibre mats were wound in four of the LHCb collaboration’s institutes (see “SciFi spools” image), before being cut and bonded in modules, tested, and shipped to CERN where they were assembled with the cold boxes. The SciFi tracker contains 128 stiff and robust 5 × 0.5 m² modules made of eight mats bonded with two fire-resistant honeycomb and carbon-fibre panels, along with some mechanics and a light-injection system. In total, the design produces nearly 320 m² of detector surface over the 12 layers of the tracking stations.
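These numbers hang together (a simple cross-check of the figures above, with the role of the remaining mats inferred rather than stated): 128 modules of 5 × 0.5 m² give

$$128 \times 2.5\ \mathrm{m}^2 = 320\ \mathrm{m}^2$$

of detector surface, and at eight mats per module the tracker uses 1024 of the nearly 1400 mats wound, the remainder presumably covering spares and mats rejected during quality control.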

The scintillating fibres emit photons at blue-green wavelengths when a particle interacts with them. Secondary scintillator dyes added to the polystyrene amplify the light and shift it to longer wavelengths so it can be read out by custom-made silicon photomultipliers (SiPMs). SiPMs have become a strong alternative to conventional photomultiplier tubes in recent years, due to their smaller channel sizes, easier operation and insensitivity to magnetic fields. This makes them ideal to read out the higher number of channels necessary to identify separate but nearby tracks in LHCb during Run 3. 

The width of the SiPM channels, 0.25 mm, is designed to match that of the fibres. Though they need not align perfectly, this provides a better separation power for tracking than the previously used 5 mm gas straw tubes in the outer regions of the detector, while providing a similar performance to the silicon-strip tracker. The tiny channel size results in a total of 524,288 SiPM channels to collect light from 130 m of fibre-mat edges. A custom ASIC, called the PACIFIC, outputs two bits per channel based on three signal-amplitude thresholds. A field-programmable gate array (FPGA) assigned to each SiPM then groups these signals into clusters, and the location of each cluster is sent to the computing farm. Despite clustering and noise suppression, this still results in an enormous data rate of 20 Tb/s – nearly half of the total data bandwidth of the upgraded LHCb detector.
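
To give a flavour of the kind of clustering the FPGAs perform, the sketch below groups adjacent fired channels and computes an amplitude-weighted position. It is purely illustrative and not the LHCb firmware: the 2-bit codes (0–3, standing in for the three amplitude thresholds) and the seed requirement are assumptions made for the example.

# Illustrative threshold-based clustering of SiPM channels (not LHCb firmware)
def cluster_channels(codes, seed_code=2):
    """Group adjacent fired channels (code > 0) and return the
    amplitude-weighted mean channel of each cluster that contains
    at least one channel at or above the seed threshold code."""
    clusters, current = [], []
    for ch, code in enumerate(codes):
        if code > 0:
            current.append((ch, code))
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    positions = []
    for cl in clusters:
        if max(code for _, code in cl) >= seed_code:
            pos = sum(ch * code for ch, code in cl) / sum(code for _, code in cl)
            positions.append(round(pos, 2))
    return positions

# a track sharing light between channels 4-6, plus two isolated noise hits
print(cluster_channels([0, 0, 1, 0, 2, 3, 1, 0, 0, 1]))   # -> [4.83]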

One of the key factors in the success of LHCb’s flavour-physics programme is its ability to identify charged particles

LHCb’s SciFi tracker is the first large-scale use of SiPMs for tracking, and takes advantage of improvements in the technology in the 10 years since the SciFi was proposed. The photon-detection efficiency of SiPMs has nearly doubled thanks to improvements in the design and production of the underlying pixel structures, while the probability of crosstalk between the pixels (which turns single pixels that fire randomly without incident light, an effect that grows with radiation damage, into multiple fake signals) has been reduced from more than 20% to a few percent by the introduction of microscopic trenches between the pixels. The rate at which single pixels fire in the dark can also be reduced by cooling the SiPM. Together, these two methods greatly reduce the number of fake-signal clusters such that the tracker can effectively function after several years of operation in the LHCb cavern.

RICH2 photon detector plane

The LHCb collaboration assembled commercial SiPMs on flex cables and bonded them in groups of 16 to a 0.5 m-long 3D-printed titanium cooling bar to form precisely assembled photodetection units for the SciFi modules. By circulating a coolant at a temperature of –50 °C through the cold bar, the dark-noise rate was reduced by a factor of 60. Furthermore, in a first for a CERN experiment, it was decided to use a new single-phase liquid coolant called Novec-649 from 3M for its non-toxic properties and low global-warming potential (GWP = 1). Historically, C6F14 – which has a GWP of 7400 – was the heat-transfer fluid of choice. Although several challenges had to be faced in learning how to work with the new fluid, wider use of Novec-649 and similar products could contribute significantly to the reduction of CERN’s carbon footprint. Additionally, since the narrow envelope of the tracking stations precludes the use of standard foam insulation of the coolant lines, a significant engineering effort has been required to vacuum-insulate the 48 transfer lines serving the 24 rows of SiPMs and 256 cold bars, where leaks are possible at every connection.

To date, LHCb collaborators have tirelessly assembled and tested nearly half of the SciFi tracker above ground, where only two defective channels out of the 262,144 tested in the full signal chain were unrecoverable. Four out of 12 “C-frames” containing the fibre modules (see “Tracking tall” image) are now installed and waiting to be connected and commissioned, with a further two installed in mid-July. The remaining six will be completed and installed before the start of operations early next year.

New riches

One of the key factors in the success of LHCb’s flavour-physics programme is its ability to identify charged particles, which reduces the background in selected final states and assists in the flavour tagging of b quarks. Two ring-imaging Cherenkov (RICH) detectors, RICH1 and RICH2, located upstream and downstream of the LHCb magnet, roughly 1 and 10 m from the collision point respectively, provide excellent particle identification over a very wide momentum range. They comprise a large volume of fluorocarbon gas (the radiator), in which photons are emitted by charged particles travelling at speeds higher than the speed of light in the gas; spherical and flat mirrors to focus and reflect this Cherenkov light; and two photon-detector planes where the Cherenkov rings are detected and read out by the front-end electronics.
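
The numbers behind this technique are easy to sketch. With cos θ = 1/(nβ), a particle radiates only above a threshold momentum p = m/√(n² – 1), and the ring radius saturates at a small angle set by the refractive index. The snippet below illustrates this for a C4F10-like radiator; the refractive index is an assumed, approximate value rather than the calibrated LHCb figure.

# Back-of-the-envelope Cherenkov kinematics for a gas radiator (illustrative only)
import math

def threshold_momentum(mass_gev, n):
    """Minimum momentum (GeV/c) for Cherenkov emission, i.e. beta > 1/n."""
    return mass_gev / math.sqrt(n**2 - 1)

def cherenkov_angle_mrad(n, beta=1.0):
    """Cherenkov angle in mrad from cos(theta) = 1/(n*beta)."""
    return 1e3 * math.acos(1.0 / (n * beta))

n_gas = 1.0014                       # assumed refractive index, roughly C4F10
m_pion, m_kaon = 0.1396, 0.4937      # masses in GeV/c^2
print(f"pion threshold  ~ {threshold_momentum(m_pion, n_gas):.1f} GeV/c")
print(f"kaon threshold  ~ {threshold_momentum(m_kaon, n_gas):.1f} GeV/c")
print(f"saturated angle ~ {cherenkov_angle_mrad(n_gas):.0f} mrad")

The wide gap between the pion and kaon thresholds (roughly 2.6 versus 9.3 GeV/c with these assumed numbers) is what makes ring imaging such a powerful handle for hadron identification.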

The original RICH detectors are currently being refurbished to cope with the more challenging data-taking conditions of Run 3, requiring a variety of technological challenges to be overcome. The photon detection system, for example, has been redesigned to adapt to the highly non-uniform occupancy expected in the RICH system, running from an unprecedented peak occupancy of ~35% in the central region of RICH1 down to 5% in the peripheral region of RICH2. Two types of 64-channel multi-anode photomultiplier tubes (MaPMTs) have been selected for the task which, thanks to their exceptional quantum efficiency in the relevant wavelength range, are capable of detecting single photons while providing excellent spatial resolution and very low background noise. These are key requirements to allow pattern-recognition algorithms to reconstruct Cherenkov rings even in the high-occupancy region. 

Completed SciFi C-frames

More than 3000 MaPMT units, for a total of 196,608 channels, are needed to fully instrument both upgraded RICH detectors. The large active area of the devices (83%) has been exploited to the full by arranging the units in a compact and modular “elementary cell” containing a custom-developed, radiation-hard eight-channel ASIC called the Claro chip, which is able to digitise the MaPMT signal at a rate of 40 MHz. The readout is controlled by FPGAs connected to around 170 channels each. The prompt nature of Cherenkov radiation combined with the performance of the new opto-electronics chain will allow the RICH systems to operate within the LHC’s 25 ns time window, dictated by the bunch-crossing period, while applying a time-gate of less than 6 ns to provide background rejection.

To keep the new RICHes as compact as possible, the hosting mechanics has been designed to provide both structural support and active cooling. Recent manufacturing techniques have enabled us to drill two 6 mm-diameter ducts over a length of 1.5 m into the spine of the support, through which a coolant (the more environmentally friendly Novec-649, as in the SciFi tracker) is circulated. Each element of the opto-electronics chain has been produced and fully validated within a dedicated quality-assurance programme, allowing the position of the photon detectors and their operating conditions to be fine-tuned across the RICH detectors. In February, the first photon-detector plane of RICH2 (see “RICH2 to go” image) became the first active element of the LHCb upgrade to be installed in the cavern. The two planes of RICH2, located at the sides of the beampipe, were commissioned in early summer and will see first Cherenkov light during an LHC beam test in October.

RICH1 spherical mirrors

RICH1 presents an even bigger challenge. To reduce the number of photons in the hottest region, its optics have been redesigned to spread the Cherenkov rings over a larger surface. The spatial envelope of RICH1 is also constrained by its magnetic shield, demanding even more compact mechanics for the photon-detector planes. To accommodate the new design of RICH1, a new gas enclosure for the radiator is needed. A volume of 3.8 m³ of C4F10 is enclosed in an aluminium structure directly fastened to the VELO tank on one side and sealed with a low-mass window on the other, with particular effort placed on building a leak-tight system to limit potential environmental impact. Installing these fragile components in a very limited space has been a delicate process, and the last element to complete the gas-enclosure sealing was installed at the beginning of June.

The optical system is the final element of the RICH1 mechanics. The ~2 m² spherical mirrors placed inside the gas enclosure are made of carbon-fibre composite to limit the material budget (see “Cherenkov curves” image), while the two 1.3 m² planes of flat mirrors are made of borosilicate glass for high optical quality. All the mirror segments are individually coated, glued on supports and finally aligned before installation in the detector. The full RICH1 installation is expected to be completed in the autumn, followed by the challenging commissioning phase to tune the operating parameters to be ready for Run 3.

Surpassing expectations

In its first 10 years of operations, the LHCb experiment has already surpassed expectations. It has enabled physicists to make numerous important measurements in the heavy-flavour sector, including the first observation of the rare decay B0s → µ+µ–, precise measurements of quark-mixing parameters, the discovery of CP violation in the charm sector, and the observation of more than 50 new hadrons including tetraquark and pentaquark states. However, many crucial measurements are currently statistically limited, including those underpinning the so-called flavour anomalies (see Bs decays remain anomalous). Together with the tracker, trigger and other upgrades taking place during LS2, the new SciFi and revamped RICH detectors will put LHCb in prime position to explore these and other searches for new physics for the next 10 years and beyond.

The post Building the future of LHCb appeared first on CERN Courier.

]]>
Feature LHCb's brand-new “SciFi” tracker and upgraded ring-imaging Cherenkov detectors are vital for the higher LHC luminosities ahead. https://cerncourier.com/wp-content/uploads/2021/08/CCSepOct21_LHCb_frontis.jpg
CERN to provide two DUNE cryostats https://cerncourier.com/a/cern-to-provide-two-dune-cryostats/ Wed, 18 Aug 2021 11:18:12 +0000 https://preview-courier.web.cern.ch/?p=93713 The laboratory has agreed to supply a second enormous liquid-argon tank for the US-based neutrino experiment's time-projection chambers.

The post CERN to provide two DUNE cryostats appeared first on CERN Courier.

]]>
DUNE

The Deep Underground Neutrino Experiment (DUNE) in the US is set to replicate that marvel of model-making, the ship-in-a-bottle, on an impressive scale. More than 3000 tonnes of steel and other components for DUNE’s four giant detector modules, or cryostats, must be lowered 1.5 km through narrow shafts beneath the Sanford Lab in South Dakota, before being assembled into four 66 × 19 × 18 m³ containers. And the maritime theme is more than a metaphor: to realise DUNE’s massive cryostats, each of which will keep 17.5 kt of liquid argon (LAr) at a temperature of about –186 °C, CERN is working closely with the liquefied natural gas (LNG) shipping industry.

Since it was established in 2013, CERN’s Neutrino Platform has enabled significant European participation in long-baseline neutrino experiments in the US and Japan. For DUNE, which will beam neutrinos 1300 km through the Earth’s crust from Fermilab to Sanford, CERN has built and operated two large-scale prototypes for DUNE’s LAr time-projection chambers (TPCs). All aspects of the detectors have been validated. The “ProtoDUNE” detectors’ cryostats will now pave the way for the Neutrino Platform team to design and engineer cryostats that are 20 times bigger. CERN had already committed to build the first of these giant modules. In June, following approval from the CERN Council, the organisation also agreed to provide a second.

Scaling up

Weighing more than 70,000 tonnes, DUNE will be the largest ever deployment of LAr technology, which serves as both target and tracker for neutrino interactions, and was proposed by Carlo Rubbia in 1977. The first large-scale LAr TPC – ICARUS, which was refurbished at CERN and shipped to Fermilab’s short-baseline neutrino facility in 2017 – is a mere twentieth of the size of a single DUNE module.

Scaling LAr technology to industrial levels presents several challenges, explains Marzio Nessi, who leads CERN’s Neutrino Platform. Typical cryostats are carved from big chunks of welded steel, which does not lend itself to a modular design. Insulation is another challenge. In smaller setups, vacuum insulation between two stiff walls would be used. But at the scale of DUNE, the cryostats will deform by tens of cm when cooled from room temperature, potentially imperilling the integrity of instrumentation, and leading CERN to use an active foam with an ingenious membrane design.
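
The tens-of-centimetre figure follows directly from the size of the vessel and the thermal contraction of steel. The one-line estimate below uses an assumed integrated contraction of roughly 0.3% for stainless steel between room temperature and liquid-argon temperature; it is an order-of-magnitude check, not an engineering calculation.

# rough shrinkage estimate for a 66 m steel structure cooled to LAr temperature
length_m = 66.0                 # longest cryostat dimension quoted in the text
contraction_fraction = 0.003    # assumed integrated contraction of stainless steel, ~293 K to ~87 K
print(f"expected shrinkage ~ {length_m * contraction_fraction * 100:.0f} cm")   # ~20 cm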

The nice idea from the liquefied-natural-gas industry is to have an internal membrane which can deform like a spring

Marzio Nessi

“The nice idea from the LNG industry is that they have found a way to have an internal membrane, which can deform like a spring, as a function of the thermal conditions. It’s a really beautiful thing,” says Nessi. “We are collaborating with French LNG firm GTT because there is a reciprocal interest for them to optimise the process. They never went to LAr temperatures like these, so we are both learning from each other and have built a fruitful ongoing collaboration.”

Having passed all internal reviews at CERN and in the US, the first cryostat is now ready for procurement. Several different industries across CERN’s member states and beyond are involved, with delivery and installation at Sanford Lab expected to start in 2024. The cryostat is only one aspect of the ProtoDUNE project: instrumentation, readout, high-voltage supply and many other aspects of detector design have been optimised through more than five years of R&D. Two technologies were trialled at the Neutrino Platform: single- and dual-phase LAr TPCs. The single-phase design has been selected for the first full-size DUNE module. The Neutrino Platform team is now qualifying a hybrid single/dual-phase version based on a vertical drift, which may prove to be simpler, more cost-effective and easier to install.

Step change

In parallel with efforts towards the US neutrino programme, CERN has developed the BabyMIND magnetic spectrometer, which sandwiches magnetised iron and scintillator to detect relatively low-energy muon neutrinos, and participates in the T2K experiment, which sends neutrinos 295 km from Japan’s J-PARC accelerator facility to the Super-Kamiokande detector. CERN will contribute to the upgrade of T2K’s near detector, and a proposal has been made for a new water Cherenkov test-beam experiment at CERN, to later be placed about 1 km from the neutrino beam source of the Hyper-Kamiokande experiment. Excavation of underground caverns for Hyper-Kamiokande and DUNE has already begun.

DUNE and Hyper-Kamiokande, along with short-baseline experiments and major non-accelerator detectors such as JUNO in China, will enable high-precision neutrino-oscillation measurements to tackle questions such as leptonic CP violation, the neutrino mass hierarchy, and hints of additional “sterile” neutrinos, as well as a slew of questions in multi-messenger astronomy. Entering operation towards the end of the decade, Hyper-Kamiokande and DUNE will mark a step-change in the scale of neutrino experiments, demanding a global approach.

“The Neutrino Platform has become one of the key projects at CERN after the LHC,” says Nessi. “The whole thing is a wonderful example – even a prototype – for the global participation and international collaboration that will be essential as the field strives to build ever more ambitious projects like a future collider.”

The post CERN to provide two DUNE cryostats appeared first on CERN Courier.

]]>
News The laboratory has agreed to supply a second enormous liquid-argon tank for the US-based neutrino experiment's time-projection chambers. https://cerncourier.com/wp-content/uploads/2021/08/Oct-08-2017_0_2-2.jpg
Long-lived particles gather interest https://cerncourier.com/a/long-lived-particles-gather-interest/ Wed, 21 Jul 2021 08:48:46 +0000 https://preview-courier.web.cern.ch/?p=93435 The long-lived particle community marked five years of stretching the limits of searches for new physics with its ninth and best-attended workshop yet.

The post Long-lived particles gather interest appeared first on CERN Courier.

]]>
From 25 to 28 May, the long-lived particle (LLP) community marked five years of stretching the limits of searches for new physics with its ninth and best-attended workshop yet, with more than 300 registered participants.

LLP9 played host to six new results, three each from ATLAS and CMS. These included a remarkable new ATLAS paper searching for stopped particles – beyond-the-Standard Model (BSM) LLPs that can be produced in a proton–proton collision and then get stuck in the detector before decaying minutes, days or weeks later. Good hypothetical examples are the so-called gluino R-hadrons that occur in supersymmetric models. Also featured was a new CMS search for displaced di-muon resonances using “data scouting” – a unique method of increasing the number of potential signal events kept at the trigger level by reducing the event information that is retained. Both experiments presented new results searching for the Higgs boson decaying to LLPs (see “LLP candidate” figure).

Long-lived particles can also be produced in a collision inside ATLAS, CMS or LHCb and live long enough to drift entirely outside of the detector volume. To ensure that this discovery avenue is also covered for the future of the LHC’s operation, there is a rich set of dedicated LLP detectors either approved or proposed, and LLP9 featured updates from MoEDAL, FASER, MATHUSLA, CODEX-b, MilliQan, FACET and SND@LHC, as well as a presentation about the proposed forward physics facility for the High-Luminosity LHC (HL-LHC).

Reinterpreting machine learning

The liveliest parts of any LLP community workshop are the brainstorming and hands-on working-group sessions. LLP9 included multiple vibrant discussions and working sessions, including on heavy neutral leptons and the ability of physicists who are not members of the experimental collaborations to re-interpret LLP searches – a key issue for the LLP community. At LLP9, participants examined the challenges inherent in re-interpreting LLP results that use machine-learning techniques, by now a common feature of particle-physics analyses. For example, boosted decision trees (BDTs) and neural networks (NNs) can be quite powerful for either object identification or event-level discrimination in LLP searches, but it’s not entirely clear how best to give theorists access to the full original BDT or NN used internally by the experiments.

LLP searches at the LHC often must also grapple with background sources that are negligible for the majority of searches for prompt objects. These backgrounds – such as cosmic muons, beam-induced backgrounds, beam-halo effects and cavern backgrounds – are reasonably well-understood for Run 2 and Run 3, but little study has been performed for the upcoming HL-LHC, and LLP9 featured a brainstorming session about what such non-standard backgrounds might look like in the future.

Also looking to the future, two very forward-thinking working-group sessions were held on LLPs at a potential future muon collider and at the proposed Future Circular Collider (FCC). Hadron collisions at ~100 TeV in FCC-hh would open up completely unprecedented discovery potential, including for LLPs, but it’s unclear how to optimise detector designs for both LLPs and the full slate of prompt searches.

Simulating dark showers is a longstanding challenge

Finally, LLP9 hosted an in-depth working-group session dedicated to the simulation of “dark showers”, in collaboration with the organisers of the dark-showers study group connected to the Snowmass process, which is currently shaping the future of US particle physics. Dark showers are a generic and poorly understood feature of a potential BSM dark sector with similarities to QCD, which could have its own “dark hadronisation” rules. Simulating dark showers is a longstanding challenge. More than 50 participants joined for a hands-on demonstration of simulation tools and a discussion of the dark-showers Pythia module, highlighting the growing interest in this subject in the LLP community.

LLP9 was raucous and stimulating, and identified multiple new avenues of research. LLPX, the tenth workshop in the series, will be held in November this year.

The post Long-lived particles gather interest appeared first on CERN Courier.

]]>
Meeting report The long-lived particle community marked five years of stretching the limits of searches for new physics with its ninth and best-attended workshop yet. https://cerncourier.com/wp-content/uploads/2021/07/CMS-LLPs-1000.jpg
Resistive Gaseous Detectors: Designs, Performance, and Perspectives https://cerncourier.com/a/resistive-gaseous-detectors-designs-performance-and-perspectives/ Fri, 16 Jul 2021 13:24:48 +0000 https://preview-courier.web.cern.ch/?p=93335 This new book by Marcello Abbrescia, Vladimir Peskov and Paulo Fonte covers operational principles, latest achievements and a growing list of applications.

The post Resistive Gaseous Detectors: Designs, Performance, and Perspectives appeared first on CERN Courier.

]]>
The first truly resistive gaseous detector was invented by Rinaldo Santonico and Roberto Cardarelli in 1981. A kind of parallel-plate detector with electrodes made of resistive materials such as Bakelite and thin float glass, the design is sometimes also known as a resistive-plate chamber (RPC). Resistive gaseous detectors use electronegative gases and electric fields that typically exceed 10 kV/cm. When a charged particle crosses the gas gap, the working gas is ionised and the primary electrons trigger an avalanche in the high electric field. The charge induced on the readout pad then provides the signal. RPCs have several unique and important practical features, combining good spatial resolution with a time resolution comparable to that of scintillators. They are therefore well suited for fast spacetime particle tracking, as a cost-effective way to instrument large volumes of a detector, for example in muon systems at collider experiments.
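
The avalanche stage can be pictured with the simple exponential (Townsend) growth law N(x) = N0·exp(αx). The snippet below is a minimal sketch with assumed, illustrative values for the effective coefficient and the path length; real RPC avalanches are ultimately tamed by space-charge saturation, which this one-liner ignores.

# minimal Townsend-avalanche sketch with assumed, illustrative parameters
import math

def avalanche_size(n_primary, alpha_eff_per_mm, path_mm):
    """Number of electrons after exponential multiplication over path_mm."""
    return n_primary * math.exp(alpha_eff_per_mm * path_mm)

print(f"{avalanche_size(10, 10.0, 1.0):.1e} electrons after 1 mm")   # ~2e5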

Resistive gaseous detectors use electronegative gases and electric fields which typically exceed 10 kV/cm

Resistive Gaseous Detectors: Designs, Performance, and Perspectives, a new book by Marcello Abbrescia, Vladimir Peskov and Paulo Fonte, covers the basic principles of their operation, historical development, the latest achievements and their growing applications in various fields from hadron colliders to astrophysics. This book is not only a summary of numerous scientific publications on many different examples of RPCs, but also a detailed description of their design, operation and performance.

Resistive Gaseous Detectors

The book has nine chapters. The operational principle of gaseous detectors and some of their limitations, most notably the efficiency drop in a high-particle-rate environment, are described. This is followed by a history of parallel-plate detectors, the first classical Bakelite RPC, double-gap RPCs and glass-electrode multi-gap timing RPCs. A modern design of double-gap RPCs and examples from muon systems such as those of ATLAS and CMS at the LHC, the STAR detector at the Relativistic Heavy-Ion Collider at Brookhaven and the multi-gap timing RPC for the time-of-flight system of the HADES experiment at GSI are detailed. Advanced designs with new materials for electrodes for high-rate detectors are then introduced, and ageing and longevity are elaborated upon. A new generation of gaseous detectors with resistive electrodes that can be made with microelectronic technology is then introduced: these large-area electrodes can easily be manufactured while still achieving spatial resolutions down to 12 microns.

Homeland security

The final chapter covers applications outside particle physics such as those in medicine exploiting positron-emission tomography. For homeland security, RPCs can be used in muon-scattering tomography with cosmic-ray muons to scan spent nuclear fuel containers without opening them, or to quickly scan incoming cargo trucks without disrupting the flow of traffic. A key subject not covered in detail, however, is the need to search for environmentally friendly alternatives to gases with high global-warming potential, which are often needed in resistive gaseous detectors at present to achieve stable and sustained operation (CERN Courier July/August 2021 p20).

Abbrescia, Peskov and Fonte’s book will be useful to graduate students specialising in high-energy physics, astronomy, astrophysics, medical physics and radiation measurements in general, as well as to undergraduate students and their teachers.

The post Resistive Gaseous Detectors: Designs, Performance, and Perspectives appeared first on CERN Courier.

]]>
Review This new book by Marcello Abbrescia, Vladimir Peskov and Paulo Fonte covers operational principles, latest achievements and a growing list of applications. https://cerncourier.com/wp-content/uploads/2021/07/201902-071_03.jpg
Particle Detectors – Fundamentals and Applications https://cerncourier.com/a/particle-detectors-fundamentals-and-applications/ Sat, 10 Jul 2021 09:12:43 +0000 https://preview-courier.web.cern.ch/?p=92981 Kolanoski and Wermes' new book is a reference for lectures on experimental methods for postgraduate students, writes our reviewer.

The post Particle Detectors – Fundamentals and Applications appeared first on CERN Courier.

]]>
Particle Detectors – Fundamentals and Applications

Throughout the history of nuclear, particle and astroparticle physics, novel detector concepts have paved the way to new insights and new particles, and will continue to do so in the future. To help train the next generation of innovators, noted experimental particle physicists Hermann Kolanoski (Humboldt University Berlin and DESY) and Norbert Wermes (University of Bonn) have written a comprehensive textbook on particle detectors. The authors use their broad experience in collider and underground particle-physics experiments, astroparticle physics experiments and medical-imaging applications to confidently cover the spectrum of experimental methods in impressive detail.

Particle Detectors – Fundamentals and Applications combines in a single volume the syllabus also found in two well-known textbooks covering slightly different aspects of detectors: Techniques for Nuclear and Particle Physics Experiments by W R Leo and Detectors for Particle Radiation by Konrad Kleinknecht. Kolanoski and Wermes’ book supersedes them both by being more up-to-date and comprehensive. It is more detailed than Particle Detectors by Claus Grupen and Boris Shwartz – another excellent and recently published textbook with a similar scope – and will probably attract a slightly more advanced population of physics students and researchers. This new text promises to become a particle-physics analogue of the legendary experimental-nuclear-physics textbook Radiation Detection and Measurement by Glenn Knoll.

The book begins with a comprehensive warm-up chapter on the interaction of charged particles and photons with matter, going well beyond a typical textbook level. This is followed by a very interesting discussion of the transport of charge carriers in media in magnetic and electric fields, and – a welcome novelty – signal formation, using the method of “weighting fields”. The main body of the book is devoted first to gaseous, semiconductor, Cherenkov and transition-radiation detectors, and then to detector systems for tracking, particle identification and calorimetry, and the detection of cosmic rays, neutrinos and exotic matter. Final chapters on electronics readout, triggering and data acquisition complete the picture. 
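
For readers unfamiliar with the weighting-field method mentioned above, the Shockley–Ramo theorem states that the instantaneous current induced on an electrode by a moving charge is i = q·v·E_w, where E_w is the weighting field of that electrode. The snippet below evaluates it for the simplest case of a parallel-plate geometry, where E_w = 1/d; the drift velocity and gap size are assumed, illustrative numbers, not values taken from the book.

# Shockley-Ramo induced current for a parallel-plate electrode (E_w = 1/d)
q = 1.602e-19     # electron charge in coulombs
d = 300e-6        # assumed electrode spacing in metres
v_drift = 5e4     # assumed drift velocity in metres per second
print(f"induced current per drifting electron ~ {q * v_drift / d:.1e} A")   # a few tens of pA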

Particle Detectors – Fundamentals and Applications is best considered a reference for lectures on experimental methods in particle and nuclear physics for postgraduate-level students. The book is easy to read, and conceptual discussions are well supported by numerous examples, plots and illustrations of excellent quality. Kolanoski and Wermes have undoubtedly written a gem of a book, with value for any experimental particle physicist, be they a master’s student, PhD student or accomplished researcher looking for detector details outside of their expertise.

The post Particle Detectors – Fundamentals and Applications appeared first on CERN Courier.

]]>
Review Kolanoski and Wermes' new book is a reference for lectures on experimental methods for postgraduate students, writes our reviewer. https://cerncourier.com/wp-content/uploads/2021/06/CCJulAug21_REV_Particle_feature.jpg
Tracking the rise of pixel detectors https://cerncourier.com/a/tracking-the-rise-of-pixel-detectors/ Fri, 02 Jul 2021 07:47:13 +0000 https://preview-courier.web.cern.ch/?p=92848 Silicon pixel detectors for particle tracking have blossomed into a vast array of beautiful creations that have driven numerous discoveries, with no signs of the advances slowing down.

The post Tracking the rise of pixel detectors appeared first on CERN Courier.

]]>
Pixel detectors have their roots in photography. Up until 50 years ago, every camera contained a roll of film on which images were photochemically recorded with each exposure, after which the completed roll was sent to be “developed” to finally produce eagerly awaited prints a week or so later. For decades, film also played a big part in particle tracking, with nuclear emulsions, cloud chambers and bubble chambers. The silicon chip, first unveiled to the world in 1961, was to change this picture forever.

During the past 40 years, silicon sensors have transformed particle tracking in high-energy physics experiments

By the 1970s, new designs of silicon chips were invented that consisted of a 2D array of charge-collection sites or “picture elements” (pixels) below the surface of the silicon. During the exposure time, an image focused on the surface generated electron–hole pairs via the photoelectric effect in the underlying silicon, with the electrons collected as signal information in the pixels. These chips came in two forms: the charge-coupled device (CCD) and the monolithic active pixel sensor (MAPS) – more commonly known commercially as the CMOS image sensor (CIS). Willard Boyle and George Smith of Bell Labs in the US were awarded the Nobel Prize for Physics in 2009 for inventing the CCD. 

Central and forward pixel detector

In a CCD, the charge signals are sequentially transferred to a single on-chip output circuit by applying voltage pulses to the overlying electrode array that defines the pixel structure. At the output circuit the charge is converted to a voltage signal to enable the chip to interface with external circuitry. In the case of the MAPS, each pixel has its own charge-integrating detection circuitry and a voltage signal is again sequentially read out from each by on-chip switching or “scanning” circuitry. Both architectures followed rapid development paths, and within a couple of decades had completely displaced photographic film in cameras. 

For the consumer camera market, CCDs had the initial lead, which passed to MAPS by about 1995. For scientific imaging, CCDs are preferred for most astronomical applications (most recently the 3.2 Gpixel optical camera for the Vera Rubin Observatory), while MAPS are the preferred option for fast imaging such as super-resolution microscopy, cryoelectron microscopy and pioneering studies of protein dynamics at X-ray free-electron lasers. Recent CMOS imagers with very small, low-capacitance pixels achieve sufficiently low noise to detect single electrons. A third member of the family is the hybrid pixel detector, which is MAPS-like in that the signals are read out by scanning circuitry, but in which the charges are generated in a separate silicon layer that is connected, pixel by pixel, to a readout integrated circuit (ROIC). 

During the past 40 years, these devices (along with their silicon-microstrip counterparts, to be described in a later issue) have transformed particle tracking in high-energy physics experiments. The evolution of these device types is intertwined to such an extent that any attempt at historical accuracy, or who really invented what, would be beyond the capacity of this author, for which I humbly apologise. Space constraints have also led to a focus on the detectors themselves, while ignoring the exciting work in ROIC development, cooling systems, mechanical supports, not to mention the advanced software for device simulation, the simulation of physics performance, and so forth. 

CCD design inspiration

The early developments in CCD detectors were disregarded by the particle-detector community. This is because gaseous drift chambers, with a precision of around 100 μm, were thought to be adequate for all tracking applications. However, the 1974 prediction by Gaillard, Lee and Rosner that particles containing charm quarks “might have lifetimes measurable in emulsions”, followed by the discovery of charm in 1975, set the world of particle-physics instrumentation ablaze. Many groups with large budgets tried to develop or upgrade existing types of detectors to meet the challenge: bubble chambers became holographic; drift chambers and streamer chambers were pressurised; silicon microstrips became finer-pitched, etc. 

Pixel architectures

A CCD, MAPS and hybrid chip

Illustrations of a CCD (left), MAPS (middle) and hybrid chip (right). The first two typically contain 1 k × 1 k pixels, up to 4 k × 4 k or beyond by “stitching”, with an active layer thickness (depleted) of about 20 µm and a highly doped bulk layer back-thinned to around 100 µm, enabling a low-mass tracker, even potentially bent into cylinders round the beampipe. 

The CCD (where I is the imaging area, R the readout register, TG the transfer gate, CD the collection diode, and S, D, G the source, drain and gate of the sense transistor) is pixellised in the I direction by conducting gates. Signal charges are shifted in this direction by manipulating the gate voltages so that the image is shifted down, one row at a time. Charges from the bottom row are tipped into the linear readout register, within which they are transferred, all together in the orthogonal direction, towards the output node. As each signal charge reaches the output node, it modulates the voltage on the gate of the output transistor; this is sensed, and transmitted off-chip as an analogue signal.

In a MAPS chip, pixellisation is implemented by orthogonal channel stops and signal charges are sensed in-pixel by a tiny front-end transistor. Within a depth of about 1 µm below the surface, each pixel contains complex CMOS electronics. The simplest readout is “rolling shutter”, in which peripheral logic along the chip edge addresses rows in turn, and analogue signals are transmitted by column lines to peripheral logic at the bottom of the imaging area. Unlike in a CCD, the signal charges never move from their “parent” pixel. 

In the hybrid chip, like a MAPS, signals are read out by scanning circuitry. However, the charges are generated in a separate silicon layer that is connected, pixel by pixel, to a readout integrated circuit. Bump-bonding interconnection technology is used to keep up with pixel miniaturisation. 

The ACCMOR Collaboration (Amsterdam, CERN, Cracow, Munich, Oxford, RAL) had built a powerful multi-particle spectrometer, operating at CERN’s Super Proton Synchrotron, to search for hadronic production of the recently-discovered charm particles, and make the first measurements of their lifetimes. We in the RAL group picked up the idea of CCDs from astronomers at the University of Cambridge, who were beginning to see deeper into space than was possible with photographic film (see left figure in “Pixel architectures” panel). The brilliant CCD developers in David Burt’s team at the EEV Company in Chelmsford (now Teledyne e2v) suggested designs that we could try for particle detection, notably to use epitaxial silicon wafers with an active-layer thickness of about 20 μm. At a collaboration meeting in Cracow in 1978, we demonstrated via simulations that just two postage-stamp-sized CCDs, placed 1 and 2 cm beyond a thin target, could cover the whole spectrometer aperture and might be able to deliver high-quality topological reconstruction of the decays of charm particles with expected lifetimes of around 10⁻¹³ s.

We still had to demonstrate that these detectors could be made efficient for particle detection. With a small telescope comprising three CCDs in the T6 beam from CERN’s Proton Synchrotron we established a hit efficiency of more than 99%, a track measurement precision of 4.5 μm in x and y, and two-track resolution of 40 μm. Nothing like this had been seen before in an electronic detector. Downstream of us, in the same week, a Yale group led by Bill Willis obtained signals from a small liquid-argon calorimeter. A bottle of champagne was shared! 

It was then a simple step to add two CCDs to the ACCMOR spectrometer and start looking for charm particles. During 1984, on the initial shift, we found our first candidate (see “First charm” figure), which, after adding the information from the downstream microstrips, drift chambers (with two large-aperture magnets for momentum measurement), plus a beautiful assembly of Cherenkov hodoscopes from the Munich group, proved to be a D+ → K–π+π+ event.

Vertex detector

It was more challenging to develop a CCD-based vertex detector for the SLAC Large Detector (SLD) at the SLAC Linear Collider (SLC), which became operational in 1989. The level of background radiation required a 25 mm-radius beam pipe, and the physics demanded large solid-angle coverage, as in all general-purpose collider detectors. The physics case for SLD had been boosted by the discovery in 1983 that the lifetime of particles containing b quarks was longer than for charm, in contrast to the theoretical expectation of being much shorter. So the case for deploying high-quality vertex detectors at SLC and LEP, which were under construction to study Z0 decays, was indeed compelling (see “Vertexing” figure). All four LEP experiments employed a silicon-microstrip vertex detector.

Early in the silicon vertex-detector programme, e2v perfected the art of “stitching” reticles limited to an area of 2 × 2 cm², to make large CCDs (8 × 1.6 cm² for SLD). This enabled us to make a high-performance vertex detector that operated from 1996 until SLD shut down in 1998, and which delivered a cornucopia of heavy-flavour physics from Z0 decays (see “Pioneering pixels” figure). During this time, the LEP beam pipe, limited by background to 54 mm radius, permitted its experiments’ microstrip-based vertex detectors to do pioneering b physics. But it had reduced capability for the more elusive charm, which was shorter lived and left fewer decay tracks.

Between LEP with its much higher luminosity and SLD with its small beam pipe, state-of-the-art vertex detector and highly polarised electron beam, the study of Z0 decays yielded rich physics. Highlights included very detailed studies of an enormous sample of gluon jets from Z0 → bb̄g events, with cleanly tagged b jets at LEP, and Ac, the parity-violation parameter in the coupling of the Z0 to c-quarks, at SLD. However, the most exciting discovery of that era was the top quark at Fermilab, in which the SVX microstrip detector of the CDF detector played an essential part (see “Top detector” figure). This triggered a paradigm shift. Before then, vertex detectors were an “optional extra” in experiments; afterwards, they became obligatory in every energy-frontier detector system.

Hybrid devices

While CCDs pioneered the use of silicon pixels for precision tracking, their use was restricted by two serious limitations: poor radiation tolerance and long readout time (tens of ms due to the need to transfer the charge signals pixel by pixel through a single output circuit). There was clearly a need for pixel detectors in more demanding environments, and this led to the development of hybrid pixel detectors. The idea was simple: reduce the strip length of well-developed microstrip technology to equal its width, and you had your pixel sensor. However, microstrip detectors were read out at one end by ASIC (application-specific integrated circuit) chips having their channel pitch matched to that of the strips. For hybrid pixels, the ASIC readout required a front-end circuit for each pixel, resulting in modules with the sensor chip facing the readout chip, with electrical connections made by metal bump-bonds (see right figure in “Pixel architectures” panel). The use of relatively thick sensor layers (compared to CCDs) compensated for the higher node capacitance associated with the hybrid front-end circuit.

The first charm decay

Although the idea was simple, its implementation involved a long and challenging programme of engineering at the cutting edge of technology. This had begun by about 1988, when Erik Heijne and colleagues in the CERN microelectronics group had the idea to fit full nuclear-pulse processing electronics in every pixel of the readout chip, with additional circuitry such as digitisation, local memory and pattern recognition on the chip periphery. With a 3 μm feature size, they were obliged to begin with relatively large pixels (75 × 500 μm), and only about 80 transistors per pixel. They initiated the RD19 collaboration, which eventually grew to 150 participants, with many pioneering developments over a decade, leading to successful detectors in at least three experiments: WA97 in the Omega Spectrometer; NA57; and forward tracking in DELPHI. As the RD19 programme developed, the steady reduction in feature size permitted the use of in-pixel discriminators and fast shapers that enhanced the noise performance, even at high rates. This would be essential for operation of large hybrid pixel systems in harsh environments, such as ATLAS and CMS at the LHC. RD19 initiated a programme of radiation hardness by design (enclosed-gate transistors, guard rings, etc), which was further developed and broadly disseminated by the CERN microelectronics group. These design techniques are now used universally across the LHC detector systems. There is still much to be learned, and advances to a smaller feature size bring new opportunities but also surprises and challenges. 

The advantages of the hybrid approach include the ability to choose almost any commercial CMOS process and combine it with the sensor best adapted to the application. This can deliver optimal speed of parallel processing, and radiation hardness as good as can be engineered in the two component chips. The disadvantages include a complex and expensive assembly procedure, high power dissipation due to large node capacitance, and more material than is desirable for a tracking system. Thanks to the sustained efforts of many experts, an impressive collection of hybrid pixel tracking detectors has been brought to completion in a number of detector facilities. As vertex detectors, their greatest triumph has been in the inferno at the heart of ATLAS and CMS where, for example, they were key to the recent measurement of the branching ratio for H → bb̄.

Facing up to the challenge

The high-luminosity upgrade to the LHC (HL-LHC) is placing severe demands on ATLAS and CMS, none more so than developing even more powerful hybrid vertex detectors to accommodate a “pileup” level of 200 events per bunch crossing. For the sensors, a 3D variant invented by Sherwood Parker has adequate radiation hardness, and may provide a more secure option than the traditional planar pixels, but this question is still open. 3D pixels have already proved themselves in ATLAS, for the insertable B layer (IBL), where the signal charge is drifted transversally within the pixel to a narrow column of n-type silicon that runs through the thickness of the sensor. But for HL-LHC, the innermost pixels need to be at least five times smaller in area than those of the IBL, putting extreme pressure on the readout chip. The RD53 collaboration led by CERN has worked for years on the development of an ASIC using 65 nm feature size, which enables the huge amount of radiation-resistant electronics to fit within the pixel area, reaching the limit of 50 × 50 μm². Assembling these delicate modules, and dealing with the thermal stresses associated with the power dissipation in the warm ASICs mechanically coupled to the cold sensor chips, is still a challenge. These pixel tracking systems (comprising five layers of barrel and forward trackers) will amount to about 6 Gpixels – seven times larger than before. Beyond the fifth layer, conditions are sufficiently relaxed that microstrip tracking will still be adequate.

SLD vertex detector, ATLAS pixel detector and simulated tracks

The latest experiment to upgrade from strips to pixels is LHCb, which has an impressive track record of b and charm physics. Its adventurous Vertex Locator (VELO) detector has 26 disks along the beamline, equipped with orthogonally oriented r and ϕ microstrips, starting from inside the beampipe about 8 mm from the LHC beam axis. LHCb has collected the world’s largest sample of charmed hadrons, and with the VELO has made a number of world-leading measurements including the discovery of CP violation in charm. LHCb is now statistics-limited for many rare decays and will ramp up its event samples with a major upgrade implemented in two stages (see State-of-the-art-tracking for high luminosities).

For the first upgrade, due to begin operation early next year, the luminosity will increase by a factor of up to five, and the additional pattern recognition challenge will be addressed by a new pixel detector incorporating 55 μm pixels and installed even closer (5.1 mm) to the beam axis. The pixel detector uses evaporative CO2 microchannel cooling to allow operation under vacuum. LHCb will double its efficiency by removing the hardware trigger and reading out the data at the beam-crossing frequency of 40 MHz. The new “VeloPix” readout chip will achieve this with readout speeds of up to 20 Gb/s, and the software trigger will select heavy-flavour events based on full event reconstruction. For the second upgrade, due to begin in about 2032, the luminosity will be increased by a further factor of 7.5, allowing LHCb to eventually accumulate 10 times its current statistics. Under these conditions, there will be, on average, 40 interactions per beam crossing, which the collaboration plans to resolve by enhanced timing precision (around 20 ps) in the VELO pixels. The upgrade will require both an enhanced sensor and readout chip. This is an adventurous long-term R&D programme, and LHCb retain a fallback option with timing layers downstream of the VELO, if required. 

Monolithic active pixels

Being monolithic, the architecture of MAPS is very similar to that of CCDs (see middle figure in “Pixel architectures” panel). The fundamental difference is that in a CCD, the signal charge is transported physically through some centimetres of silicon to a single charge-sensing circuit in the corner of the chip, while in a MAPS the communication between the signal charge and the outside world is via in-pixel electronics, with metal tracks to the edge of the chip. The MAPS architecture looked very promising from the beginning, as a route to solving the problems of both CCDs and hybrid pixels. With respect to CCDs, the radiation tolerance could be greatly increased by sensing the signal charge within its own pixel, instead of transporting it over thousands of pixels. The readout speed could also be dramatically increased by in-pixel amplitude discrimination, followed by sparse readout of only the hit pixels. With respect to hybrid pixel modules, the expense and complications of bump-bonded assemblies could be eliminated, and the tiny node capacitance opened the possibility of much thinner active layers than were needed with hybrids.

A slice through the imaging region of a stacked Sony CMOS image sensor

MAPS have emerged as an attractive option for a number of future tracking systems. They offer small pixels where needed (notably for inner-layer vertex detectors) and thin layers throughout the detector volume, thereby minimising multiple scattering and photon conversion, both in barrels and endcaps. Excess material in the forward region of tracking systems such as time-projection and drift chambers, with their heavy endplate structures, has in the past led to poor track reconstruction efficiency, loss of tracks due to secondary interactions, and excess photon conversions. In colliders at the energy frontier (whether pp or e+e–), however, interesting events for physics are often multi-jet, so there are nearly always one or more jets in the forward region.

The first MAPS devices contained little more than a collection diode, a front-end transistor operated as a source follower, a reset transistor and addressing logic. They needed only relaxed charge-collection time, so diffusive collection sufficed. Sherwood Parker’s group demonstrated their capability for particle tracking in 1991, with devices processed in the Center for Integrated Systems at Stanford, operating in a Fermilab test beam. In the decades since, advances in the density of CMOS digital electronics have enabled designers to pack more and more electronics into each pixel. For fast operation, the active volume below the collection diode needs to be depleted, including in the corners of the pixels, to avoid loss of tracking efficiency.

The Strasbourg group led by Marc Winter has a long and distinguished record of MAPS development. As well as highly appreciated telescopes in test beams at DESY for general use, the group supplied its MIMOSA-28 devices for the first MAPS-based vertex detector: a 356 Mpixel two-layer barrel system for the STAR experiment at Brookhaven’s Relativistic Heavy Ion Collider. Operational for a three-year physics run starting in 2014, this detector enhanced the capability to look into the quark–gluon plasma, the extremely hot form of matter that characterised the birth of the universe. 

Advances in the density of CMOS digital electronics have enabled designers to pack more and more electronics into each pixel

An ingenious MAPS variant developed by the Semiconductor Laboratory of the Max Planck Society – the Depleted P-channel FET (DEPFET) – is also serving as a high-performance vertex detector in the Belle II detector at SuperKEKB in Japan, part of which is already operating. In the DEPFET, the signal charge drifts to a “virtual gate” located in a buried channel deeper than the current flowing in the sense transistor. As Belle II pushes to even higher luminosity, it is not yet clear which technology will deliver the required radiation hardness. 

The small collection electrode of the standard MAPS pixel presents a challenge in terms of radiation hardness, since it is not easy to preserve full depletion after high levels of bulk damage. An important effort to overcome this was initiated in 2007 by Ivan Perić of KIT, in which the collection electrode is expanded to cover most of the pixel area, below the level of the CMOS electronics, so the charge-collection path is much reduced. Impressive further developments have been made by groups at Bonn University and elsewhere. This approach has achieved high radiation resistance with the ATLASpix prototypes, for instance. However, the standard MAPS approach with small collection electrode may be tunable to achieve the required radiation resistance, while preserving the advantages of superior noise performance due to the much lower sensor capacitance. Both approaches have strong backing from talented design groups, but the eventual outcome is unclear.

Advanced MAPS

Advanced MAPS devices were proposed for detectors at the International Linear Collider (ILC). In 2008 Konstantin Stefanov of the Open University suggested that MAPS chips could provide an overall tracking system of about 30 Gpixels with performance far beyond the baseline options at the time, which were silicon microstrips and a gaseous time-projection chamber. This development was shelved due to delays to the ILC, but the dream has become a reality in the MAPS-based tracking system for the ALICE detector at the LHC, which builds on the impressive ALPIDE chip development by Walter Snoeys and his collaborators. The ALICE ITS-2 system, with 12.5 Gpixels, sets the record for any pixel system (see ALICE tracks new territories). This beautiful tracker has operated smoothly on cosmic rays and is now being installed in the overall ALICE detector. The group is already pushing to upgrade the three central layers using wafer-scale stitching and curved sensors to significantly reduce the material budget. At the 2021 International Workshop on Future Linear Colliders held in March, the SiD concept group announced that they will switch to a MAPS-based tracking system. R&D for vertexing at the ILC is also being revived, including the possibility of CCDs making a comeback with advanced designs from the KEK group led by Yasuhiro Sugimoto.

Bert Gonzalez with the SVX microstrip vertex detector

The most ambitious goal for MAPS-based detectors is for the inner-layer barrels at ATLAS and CMS, during the second phase of the HL-LHC era, where smaller pixels would provide important advantages for physics. At the start of high-luminosity operation, these layers will be equipped with hybrid pixels of 25 × 100 μm² and 150 μm active thickness, the pixel area being limited by the readout chip, which is based on a 65 nm technology node. Encouraging work led by the CERN ATLAS and microelectronics groups and the Bonn group is underway, and could result in a MAPS option of 25 × 25 μm², requiring an active-layer thickness of only about 20 μm, using a 28 nm technology node. The improvement in tracking precision could be accompanied by a substantial reduction in power dissipation. The four-times greater pixel density would be more than offset by the reduction in operating voltage, plus the much smaller node capacitance. This route could provide greatly enhanced vertex detector performance at a time when the hybrid detectors will be coming to the end of their lives due to radiation damage. However, this is not yet guaranteed, and an evolution to stacked devices may be necessary. A great advantage of moving to monolithic or stacked devices is that the complex processes are then in the hands of commercial foundries that routinely turn out thousands of 12 inch wafers per week.

High-speed and stacked

During HL-LHC operations there is a need for ultra-fast tracking devices to ameliorate the pileup problems in ATLAS, CMS and LHCb. Designs with a timing precision of tens of picoseconds are advancing rapidly – initially low-gain avalanche diodes, pioneered by groups from Torino, Barcelona and UCSC, followed by other ultra-fast silicon pixel devices. There is a growing list of applications for these devices. For example, ATLAS will have a layer adjacent to the electromagnetic calorimeter in the forward region, where the pileup problems will be severe, and where coarse granularity (~1 mm pixels) is sufficient. LHCb is more ambitious for its stage-two upgrade, as already mentioned. There are several experiments in which such detectors have potential for particle identification, notably π/K separation by time-of-flight up to a momentum limit that depends on the scale of the tracking system, typically 8 GeV/c.
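
The quoted limit of around 8 GeV/c follows from simple kinematics: the flight-time difference between a pion and a kaon of the same momentum falls roughly as 1/p², so at some momentum it drops below a few times the timing resolution. The sketch below uses an assumed 10 m flight path purely for illustration; the actual reach scales with the size of the tracking system, as the text notes.

# pi/K time-of-flight difference versus momentum (illustrative flight path)
import math

C = 0.299792458   # speed of light in m/ns

def flight_time_ns(p_gev, mass_gev, length_m):
    beta = p_gev / math.sqrt(p_gev**2 + mass_gev**2)
    return length_m / (beta * C)

L = 10.0   # assumed flight path in metres
for p in (2.0, 4.0, 8.0):
    dt_ps = 1e3 * (flight_time_ns(p, 0.4937, L) - flight_time_ns(p, 0.1396, L))
    print(f"p = {p:.0f} GeV/c: pi/K time difference ~ {dt_ps:.0f} ps")
# roughly 920 ps at 2 GeV/c, 230 ps at 4 GeV/c and 60 ps at 8 GeV/c over 10 m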

Monolithic and hybrid pixel detectors answer many of the needs for particle tracking systems now and in the future. But there remain challenges, for example the innermost layers at ATLAS and CMS. In order to deliver the required vertexing capability for efficient, cleanly separated b and charm identification, we need pixels of dimensions about 25 × 25 μm, four times below the current goals for HL-LHC. They should also be thinner, down to say 20 μm, to preserve precision for oblique tracks. 

A Fermilab/BNL stacked pixel detector

Solutions to these problems, and similar challenges in the much bigger market of X-ray imaging, are coming into view with stacked devices, in which layers of CMOS-processed silicon are stacked and interconnected. The processing technique, in which wafers are bonded face-to-face, with electrical contacts made by direct-bond interconnects and through-silicon vias, is now a mature technology and is in the hands of leading companies such as Sony and Samsung. The CMOS imaging chips for phone cameras must be one of the most spectacular examples of modern engineering (see “Up close” figure). 

Commercial CMOS image sensor development is a major growth area, with approximately 3000 patents per year. In future these developers, advancing to smaller-node chips, will add artificial intelligence, for example to take a number of frames of fast-moving subjects and deliver the best one to the user. Imagers under development for the automotive industry include those that will operate in the short-wavelength infrared region, where silicon is still sensitive. In this region, rain and fog are transparent, so a driverless car equipped with the technology will be able to travel effortlessly in the worst weather conditions. 

While we developers of pixel imagers for science have not kept up with the evolution of stacked devices, several academic groups have over the past 15 years taken brave initiatives in this direction, most impressively a Fermilab/BNL collaboration led by Ron Lipton, Ray Yarema and Grzegorz Deptuch. This work was done before the technical requirements could be serviced by a single technology node, so they had to work with a variety of pioneering companies in concert with excellent in-house facilities. Their achievements culminated in three working prototypes, two for particle tracking and one for X-ray imaging, namely a beautiful three-tier stack comprising a thick sensor (for efficient X-ray detection), an analogue tier and a digital tier (see “Stacking for physics” figure). 

Technology nodes

12 inch silicon wafers

The relatively recent term “technology node” embraces a number of aspects of commercial integrated circuit (IC) production. First and foremost is the feature size, which originally meant the minimum line width that could be produced by photolithography, for example the length of a transistor gate. With the introduction of novel transistor designs (notably the FinFET), this term has been generalised to indicate the functional density of transistors that is achievable. At the start of the silicon-tracker story, in the late 1970s, the feature size was about 3 µm. The current state-of-the-art is 5 nm, and the downward Moore’s law trend is continuing steadily, although such narrow lines would of course be far beyond the reach of photolithography.

There are other aspects of ICs that are included in the description of any technology node. One is whether they support stitching, which means the production of larger chips by step-and-repeat of reticles, enabling the production of single devices of sizes 10 × 10 cm² and beyond, in principle up to the wafer scale (which these days is a diameter of 200 or 300 mm, evolving soon to 450 mm). Another is whether they support wafer stacking, which is the production of multi-layer sandwiches of thinned devices using various interconnect technologies such as through-silicon vias and direct-bond interconnects. A third aspect is whether they can be used for imaging devices, which implies optimised control of dark current and noise.

For particle tracking, the most advanced technology nodes are unaffordable (the development cost of a single 5 nm ASIC is typically about $500 million, so it needs a large market). However, other features that are desirable and becoming essential for our needs (imaging capability, stitching and stacking) are widely available and less expensive. For example, Global Foundries, which produces 3.5 million wafers per annum, offers these capabilities at their 32 and 14 nm nodes.

For the HL-LHC inner layers, one could imagine a stacked chip comprising a thin sensor layer (with excellent noise performance enabled by an on-chip front-end circuit for each pixel), followed by one or more logic layers. Depending on the technology node, one should be able to fit all the logic (building on the functionality of the RD53 chip) in one or two layers of 25 × 25 μm pixels. The overall thickness could be 20 μm for the imaging layer, and 6 μm per logic layer, with a bottom layer sufficiently thick (~100 μm) to give the necessary mechanical stability to the relatively large stitched chips. The resulting device would still be thin enough for a high-quality vertex detector, and the thin planar sensor-layer pixels including front-end electronics would be amenable to full depletion up to the 10-year HL-LHC radiation dose.
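
A back-of-envelope sketch of such a stack, using the layer thicknesses quoted above and an assumed 4 × 4 cm² stitched-chip size (an illustration only, not a design value), gives a feel for the pixel count and material budget involved.

# Back-of-envelope sketch of the stack described above; the 4 x 4 cm2 stitched
# chip size is an assumption for illustration, the layer thicknesses are those
# quoted in the text.
chip_area_cm2  = 4.0 * 4.0           # assumed stitched-chip size
pixel_pitch_um = 25.0
pixels = chip_area_cm2 * 1e8 / pixel_pitch_um**2      # 1 cm2 = 1e8 um2
layers_um = [20, 6, 6, 100]          # sensor, two logic tiers, support
X0_si_um = 9.37e4                    # radiation length of silicon, ~9.37 cm
print(f"pixels per chip      ~ {pixels:.2e}")                  # ~2.6e6
print(f"total thickness      = {sum(layers_um)} um")           # 132 um
print(f"material budget X/X0 ~ {sum(layers_um)/X0_si_um:.2%}") # ~0.14 %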

There are groups in Japan (at KEK led by Yasuo Arai, and at RIKEN led by Takaki Hatsui) that have excellent track records for developing silicon-on-insulator devices for particle tracking and for X-ray detection, respectively. The RIKEN group is now believed to be collaborating with Sony to develop stacked devices for X-ray imaging. Given Sony’s impressive achievements in visible-light imaging, this promises to be extremely interesting. There are many applications (for example at ITER) where radiation-resistant X-ray imaging will be of crucial importance, so this is an area in which stacked devices may well own the future. 

Outlook 

The story of frontier pixel detectors is a bit like that of an art form – say cubism. With well-defined beginnings 50 years ago, it has blossomed into a vast array of beautiful creations. The international community of designers see few boundaries to their art, being sustained by the availability of stitched devices to cover large-area tracking systems, and moving into the third dimension to create the most advanced pixels, which are obligatory for some exciting physics goals. 

Face-to-face wafer bonding is now a commercially mature technology

Just like the attribute of vision in the natural world, which started as a microscopic light-sensitive spot on the surface of a unicellular protozoan, and eventually reached one of its many pinnacles in the eye of an eagle, with its amazing “stacked” data processing behind the retina, silicon pixel devices are guaranteed to continue evolving to meet the diverse needs of science and technology. Will they one day be swept away, like photographic film or bubble chambers? This seems unthinkable at present, but history shows there’s always room for a new idea. 

State-of-the-art tracking for high luminosities https://cerncourier.com/a/state-of-the-art-tracking-for-high-luminosities/ Mon, 28 Jun 2021 08:27:56 +0000 https://preview-courier.web.cern.ch/?p=92840 The tracking systems of the ATLAS, LHCb and CMS experiments are undergoing complete replacements to prepare for the extreme operating conditions of future LHC runs.

CMS tracker being installed

Towards the CMS phase-2 pixel detector

The original silicon pixel detector for CMS – comprising three barrel layers and two endcap disks – was designed for a maximum instantaneous luminosity of 10³⁴ cm⁻² s⁻¹ and a maximum average pile-up of 25. Following LHC upgrades in 2013–2014, it was replaced with an upgraded system (the CMS Phase-1 pixel detector) in 2017 to cope with higher instantaneous luminosities. With a lower mass and an additional barrel layer and endcap disk, it was an evolutionary upgrade maintaining the well-tested key features of the original detector while enabling higher-rate capability, improved radiation tolerance and more robust tracking. During Long Shutdown 2, maintenance work on the Phase-1 device included the installation of a new innermost layer (see “Present and future” image) to enable the delivery of high-quality data until the end of LHC Run 3. 

During the next long shutdown, scheduled for 2025, the entire tracker detector will be replaced in preparation for the High-Luminosity LHC (HL-LHC). This Phase-2 pixel detector will need to cope with a pile-up and hit rate eight times higher than before, and with a trigger rate and radiation dose 7.5 and 10 times higher, respectively. To meet these extreme requirements, the CMS collaboration, in partnership with ATLAS via the RD53 collaboration, is developing a next-generation hybrid-pixel chip utilising 65 nm CMOS technology. The overall system is much bigger than the Phase-1 device (~5 m² compared to 1.75 m²) with vastly more read-out channels (~2 billion compared to 120 million). With six-times smaller pixels, increased detection coverage, reduced material budget, a new readout chip to enable a lower detection threshold, and a design that continues to allow easy installation and removal, the state-of-the-art Phase-2 pixel detector will serve CMS well into the HL-LHC era. 
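
The quoted channel count can be cross-checked with simple arithmetic. The pixel size below (25 × 100 μm², one of the RD53-compatible options) is an assumption for illustration, since the text does not specify it.

# Quick consistency check of the quoted channel counts using only the active
# area from the text; the pixel size is an assumption (25 x 100 um2).
area_phase2_m2 = 5.0
pixel_um2 = 25 * 100                              # assumed pixel size
channels = area_phase2_m2 * 1e12 / pixel_um2      # 1 m2 = 1e12 um2
print(f"Phase-2 channels ~ {channels:.1e}")       # ~2e9, matching "~2 billion"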

LHCb’s all-new VELO takes shape

VELO modules being assembled

LHCb’s Vertex Locator (VELO) has played a pivotal role in the experiment’s flavour-physics programme. Contributing to triggering, tracking and vertexing, and with a geometry optimised for particles traveling close to the beam direction, its 46 orthogonal silicon-strip half-disks have enabled the collaboration to pursue major results. These include the 2019 discovery of CP violation in charm using the world’s largest reconstructed samples of charm decays, a host of matter–antimatter asymmetry measurements and rare-decay searches, and the recent hints of lepton non-universality in B decays.

Placing the sensors as close as possible to the primary proton–proton interactions requires the whole VELO system to sit inside the LHC vacuum pipe (separated from the primary vacuum by a 1.1 m-long thin-walled “RF foil”), and a mechanical system to move the disks out of harm’s way during the injection and stabilisation of the beams. After more than a decade of service witnessing the passage of some 10²⁶ protons, the original VELO is now being replaced with a new one to prepare for a factor-five increase in luminosity for LHCb in LHC Run 3. 

A silicon wafer and inspecting the upgraded RF foil

The entirety of the new VELO will be read out at a rate of 40 MHz, requiring a huge data bandwidth: up to 20 Gbit/s for the hottest ASICs, and 3 Tbit/s in total. Cooling using the minimum of material is another major challenge. The upgraded VELO will be kept at –20 °C via the novel technique of evaporative CO₂ circulating in 120 × 200 µm channels within a silicon substrate (see “Fine structure” image, left). The harsh radiation environment also demands a special ASIC, the VeloPix, which has been developed with the CERN Medipix group and will allow the detector to operate a much more efficient trigger. To cope with increased occupancies at higher luminosity, the original silicon strips have been replaced with pixels. The new sensors (in the form of rectangles rather than disks) will be located even closer to the interaction point (5.1 mm versus the previous 8.2 mm for the first measured point), which requires the RF foil to sit just 3.5 mm from the beam and 0.9 mm from the sensors. The production of the foil was a huge technical achievement. It was machined from a solid-forged aluminium block with 98% of the material removed and the final shape machined to a thickness of 250 µm, with further chemical etching taking it to just 100 µm (see “Fine structure” image, right).

Around half of the VELO-module production is complete, with the work shared between labs in the UK and the Netherlands (see “In production” image). Assembly of the 52 modules into the “hood”, which provides cooling, services and vacuum, is now under way, with installation in LHCb scheduled to start in August. The VELO Upgrade I is expected to serve LHCb throughout Run 3 and Run 4. Looking further to the future, the next upgrade will require the detector to operate with a huge jump in luminosity, where vertexing will pose a significant challenge. Proposals under consideration include a new “4D” pixel detector with time-stamp information per hit, which could conceivably be achieved by moving to a smaller CMOS node. At this stage, however, the collaboration is actively investigating all options, with detailed technical design reports expected towards the middle of the decade.

ATLAS ITk pixel detector and 3D silicon sensor

ATLAS ITk pixel detector on track

The ATLAS collaboration upgraded its original pixel detector in 2014, adding an innermost layer to create a four-layer device. The new layer featured a much smaller pixel pitch, 3D sensors at large angles and CO₂ cooling, and the pixel tracker will continue to serve ATLAS throughout LHC Run 3. Like CMS, the collaboration has long been working towards the replacement of the full inner tracker during the next long shutdown expected in 2025, in preparation for HL-LHC operations. The innermost layers of this state-of-the-art all-silicon tracker, called the ITk, will be built from pixel detectors with an area almost 10 times larger than that of the current device. With 13 m² of active silicon across five barrel layers and two end caps, the pixel detector will contribute to precision tracking up to a pseudorapidity |η| = 4, with the innermost two layers expected to be replaced a few years into the HL-LHC era, and the outermost layers designed to last the lifetime of the project. Most of the detector will use planar silicon sensors, with 3D sensors (which are more radiation-hard and less power-hungry) in the innermost layer. Like the CMS Phase-2 pixel upgrade, the sensors will be read out by new chips being developed by the RD53 collaboration, with support structures made of low-mass carbon materials and cooling provided by evaporative CO₂ flowing in thin-walled pipes. The device will have a total of 5.1 Gpixels (55 times more than the current one), and the very high expected HL-LHC data rates, especially in the innermost layers, will require the development of new technologies for high-bandwidth transmission and handling. The ITk pixel detector is now in the final stages of R&D and moving into production. After that, the final stages of integrating the subdetectors assembled in ATLAS institutes worldwide will take place on the surface at CERN before final installation underground.

ALICE tracks new territory https://cerncourier.com/a/alice-tracks-new-territory/ Mon, 07 Jun 2021 10:18:25 +0000 https://preview-courier.web.cern.ch/?p=92610 The recently installed, upgraded ALICE inner tracking system is the largest pixel detector ever built and the first at the LHC to use monolithic active pixel sensors.

ALICE ITS Inner Barrel installation

In the coming decade, the study of nucleus–nucleus, proton–nucleus and proton–proton collisions at the LHC will offer rich opportunities for a deeper exploration of the quark–gluon plasma (QGP). An expected 10-fold increase in the number of lead–lead (Pb–Pb) collisions should both increase the precision of measurements of known probes of the QGP medium and give access to new ones. Very large data samples will be required to focus on rare probes down to very low transverse momentum – such as heavy-flavour particles, quarkonium states, and real and virtual photons – as well as on studies of jet quenching and exotic heavy nuclear states.

To seize these opportunities, the ALICE collaboration has undertaken a major upgrade of its detectors to increase the event readout, online data processing and recording capabilities by nearly two orders of magnitude (CERN Courier January/February 2019 p25). This will allow Pb–Pb minimum-bias events to be recorded at rates in excess of 50 kHz, which is the expected Pb–Pb interaction rate at the LHC in Run 3, as well as proton–lead (p–Pb) and proton–proton (pp) collisions at rates of about 500 kHz and 1 MHz, respectively. In addition, the upgrade will improve the ability of the ALICE detector to distinguish secondary vertices of particle decays from the interaction vertex and to track very low transverse-momentum particles, allowing measurements of heavy-flavour hadrons and low-mass dileptons with unprecedented precision and down to zero transverse momentum.

High impact

These ambitious physics goals have motivated the development of an entirely new inner tracking system, ITS2. Starting from LHC Run 3 next year, the ITS2 will allow pp and Pb–Pb collisions to be read out 100 and 1000 times more quickly than was possible in previous runs, offering superior ability to measure particles at low transverse momenta (see “High impact” figure). Moreover, the inner three layers of the ITS2 feature a material budget three times lower than the original detector, which is also important for improving the tracking performance at low transverse momentum.

With its 10 m² of active silicon area and nearly 13 billion pixels, the ITS2 is the largest pixel detector ever built. It is also the first detector at the LHC to use monolithic active pixel sensors (MAPS), instead of the more conventional and well-established hybrid pixels and silicon microstrips.

Change of scale

The particle sensors and the associated read-out electronics used for vertexing and tracking detection systems in particle-physics experiments have very demanding requirements in terms of granularity, material thickness, readout speed and radiation hardness. The development of sensors based on silicon-semiconductor technology and read-out integrated circuits based on CMOS technology revolutionised the implementation of such detection systems. The development of silicon microstrips, already successfully used at the Large Electron-Positron (LEP) collider, and, later, the development of hybrid pixel detectors, enabled the construction of tracking and vertexing detectors that meet the extreme requirements – in terms of particle rates and radiation hardness – set by the LHC. As a result, silicon microstrip and pixel sensors are at the heart of the particle-tracking systems in most particle-physics experiments today.

Nevertheless, compromises exist in the implementation of this technology. Perhaps the most significant is the interface between the sensor and the readout electronics, which are typically separate components. To go beyond these limitations and construct detection systems with higher granularity and less material thickness requires the development of new technology. The optimal way to achieve this is to integrate both sensor and readout electronics to create a single detection device. This is the approach taken with CMOS active pixel sensors (APSs). Over the past 20 years, extensive R&D has been carried out on CMOS APSs, making this a viable option for vertexing and tracking detection systems in particle and nuclear physics, although their performance in terms of radiation hardness is not yet at the level of hybrid pixel detectors.

ALPIDE, which is the result of an intensive R&D effort, is the building block of the ALICE ITS2

The first large-scale application of CMOS APS technology in a collider experiment was the STAR PXL detector at Brookhaven’s Relativistic Heavy-Ion Collider in 2014 (CERN Courier October 2015 p6). The ALICE ITS2 has benefitted from significant R&D since then, in particular concerning the development of a more advanced CMOS imaging sensor, named ALPIDE, with a minimum feature size of 180 nm. This has led to a significant improvement in the field of MAPS for single-particle detection, reaching unprecedented performance in terms of signal/noise ratio, spatial resolution, material budget and readout speed.

ALPIDE sensors

ALPIDE, which is the result of an intensive R&D effort carried out by ALICE over the past eight years, is the building block of the ALICE ITS2. The chip is 15 × 30 mm² in area and contains more than half a million pixels organised in 1024 columns and 512 rows. Its very low power consumption (< 40 mW/cm²) and excellent spatial resolution (~5 μm) are perfect for the inner tracker of ALICE.

ALPIDE journeys

In ALPIDE the sensitive volume is a 25 μm-thick layer of high-resistivity p-type silicon (> 1 kΩ cm) grown epitaxially on top of a standard (low-resistivity) CMOS wafer (see “ALPIDE journeys” figure). The electric charge generated by particles traversing the sensitive volume is collected by an array of n–p diodes reverse-biased with a positive potential (~1 V) applied on the n-well electrode and a negative potential (down to a minimum of –6 V) applied to the substrate (backside). The possibility of varying the reverse-bias voltage in the range 1 to 7 V allows control over the size of the depleted volume (the fraction of the sensitive volume where the charge is collected by drift due to the presence of an electric field) and, correspondingly, the charge-collection time. Measurements carried out on sensors with characteristics identical to ALPIDE have shown an average charge-collection time consistently below 15 ns for a typical reverse-bias voltage of 4 V. Applying reverse substrate bias to the ALPIDE sensor also increases the tolerance to non-ionising energy loss to well beyond 10¹³ 1 MeV neq/cm², which is largely sufficient to meet ALICE’s requirements.
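
A naive one-dimensional abrupt-junction estimate, which ignores the real ALPIDE electrode geometry and so should be read only as an order-of-magnitude sketch, illustrates how a few volts of reverse bias can deplete a sizeable fraction of the 25 μm epitaxial layer. The resistivity and bias range are taken from the text; the hole mobility is a standard textbook value.

# Naive 1D depletion-depth estimate for a 1 kOhm cm p-type epitaxial layer.
import math

q      = 1.602e-19           # C
eps_si = 11.7 * 8.854e-14    # F/cm
mu_p   = 480.0               # cm^2/(V s), hole mobility in silicon (assumed)
rho    = 1e3                 # ohm cm, lower bound quoted for the epi layer
N_A    = 1.0 / (q * mu_p * rho)            # effective doping, ~1.3e13 cm^-3
for v_bias in (1.0, 4.0, 7.0):             # V, range quoted in the text
    w_um = math.sqrt(2 * eps_si * v_bias / (q * N_A)) * 1e4
    print(f"V = {v_bias:3.0f} V  ->  depletion depth ~ {w_um:4.1f} um")
# At a few volts the estimate approaches the 25 um epitaxial thickness,
# consistent with the fast (< 15 ns) charge collection reported above.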

Another important feature of ALPIDE is the use of a p-well to shield the full CMOS circuitry from the epitaxial layer. Only the n-well collection electrode is not shielded. The deep p-well prevents all other n-wells – which contain circuitry – from collecting signal charge from the epitaxial layer, and therefore allows the use of full CMOS and consequently more complex readout circuitry in the pixel. ALICE is the first experiment where this has been used to implement a MAPS with a pixel front-end (amplifier and discriminator) and a sparsified readout within the pixel matrix similar to hybrid sensors. The low capacitance of the small collection electrode (about 2 × 2 μm²), combined with a circuit that performs sparsified readout within the matrix without a free-running clock, keeps the power consumption as low as 40 nW per pixel.
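
These chip-level numbers can be tied back to the full-detector figures quoted elsewhere in the article; the short consistency sketch below uses only values given in the text and is indicative rather than exact.

# Consistency sketch: in-matrix power per pixel, chip size and pixel count,
# extrapolated to the ~10 m2 of active silicon in ITS2.
pixels_per_chip = 512 * 1024                 # ~0.52 Mpixel
chip_area_cm2   = 1.5 * 3.0                  # 15 x 30 mm2
matrix_mw_cm2   = pixels_per_chip * 40e-9 / chip_area_cm2 * 1e3
print(f"in-matrix power density ~ {matrix_mw_cm2:.1f} mW/cm2")  # ~4.7 mW/cm2
# comfortably inside the < 40 mW/cm2 full-chip budget, which also covers the periphery
chips  = 10.0 * 1e4 / chip_area_cm2          # chips needed for 10 m2
pixels = chips * pixels_per_chip
print(f"total pixels ~ {pixels:.2e}")        # ~1.2e10, the order of the quoted ~12.5 Gpixels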

Cylindrical structure

ITS2 structure

The ITS2 consists of seven layers covering a radial extension from 22 to 430 mm with respect to the beamline (see “Cylindrical structure” figure). The innermost three layers form the inner barrel (IB), while the middle two and the outermost two layers form the outer barrel (OB). The radial position of each layer was optimised to achieve the best combined performance in terms of pointing resolution, momentum resolution and tracking efficiency in the expected high track-density environment of a Pb–Pb collision. It covers a pseudo-rapidity range |η| < 1.22 for 90% of the most luminous beam interaction region, extending over a total surface of 10 m² and containing about 12.5 Gpixels with binary readout, and is operated at room temperature using water cooling.

ALICE ITS

Given the small size of the ALPIDE (4.5 cm²), sensors are tiled up to form the basic detector unit, which is called a stave. It consists of a “space-frame” (a carbon-fibre mechanical support), a “cold plate” (a carbon ply embedding two cooling pipes) and a hybrid integrated circuit (HIC) assembly in which the ALPIDE chips are glued and electrically connected to a flexible printed circuit. An IB HIC and an OB HIC include one row of nine chips and two rows of seven chips, respectively. The HICs are glued to the mechanical support: 1 HIC for the IB and 8 or 14 HICs for the two innermost and two outermost layers of the OB, respectively (see “State of the art” figure).

Zero-suppressed hit data are transmitted from the staves to a system of about 200 readout boards located 7 m away from the detector. Data is transmitted serially with a bit-rate up to 1.2 Gb/s over more than 3800 twin-axial cables reaching an aggregate bandwidth of about 2 Tb/s. The readout boards aggregate data and re-transmit it over 768 optical-fibre links to the first-level processors of the combined online/offline (O2) computing farm. The data are then sequenced in frames, each containing the hit information of the collisions occurring in contiguous time intervals of constant duration, typically 22 μs.

The process and procedures to build the HICs and staves are rather complex and time-intensive. More than 10 construction sites distributed worldwide worked together to develop the assembly procedure and to build the components. More than 120 IB and 2500 OB HICs were built using a custom-made automatic module-assembly machine, implementing electrical testing, dimension measurement, integrity inspection and alignment for assembly. A total of 96 IB staves, enough to build two copies of the three IB layers, and a total of 160 OB staves, including 20% spares, have been assembled.

A large cleanroom was built at CERN for the full detector assembly and commissioning activities. Here the same backend system that will be used in the experiment was installed, including the powering system, cooling system, full readout and trigger chains. Staves were installed on the mechanical support structures to form layers, and the layers were assembled into half-barrels: IB (layers 0, 1 and 2) top and bottom, and OB (layers 3, 4, 5 and 6) top and bottom. Each stave was then connected to power-supply and readout systems. The commissioning campaign started in May 2019 to fully characterise and calibrate all the detector components, and installations of both the OB and IB were completed in May this year.

Physics ahead

After nearly 10 years of R&D, the upgrade of the ALICE experimental apparatus – which includes an upgraded time projection chamber, a new muon forward tracker, a new fast-interaction trigger detector, a forward diffraction detector, new readout electronics and an integrated online–offline computing system – is close to completion. Most of the new or upgraded detectors, including the ITS2, have already been installed in the experimental area and the global commissioning of the whole apparatus will be completed this year, well before the start of Run 3, which is scheduled for the spring of 2022.

The significant enhancements to the performance of the ALICE detector will enable the exploration of new phenomena

The significant enhancements to the performance of the ALICE detector will enable detailed, quantitative characterisation of the high-density, high-temperature phase of strongly interacting matter, together with the exploration of new phenomena. The ITS2 is at the core of this programme. With improved pointing resolution and tracking efficiency at low transverse momentum, it will enable the determination of the total production cross-section of the charm quark. This is fundamental for understanding the interplay between the production of charm quarks in the initial hard scattering, their energy loss in the QGP and possible in-medium thermal production. Moreover, the ITS2 will also make it possible to measure a larger number of different charmed and beauty hadrons, including baryons, opening the possibility for determining the heavy-flavour transport coefficients. A third area where the new ITS will have a major impact is the measurement of electron–positron pairs emitted as thermal radiation during all stages of the heavy-ion collision, which offer an insight into the bulk properties and space–time evolution of the QGP.

More in store

The full potential of the ALPIDE chip underpinning the ITS2 is yet to be fully exploited. For example, a variant of ALPIDE explored by ALICE, based on an additional low-dose deep n-type implant that forms a planar junction in the epitaxial layer below the wells containing the CMOS circuitry, delivers much faster charge collection and significantly improved radiation hardness, paving the way for sensors that can operate in much harsher radiation environments.

Into the future

Further improvements to MAPS for high-energy physics detectors could come by exploiting the rapid progress in imaging for consumer applications. One of the features offered recently by CMOS imaging sensor technologies, called stitching, will enable a new generation of MAPS with an area up to the full wafer size. Moreover, the reduction in the sensor thickness to about 30–40 μm opens the door to large-area curved sensors, making it possible to build a cylindrical layer of silicon-only sensors with a further significant reduction in the material thickness. The ALICE collaboration is already preparing a new detector based on these concepts, which consists of three cylindrical layers based on curved wafer-scale stitched sensors (see “Into the future” figure). This new vertex detector will be installed during Long Shutdown 3 towards the middle of the decade, replacing the three innermost layers of the ITS2. With the first detection layer closer to the interaction point (from 23 to 18 mm) and a reduction in the material budget close to the interaction point by a factor of six, the new vertex detector will further improve the tracking precision and efficiency at low transverse momentum.

The technologies developed by ALICE for the ITS2 detector are now being used or considered for several other applications in high-energy physics, including the vertex detector of the sPHENIX experiment at RHIC, and the inner tracking system for the NICA MPD experiment at JINR. The technology is also being applied to areas outside of the field, including in medical and space applications. The Bergen pCT collaboration and INFN Padova’s iMPACT project, for example, are developing novel ALPIDE-based devices for clinical particle therapy to reconstruct 3D human body images. The HEPD02 detector for the Chinese–Italian CSES-02 mission, meanwhile, includes a charged-particle tracker made of three layers of ALPIDE sensors that represents a pioneering test for next-generation space missions. Driven by a desire to learn more about the fundamental laws of nature, it is clear that advanced silicon-tracker technology continues to make an impact on wider society, too.

Collider neutrinos on the horizon https://cerncourier.com/a/collider-neutrinos-on-the-horizon/ Wed, 02 Jun 2021 15:57:55 +0000 https://preview-courier.web.cern.ch/?p=92559 SND@LHC and FASERv are set to make the first measurements of collider neutrinos, while opening new searches for physics beyond the Standard Model.

FASERv pilot-detector event displays

Think “neutrino detector” and images of giant installations come to mind, necessary to compensate for the vanishingly small interaction probability of neutrinos with matter. The extreme luminosity of proton-proton collisions at the LHC, however, produces a large neutrino flux in the forward direction, with energies leading to cross-sections high enough for neutrinos to be detected using a much more compact apparatus.

In March, the CERN research board approved the Scattering and Neutrino Detector (SND@LHC) for installation in an unused tunnel that links the LHC to the SPS, 480 m downstream from the ATLAS experiment. Designed to detect neutrinos produced in a hitherto unexplored pseudo-rapidity range (7.2 < η < 8.6), the experiment will complement and extend the physics reach of the other LHC experiments – in particular FASERν, which was approved last year. Construction of FASERν, which is located in an unused service tunnel on the opposite side of ATLAS along the LHC beamline (covering |η| > 9.1), was completed in March, while installation of SND@LHC is about to begin.

Both experiments will be able to detect neutrinos of all types, with SND@LHC positioned off the beamline to detect neutrinos produced at slightly larger angles. Expected to commence data-taking during LHC Run 3 in spring 2022, these latest additions to the LHC-experiment family are poised to make the first observations of collider neutrinos while opening new searches for feebly interacting particles and other new physics.

Neutrinos galore
SND@LHC will comprise 800 kg of tungsten plates interleaved with emulsion films and electronic tracker planes based on scintillating fibres. The emulsion acts as a vertex detector with micron resolution, while the tracker provides a time stamp; together the two subdetectors act as a sampling electromagnetic calorimeter. The target volume will be immediately followed by planes of scintillating bars interleaved with iron blocks serving as a hadron calorimeter, followed downstream by a muon-identification system.

SND layout

During its first phase of operation, SND@LHC is expected to collect an integrated luminosity of 150 fb⁻¹, corresponding to more than 1000 high-energy neutrino interactions. Since electron neutrinos and antineutrinos are predominantly produced by charmed-hadron decays in the pseudorapidity range explored, the experiment will enable the gluon parton-density function to be constrained in an unexplored region of very small x. With projected statistical and systematic uncertainties of 30% and 22% in the ratio between νe and ντ, and about 10% for both uncertainties in the ratio between νe and νμ at high energies, the Run-3 data will also provide unique tests of lepton flavour universality with neutrinos, and have sensitivity in the search for feebly interacting particles via scattering signatures in the detector target.

“The angular range that SND@LHC will cover is currently unexplored,” says SND@LHC spokesperson Giovanni De Lellis. “And because a large fraction of the neutrinos produced in this range come from the decays of particles made of heavy quarks, these neutrinos can be used to study heavy-quark particle production in an angular range that the other LHC experiments can’t access. These measurements are also relevant for the prediction of very high-energy neutrinos produced in cosmic-ray interactions, so the experiment is also acting as a bridge between accelerator and astroparticle physics.”

A FASER first
FASERν is an addition to the Forward Search Experiment (FASER), which was approved in March 2019 to search for light and weakly interacting long-lived particles at solid angles beyond the reach of conventional collider detectors. Comprising a small and inexpensive stack of emulsion films and tungsten plates measuring 0.25 × 0.25 × 1.35 m and weighing 1.2 tonnes, FASERν is already undergoing tests. Smaller than SND@LHC, the detector is positioned on the beam-collision axis to maximise the neutrino flux, and should detect a total of around 20,000 muon neutrinos, 1300 electron neutrinos and 20 tau neutrinos in an unexplored energy regime at the TeV scale. This will allow measurements of the interaction cross-sections of all neutrino flavours, provide constraints on non-standard neutrino interactions, and improve measurements of proton parton-density functions in certain phase-space regions.

The final detector should do much better — it will be a hundred times bigger

Jamie Boyd

In May, based on an analysis of pilot emulsion data taken in 2018 using a target mass of just 10 kg, the FASERν team reported the detection of the first neutrino-interaction candidates, based on a measured 2.7σ excess of a neutrino-like signal above muon-induced backgrounds. The result paves the way for high-energy neutrino measurements at the LHC and future colliders, explains FASER co-spokesperson Jamie Boyd: “The final detector should do much better — it will be a hundred times bigger, be exposed to much more luminosity, have muon identification capability, and be able to link observed neutrino interactions in the emulsion to the FASER spectrometer. It is quite impressive that such a small and simple detector can detect neutrinos given that usual neutrino detectors have masses measured in kilotons.”

Greening gaseous detectors https://cerncourier.com/a/greening-gaseous-detectors/ Fri, 28 May 2021 08:11:40 +0000 https://preview-courier.web.cern.ch/?p=92451 More than 200 experts participated in a workshop to study alternatives to the harmful chlorofluorocarbons which play an important role in traditional gas mixtures.

Particle-physics experiments rely heavily on gaseous detectors, thanks to their large volumes and cost effectiveness. Unfortunately, environmentally harmful chlorofluorocarbons known as freons play an important role in traditional gas mixtures. To address this issue, more than 200 gas-detector experts participated in a workshop hosted online by CERN on 22 April to study the operational behaviour of novel gases and alternative gas mixtures.

Large gas molecules absorb energy in vibrational and rotational modes of excitation

Freon-based gases are essential to many detectors currently used at CERN, especially for tracking and triggering. Examples run from muon systems, ring-imaging Cherenkov (RICH) detectors and time-projection chambers (TPCs) to wire chambers, resistive-plate chambers (RPCs) and micro-pattern gas detectors (MPGDs). While the primary gas in the mixture is typically a noble gas, adding a “quencher” gas helps achieve a stable gas gain, well separated from the noise of the electronics. Large gas molecules such as freons absorb energy in relevant vibrational and rotational modes of excitation, thereby preventing secondary effects such as photon feedback and field emission. Extensive R&D is needed to reach the stringent performance required of each gas mixture.

The CMS muon system

CERN has developed several strategies to reduce greenhouse gas (GHG) emissions from particle detectors. As demonstrated by the ALICE experiment’s TPC, upgrading gas-recirculation systems can reduce GHGs by almost 100%. When it is not possible to recirculate all of the gas mixture, gas recuperation is an option – for example, the recuperation of CF₄ by the CMS experiment’s cathode-strip-chamber (CSC) muon detector and the LHCb experiment’s RICH-2 detector. A complex gas-recuperation system for the C₂H₂F₄ (R134a) in RPC detectors is also under study, and physicists are exploring the use of commonplace gases. In the future, new silicon photomultipliers could reduce chromatic error and increase photon yield, potentially allowing CF₄ to be replaced with CO₂. Meanwhile, in LHCb’s RICH-1 detector, C₄F₁₀ could possibly be replaced with hydrocarbons like C₄H₁₀ if the flammability risk is addressed.

Eco-gases

Finally, alternative “eco-gases” are the subject of intense R&D. Eco-gases have a low global-warming potential because of their very limited stability in the atmosphere as they react with water or decompose in ultraviolet light. Unfortunately, these conditions are also present in gaseous detectors, potentially leading to detector aging. In addition to their stability, there is also the challenge of adapting current LHC detectors, given that access is difficult and many components cannot be replaced.

Roberto Guida (CERN), Davide Piccolo (Frascati), Rob Veenhof (Uludağ University) and Piet Verwilligen (Bari) convened workshop sessions at the April event. Groups from Turin, Frascati, Rome, CERN and GSI presented results based on the new hydro-fluoro-olefin (HFO) mixture with the addition of neutral gases such as helium and CO₂ as a way of lowering the high working-point voltage. Despite challenges related to the larger signal charge and streamer probability, encouraging results have been obtained in test beams in the presence of LHC-like background gamma rays. CMS’s CSC detector is an interesting example where HFO could replace CF₄. In this case, its decomposition could even be a positive factor; however, further studies are needed.

We now need to create a compendium of simulations and measurements for “green” gases in a similar way to the concerted effort in the 1990s and 2000s that proved indispensable to the design of the LHC detectors. To this end, the INRS-hosted LXCAT database enables the sharing and evaluation of data to model non-equilibrium low-temperature plasmas. Users can upload data on electron- and ion-scattering cross sections and compare “swarm” parameters. The ETH (Zürich), Aachen and HZDR (Dresden) groups illustrated measurements of transport parameters, opening possibilities of collaboration, while the Bari group sought feedback and collaboration on a proposal to precisely measure transport parameters for green gases in MPGDs using electron and laser beams.

Obtaining funding for this work can be difficult due to a lack of expected technological breakthroughs in low-energy plasma physics

Future challenges will be significant. The volumes of detector systems for the High-Luminosity LHC and the proposed Future Circular Collider, for example, range from 10 to 100 m³, posing a significant environmental threat in the case of leaks. Furthermore, an EU “F-gas” regulation has been in force since 2014, with the aim of reducing sales to one-fifth by 2030. Given the environmental impact and the uncertain availability and price of freon-based gases, preparing a mitigation plan for future experiments is of fundamental importance to the high-energy-physics community, and the next generation of detectors must be completely designed around eco-mixtures. Although obtaining funding for this work can be difficult, for example due to a lack of expected technological breakthroughs in low-energy plasma physics, the workshop showed that a vibrant cadre of physicists is committed to taking the field forward. The next workshop will take place in 2022.

In search of WISPs https://cerncourier.com/a/in-search-of-wisps/ Thu, 04 Mar 2021 13:17:30 +0000 https://preview-courier.web.cern.ch/?p=91468 Experiments such as MADMAX, IAXO and ALPS II are expanding the search for axions and other weakly interacting ‘slim’ particles that could hail from far above the TeV scale.

The ALPS II experiment at DESY

The Standard Model (SM) cannot be the complete theory of particle physics. Neutrino masses evade it. No viable dark-matter candidate is contained within it. And under its auspices the electric dipole moment of the neutron, experimentally compatible with zero, requires the cancellation of two non-vanishing SM parameters that are seemingly unrelated – the strong-CP problem. The physics explaining these mysteries may well originate from new phenomena at energy scales inaccessible to any collider in the foreseeable future. Fortunately, models involving such scales can be probed today and in the next decade by a series of experiments dedicated to searching for very weakly interacting slim particles (WISPs).

WISPs are pseudo Nambu–Goldstone bosons (pNGBs) that arise automatically in extensions of the SM from global symmetries which are broken both spontaneously and explicitly. NGBs are best known for being “eaten” by the W and Z bosons, becoming their longitudinal degrees of freedom in electroweak gauge-symmetry breaking, which underpins the Higgs mechanism; but theorists have also postulated a bevy of pNGBs that get their tiny masses by explicit symmetry breaking and are potentially discoverable as physical particles. Typical examples arising in theoretically well-motivated grand-unified theories are axions, flavons and majorons. Axions arise from a broken “Peccei–Quinn” symmetry and could potentially explain the strong-CP problem, while flavons and majorons arise from broken flavour and lepton symmetries.

The Morpurgo magnet

Being light and very weakly interacting, WISPs would be non-thermally produced in the early universe and thus remain non-relativistic during structure formation. Such particles would inevitably contribute to the dark matter of the universe. WISPs are now the target of a growing number and type of experimental searches that are complementary to new-physics searches at colliders.

Among theorists and experimentalists alike, the axion is probably the most popular WISP. Recently, massive efforts have been undertaken to improve the calculations of model-dependent relic-axion production in the early universe. This has led to a considerable broadening of the mass range compatible with the explanation of dark matter by axions. The axion could make up all of the dark matter in the universe for a symmetry-breaking scale fa between roughly 10⁸ and 10¹⁹ GeV (the lower limit being imposed by astrophysical arguments, the upper one by the Planck scale), corresponding to axion masses from 10⁻¹³ eV to 10 meV. For other light pNGBs, generically dubbed axion-like particles (ALPs), the parameter range is even broader. With many plausible relic-ALP-production mechanisms proposed by theorists, experimentalists need to cover as much of the unexplored parameter range as possible.
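
For the QCD axion the mass and decay constant are tied together; the leading-order relation commonly quoted from chiral perturbation theory is

m_a \;\simeq\; 5.7\,\mu\mathrm{eV}\,\left(\frac{10^{12}\,\mathrm{GeV}}{f_a}\right),

so fa = 10¹² GeV corresponds to m_a ≈ 5.7 µeV, and the fa window quoted above maps onto masses from roughly 10⁻¹³ eV up to the tens-of-meV scale (the exact endpoints depend on the model assumptions behind the quoted limits).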

Although the strengths of the interactions between axions or ALPs and SM particles are very weak, being inversely proportional to fa, several strategies for observing them are available. Limits and projected sensitivities span several orders of magnitude in the mass-coupling plane (see “The field of play” figure).

IAXO’s design profited greatly from experience with the ATLAS toroid

Since axions or ALPs can usually decay to two photons, an external static magnetic field can substitute one of the two photons and induce axion-to-photon conversion. Originally proposed by Pierre Sikivie, this inverse Primakoff effect can classically be described by adding source terms proportional to B and E to Maxwell’s equations. Practically, this means that inside a static homogeneous magnetic field the presence of an axion or ALP field induces electric-field oscillations – an effect readily exploited by many experiments searching for WISPs. Other processes exploited in some experimental searches and suspected to lead to axion production are their interactions with electrons, leading to axion bremsstrahlung, and their interactions with nucleons or nuclei, leading to nucleon-axion bremsstrahlung or oscillations of the electric dipole moment of the nuclei or nucleons.
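
Schematically – in natural units, and up to sign conventions that differ between references – the source terms referred to above enter Maxwell’s equations as

\nabla\cdot\mathbf{E} \;=\; \rho \;-\; g_{a\gamma}\,\nabla a\cdot\mathbf{B},
\qquad
\nabla\times\mathbf{B} \;-\; \partial_t\mathbf{E} \;=\; \mathbf{J} \;+\; g_{a\gamma}\left(\dot a\,\mathbf{B} \;-\; \nabla a\times\mathbf{E}\right),

with the homogeneous pair unchanged. For a quasi-homogeneous dark-matter axion field a(t) ≈ a₀ cos(m_a t), the dominant term g_{aγ} ȧ B acts as an effective current that drives electric-field oscillations at the frequency set by the axion mass inside a static field B – precisely the signature the experiments described below look for.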

The potential to make fundamental discoveries from small-scale experiments is a significant appeal of experimental WISP physics, however the most solidly theoretically motivated WISP parameter regions and physics questions require setups that go well beyond “table-top” dimensions. They target WISPs that flow through the galactic halo, shine from the Sun, or spring into existence when lasers pass through strong magnetic fields in the laboratory.

Dark-matter halo

Haloscopes target the detection of dark-matter WISPs in the halo of our galaxy, where non-relativistic cold-dark-matter axions or ALPs induce electric field oscillations as they pass through a magnetic field. The frequency of the oscillations corresponds to the axion mass, and the amplitude to B/fa. When limits or projections are given for these kinds of experiments, it is assumed that the particle under scrutiny homogeneously makes up all of the dark matter in the universe, introducing significant cosmological model dependence.

Axion–photon coupling versus axion mass plane

The furthest developed currently operating haloscopes are based on resonant enhancement of the axion-induced electric-field oscillations in tunable resonant cavities. Using this method, the presently running ADMX project at the University of Washington has the sensitivity to discover dark-matter axions with masses of a few µeV. Nuclear resonance methods could be sensitive to halo dark-matter axions with mass below 1 neV and “fuzzy” dark-matter ALPs down to 10⁻²² eV within the next decade, for example at the CASPEr experiments being developed at the University of Mainz and Boston University. Meanwhile, experiments based on classical LC circuits, such as ABRACADABRA at MIT, are being designed to measure ALP- or axion-induced magnetic field oscillations in the centre of a toroidal magnet. These could be sensitive in a mass range between 10 neV and 1 µeV.

ALPS II is the first laser-based setup to fully exploit resonance techniques

For dark-matter axions with masses up to approximately 50 µeV, promising developments in cavity technologies such as multiple matched cavities and superconducting or dielectric cavities are ongoing at several locations, including at CAPP in South Korea, the University of Western Australia, INFN Legnaro and the RADES detector, which has taken data as part of the CAST experiment at CERN. Above ~40 µeV, however, the cavity concept becomes more and more challenging, as sensitivity scales with the volume of the resonant cavity, which decreases dramatically with increasing mass (as roughly 1/ma³). To reach sensitivity at higher masses, in the region of a few hundred µeV, a novel “dielectric haloscope” is being developed by the MADMAX (Magnetized Disk and Mirror Axion experiment) collaboration for potential installation at DESY. It exploits the fact that static magnetic-field boundaries between media with different dielectric constants lead to tiny power emissions that compensate the discontinuity in the axion-induced electric fields in neighbouring media. If multiple surfaces are stacked in front of each other, this should lead to constructive interference, boosting the emitted power from the expected axion dark matter in the desired mass range to detectable levels. Other novel haloscope concepts, based on meta-materials (“plasma haloscopes”, for example) and topological insulators, are also currently being developed. These could have sensitivity to even higher axion masses, up to a few meV.
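
The 1/ma³ scaling can be made tangible with an idealised cylindrical cavity read out on its TM010 mode; the aspect ratio below is an assumption for illustration only.

# Sketch: cavity size versus axion mass for an idealised cylindrical cavity on
# its TM010 mode, with the length assumed proportional to the radius.
import math

h_eVs = 4.136e-15                        # Planck constant in eV s
c = 3.0e8                                # m/s

def cavity(mass_ueV):
    freq = mass_ueV * 1e-6 / h_eVs       # photon frequency matching the axion mass
    radius = 2.405 * c / (2 * math.pi * freq)   # TM010 resonance condition
    length = 5 * radius                  # assumed aspect ratio
    return freq, math.pi * radius**2 * length

for m in (4.0, 40.0, 100.0):             # axion masses in ueV
    f, vol = cavity(m)
    print(f"m_a = {m:5.1f} ueV -> f = {f/1e9:5.2f} GHz, V ~ {vol:.2e} m^3")
# The volume falls as 1/m_a^3: between 4 ueV and 100 ueV it drops by ~1.6e4,
# which is why cavity haloscopes struggle above a few tens of ueV.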

Staying in tune

In principle, axion-dark-matter detection should be relatively simple, given the very high number density of particles – approximately 3 × 10¹³ axions/cm³ for an axion mass of 10 µeV – and the well-established technique of resonant axion-to-photon conversion. But, as the axion mass is unknown, the experiments must be painstakingly tuned to each possible mass value in turn. After about 15 years of steady progress, the ADMX experiment has reached QCD-axion dark-matter sensitivity in the mass regime of a few µeV.
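
A one-line version of that number-density estimate, assuming the commonly used local dark-matter density of about 0.3 GeV/cm³ (an assumed value, not one quoted in the article):

n_a \;=\; \frac{\rho_{\rm DM}}{m_a} \;\approx\; \frac{0.3\ \mathrm{GeV\,cm^{-3}}}{10\ \mu\mathrm{eV}} \;\approx\; 3\times 10^{13}\ \mathrm{cm^{-3}}.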

ADMX uses tunable microwave resonators inside a strong solenoidal magnetic field, and modern quantum sensors for readout. Unfortunately, however, this technology is not scalable to the higher axion-mass regions as preferred, for example, by cosmological models where Peccei–Quinn symmetry breaking happened after an inflationary phase of the universe. That’s where MADMAX comes in. The collaboration is working on the dielectric-haloscope concept – initiated and led by scientists at the Max Planck Institute for Physics in Munich – to investigate the mass region around 100 µeV.

Astrophysical hints

Globular clusters

Weakly interacting slim particles (WISPs) could be produced in hot astrophysical plasmas and transport energy out of stars, including the Sun, stellar remnants and other dense sources. Observed lifetimes and energy-loss rates can therefore probe their existence. For the axion, or an axion-like particle (ALP) with sub-MeV mass that couples to nucleons, the most stringent limit, fa ≳ 10⁸ GeV, stems from the duration of the neutrino signal from the proto-neutron star formed in Supernova 1987A.

Tantalisingly, there are stellar hints from observations of red giants, helium-burning stars, white dwarfs and pulsars that seem to indicate energy losses with slight excesses with respect to those expected from standard energy emission by neutrinos. These hints may be explained by axions with masses below 100 meV or sub-keV-mass ALPs with a coupling to both electrons and photons.

Other observations suggest that TeV photons from distant blazars are less absorbed than expected by standard interactions with extragalactic background light – the so-called transparency hint. This could be explained by the conversion of photons into ALPs in the magnetic field of the source, and back to photons in astrophysical magnetic fields. Interestingly, these would have about the same ALP–photon coupling strength as indicated by the observed stellar anomalies, though with a mass that is incompatible with both ALPs which can explain dark matter and with QCD axions (see “The field of play” figure).

MADMAX will use a huge ~9 T superconducting dipole magnet with a bore of about 1.35 m and a stored energy of roughly 480 MJ. Such a magnet has never been built before. The MADMAX collaboration teamed up with CEA-IRFU and Bilfinger-Noell and successfully worked out a conceptual design. First steps towards qualifying the conductor are under way. The plan is for the magnet to be installed at DESY inside the old iron yoke of the former HERA experiment H1. DESY is already preparing the required infrastructure, including the liquid-helium supply necessary to cool the magnet. R&D for the dielectric booster, with up to 80 adjustable 1.25 m2 disks, is in full swing.

A first prototype, containing a more modest 20 discs of 30 cm diameter, will be tested in the “Morpurgo” magnet at CERN during future accelerator shutdowns (see “Haloscope home” figure). With a peak field strength of 1.6 T, its dipole field will allow new ALP-dark-matter parameter regions to be probed, though the main purpose of the prototype is to demonstrate the operation of the booster system in cryogenic surroundings inside a magnetic field. The MADMAX collaboration is extremely happy to have found a suitable magnet at CERN for such tests. If sufficient funds can be acquired within the next two to three years for magnet construction, and provided that the prototype efforts at CERN are successful, MADMAX could start data taking at DESY in 2028.

While direct dark-matter search experiments like ADMX and MADMAX offer by far the highest sensitivity for axion searches, this is based on the assumption that the dark matter problem is solved by axions, and if no signal is discovered any claim of an exclusion limit must rely on specific cosmological assumptions. Therefore, other less model-dependent experiments, such as helioscopes or light shining through a wall (LSW) experiments, are extremely beneficial in addition to direct dark-matter searches.

Solar axions

In contrast to dark-matter axions or ALPs, those produced in the Sun or in the laboratory should have considerable momentum. Indeed, solar axions or ALPs should have energies of a few keV, corresponding to the temperature at which they are produced. These could be detected by helioscopes, which seek to use the inverse Primakoff effect to convert solar axions or ALPs into X-rays in a magnet pointed towards the Sun, as at the CERN Axion Solar Telescope (CAST) experiment. Helioscopes could cover the mass range compatible with the simplest axion models, in the vicinity of 10 meV, and could be sensitive to ALPs with masses below 1 eV without any tuning at all.

The CAST helioscope, which reused an LHC prototype dipole magnet, has driven this field in the past decade, and provides the most sensitive exclusion limits to date. Going beyond CAST calls for a much larger magnet. For the next-generation International Axion Observatory (IAXO) helioscope, CERN members of the international collaboration worked out a conceptual design for a 20 m-long toroidal magnet with eight 60 cm-diameter bores. IAXO’s design profited greatly from experience with the ATLAS toroid.

BabyIAXO helioscope

In the past three years, the collaboration, led by the University of Zaragoza, has been concentrating its activities on the BabyIAXO prototype in order to finesse the magnet concept, the X-ray telescopes necessary to focus photons from solar axion conversion and the low-background detectors. BabyIAXO will increase the signal-to-noise ratio of CAST by two orders of magnitude; IAXO by a further two orders of magnitude.

In December 2020 the directorates of CERN and DESY signed a collaboration agreement regarding BabyIAXO: CERN will provide the detailed design of the prototype magnet including its cryostat, while DESY will design and prepare the movable platform and infrastructure (see “Prototype” figure). BabyIAXO will be located at DESY in Hamburg. The collaboration hopes to attract the remaining funds for BabyIAXO so construction can begin in 2021 and first science runs could take place in 2025. The timeline for IAXO will depend strongly on experiences during the construction and operation of BabyIAXO, with first light potentially possible in 2028.

Light shining through a wall

In contrast to haloscopes, helioscopes do not rely on the assumption that all dark matter is made up of axions. But light-shining-through-wall (LSW) experiments are even less model dependent with respect to ALP production. Here, intense laser light could be converted to axions or ALPs inside a strong magnetic field by the Primakoff effect. Behind a light-impenetrable wall they would be re-converted to photons and detected at the same wavelength as the laser light. The disadvantage of LSW experiments is that they only reach sensitivity to ALPs with a mass up to a few hundred µeV and with comparatively high coupling to photons. However, this is sensitive enough to test the parameter range consistent with the transparency hint and parts of the mass range consistent with the stellar hints (see “Astrophysical hints” panel).
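
For orientation, a sketch of the standard LSW rate scaling (natural units, massless-ALP limit, no resonant enhancement):

P_{\gamma\to a} \;\simeq\; \tfrac{1}{4}\left(g_{a\gamma} B L\right)^{2},
\qquad
P_{\rm LSW} \;=\; P_{\gamma\to a}\,P_{a\to\gamma} \;\propto\; \left(g_{a\gamma} B L\right)^{4},

valid while the photon and ALP remain in phase over the magnet length L, which is what limits the accessible mass to the sub-meV region; optical resonators on both sides of the wall, as in ALPS II below, multiply the rate by their power build-up factors.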

The Any Light Particle Search (ALPS II) at DESY follows this approach. Since any ALPs would be generated in the experiment itself, no assumptions about their production are needed. ALPS II is based on 24 modified superconducting dipole magnets that have been straightened by brute-force deformation, following their former existence in the proton accelerator of the HERA complex. With the help of two 124 m-long high-finesse optical resonators, enclosed by the magnets on both sides of the wall, ALPS II is also the first laser-based setup to fully exploit resonance techniques. Two readout systems capable of measuring a 1064 nm photon flux down to a rate of 2 × 10⁻⁵ s⁻¹ have been developed by the collaboration. Compared to the present best LSW limits, provided by OSQAR at CERN, the signal-to-noise ratio will rise by no less than 12 orders of magnitude at ALPS II. Nevertheless, MADMAX would surpass ALPS II in sensitivity to the axion–photon coupling strength by more than three orders of magnitude. This is the price to pay for a model-independent experiment – however, ALPS II principally targets not dark-matter candidates but ALPs hinted at by astrophysical phenomena.

Tunnelling ahead

The installation of the 24 dipole magnets in a straight section of the HERA tunnel was completed in 2020. Three clean rooms at both ends and in the centre of the experiment were also installed, and optics commissioning is under way. A first science run is expected for autumn 2021.

ALPS II

In the overlapping mass region up to 0.1 meV, the sensitivities of ALPS II and BabyIAXO are roughly equal. In the event of a discovery, this would provide a unique opportunity to study the new WISP. Excitingly, a similar case might be realised for IAXO: combining the optics and detectors of ALPS II with simplified versions of the dipole magnets being studied for FCC-hh would provide an LSW experiment with “IAXO sensitivity” regarding the axion-photon coupling, albeit in a reduced mass range. This has been outlined as the putative JURA (Joint Undertaking on Research for Axions) experiment in the context of the CERN-led Physics Beyond Colliders study.

The past decade has delivered significant developments in axion and ALP theory and phenomenology. This has been complemented by progress in experimental methods to cover a large fraction of the interesting axion and ALP parameter range. In close collaboration with universities and institutes across the globe, CERN, DESY and the Max Planck society will together pave the road to the exciting results that are expected this decade.

The post In search of WISPs appeared first on CERN Courier.

]]>
Feature Experiments such as MADMAX, IAXO and ALPS II are expanding the search for axions and other weakly interacting ‘slim’ particles that could hail from far above the TeV scale. https://cerncourier.com/wp-content/uploads/2021/02/CCMarApr21_WISPs_ALPS.jpg
Final stretch for LHC upgrades https://cerncourier.com/a/final-stretch-for-lhc-upgrades/ Wed, 16 Dec 2020 13:29:31 +0000 https://preview-courier.web.cern.ch/?p=90369 After two years of intense work, accelerator physicists are cooling the LHC to operational temperatures and eyeing the final stretch of the road to Run 3.

The post Final stretch for LHC upgrades appeared first on CERN Courier.

]]>
The second long shutdown of the LHC and its injector complex began two years ago, at the start of 2019. Since then, sweeping upgrades and key maintenance work have rejuvenated the accelerator complex, leaving injectors fit for a decade or more of high-brightness beam production. With major detector upgrades proceeding in parallel, physicists are eyeing the final stretch of the road to Run 3 – which promises to deliver to the experiments an integrated luminosity twice that of Run 1 and Run 2 combined in less than three years of operations.

The ceremonial key to the Super Proton Synchrotron (SPS) was handed over to SPS operations on 4 December, signalling the successful completion of the LHC Injectors Upgrade (LIU) programme. “The amazing accomplishment of delivering the machine keys with only a small delay is thanks to the hard work, dedication and flexibility of many,” says head of the operations group Rende Steerenberg, who emphasised the thoroughness with which special measures to ensure the safety of personnel during the COVID-19 pandemic were observed. “A large number of physicists, engineers and technicians strived day-in day-out to complete the upgrade and consolidation of the accelerator complex safely and efficiently following the spring lockdown.”

Super synchrotrons

Major changes to the SPS include the dismantling and remounting of its radio-frequency cavities, the installation of new power amplifiers, and the installation of state-of-the-art beam-control and beam-dump systems. First beam from the new Linac 4 was injected into the upgraded Proton Synchrotron Booster (PSB) on 7 December. The PSB will undergo a commissioning period before injecting beam into the Proton Synchrotron (PS) on 1 March. It will then be the turn of the PS to be commissioned, before sending beam to the SPS on 12 April.

Among many changes to the LHC, all 1232 dipole-magnet interconnections were opened and their electrical insulation consolidated, removing the limitation that prevented the LHC from reaching 7 TeV per beam during Run 2. The cryogenics team cooled the first of the LHC’s eight sectors to its 1.9 K operational temperature on 15 November, with five other sectors being cooled in parallel and the full machine set to be cold by spring. After handing over to the electrical quality-assurance team for the final electrical tests, powering tests and a long campaign of quench training will take place to enable the LHC magnets to support fields in excess of those required during Run 2, when the beam energy was 6.5 TeV. Test beams are due to circulate at the end of September 2021, just four months later than planned before the COVID-19 pandemic.

Detector work

In parallel to work on CERN’s accelerator infrastructure, experimental physicists are working hard to complete major upgrades to the detectors which anticipate the stringent requirements of triggering and reconstructing events at the upgraded LHC. The refurbishment of trigger electronics for the ATLAS detector’s liquid-argon calorimeter is progressing quickly and the construction of the muon detector’s two new “small wheels” is set to be completed by October 2021. With a complex upgrade of the CMS detector’s muon system now complete, a newly built beam pipe will soon be fitted in the cavern, followed by the refurbished pixel detector with a new inner layer; magnet upgrades and shielding consolidation will then follow. With ALICE’s time-projection chamber now reinstalled, work is underway to install the detector’s new muon forward tracker, and a new 10 GPixel inner-tracking system will be installed in the first quarter of 2021. Meanwhile, the next steps for a significant revamp to the LHCb detector are the mounting of new vertex-locator modules and the first sensitive detector parts of the new ring-imaging Cherenkov detector during the first months of 2021. Following the completion of the upgrade programmes, Run 3 of the LHC will begin in March 2022.

Accelerator infrastructure relating to earlier stages in the lives of LHC protons is already beginning to be recommissioned. Hydrogen ions from a local source have been transferred to the ELENA ring to commission the newly installed transfer lines to CERN’s antimatter experiments. A newly developed source has fed lead ions into Linac 3, which provides ions to the LHC’s physics experiments, while pre-irradiated targets have provided stable isotopes to the ISOLDE nuclear-physics facility. Many experiments at ISOLDE and the PS-SPS complex will be able to start taking data in summer 2021.

No changes have been made to the LHC schedule beyond 2022. Following the completion of Run 3, the third long shutdown will begin at the start of 2025 for the LHC, and in early 2026 for the injector chain, and will end in mid-2027. During this time the installation of the High-Luminosity LHC (HL-LHC) will be completed, adding major high-technology upgrades to CERN’s flagship machine. In concert with the programme of injector upgrades completed in LS2, these will allow the HL-LHC to deliver an order-of-magnitude greater integrated luminosity to the experiments than its predecessor.

The post Final stretch for LHC upgrades appeared first on CERN Courier.

]]>
News After two years of intense work, accelerator physicists are cooling the LHC to operational temperatures and eyeing the final stretch of the road to Run 3. https://cerncourier.com/wp-content/uploads/2020/12/CERN-PHOTO-202011-145-7.jpg
A long-lived paradigm shift https://cerncourier.com/a/a-long-lived-paradigm-shift/ Fri, 27 Nov 2020 12:50:36 +0000 https://preview-courier.web.cern.ch/?p=90136 Experimentalists and theorists met from 16 to 19 November for the eighth workshop of the LHC's long-lived particles community.

The post A long-lived paradigm shift appeared first on CERN Courier.

]]>
Searches for new physics at high-energy colliders traditionally target heavy new particles with short lifetimes, and these assumptions shape detector design, data acquisition and analysis methods. However, there could be new long-lived particles (LLPs) that travel through the detectors without decaying, either because they are light or because their couplings are small. Searches for LLPs have been carried out at the LHC since the start of data taking, and at previous colliders, but they have attracted growing interest in recent years, not least in light of the absence of new particles in more mainstream searches.
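
The figure of merit is the mean decay length in the laboratory frame, which links the proper lifetime to the boost of the particle. As a rough illustration (not a number quoted in the workshop report):

\[
L = \beta\gamma c\tau ,
\]

so a particle with βγ ≈ 10 and cτ = 1 m typically decays around 10 m from the interaction point – beyond the inner trackers, but potentially within the outer detector systems or a dedicated downstream detector.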

Detecting LLPs at the LHC experiments requires a paradigm shift with respect to the usual data-analysis and trigger strategies. To that end, more than 200 experimentalists and theorists met online from 16 to 19 November for the eighth workshop of the LHC LLP community.

Strong theoretical motivations underpin searches for LLPs. For example, dark matter could be part of a larger dark sector, parallel to the Standard Model (SM), with new particles and interactions. If dark quarks could be produced at the LHC, they would undergo fragmentation and hadronisation in the dark sector resulting in characteristic “dark showers” — one of the focuses of the workshop. Collider signatures for dark showers depend on the fraction of unstable particles they contain and their lifetime, with a range of categories presenting their own analysis challenges: QCD-like jets, semi-visible jets, emerging jets, and displaced vertices with missing transverse energy. Delegates agreed on the importance of connecting collider-level searches for dark showers with astrophysical and cosmological scales. In a similar spirit of collaboration across communities, a joint session with the HEP Software Foundation focused on triggering and reconstruction software for dedicated LLP detectors.

Heavy neutral leptons

The discovery of heavy neutral leptons (HNLs) could address different open questions of the SM. For example, neutrinos are expected to be left-handed and massless in the SM, but oscillate between flavours as their wavefunction evolves, providing evidence for as-yet immeasurably small masses. One way to fix this problem is to complete the field pattern of the SM with right-handed HNLs. The number and other characteristics of HNLs depend on the model considered, but in many cases HNLs are long-lived and connect to other important questions of the SM, such as dark matter and the baryon asymmetry of the universe. There are many ongoing searches for HNLs at the LHC and many more proposed elsewhere. During the November workshop the discussion touched on different models and simulations, reviewing what is available and what is needed for the different signal benchmarks.

Another focus was the reinterpretation of previous LLP searches. Recasting public results is common practice at the LHC and a good way to increase physics impact, but reinterpreting LLP searches is more difficult than for prompt searches due to the use of non-standard selections and analysis-specific objects.

The latest results from CERN experiments were presented. ATLAS reported the first LHC search for sleptons using displaced-lepton final states, greatly improving sensitivity compared to LEP. CMS presented a search for strongly interacting massive particles with trackless jets, and a search for long-lived particles decaying to jets with displaced vertices. LHCb reported searches for low-mass dimuon resonances and a search for heavy neutrinos in the decay of a W boson into two muons and a jet, and the NA62 experiment at CERN’s SPS presented a search for π⁰ decays to invisible particles. These results bring important new constraints on the properties and parameters of LLP models.

Dedicated detectors

A series of dedicated LLP detectors at CERN – including the Forward Physics Facility for the HL-LHC, the CMS forward detector, FASER, CODEX-b and CODEX-β, MilliQan, MoEDAL-MAPP, MATHUSLA, ANUBIS, SND@LHC, and FORMOSA – are in different stages between proposal and operation. These additional detectors, located at various distances from the LHC experiments, have diverse strengths: some, like MilliQan, look for specific particles (milli-charged particles, in that case), whereas others, like MATHUSLA, offer a very low background environment in which to search for neutral LLPs. These complementary efforts will, in the near future, provide all the different pieces needed to build the most complete picture possible of a variety of LLP searches, from axion-like particles to exotic Higgs decays, potentially opening the door to a dark sector.

The workshop featured a dedicated session on future colliders for the first time. Designing these experiments with LLPs in mind would radically boost discovery chances. Key considerations will be tracking and the tracking volume, timing information, trigger and DAQ, as well as potential additional instrumentation in tunnels or using the experimental caverns.

Together with the range of new results presented and many more in the pipeline, the 2020 LLP workshop was representative of a vibrant research community, constantly pushing the “lifetime frontier”.

The post A long-lived paradigm shift appeared first on CERN Courier.

]]>
Meeting report Experimentalists and theorists met from 16 to 19 November for the eighth workshop of the LHC's long-lived particles community. https://cerncourier.com/wp-content/uploads/2020/11/EXO-19-011_zoom2.png
How to find a Higgs boson https://cerncourier.com/a/how-to-find-a-higgs-boson/ Thu, 12 Nov 2020 10:50:37 +0000 https://preview-courier.web.cern.ch/?p=89971 Ivo van Vulpen’s popular book isn’t an airy pamphlet cashing in on the 2012 discovery, but a realistic representation of what it’s like to be a particle physicist.

The post How to find a Higgs boson appeared first on CERN Courier.

]]>
How to Find a Higgs Boson

Finding Higgs bosons can seem esoteric to the uninitiated. The spouse of a colleague of mine has such trouble describing what their partner does that they read from a card in the event that they are questioned on the subject. Do you experience similar difficulties in describing what you do to loved ones? If so, then Ivo van Vulpen’s book How to find a Higgs boson may provide you with an ideal gift opportunity.

Readers will feel like they are talking physics over a drink with van Vulpen, who is a lecturer at the University of Amsterdam and a member of the ATLAS collaboration. Originally published as De melodie van de natuur, the book’s Dutch origins are unmistakable. We read about Hans Lippershey’s lenses, Antonie van Leeuwenhoek’s microbiology, Antonius van den Broek’s association of charge with the number of electrons in an atom, and even Erik Verlinde’s theory of gravity as an emergent entropic force. Though the Higgs is dangled at the end of chapters as a carrot to get the reader to keep reading, van Vulpen’s text isn’t an airy pamphlet cashing in on the 2012 discovery, but a realistic representation of what it’s like to be a particle physicist. When he counsels budding scientists to equip themselves better than the North Pole explorer who sets out with a Hugo Boss suit, a cheese slicer and a bicycle, he tells us as much about himself as about what it’s like to be a physicist.

Van Vulpen is a truth teller who isn’t afraid to dent the romantic image of serene progress orchestrated by a parade of geniuses. 9999 out of every 10,000 predictions from “formula whisperers” (theorists) turn out to be complete hogwash, he writes, in the English translation by David McKay. Sociological realities such as “mixed CMS–ATLAS” couples temper the physics, which is unabashedly challenging and unvarnished. The book boasts a particularly lucid and intelligible description of particle detectors for the general reader, and has a nice focus on applications. Particle accelerators are discussed in relation to the “colour X-rays” of the Medipix project. Spin in the context of MRI. Radioactivity with reference to locating blocked arteries. Antimatter in the context of PET scans. Key ideas are brought to life in cartoons by Serena Oggero, formerly of the LHCb collaboration.

Attentive readers will occasionally be frustrated. For example, despite a stated aim of the book being to fight “formulaphobia”, Bohr’s famous recipe for energy levels lacks the crucial minus sign just a few lines before a listing of –3.6 eV (as opposed to –13.6 eV) for the energy of the ground state. Van Vulpen compares the beauty seen by physicists in equations to the beauty glimpsed by musicians as they read sheet music, but then prints Einstein’s field equations with half the tensor indices missing. But to quibble about typos in the English translation would be to miss the point of the book, which is to allow readers “to impress friends over a drink,” and talk physics “next time you’re in a bar”. Van Vulpen’s writing is always entertaining, but never condescending. Filled with amusing but perceptive one-liners, the book is perfectly calibrated for readers who don’t usually enjoy science. Life in a civilisation that evolved before supernovas would have no cutlery, he observes. Neutrinos are the David Bowie of particles. The weak interaction is like a dog on an attometre-long chain.
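
For reference, the Bohr formula in question – with the minus sign the review notes went missing – gives the hydrogen energy levels as

\[
E_n = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots,
\]

so the ground state (n = 1) indeed sits at –13.6 eV.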

This book could be the perfect gift for a curious spouse. But beware: fielding questions on the excellent last chapter, which takes in supersymmetry, SO(10), and millimetre-scale extra dimensions, may require some revision.

The post How to find a Higgs boson appeared first on CERN Courier.

]]>
Review Ivo van Vulpen’s popular book isn’t an airy pamphlet cashing in on the 2012 discovery, but a realistic representation of what it’s like to be a particle physicist. https://cerncourier.com/wp-content/uploads/2020/11/CCNovDec20_REV_Ivo_feature.jpg
Neutrinos for peace https://cerncourier.com/a/neutrinos-for-peace/ Tue, 10 Nov 2020 18:12:00 +0000 https://preview-courier.web.cern.ch/?p=89440 Detectors similar to those used to hunt for sterile neutrinos could help guard against the extraction of plutonium-239 for nuclear weapons, writes Patrick Huber.

The post Neutrinos for peace appeared first on CERN Courier.

]]>
The PROSPECT neutrino detector

The first nuclear-weapons test shook the desert in New Mexico 75 years ago. Weeks later, Hiroshima and Nagasaki were obliterated. So far, these two Japanese cities have been the only ones to suffer such a fate. Neutrinos can help to ensure that no other city has to be added to this dreadful list.

At the height of the arms race between the US and the USSR, stockpiles of nuclear weapons exceeded 50,000 warheads, with the majority being thermonuclear designs vastly more destructive than the fission bombs used in World War II. Significant reductions in global nuclear stockpiles followed the end of the Cold War, but the US and Russia still have about 12,500 nuclear weapons in total, and the other seven nuclear-armed nations have about 1500. Today, the politics of non-proliferation is once again tense and unpredictable. New nuclear security challenges have appeared, often from unexpected actors, as a result of leadership changes on both sides of the table. Nuclear arms races and the dissolution of arms-control treaties have yet again become a real possibility. A regional nuclear war involving just 1% of the global arsenal would cause a massive loss of life, trigger climate effects leading to crop failures and jeopardise the food supply of a billion people. Until we achieve global disarmament, nuclear non-proliferation efforts and arms control are still the most effective tools for nuclear security.

Not a bang but a whimper

The story of the neutrino is closely tied to nuclear weapons. The first serious proposal to detect the particle hypothesised by Pauli, put forward by Clyde Cowan and Frederick Reines in the early 1950s, was to use a nuclear explosion as the source (see “Daring experiment” figure). Inverse beta decay, whereby an electron-antineutrino strikes a free proton and transforms it into a neutron and a positron, was to be the detection reaction. The proposal was approved in 1952 as an addition to an already planned atmospheric nuclear-weapons test. However, while preparing for this experiment, Cowan and Reines realised that by capturing the neutron on a cadmium nucleus, and observing the delayed coincidence between the positron and this neutron, they could use the lower, but steady flux of neutrinos from a nuclear reactor instead (see “First detection” figure). This technique is still used today, but with gadolinium or lithium in place of cadmium.
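
In symbols, the detection reaction and the delayed-coincidence tag described above read (with cadmium, as in the original experiment):

\[
\bar{\nu}_e + p \to e^{+} + n, \qquad n + \text{Cd} \to \text{Cd}^{*} \to \text{Cd} + \gamma\text{'s},
\]

with the prompt positron signal followed microseconds later by the gamma rays from the neutron capture.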

Proposal to discover particles using a nuclear explosion

The P reactor at the Savannah River site in South Carolina, which had been built and used to make plutonium and tritium for nuclear weapons, eventually hosted the successful experiment that first detected the neutrino in 1956. Neutrino experiments testing the particle’s properties, including oscillation searches, continued there until 1988, when the P reactor was shut down.

Neutrinos are not produced in nuclear fission itself, but by the beta decays of neutron-rich fission fragments – on average about six per fission. In a typical reactor fuelled by natural uranium or low-enriched uranium, the reactor starts out with only uranium-235 as its fuel. During operation a significant number of neutrons are absorbed on uranium-238, which is far more abundant, leading to the formation of uranium-239, which after two beta decays becomes plutonium-239. Plutonium-239 eventually contributes to about 40% of the fissions, and hence energy production, in a commercial reactor. It is also the isotope used in nuclear weapons.
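
Written out, the breeding chain referred to here – standard reactor physics, added for clarity – is

\[
{}^{238}\text{U} + n \;\to\; {}^{239}\text{U} \;\xrightarrow{\beta^{-},\ \approx 23\ \text{min}}\; {}^{239}\text{Np} \;\xrightarrow{\beta^{-},\ \approx 2.4\ \text{d}}\; {}^{239}\text{Pu}.
\]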

The dual-use nature of reactors is at the crux of nuclear non-proliferation. What distinguishes a plutonium-production reactor from a regular reactor producing electricity is whether it is operated in such a way that the plutonium can be taken out of the reactor core before it deteriorates and becomes difficult to use in weapons applications. A reactor with a low content of plutonium-239 emits more, and higher-energy, neutrinos than one rich in plutonium-239.

Lev Mikaelyan and Alexander Borovoi, from the Kurchatov Institute in Moscow, realised that neutrino emissions can be used to infer the power and plutonium content of a reactor. In a series of trailblazing experiments at the Rovno nuclear power plant in the 1980s and early 1990s, their group demonstrated that a tonne-scale underground neutrino detector situated 10 to 20 metres from a reactor can indeed track its power and plutonium content.

The significant drawback of neutrino detectors in the 1980s was that they needed to be situated underground, beneath a substantial overburden of rock, to shield them from cosmic rays. This greatly limited potential deployment sites. There was a series of application-related experiments – notably the successful SONGS experiment conducted by researchers at Lawrence Livermore National Laboratory, which aimed to reduce cost and improve the robustness and remote operation of neutrino detectors – but all of these detectors still needed shielding.

From cadmium to gadolinium

Synergies with fundamental physics grew in the 1990s, when the evidence for neutrino oscillations was becoming impossible to ignore. With the range of potential oscillation frequencies narrowing, the Palo Verde and Chooz reactor experiments placed multi-tonne detectors about 1 km from nuclear reactors, and sought to measure the relatively small θ13 parameter of the neutrino mixing matrix, which expresses the mixing between electron neutrinos and the third neutrino mass eigenstate. Both experiments used large amounts of liquid organic scintillator doped with gadolinium. The goal was to tag antineutrino events by capturing the neutrons on gadolinium, rather than the cadmium used by Reines and Cowan. Gadolinium produces 8 MeV of gamma rays upon de-excitation after a neutron capture. As it has an enormous neutron-capture cross section, even small amounts greatly enhance an experiment’s ability to identify neutrons.

Delayed coincidence detection scheme

Eventually, neutrino oscillations became an accepted fact, redoubling the interest in measuring θ13. This resulted in three new experiments: Double Chooz in France, RENO in South Korea, and Daya Bay in China. Learning lessons from Palo Verde and Chooz, the experiments successfully measured θ13 more precisely than any other neutrino mixing parameter. A spin-off from the Double Chooz experiment was the Nucifer detector (see “Purpose driven” figure), which demonstrated the operation of a robust sub-tonne-scale detector designed with missions to monitor reactors in mind, in alignment with requirements formulated at a 2008 workshop held by the International Atomic Energy Agency (IAEA). However, Nucifer still needed a significant overburden.

In 2011, however, shortly before the experiments established that θ13 is not zero, fundamental research once again galvanised the development of detector technology for reactor monitoring. In the run-up to the Double Chooz experiment, a group at Saclay started to re-evaluate the predictions for reactor neutrino fluxes – then and now based on measurements at the Institut Laue-Langevin in the 1980s – and found to their surprise that the reactor flux prediction came out 6% higher than before. Given that all prior experiments were in agreement with the old flux predictions, neutrinos were missing. This “reactor-antineutrino anomaly” persists to this day. A sterile neutrino with a mass of about 1 eV would be a simple explanation. This mass range has been suggested by experiments with accelerator neutrinos, most notably LSND and MiniBooNE, though it conflicts with predictions that muon neutrinos should oscillate into such a sterile neutrino, which experiments such as MINOS+ have failed to confirm.

To directly observe the high-frequency oscillations of an eV-scale sterile neutrino you need to get within about 10 m of the reactor. At this distance, backgrounds from the operation of the reactor are often non-negligible, and no overburden is possible – the same conditions a detector on a safeguards mission would encounter.

From gadolinium to lithium

Around half a dozen experimental groups are chasing sterile neutrinos using small detectors close to reactors. Some of the most advanced designs use fine spatial segmentation to reject backgrounds, and replace gadolinium with lithium-6 as the nucleus to capture and tag neutrons. Lithium has the advantage that upon neutron capture it produces an alpha particle and a triton rather than a handful of photons, resulting in a very well localised tag. In a small detector this improves event containment and thus efficiency, and also helps constrain event topology.

Following the lithium and finely segmented technical paths, the PROSPECT collaboration and the CHANDLER collaboration (see “Rapid deployment” figure), in which I participate, independently reported the detection of a neutrino spectrum with minimal overburden and high detection efficiency in 2018. This is a major milestone in making non-proliferation applications a reality, since it is the first demonstration of the technology needed for tonne-scale detectors capable of monitoring the plutonium content of a nuclear reactor that could be universally deployed without the need for special site preparation.

The main difference between the two detectors is that PROSPECT, which reported its near-final sterile neutrino limit at the Neutrino 2020 conference, uses a traditional approach with liquid scintillator, whereas CHANDLER, currently an R&D project, uses plastic scintillator. The use of plastic scintillator allows the deployment time-frame to be shortened to less than 24 hours. On the other hand, liquid scintillator allows the exploitation of pulse-shape discrimination to reject cosmic-ray neutron backgrounds, allowing PROSPECT to achieve a much better signal-to-background ratio than any plastic detector to date. Active R&D is seeking to improve topological reconstruction in plastic detectors and imbue them with pulse-shape discrimination. In addition, a number of safeguard-specific detector R&D experiments have successfully detected reactor neutrinos using plastic scintillator in conjunction with gadolinium. In the UK, the VIDARR collaboration has seen neutrinos from the Wylfa reactor, and in Japan the PANDA collaboration successfully operated a truck-mounted detector.

In parallel to detector development, studies are being undertaken to understand how reactor monitoring with neutrinos would impact nuclear security and support non-proliferation objectives. Two very relevant situations being studied are the 2015 Iran Deal – the Joint Comprehensive Plan of Action (JCPOA) – and verification concepts for a future agreement with North Korea.

Nuclear diplomacy

One of the sticking points in negotiating the 2015 Iran deal was the future of the IR-40 reactor, which was being constructed at Arak, an industrial city in central Iran. The IR-40 was planned to be a 40 MW reactor fuelled by natural uranium and moderated with heavy water, with a stated purpose of isotope production for medical and scientific use. The choice of fuel and moderator is interesting, as it meshes with Iranian capabilities and would serve the stated purpose well and be cost effective, since no uranium enrichment is needed. Equally, however, if one were to design a plutonium-production reactor for a nascent weapons programme, this combination would be one of the top choices: it does not require uranium enrichment, and with the stated reactor power would result in the annual production of about 10 kg of rather pure plutonium-239. This matches the critical mass of a bare plutonium-239 sphere, and it is known that as little as 4 kg can be used to make an effective nuclear explosive. Within the JCPOA it was eventually agreed that the IR-40 could be redesigned, down-rated in power to 20 MW and the new core fuelled with 3.7% enriched fuel, reducing the annual plutonium production by a factor of six.
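
A rough cross-check of the 10 kg figure uses the commonly quoted rule of thumb that a natural-uranium-fuelled reactor breeds of order one gram of plutonium per megawatt-day of thermal energy (an illustrative estimate, not a number from the article):

\[
40\ \text{MW}_{\text{th}} \times \sim\!300\ \text{full-power days} \times \sim\!0.8\ \text{g/(MW}_{\text{th}}\,\text{day)} \approx 10\ \text{kg per year}.
\]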

A spin off from Double Chooz

A 10 to 20 tonne neutrino detector 20 m from the reactor would be able to measure its plutonium content with a precision of 1 to 2 kg. This would be particularly relevant in the so-called N-th month scenario, which models a potential crisis in Iran based on events in North Korea in June 1994. During the 1994 crisis, which risked precipitating war with the US, the nuclear reactor at Yongbyon was shut down, and enough spent fuel rods removed to make several bombs. IAEA protocols were sternly tested. The organisation’s conventional safeguards for operating reactors consist of containment and surveillance – seals, for example, to prevent the unnoticed opening of the reactor, and cameras to record the movement of fuel, most crucially during reactor shutdowns. In the N-th month scenario, the IR-40 reactor, in its pre-JCPOA configuration (40 MW, rather than the renegotiated power of 20 MW), runs under full safeguards for N–1 months. In month N, a planned reactor shutdown takes place. At this point the reactor would contain 8 kg of weapons-grade plutonium. For unspecified reasons the safeguards are then interrupted. In month N+1, the reactor is restarted and full safeguards are restored. The question is: are the 8 kg of plutonium still in the reactor core, or has the core been replaced with fresh fuel and the 8 kg of plutonium illicitly diverted?

The disruption of safeguards could either be due to equipment failure – a more frequent event than one might assume – or due to events in the political realm ranging from a minor unpleasantness to a full-throttle dash for a nuclear weapon. Distinguishing the two scenarios would be a matter of utmost urgency. According to an analysis including realistic backgrounds extrapolated from the PROSPECT results, this could be done in 8 to 12 weeks with a neutrino detector.

No conventional non-neutrino technologies can match this performance without shutting the reactor down and sampling a significant fraction of the highly radioactive fuel. The conventional approach would be extremely disruptive to reactor operations and would put inspectors and plant operators at risk of radiation exposure. Even if the host country were to agree in principle, developing a safe plan and having all sides agree on its feasibility would take months at the very least, creating dangerous ambiguity in the interim and giving hardliners on both sides time to push for an escalation of the crisis. The conventional approach would also be significantly more expensive than a neutrino detector.

New negotiating gambit

The June 1994 crisis at Yongbyon still overshadows negotiations with North Korea, since, as far as North Korea is concerned, it discredited the IAEA. Both during the crisis, and subsequently, international attempts at non-proliferation failed to prevent North Korea from acquiring nuclear weapons – its first nuclear-weapons test took place in 2006 – or even to constrain its progress towards a small-scale operational nuclear force. New approaches are therefore needed, and recent attempts by the US to achieve progress on this issue prompted an international group of about 20 neutrino experts from Europe, the US, Russia, South Korea, China and Japan to develop specific deployment scenarios for neutrino detectors at the Yongbyon nuclear complex.

The main concern is the 5 MWe reactor, which, though named for its electrical power, has a thermal power of 20 MW. This gas-cooled graphite-moderated reactor, fuelled with natural uranium, has been the source of all of North Korea’s plutonium. The specifics of this reactor, and in particular its fuel cladding, which makes prolonged wet-storage of irradiated fuel impossible, represent such a proliferation risk that anything but a monitored shutdown prior to a complete dismantling appears inappropriate. To safeguard against the regime reneging on such a deal, were it to be agreed, a relatively modest tonne-scale neutrino detector right outside the reactor building could detect a powering up of this reactor within a day.

The MiniCHANDLER detector

North Korea is also constructing the Experimental Light Water Reactor at Yongbyon. A 150 MW water-moderated reactor running with low-enriched fuel, this reactor would not be particularly well suited to plutonium production. Its design is not dissimilar to much larger reactors used throughout the world to produce electricity, and it could help address the perennial lack of electricity that has limited the development and growth of the country’s economy. North Korea may wish to operate it indefinitely. A larger, 10 tonne neutrino detector could detect any irregularities during its refuelling – a tell-tale sign of a non-civilian use of the reactor – on a timescale of three months, which is within the goals set by the IAEA.

In a different scenario, wherein the goal would be to monitor a total shutdown of all reactors at Yongbyon, it would be feasible to bury a Daya-Bay-style 50 tonne single volume detector under the Yak-san, a mountain about 2 km outside of the perimeter of the nuclear installations (see “A different scenario” figure). The cost and deployment timescale would be more onerous than in the other scenarios.

In the case of longer distances between reactor and detector, detector masses must increase to compensate for the inverse-square reduction in the reactor-neutrino flux. As cosmic-ray backgrounds remain constant, the detectors must be deployed deep underground, beneath an overburden of several hundred metres of rock. To this end, the UK’s Science and Technology Facilities Council, the UK Atomic Weapons Establishment and the US Department of Energy are funding the WATCHMAN collaboration to pursue the construction of a multi-kilotonne water-Cherenkov detector at the Boulby mine, 20 km from two reactors in Hartlepool, in the UK. The goal is to demonstrate the ability to monitor the operational status of the reactors, which have a combined power of 3000 MW. In a use-case context this would translate to excluding the operation of an undeclared 10 to 20 MW reactor within a radius of a few kilometres, but no safeguards scenario has emerged where this would give a unique advantage.
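
A back-of-the-envelope sketch of this scaling is given below. The reference values and names (REF_MASS_T, REF_DIST_M, required_mass) are purely illustrative assumptions; the calculation demonstrates only the geometric 1/L² dilution and ignores reactor power, backgrounds, efficiencies and oscillations.

# Illustrative 1/L^2 scaling of detector mass with reactor stand-off distance.
# Hypothetical reference point: a 1-tonne detector at 20 m from a reactor.
# Holding the event rate fixed while the flux falls as 1/L^2 implies mass ~ L^2.

REF_MASS_T = 1.0    # tonnes, assumed reference detector mass
REF_DIST_M = 20.0   # metres, assumed reference stand-off distance

def required_mass(distance_m: float) -> float:
    """Detector mass (tonnes) needed at distance_m to match the reference
    event rate, assuming only geometric 1/L^2 flux dilution."""
    return REF_MASS_T * (distance_m / REF_DIST_M) ** 2

for d in (20, 100, 1000, 20000):  # 20 m, 100 m, 1 km, 20 km
    print(f"{d:>6.0f} m : {required_mass(d):>12,.0f} tonnes")

The steep growth from tonnes at tens of metres to very large masses at tens of kilometres is softened in practice by the much higher power of commercial reactors, which helps explain why a multi-kilotonne detector suffices for WATCHMAN’s goal.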

Inverse-square scaling eventually breaks down around 100 km, as at that distance the backgrounds caused by civilian reactors far outshine any undeclared small reactor almost anywhere in the northern hemisphere. Small signals also prevent the use of neutrino detectors for nuclear-explosion monitoring, or to confirm the origin of a suspicious seismic event as being nuclear, as conventional technologies are more feasible than the very large detectors that would be needed. A more promising future application of neutrino-detector technology is to meet the new challenges posed by advanced nuclear-reactor designs.

Advanced safeguards

The current safeguards regime relies on two key assumptions: that fuel comes in large, indivisible and individually identifiable units called “fuel assemblies”, and that power reactors need to be refuelled frequently. Most advanced reactor designs violate at least one of these design characteristics. Fuel may come in thousands of small pebbles or be molten, and its coolant may not be transparent, in contrast to current designs, where water is used as moderator, coolant and storage medium in the first years after discharge. Either way, counting and identification of the fuel by serial number may be impossible. And unlike current power reactors, which are refuelled on a 12-to-18-month cycle, allowing in-core fuel to be verified as well, advanced reactors may be refuelled only once in their lifetime.

Three 20 tonne neutrino detectors

Neutrino detectors would not be hampered by any of these novel features. Detailed simulations indicate that they could be effective in addressing the safeguard challenges presented by advanced reactors. Crucially, they would work in a very similar fashion for any of the new reactor designs.

In 2019 the US Department of Energy chartered and funded a study (which I co-chair) with the goal of determining the utility of the unique capabilities offered by neutrino detectors for nuclear security and energy applications. This study includes investigators from US national laboratories and academia more broadly, and will engage and interview nuclear security and policy experts within the Department of Energy, the State Department, NGOs, academia, and international agencies such as the IAEA. The results are expected early in 2021. They should provide a good understanding of where neutrinos can play a role in current and future monitoring and verification agreements, and may help to guide neutrino detectors towards their first real-world applications.

The idea of using neutrinos to monitor reactors has been around for about 40 years. Only very recently, however, as a result of a surge of interest in sterile neutrinos, has detector technology become available that would be practical in real-world scenarios such as the JCPOA or a new North Korean nuclear agreement. The most likely initial application will be near-field reactor monitoring with detectors inside the fence of the monitored facility as part of a regional nuclear deal. Such detectors will not be a panacea to all verification and monitoring needs, and can only be effective if there is a sincere political will on both sides, but they do offer more room for creative diplomacy, and a technology that is robust against the kinds of political failures which have derailed past agreements. 

The post Neutrinos for peace appeared first on CERN Courier.

]]>
Feature Detectors similar to those used to hunt for sterile neutrinos could help guard against the extraction of plutonium-239 for nuclear weapons, writes Patrick Huber. https://cerncourier.com/wp-content/uploads/2020/10/CCNovDec20_PROLIF_yale2.jpg
Tuning in to neutrinos https://cerncourier.com/a/tuning-in-to-neutrinos/ Tue, 07 Jul 2020 12:05:17 +0000 https://preview-courier.web.cern.ch/?p=87661 A new generation of accelerator and reactor experiments is opening an era of high-precision neutrino measurements.

The post Tuning in to neutrinos appeared first on CERN Courier.

]]>
DUNE’s dual-phase prototype detector

In traditional Balinese music, instruments are made in pairs, with one tuned slightly higher in frequency than its twin. The notes are indistinguishable to the human ear when played together, but the sound recedes and swells a couple of times each second, encouraging meditation. This is a beating effect: fast oscillations at the mean frequency inside a slowly oscillating envelope. Similar physics is at play in neutrino oscillations. Rather than sound intensity, it’s the probability to observe a neutrino with its initial flavour that oscillates. The difference is how long it takes for the interference to make itself felt. When Balinese musicians strike a pair of metallophones, the notes take just a handful of periods to drift out of phase. By contrast, it takes more than 10²⁰ de Broglie wavelengths and hundreds of kilometres for neutrinos to oscillate in experiments like the planned mega-projects Hyper-Kamiokande and DUNE.

Neutrino oscillations revealed a rare chink in the armour of the Standard Model: neutrinos are not massless, but are evolving superpositions of at least three mass eigenstates with distinct energies. A neutrino is therefore like three notes played together: frequencies so close, given the as-yet immeasurably small masses involved, that they are not just indistinguishable to the ear, but inseparable according to the uncertainty principle. As neutrinos are always ultra-relativistic, the energies of the mass eigenstates differ only due to tiny mass contributions of m²/2E. As the mass eigenstates propagate, phase differences develop between them proportional to squared-mass splittings Δm². The sought-after oscillations range from a few metres to the diameter of Earth.
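
In the simplest two-flavour picture, the interplay of mass splitting, baseline and energy is captured by the standard survival probability (a textbook formula, added here for reference):

\[
P(\nu_\alpha \to \nu_\alpha) = 1 - \sin^{2}2\theta\,\sin^{2}\!\left(\frac{\Delta m^{2} L}{4E}\right) \approx 1 - \sin^{2}2\theta\,\sin^{2}\!\left(1.27\,\frac{\Delta m^{2}[\text{eV}^{2}]\,L[\text{km}]}{E[\text{GeV}]}\right),
\]

so GeV-scale accelerator neutrinos reach their first oscillation maximum for the atmospheric splitting after a few hundred kilometres, while MeV-scale reactor antineutrinos reach it after only a kilometre or two.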

Orthogonal mixtures

The neutrino physics of the latter third of the 20th century was bookended by two anomalies that uncloaked these effects. In 1968 Ray Davis’s observation of a deficit of solar neutrinos prompted Bruno Pontecorvo to make public his conjecture that neutrinos might oscillate. Thirty years later, the Super-Kamiokande collaboration’s analysis of a deficit of atmospheric muon neutrinos from the other side of the planet posthumously vindicated the visionary Italian, and later Soviet, theorist’s speculation. Subsequent observations have revealed that electron, muon and tau neutrinos are orthogonal mixtures of mass eigenstates ν1 and ν2, separated by a small so-called solar splitting Δm²21, and ν3, which is separated from that pair by a larger “atmospheric” splitting usually quantified by Δm²32 (see “Little and large” figure). It is not yet known if ν3 is the lightest or the heaviest of the trio. This is called the mass-hierarchy problem.

A narrow splitting between neutrino mass eigenstates

“In the first two decades of the 21st century we have achieved a rather accurate picture of neutrino masses and mixings,” says theorist Pilar Hernández of the University of Valencia, “but the ordering of the neutrino states is unknown, the mass of the lightest state is unknown and we still do not know if the neutrino mixing matrix has imaginary entries, which could signal the breaking of CP symmetry,” she explains. “The very different mixing patterns in quarks and leptons could hint at a symmetry relating families, and a more accurate exploration of the lepton-mixing pattern and the neutrino ordering in future experiments will be essential to reveal any such symmetry pattern.”

Today, experiments designed to constrain neutrino mixing tend to dispense with astrophysical neutrinos in favour of more controllable accelerator and reactor sources. The experiments span more than four orders of magnitude in size and energy and fall into three groups (see “Not natural” figure). Much of the limelight is taken by experiments that are sensitive to the large mass splitting Δm²32, which include both a cluster of current (such as T2K) and future (such as DUNE) accelerator-neutrino experiments with long baselines and high energies, and a high-performing trio of reactor-neutrino experiments (Daya Bay, RENO and Double Chooz) with a baseline of about a kilometre, operating just above the threshold for inverse beta decay. The second group is a beautiful pair of long-baseline reactor-neutrino experiments (KamLAND and the soon-to-be-commissioned JUNO), which join experiments with solar neutrinos in having sensitivity to the smaller squared-mass splitting Δm²21. Finally, the third group is a host of short-baseline accelerator-neutrino experiments and very-short-baseline reactor neutrino experiments that are chasing tantalising hints of a fourth “sterile” neutrino (with no Standard-Model gauge interactions), which is split from the others by a squared-mass splitting of the order of 1 eV².

Neutrino-oscillation experiments

Artificial sources

Experiments with artificial sources of neutrinos have a storied history, dating from the 1950s, when physicists toyed with the idea of detecting neutrinos created in the explosion of a nuclear bomb, and eventually observed them streaming from nuclear reactors. The 1960s saw the invention of the accelerator neutrino. Here, proton beams smashed into fixed targets to create a decaying debris of charged pions and their concomitant muon neutrinos. The 1970s transformed these neutrinos into beams by focusing the charged pions with magnetic horns, leading to the discovery of weak neutral currents and insights into the structure of nucleons. It was not until the turn of the century, however, that the zeitgeist of neutrino-oscillation studies began to shift from naturally to artificially produced neutrinos. Just a year after the publication of the Super-Kamiokande collaboration’s seminal 1998 paper on atmospheric–neutrino oscillations, Japanese experimenters trained a new accelerator-neutrino beam on the detector.

Operating from 1999 to 2006, the KEK-to-Kamioka (K2K) experiment sent a beam of muon neutrinos from the KEK laboratory in Tsukuba to the Super-Kamiokande detector, 250 km away under Mount Ikeno on the other side of Honshu. K2K confirmed that muon neutrinos “disappear” as a function of propagation distance over energy. The experiments together supported the hypothesis of an oscillation to tau neutrinos, which could not be directly detected at that energy. By increasing the beam energy well above the tau-lepton mass, the CERN Neutrinos to Gran Sasso (CNGS) project, which ran from 2006 to 2012, confirmed the oscillation to tau neutrinos by directly observing tau leptons in the OPERA detector. Meanwhile, the Main Injector Neutrino Oscillation Search (MINOS), which sent muon neutrinos from Fermilab to northern Minnesota from 2005 to 2012, made world-leading measurements of the parameters describing the oscillation.

With νμ→ ντ oscillations established, the next generation of experiments innovated in search of a subtler effect. T2K (K2K’s successor, with the beam now originating at J-PARC in Tokai) and NOvA (which analyses oscillations over the longer baseline of 810 km between Fermilab and Ash River, Minnesota) both have far detectors offset by a few degrees from the direction of the peak flux of the beams. This squeezes the phase space for the pion decays, resulting in an almost mono-energetic flux of neutrinos. Here, a quirk of the mixing conspires to make the musical analogy of a pair of metallophones particularly strong: to a good approximation, the muon neutrinos ring out with two frequencies of roughly equal amplitude, to yield an almost perfect disappearance of muon neutrinos – and maximum sensitivity to the appearance of electron neutrinos.
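
The off-axis trick exploits the two-body kinematics of pion decay: at a small angle θ to the beam axis, the neutrino energy becomes almost independent of the parent pion energy. A commonly used approximation (quoted for illustration) is

\[
E_\nu \approx \frac{0.43\,E_\pi}{1 + \gamma^{2}\theta^{2}}, \qquad \gamma = E_\pi/m_\pi,
\]

so an off-axis angle of a few degrees delivers a narrow flux peaked near the energy of the first oscillation maximum.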

Testing CP symmetry

The three neutrino mass eigenstates mix to make electron, muon and tau neutrinos according to the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix, which describes three rotations and a complex phase δCP that can cause charge–parity (CP) violation – a question of paramount importance in the field due to its relevance to the unknown origin of the matter–antimatter asymmetry in the universe. Whatever the value of the complex phase, leptonic CP violation can only be observed if all three of the angles in the PMNS matrix are non-zero. Experiments with atmospheric and solar neutrinos demonstrated this for two of the angles. At the beginning of the last decade, short-baseline reactor-neutrino experiments in China (Daya Bay), Korea (RENO) and France (Double Chooz) were in a race with T2K to establish if the third angle, which leads to a coupling between ν3 and electrons, was also non-zero. In the reactor experiments this would be seen as a small deficit of electron antineutrinos a kilometre or so from the reactors; in T2K the smoking gun would be the appearance of a small number of electron neutrinos not present in the initial muon-neutrino-dominated beam.
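
In the standard parametrisation, the three rotations and the phase combine as (a textbook expression, added for reference):

\[
U_{\rm PMNS} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}
\begin{pmatrix} c_{13} & 0 & s_{13}e^{-i\delta_{CP}} \\ 0 & 1 & 0 \\ -s_{13}e^{i\delta_{CP}} & 0 & c_{13} \end{pmatrix}
\begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]

with c_ij = cos θij and s_ij = sin θij. If any of the three angles were zero, δCP would drop out of all oscillation probabilities – hence the importance of the race to measure θ13.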

After data taking was cut short by the great Sendai earthquake and tsunami of March 2011, T2K published evidence for the appearance of six electron-neutrino events, over the expected background of 1.5 ± 0.3 in the case of no coupling. Alongside a single tau-neutrino candidate in OPERA, these were the first neutrinos seen to appear in a detector with a new flavour, as previous signals had always registered a deficit of an expected flavour. In the closing days of the year, Double Chooz published evidence for 4121 electron–antineutrino events, under the expected tally for no coupling of 4344 ± 165, reinforcing T2K’s 2.5σ indication. Daya Bay and RENO put the matter to bed the following spring, with 5σ evidence apiece that the ν3-electron coupling was indeed non-zero. The key innovation for the reactor experiments was to minimise troublesome flux and interaction systematics by also placing detectors close to the reactors.

A visualisation of the Hyper-Kamiokande detector

Since then, T2K and NOvA (the latter of which began taking data in 2014) have been chasing leptonic CP violation – an analysis that is out of the reach of reactor experiments, as δCP does not affect disappearance probabilities. By switching the polarity of the magnetic horn, the experiments can compare the probabilities for the CP-mirror oscillations νμ → νe and ν̄μ → ν̄e directly. NOvA data are inconclusive at present. T2K data currently favour near-maximal CP violation in the vicinity of δCP = –π/2. The latest analysis, published in April, disfavours leptonic CP conservation (δCP = 0, ±π) at 2σ significance for all possible mixing parameter values. Statistical uncertainty is the biggest limiting factor.

Major upgrades planned for T2K next year target statistical, interaction-model and detector uncertainties. A substantial increase in beam intensity will be accompanied by a new fine-grained scintillating target for the ND280 near-detector complex, which will lower the energy threshold to reconstruct tracks. New transverse TPCs will improve ND280’s acceptance at high angles, yielding a better cancellation of systematic errors with the far detector, Super-Kamiokande, which is being upgraded by loading 0.01% gadolinium salts into the otherwise ultrapure water. As in reactor-neutrino detectors, this will provide a tag for antineutrino events, to improve sample purities in the search for leptonic CP violation.

T2K and NOvA both plan to roughly double their current data sets, and are working together on a joint fit, in a bid to better understand correlations between systematic uncertainties, and break degeneracies between measurements of CP violation and the mass hierarchy. If the CP-violating phase is indeed maximal, as suggested by the recent T2K result, the experiments may be able to exclude CP conservation with more than 99% confidence. “At this point we will be in a transition from a statistics-dominated to a systematics-dominated result,” says T2K spokesperson Atsuko Ichikawa of the University of Kyoto. “It is difficult to say, but our sensitivity will likely be limited at this stage by a convolution of neutrino-interaction and flux systematics.”

The next generation

Two long-baseline accelerator-neutrino experiments roughly an order of magnitude larger in cost and detector mass than T2K and NOvA have received green lights from the Japanese and US governments: Hyper-Kamiokande and DUNE. One of their primary missions is to resolve the question of leptonic CP violation.

Hyper-Kamiokande will adopt the same approach as T2K, but will benefit from major upgrades to the beam and the near and far detectors in addition to those of the present T2K upgrade. To improve the treatment of systematic errors, the suite of near detectors will be complemented by an ingenious new gadolinium-loaded water-Cherenkov detector at an intermediate baseline: by spanning a range of off-axis angles, it will drive down interaction-model systematics by exploiting previously neglected information on how the flux varies as a function of the angle relative to the centre of the beam. Hyper-Kamiokande’s increased statistical reach will also be impressive. The power of the Japan Proton Accelerator Research Complex (J-PARC) beam will be increased from its current value of 0.5 MW up to 1.3 MW, and the new far detector will be filled with 260,000 tonnes of ultrapure water, yielding a fiducial volume 8.4 times larger than that of Super-Kamiokande. Procurement of the photo-multiplier tubes will begin this year, and the five-year-long excavation of the cavern has already begun. Data taking is scheduled to commence in 2027. “The expected precision on δCP is 10–20 degrees, depending on its true value,” says Hyper-Kamiokande international co-spokesperson Francesca di Lodovico of King’s College, London.

In the US, the Deep Underground Neutrino Experiment (DUNE) will exploit the liquid-argon–TPC technology first deployed on a large scale by ICARUS – OPERA’s sister detector in the CNGS project. The idea for the technology dates back to 1977, when Carlo Rubbia proposed using liquid rather than gaseous argon as a drift medium for ionisation electrons. Given liquid argon’s higher density, such detectors can serve as both target and tracker, providing high-resolution 3D images of the interactions – an invaluable tool for reducing systematics related to the murky world of neutrino–nucleus interactions.

Spectacular performance

The technology is currently being developed in two prototype detectors at CERN. The first hones ICARUS’s single-phase approach. “The performance of the prototype has been absolutely spectacular, exceeding everyone’s expectations,” says DUNE co-spokesperson Ed Blucher of the University of Chicago. “After almost two years of operation, we are confident that the liquid–argon technology is ready to be deployed at the huge scale of the DUNE detectors.” In parallel, the second prototype is testing a newer dual-phase concept. In this design, ionisation charges drift through an additional layer of gaseous argon before reaching the readout plane. The signal can be amplified here, potentially easing noise requirements for the readout electronics, and increasing the maximum size of the detector. The dual-phase prototype was filled with argon in summer 2019 and is now recording tracks.

The evolution of the fraction of each flavour in the wavefunction of electron antineutrinos

The final detectors will have about twice the height and 10 to 20 times the footprint. Following the construction of an initial single-phase unit, the DUNE collaboration will likely pick a mix of liquid-argon technologies to complete their roster of four 10 kton far-detector modules, set to be installed about 1.5 km underground at the Sanford Underground Research Laboratory in Lead, South Dakota. Site preparation and pre-excavation activities began in 2017, and full excavation work is expected to begin soon, with the goal that data-taking begin during the second half of this decade. Work on the near-detector site and the “PIP-II” upgrade to Fermilab’s accelerator complex began last year.

Though similar to Hyper-Kamiokande at first glance, DUNE’s approach is distinct and complementary. With beam energy and baseline both four times greater, DUNE will have greater sensitivity to flavour-dependent coherent-forward-scattering with electrons in Earth’s crust – an effect that modifies oscillation probabilities differently depending on the mass hierarchy. With the Fermilab beam directed straight at the detector rather than off-axis, a broader range of neutrino energies will allow DUNE to observe the oscillation pattern from the first to the second oscillation maximum, and simultaneously fit all but the solar mixing parameters. And with detector, flux and interaction uncertainties all distinct, a joint analysis of both experiments’ data could break degeneracies and drive down systematics.
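
To see why the longer baseline and higher energy help, consider the commonly used two-flavour approximation for oscillations in matter (a textbook relation, not a formula quoted by either collaboration). With $A = 2\sqrt{2}\,G_F n_e E$ the matter potential for electron density $n_e$ and neutrino energy $E$, the effective 1–3 mixing becomes

\[
\sin^2 2\theta_{13}^{\rm m} \simeq \frac{\sin^2 2\theta_{13}}{\left(\cos 2\theta_{13} - A/\Delta m^2_{31}\right)^2 + \sin^2 2\theta_{13}},
\]

with $A \to -A$ for antineutrinos. Because the sign of $\Delta m^2_{31}$ flips between the two orderings, the matter term enhances neutrino appearance and suppresses antineutrino appearance for one hierarchy and does the opposite for the other – an effect that grows with energy and baseline, which is where DUNE gains.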

“If CP violation is maximal and the experiments collect data as anticipated, DUNE and Hyper-Kamiokande should both approach 5σ significance for the exclusion of leptonic CP conservation in about five years,” estimates DUNE co-spokesperson Stefan Söldner-Rembold of the University of Manchester, noting that the experiments will also be highly complementary for non-accelerator topics. The most striking example is supernova-burst neutrinos, he says, referring to a genre of neutrinos only observed once so far, during 15 seconds in 1987, when neutrinos from a supernova in the Large Magellanic Cloud passed through the Earth. “While DUNE is primarily sensitive to electron neutrinos, Hyper-Kamiokande will be sensitive to electron antineutrinos. The difference between the timing distributions of these samples encodes key information about the dynamics of the supernova explosion.” Hyper-Kamiokande spokesperson Masato Shiozawa of ICRR Tokyo also emphasises the broad scope of the physics programmes. “Our studies will also encompass proton decay, high-precision measurements of solar neutrinos, supernova-relic neutrinos, dark-matter searches, the possible detection of solar-flare neutrinos and neutrino geophysics.”

JUNO energy resolution

Half a century since Ray Davis and two co-authors published evidence for a 60% deficit in the flux of solar neutrinos compared to John Bahcall’s prediction, DUNE already boasts more than a thousand collaborators, and Hyper-Kamiokande’s detector mass is set to be 500 times greater than Davis’s tank of liquid tetrachloroethylene. If Ray Davis was the conductor who set the orchestra in motion, then these large experiments fill out the massed ranks of the violin section, poised to deliver what may well be the most stirring passage of the neutrino-oscillation symphony. But other sections of the orchestra also have important parts to play.

Mass hierarchy

The question of the neutrino mass hierarchy will soon be addressed by the Jiangmen Underground Neutrino Observatory (JUNO) experiment, which is currently under construction in China. The project is an evolution of the Daya Bay experiment, and will seek to measure a deficit of electron antineutrinos 53 km from the Yangjiang and Taishan nuclear-power plants. As the reactor neutrinos travel, the small kilometre-scale oscillation observed by Daya Bay will continue to undulate with the same wavelength, revealed in JUNO as “fast” oscillations on a slower and deeper first oscillation maximum due to the smaller solar mass splitting Δm²₂₁ (see “An oscillation within an oscillation” figure).
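
The structure is visible in the standard vacuum survival probability for reactor antineutrinos:

\[
P_{\bar\nu_e \to \bar\nu_e} = 1 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21} - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31} + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right),
\qquad \Delta_{ij} = \frac{\Delta m^2_{ij} L}{4E}.
\]

The second term is the slow, deep “solar” dip on which JUNO sits at 53 km; the last term supplies the fast wiggles, and the interplay of the $\Delta_{31}$ and $\Delta_{32}$ pieces – which differ by the small solar splitting, with a relative size that depends on the ordering – subtly shifts their phase, which is why JUNO needs exceptional energy resolution.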

“JUNO can determine the neutrino mass hierarchy in an unambiguous and definite way, independent from the CP phase and matter effects, unlike other experiments using accelerator or atmospheric neutrinos,” says spokesperson Yifang Wang of the Chinese Academy of Sciences in Beijing. “In six years of data taking, the statistical significance will be higher than 3σ.”

JUNO has completed most of the digging of the underground laboratory, and equipment for the production and purification of liquid scintillator is being fabricated. A total of 18,000 20-inch photomultiplier tubes and 26,000 3-inch photomultiplier tubes have been delivered, and most of them have been tested and accepted, explains Wang. The installation of the detector is scheduled to begin next year. JUNO will arguably be at the vanguard of a precision era for the physics of neutrino oscillations, equipped to measure the mass splittings and the solar mixing parameters to better than 1% precision – an improvement of about one order of magnitude over previous results, and even better than the quark sector, claims Wang, somewhat provocatively. “JUNO’s capabilities for supernova-burst neutrinos, diffuse supernova neutrinos and geoneutrinos are unprecedented, and it can be upgraded to be a world-best double-beta-decay detector once the mass hierarchy is measured.”

Excavation of the cavern for the JUNO experiment

With JUNO, Hyper-Kamiokande and DUNE now joining a growing ensemble of experiments, the unresolved leitmotifs of the three-neutrino paradigm may find resolution this decade, or soon after. But theory and experiment both hint, quite independently, that nature may have a scherzo twist in store before the grand finale.

A rich programme of short-baseline experiments promises to bolster or exclude experimental hints of a fourth sterile neutrino with a relatively large mixing with the electron neutrino that have dogged the field since the late 1990s. Four anomalies stack up as more or less consistent among themselves. The first, which emerged in the mid-1990s at Los Alamos’s Liquid Scintillator Neutrino Detector (LSND), is an excess of electron antineutrinos that is potentially consistent with oscillations involving a sterile neutrino at a mass splitting Δm² ~ 1 eV². Two other quite disparate anomalies since then – a few-percent deficit in the expected flux from nuclear reactors, and a deficit in the number of electron neutrinos from radioactive decays in liquid-gallium solar-neutrino detectors – could be explained in the same way. The fourth anomaly, from Fermilab’s MiniBooNE experiment, which sought to replicate the LSND effect at a longer baseline and a higher energy, is the most recent: a sizeable excess of both electron neutrinos and antineutrinos, though at a lower energy than expected. It’s important to note, however, that experiments including KARMEN, MINOS+ and IceCube have reported null results in searches for sterile neutrinos fitting the required description. Such a particle would also stand in tension with cosmology, notes phenomenologist Silvia Pascoli of Durham University, as models predict it would make too large a contribution to hot dark matter in the universe today, unless non-standard scenarios are invoked.

Three different types of experiment covering three orders of magnitude in baseline are now seeking to settle the sterile-neutrino question in the next decade. A smattering of reactor-neutrino experiments a mere 10 metres or so from the source will directly probe the reactor anomaly at Δm² ~ 1 eV². The data reported so far are intriguing. Korea’s NEOS experiment and Russia’s DANSS experiment report siren signals between 1 and 2 eV², and NEUTRINO-4, also based in Russia, reports a seemingly outlandish signal, indicative of very large mixing, at 7 eV². In parallel, J-PARC’s JSNS² experiment is gearing up to try to reproduce the LSND effect using accelerator neutrinos at the same energy and baseline. Finally, Fermilab’s short-baseline programme will thoroughly address a notable weakness of both LSND and MiniBooNE: the lack of a near detector.

MiniBooNE detector

The Fermilab programme will combine three liquid-argon TPCs – a bespoke new short-baseline detector (SBND), the existing MicroBooNE detector, and the refurbished ICARUS detector – to resolve the LSND anomaly once and for all. SBND is currently under construction, MicroBooNE is operational, and ICARUS, removed from its berth at Gran Sasso and shipped to the US in 2017, has been installed at Fermilab, following work on the detector at CERN. “The short-baseline neutrino programme at Fermilab has made tremendous technical progress in the past year,” says ICARUS spokesperson and Nobel laureate Carlo Rubbia, noting that the detector will be commissioned as soon as circumstances allow, given the coronavirus pandemic. “Once both ICARUS and SBND are in operation, it will take less than three years with the nominal beam intensity to settle the question of whether neutrinos have an even more mysterious character than we thought.”

Muon neutrinos ring out with two frequencies of roughly equal amplitude, to yield almost perfect disappearance

Outside of the purview of oscillation experiments with artificially produced neutrinos, astrophysical observatories will scale a staggering energy range, from the PeV-scale neutrinos reported by IceCube at the South Pole, down, perhaps, to the few-hundred-μeV cosmic neutrino background sought by experiments such as PTOLEMY in the US. Meanwhile, the KATRIN experiment in Germany is zeroing in on the edges of beta-decay distributions to set an absolute scale for the mass of the peculiar mixture of mass eigenstates that make up an electron antineutrino (CERN Courier January/February 2020 p28). At the same time, a host of experiments are searching for neutrinoless double-beta decay – a process that can only occur if the neutrino is its own antiparticle. Discovering such a Majorana nature for the neutrino would turn the Standard Model on its head, and offer grist for the mill of theorists seeking to explain the tininess of neutrino masses, by balancing them against still-to-be-discovered heavy neutral leptons.

Indispensable input

According to Mikhail Shaposhnikov of the Swiss Federal Institute of Technology in Lausanne, current and future reactor- and accelerator-neutrino experiments will provide an indispensable input for understanding neutrino physics. And not in isolation. “To reach a complete picture, we also need to know the mechanism for neutrino-mass generation and its energy scale, and the most important question here is the scale of masses of new neutrino states: if lighter than a few GeV, these particles can be searched for at new experiments at the intensity frontier, such as SHiP, and at precision experiments looking for rare decays of mesons, such as Belle II, LHCb and NA62, while the heavier states may be accessible at ATLAS and CMS, and at future circular colliders,” explains Shaposhnikov. “These new particles can be the key in solving all the observational problems of the Standard Model, and require a consolidated effort of neutrino experiments, accelerator-based experiments and cosmological observations. Of course, it remains to be seen if this dream scenario can indeed be realised in the coming 20 years.”

 

• This article was updated on 6 July, to reflect results presented at Neutrino 2020

The post Tuning in to neutrinos appeared first on CERN Courier.

]]>
Feature A new generation of accelerator and reactor experiments is opening an era of high-precision neutrino measurements. https://cerncourier.com/wp-content/uploads/2020/07/CCJulAug20_NEUTRINOS_frontis.jpg
Sensing a passage through the unknown https://cerncourier.com/a/sensing-a-passage-through-the-unknown/ Tue, 07 Jul 2020 11:27:06 +0000 https://preview-courier.web.cern.ch/?p=87702 A global network of ultra-sensitive optical atomic magnetometers – GNOME – has begun its search for exotic fields beyond the Standard Model.

The post Sensing a passage through the unknown appeared first on CERN Courier.

]]>
Since the inception of the Standard Model (SM) of particle physics half a century ago, experiments of all shapes and sizes have put it to increasingly stringent tests. The largest and most well-known are collider experiments, which in particular have enabled the direct discovery of various SM particles. Another approach utilises the tools of atomic physics. The relentless improvement in the precision of tools and techniques of atomic physics, both experimental and theoretical, has led to the verification of the SM’s predictions with ever greater accuracy. Examples include measurements of atomic parity violation that reveal the effects of the Z boson on atomic states, and measurements of atomic energy levels that verify the predictions of quantum electrodynamics (QED). Precision atomic physics experiments also include a vast array of searches for effects predicted by theories beyond-the-SM (BSM), such as fifth forces and permanent electric dipole moments that violate parity- and time-reversal symmetry. These tests probe potentially subtle yet constant (or controllable) changes of atomic properties that can be revealed by averaging away noise and controlling systematic errors.

GNOME

But what if the glimpses of BSM physics that atomic spectroscopists have so painstakingly searched for over the past decades are not effects that persist over the many weeks or months of a typical measurement campaign, but rather transient events that occur only sporadically? For example, might not cataclysmic astrophysical events such as black-hole mergers or supernova explosions produce hypothetical ultralight bosonic fields impossible to generate in the laboratory? Or might not Earth occasionally pass through some invisible “cloud” of a substance (such as dark matter) produced in the early universe? Such transient phenomena could easily be missed by experimenters when data are averaged over long times to increase the signal-to-noise ratio.

Transient phenomena

Detecting such unconventional events presents several challenges. If a transient signal heralding new physics were observed with a single detector, it would be exceedingly difficult to confidently distinguish the exotic-physics signal from the many sources of noise that plague precision atomic physics measurements. However, if transient interactions occur on a global scale, a network of such detectors geographically distributed over Earth could search for specific patterns in the timing and amplitude of such signals that would be unlikely to occur randomly. By correlating the readouts of many detectors, local effects can be filtered away and exotic physics could be distinguished from mundane physics.

This idea forms the basis for the Global Network of Optical Magnetometers to search for Exotic physics (GNOME), an international collaboration involving 14 institutions from all over the world (see “Correlated” figure). Such an idea, like so many others in physics, is not entirely new. The same concept is at the heart of the worldwide network of interferometers used to observe gravitational waves (LIGO, Virgo, GEO, KAGRA, TAMA, CLIO), and the global network of proton-precession magnetometers used to monitor geomagnetic and solar activity. What distinguishes GNOME from other global sensor networks is that it is specifically dedicated to searching for signals from BSM physics that have evaded detection in earlier experiments.

Optical atomic magnetometer

GNOME is a growing network of more than a dozen optical atomic magnetometers, with stations in Europe, North America, Asia and Australia. The project was proposed in 2012 by a team of physicists from the University of California at Berkeley, Jagiellonian University, California State University – East Bay, and the Perimeter Institute. The network started taking preliminary data in 2013, with the first dedicated science-run beginning in 2017. With more data on the way, the GNOME collaboration, consisting of more than 50 scientists from around the world, is presently combing the data for signs of the unexpected, with its first results expected later this year.

Exotic-physics detectors

Optical atomic magnetometers (OAMs) are among the most sensitive devices for measuring magnetic fields. However, the atomic vapours that are the heart of GNOME’s OAMs are placed inside multi-layer shielding systems, reducing the effects of external magnetic fields by a factor of more than a million. Thus, in spite of using extremely sensitive magnetometers, GNOME sensors are largely insensitive to magnetic signals. The reasoning is that many BSM theories predict the existence of exotic fields that couple to atomic spins and would penetrate through magnetic shields largely unaffected. Since the OAM signal is proportional to the spin-dependent energy shift regardless of whether or not a magnetic field causes the energy shift, OAMs – even enclosed within magnetic shields – are sensitive to a broad class of exotic fields.

The OAM setup

The basic principle behind OAM operation (see “Optical rotation” figure) involves optically measuring spin-dependent energy shifts by controlling and monitoring an ensemble of atomic spins via angular momentum exchange between the atoms and light. The high efficiency of optical pumping and probing of atomic spin ensembles, along with a wide array of clever techniques to minimise atomic spin relaxation (even at high atomic vapour densities), has enabled OAMs to achieve sensitivities to spin-dependent energy shifts at levels well below 10⁻²⁰ eV after only one second of integration. One of the 14 OAM installations, at California State University – East Bay, is shown in the “Benchtop physics” image.
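
For orientation, that energy sensitivity can be translated into conventional magnetometry units (an illustrative conversion assuming a coupling of order the Bohr magneton, $\mu_B \approx 5.8\times 10^{-5}$ eV/T, rather than a figure from the collaboration):

\[
B \sim \frac{\Delta E}{\mu_B} \approx \frac{10^{-20}\ \text{eV}}{5.8\times 10^{-5}\ \text{eV/T}} \approx 2\times 10^{-16}\ \text{T},
\]

i.e. a small fraction of a femtotesla.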

However, one might wonder: do any of the theoretical scenarios suggesting the existence of exotic fields predict signals detectable by a magnetometer network while also evading all existing astrophysical and laboratory constraints? This is not a trivial requirement, since previous high-precision atomic spectroscopy experiments have established stringent limits on BSM physics. In fact, OAM techniques have been used by a number of research groups (including our own) over the past several decades to search for spin-dependent energy shifts caused by exotic fields sourced by nearby masses or polarised spins. Closely related work has ruled out vast areas of BSM parameter space by comparing measurements of hyperfine structure in simple hydrogen-like atoms to QED calculations. Furthermore, if exotic fields do exist and couple strongly enough to atomic spins, they could cause noticeable cooling of stars and affect the dynamics of supernovae. So far, all laboratory experiments have produced null results and all astrophysical observations are consistent with the SM. Thus if such exotic fields exist, their coupling to atomic spins must be extremely feeble.

Despite these constraints and requirements, theoretical scenarios that are both consistent with existing constraints and predict effects measurable with GNOME do exist. Prime examples, and the present targets of the GNOME collaboration’s search efforts, are ultralight bosonic fields. A canonical example of an ultralight boson is the axion. The axion emerged from an elegant solution, proposed by Roberto Peccei and Helen Quinn in the late 1970s, to the strong-CP problem. The Peccei–Quinn mechanism explains the mystery of why the strong interaction, to the highest precision we can measure, respects the combined CP symmetry whereas quantum chromodynamics naturally accommodates CP violation at a level ten orders of magnitude larger than present constraints. If CP violation in the strong interaction can be described not by a constant term but rather by a dynamical (axion) field, it could be significantly suppressed by spontaneous symmetry breaking at a high energy scale. If the symmetry breaking scale is at the grand-unification-theory (GUT) scale (~10¹⁶ GeV), the axion mass is around 10⁻¹⁰ eV, and at the Planck scale (10¹⁹ GeV) around 10⁻¹³ eV – both many orders of magnitude less massive than even neutrinos. Searching for ultralight axions therefore offers the exciting possibility of probing physics at the GUT and Planck scales, far beyond the direct reach of any existing collider.

Beyond the Standard Model

In addition to the axion, there is a wide range of other hypothetical ultralight bosons that couple to atomic spins and could generate signals potentially detectable with GNOME. Many theories predict the existence of spin-0 bosons with properties similar to the axion (so-called axion-like particles, ALPs). A prominent example is the relaxion, proposed by Peter Graham, David Kaplan and Surjeet Rajendran to explain the hierarchy problem: the mystery of why the electroweak force is about 24 orders of magnitude stronger than the gravitational force. In 2010, Asimina Arvanitaki and colleagues found that string theory suggests the existence of many ALPs of widely varying masses, from 10⁻³³ eV to 10⁻¹⁰ eV. From the perspective of BSM theories, ultralight bosons are ubiquitous. Some predict ALPs such as “familons”, “majorons” and “arions”. Others predict new ultralight spin-1 bosons such as dark and hidden photons. There is even a possibility of exotic spin-0 or spin-1 gravitons: while the graviton for a quantum theory of gravity matching that described by general relativity must be spin-2, alternative gravity theories (for example torsion gravity and scalar-vector-tensor gravity) predict additional spin-0 and/or spin-1 gravitons.

Earth passing through a topological defect

It also turns out that such ultralight bosons could explain dark matter. Most searches for ultralight bosonic dark matter assume the bosons to be approximately uniformly distributed throughout the dark matter halo that envelopes the Milky Way. However, in some theoretical scenarios, the ultralight bosons can clump together into bosonic “stars” due to self-interactions. In other scenarios, due to a non-trivial vacuum energy landscape, the ultralight bosons could take the form of “topological” defects, such as domain walls that separate regions of space with different vacuum states of the bosonic field (see “New domains” figure). In either of these cases, the mass-energy associated with ultralight bosonic dark matter would be concentrated in large composite structures that Earth might only occasionally encounter, leading to the sort of transient signals that GNOME is designed to search for.

Magnetic field deviation

Yet another possibility is that intense bursts of ultralight bosonic fields might be generated by cataclysmic astrophysical events such as black-hole mergers. Much of the underlying physics of coalescing singularities is unknown, possibly involving quantum-gravity effects far beyond the reach of high-energy experiments on Earth, and it turns out that quantum gravity theories generically predict the existence of ultralight bosons. Furthermore, if ultralight bosons exist, they may tend to condense in gravitationally bound halos around black holes. In these scenarios, a sizable fraction of the energy released when black holes merge could plausibly be emitted in the form of ultralight bosonic fields. If the energy density of the ultralight bosonic field is large enough, networks of atomic sensors like GNOME might be able to detect a signal.

In order to use OAMs to search for exotic fields, the effects of environmental magnetic noise must be reduced, controlled, or cancelled. Even though the GNOME magnetometers are enclosed in multi-layer magnetic shields so that signals from external electromagnetic fields are significantly suppressed, there is a wide variety of phenomena that can mimic the sorts of signals one would expect from ultralight bosonic fields. These include vibrations, laser instabilities, and noise in the circuitry used for data acquisition. To combat these spurious signals, each GNOME station uses auxiliary sensors to monitor electromagnetic fields outside the shields (which could leak inside the shields at a far-reduced level), accelerations and rotations of the apparatus, and overall magnetometer performance. If the auxiliary sensors indicate data may be suspect, the data are flagged and ignored in the analysis (see “Spurious signals” figure).

GNOME data that have passed this initial quality check can then be scanned to see if there are signals matching the patterns expected based on various exotic physics hypotheses. For example, to test the hypothesis that dark matter takes the form of ALP domain walls, one searches for a signal pattern resulting from the passage of Earth through an astronomical-sized plane having a finite thickness given by the ALP’s Compton wavelength. The relative velocity between the domain wall and Earth is unknown, but can be assumed to be randomly drawn from the velocity distribution of virialised dark matter, having an average speed of about one thousandth the speed of light. The relative timing of signals appearing in different GNOME magnetometers should be consistent with a single velocity v: i.e. nearby stations (in the direction of the wall propagation) should detect signals with smaller delays and stations that are far apart should detect signals with larger delays, and furthermore the time delays should occur in a sensible sequence. The energy shift that could lead to a detectable signal in GNOME magnetometers is caused by an interaction of the domain-wall field φ with the atomic spin S whose strength is proportional to the scalar product of the spin with the gradient of the field, S∙∇φ. The gradient of the domain-wall field ∇φ is proportional to its momentum relative to S, and hence the signals appearing in different GNOME magnetometers are proportional to S∙v. Both the signal-timing pattern and the signal-amplitude pattern should be consistent with a single value of v; signals inconsistent with such a pattern can be rejected as noise.
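
The geometry of this test is simple enough to sketch numerically. The Python fragment below is illustrative only: the station positions, spin axes and wall velocity are invented placeholders, not the real GNOME configuration. It computes the crossing-time delays and relative signal amplitudes that a single plane wall of velocity v would imprint on a network – exactly the pattern the analysis demands be mutually consistent.

import numpy as np

# Hypothetical station positions (km, Earth-centred frame) and spin sensitive
# axes -- placeholder values, not the real GNOME geometry.
stations = {
    "A": {"r": np.array([6371.0, 0.0, 0.0]),     "S": np.array([0.0, 0.0, 1.0])},
    "B": {"r": np.array([0.0, 6371.0, 0.0]),     "S": np.array([0.0, 1.0, 0.0])},
    "C": {"r": np.array([-4500.0, 0.0, 4500.0]), "S": np.array([1.0, 0.0, 0.0])},
}

# Assumed wall velocity: ~1e-3 c (about 300 km/s), along an arbitrary direction.
direction = np.array([1.0, 0.5, 0.2])
v = 300.0 * direction / np.linalg.norm(direction)   # km/s
v_hat, v_mag = v / np.linalg.norm(v), np.linalg.norm(v)

t0 = 0.0  # instant at which the wall crosses Earth's centre
for name, st in stations.items():
    # A plane with normal v_hat moving at speed v_mag reaches r at t0 + r.v_hat/v_mag
    t_cross = t0 + np.dot(st["r"], v_hat) / v_mag    # seconds
    # Expected signal size scales with the projection S.v
    amplitude = np.dot(st["S"], v)
    print(f"station {name}: delay {t_cross:+6.1f} s, relative amplitude {amplitude:+6.1f}")

# A candidate is retained only if the measured delays and amplitudes across the
# whole network are consistent with one and the same velocity vector v.

With Earth-sized baselines and speeds of order 10⁻³ c, the relative delays come out at the level of tens of seconds, which sets the timing synchronisation the stations need.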

If such exotic fields exist, their coupling to atomic spins must be extremely feeble

To claim discovery of a signal heralding BSM physics, detections must be compared to the background rate of spurious false-positive events consistent with the expected signal pattern but not generated by exotic physics. The false-positive rate can be estimated by analysing time-shifted data: the data stream from each GNOME magnetometer is shifted in time relative to the others by an amount much larger than any delays resulting from propagation of ultralight bosonic fields through Earth. Such time-shifted data can be assumed to be free of exotic-physics signals, so any detections are necessarily false positives: merely random coincidences due to noise. When the GNOME data are analysed without timeshifts, to be regarded as an indication of BSM physics, the signal amplitude must surpass the 5σ threshold as compared to the background determined with the time-shifted data. This means that, for a year-long data set, an event due to noise coincidentally matching the assumed signal pattern throughout the network would occur only once every 3.5 million years.
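
The 3.5-million-year figure is essentially the reciprocal of the one-sided 5σ tail probability applied per year of data – a quick check, assuming a Gaussian background (which, in practice, the time-shifted data are used to characterise):

from scipy.stats import norm

p_false = norm.sf(5)               # one-sided 5-sigma tail probability, ~2.9e-7
waiting_time = 1.0 / p_false       # mean waiting time in years if the per-year chance is p_false
print(f"p = {p_false:.2e}  ->  once every {waiting_time/1e6:.1f} million years")
# prints: p = 2.87e-07  ->  once every 3.5 million years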

Inspiring efforts

Having already collected over a year of data, and with more on the way, the GNOME collaboration is presently combing the data for signs of BSM physics. New results based on recent GNOME science runs are expected in 2020. This would represent the first ever search for such transient exotic spin-dependent effects. Improvements in magnetometer sensitivity, signal characterisation, and data-analysis techniques are expected to improve on these initial results over the next several years. Significantly, GNOME has inspired similar efforts using other networks of precision quantum sensors: atomic clocks, interferometers, cavities, superconducting gravimeters, etc. In fact, the results of searches for exotic transient signals using clock networks have already been reported in the literature, constraining significant parameter space for various BSM scenarios. We would suggest that all experimentalists should seriously consider accurately time-stamping, storing, and sharing their data so that searches for correlated signals due to exotic physics can be conducted a posteriori. One never knows what nature might be hiding just beyond the frontier of the precision of past measurements.

The post Sensing a passage through the unknown appeared first on CERN Courier.

]]>
Feature A global network of ultra-sensitive optical atomic magnetometers – GNOME – has begun its search for exotic fields beyond the Standard Model. https://cerncourier.com/wp-content/uploads/2020/07/CCJulAug20_GNOME_frontis.jpg
Circular colliders eye Higgs self-coupling https://cerncourier.com/a/circular-colliders-eye-higgs-self-coupling/ Fri, 08 May 2020 16:33:26 +0000 https://preview-courier.web.cern.ch/?p=87406 Alain Blondel and Panagiotis Charitos report on developments at the third FCC Physics and Experiments Workshop.

The post Circular colliders eye Higgs self-coupling appeared first on CERN Courier.

]]>
Coupling correlations

Physics beyond the Standard Model must exist, to account for dark matter, the smallness of neutrino masses and the dominance of matter over antimatter in the universe; but we have no real clue of its energy scale. It is also widely recognised that new and more precise tools will be needed to be certain that the 125 GeV boson discovered in 2012 is indeed the particle postulated by Brout, Englert, Higgs and others to have modified the base potential of the whole universe, thanks to its coupling to itself, liberating energy for the masses of the W and Z bosons.

To tackle these big questions, and others, the Future Circular Collider (FCC) study, launched in 2014, proposed the construction of a new 100 km circular tunnel to first host an intensity-frontier 90 to 365 GeV e⁺e⁻ collider (FCC-ee), and then an energy-frontier (> 100 TeV) hadron collider, which could potentially also allow electron–hadron collisions. Potentially following the High-Luminosity LHC in the late 2030s, FCC-ee would provide 5 × 10¹² Z decays – over five orders of magnitude more than the full LEP era, followed by 10⁸ W pairs, 10⁶ Higgs bosons (ZH events) and 10⁶ top-quark pairs. In addition to providing the highest parton centre-of-mass energies foreseeable today (up to 40 TeV), FCC-hh would also produce more than 10¹³ top quarks and W bosons, and 50 billion Higgs bosons per experiment.

Rising to the challenge

Following the publication of the four-volume conceptual design report and submissions to the European strategy discussions, the third FCC Physics and Experiments Workshop was held at CERN from 13 to 17 January, gathering more than 250 participants for 115 presentations, and establishing a considerable programme of work for the coming years. Special emphasis was placed on the feasibility of theory calculations matching the experimental precision of FCC-ee. The theory community is rising to the challenge. To reach the required precision at the Z-pole, three-loop calculations of quantum electroweak corrections must include all the heavy Standard Model particles (W±, Z, H, t).

In parallel, a significant focus of the meeting was on detector designs for FCC-ee, with the aim of forming experimental proto-collaborations by 2025. The design of the interaction region allows for a beam vacuum tube of 1 cm radius in the experiments – a very promising condition for vertexing, lifetime measurements and the separation of bottom and charm quarks from light-quark and gluon jets. Elegant solutions have been found to bring the final-focus magnets close to the interaction point, using either standard quadrupoles or a novel magnet design using a superposition of off-axis (“canted”) solenoids. Delegates discussed solutions for vertexing, tracking and calorimetry during a Z-pole run at FCC-ee, where data acquisition and trigger electronics would be confronted with visible Z decays at 70 kHz, all of which would have to be recorded in full detail. A new subject was π/K/p identification at energies from 100 MeV to 40 GeV – a consequence of the strategy process, during which considerable interest was expressed in the flavour-physics programme at FCC-ee.

Physicists cannot refrain from investigating improvements

The January meeting showed that physicists cannot refrain from investigating improvements, in spite of the impressive statistics offered by the baseline design of FCC-ee. Increasing the number of interaction points from two to four is a promising way to nearly double the total delivery of luminosity for little extra power consumption, but construction costs and compatibility with a possible subsequent hadron collider must be determined. A bolder idea discussed at the workshop aims to improve both luminosity (by a factor of 10) and energy reach (perhaps up to 600 GeV), by turning FCC-ee into a 100 km energy-recovery linac. The cost, and how well this would actually work, are yet to be established. Finally, a tantalising possibility is to produce the Higgs boson directly in the s-channel: e⁺e⁻ → H, by sitting exactly at a centre-of-mass energy equal to the Higgs-boson mass. This would allow unique access to the tiny coupling of the Higgs boson to the electron. As the Higgs width (4.2 MeV in the Standard Model) is more than 20 times smaller than the natural energy spread of the beam, this would require a beam manipulation called monochromatisation and a careful running procedure, which a task force was nominated to study.

The ability to precisely probe the self-coupling of the Higgs boson is the keystone of the FCC physics programme. As noted above, this self-interaction is the key to the electroweak phase transition, and could have important cosmological implications. Building on the solid foundation of precise and model-independent measurements of Higgs couplings at FCC-ee, FCC-hh would be able to access Hμμ, Hγγ, HZγ and Htt couplings at sub-percent precision. Further study of double Higgs production at FCC-hh shows that a measurement of the Higgs self-coupling could be made with a statistical precision of a couple of percent with the full statistics – which is to say that after the first few years of running the precision will already have been reduced to below 10%. This is much faster than previously realised, and definitely constituted the highlight of the workshop.

The post Circular colliders eye Higgs self-coupling appeared first on CERN Courier.

]]>
Meeting report Alain Blondel and Panagiotis Charitos report on developments at the third FCC Physics and Experiments Workshop. https://cerncourier.com/wp-content/uploads/2020/05/FCC-wheel-twitter.jpg
New SMOG on the horizon https://cerncourier.com/a/new-smog-on-the-horizon/ Fri, 08 May 2020 16:11:28 +0000 https://preview-courier.web.cern.ch/?p=87393 LHCb will soon become the first LHC experiment able to run simultaneously with two separate interaction regions.

The post New SMOG on the horizon appeared first on CERN Courier.

]]>
Figure 1

LHCb will soon become the first LHC experiment able to run simultaneously with two separate interaction regions. As part of the ongoing major upgrade of the LHCb detector, the new SMOG2 fixed‑target system will be installed in long shutdown 2. SMOG2 will replace the previous System for Measuring the Overlap with Gas (SMOG), which injected noble gases into the vacuum vessel of LHCb’s vertex detector (VELO) at a low rate with the initial goal of calibrating luminosity measurements. The new system has several advantages, including the ability to reach effective area densities (and thus luminosities) up to two orders of magnitude higher for the same injected gas flux.

SMOG2 is a gas target confined within a 20 cm‑long aluminium storage cell that is mounted at the upstream edge of the VELO, 30 cm from the main interaction point, and coaxial with the LHC beam (figure 1). The storage‑cell technology allows a very limited amount of gas to be injected in a well-defined volume within the LHC beam pipe, keeping the gas pressure and density profile under precise control, and ensuring that the beam‑pipe vacuum level stays at least two orders of magnitude below the upper threshold set by the LHC. With beam‑gas interactions occurring at roughly 4% of the proton–proton collision rate at LHCb, the lifetime of the beam will be essentially unaffected. The cell is made of two halves, attached to the VELO with an alignment precision of 200 μm. Like the VELO halves, they can be opened for safety during LHC beam injection and tuning, and closed for data‑taking. The cell is sufficiently narrow that as small a flow as about 10¹⁵ particles per second will yield tens of pb⁻¹ of data per year. The new injection system will be able to switch between gases within a few minutes, and in principle is capable of injecting not just noble gases, from helium up to krypton and xenon, but also several other species, including H₂, D₂, N₂, and O₂.
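
That integrated-luminosity figure can be cross-checked with the usual fixed-target relation L = Φ_beam × θ, the rate of beam protons traversing the cell times the target areal density. The sketch below uses round illustrative numbers – a nominal LHC fill, a hydrogen-like target of areal density ~10¹² cm⁻² and an assumed 5 × 10⁶ s of physics time per year – none of which are values quoted here:

# Order-of-magnitude check of the fixed-target luminosity, L = phi_beam * theta.
# All inputs are assumptions for illustration, not official SMOG2 parameters.

n_protons = 3e14       # circulating protons in a nominal LHC fill (assumed)
f_rev     = 11245.0    # LHC revolution frequency in Hz
phi_beam  = n_protons * f_rev            # protons crossing the cell per second

theta     = 1e12       # target areal density in particles/cm^2 (assumed, H2-like)
lumi      = phi_beam * theta             # instantaneous luminosity, cm^-2 s^-1

seconds_physics = 5e6  # assumed effective physics time per year
integrated_pb   = lumi * seconds_physics / 1e36   # 1 pb^-1 = 1e36 cm^-2
print(f"L ~ {lumi:.1e} cm^-2 s^-1, ~{integrated_pb:.0f} pb^-1 per year")
# gives roughly 3e30 cm^-2 s^-1 and ~17 pb^-1 per year: consistent with 'tens of pb^-1'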

SMOG2 will open a new window on QCD studies and astroparticle physics at the LHC

SMOG2 will open a new window on QCD studies and astroparticle physics at the LHC, performing precision measurements in poorly known kinematic regions. Collisions with the gas target will occur at a nucleon–nucleon centre‑of‑mass energy of 115 GeV for a proton beam of 7 TeV, and 72 GeV for a Pb beam of 2.76 TeV per nucleon. Due to the boost of the interacting system in the laboratory frame and the forward geometrical acceptance of LHCb, it will be possible to access the largely unexplored high‑x and intermediate Q² regions.
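
These energies follow from the fixed-target kinematics, $\sqrt{s_{NN}} \simeq \sqrt{2 E_{\rm beam}\, m_N}$ (neglecting the nucleon masses next to the beam energy):

\[
\sqrt{2 \times 7000 \times 0.938}\ \text{GeV} \approx 115\ \text{GeV}, \qquad
\sqrt{2 \times 2760 \times 0.938}\ \text{GeV} \approx 72\ \text{GeV}.
\]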

Combined with LHCb’s excellent particle identification capabilities and momentum resolution, the new gas target system will allow us to advance our understanding of the gluon, antiquark, and heavy‑quark components of nucleons and nuclei at large‑x. This will benefit searches for physics beyond the Standard Model at the LHC, by improving our knowledge of the parton distribution functions of both protons and nuclei, particularly at high‑x, where new particles are most often expected, and will inform the physics programmes of proposed next‑generation accelerators such as the Future Circular Collider. The gas target will also allow the dynamics and spin distributions of quarks and gluons inside unpolarised nucleons to be studied for the first time at the LHC, a decade before corresponding measurements at much higher accuracy are performed at the Electron‑Ion Collider in the US. Studying particles produced in collisions with light nuclei, such as He, and possibly N and O, will also allow LHCb to give important inputs to cosmic‑ray physics and dark‑matter searches. Last but not least, SMOG2 will allow LHCb to perform studies of heavy‑ion collisions at large rapidities, in an unexplored energy range between the SPS and RHIC, offering new insights into the QCD phase diagram.

The post New SMOG on the horizon appeared first on CERN Courier.

]]>
News LHCb will soon become the first LHC experiment able to run simultaneously with two separate interaction regions. https://cerncourier.com/wp-content/uploads/2020/05/CCMayJun20_EF-LHCb.jpg
A labour of love https://cerncourier.com/a/a-labour-of-love/ Mon, 09 Mar 2020 21:11:41 +0000 https://preview-courier.web.cern.ch/?p=86603 Ten years on from the LHC's first high-energy collisions, Mark Rayner interviews some of the foremost detector experts on what it took to keep the experiments fighting fit.

The post A labour of love appeared first on CERN Courier.

]]>
The CMS detector

Two detectors, both alike in dignity, sit 100 m underground and 8 km apart on opposite sides of the border between Switzerland and France. Different and complementary in their designs, they stand ready for anything nature might throw at them, and over the past 10 years physicists in the ATLAS and CMS collaborations have matched each other paper for paper, blazing a path into the unknown. And this is only half of the story. A few kilometres around the ring either way sit the LHCb and ALICE experiments, continually breaking new ground in the physics of flavour and colour.

Plans hatched when the ATLAS and CMS collaborations formed in the spring of 1992 began to come to fruition in the mid 2000s. While liquid-argon and tile calorimeters lit up in ATLAS’s cavern, cosmic rays careened through partially assembled segments of each layer of the CMS detector, which was beginning to be integrated at the surface. “It was terrific, we were taking cosmics and everybody else was still in pieces!” says Austin Ball, who has been technical coordinator of CMS for the entire 10-year running period of the LHC so far. “The early cosmic run with magnetic field was a byproduct of our design, which stakes everything on a single extraordinary solenoid,” he explains, describing how the uniquely compact and modular detector was later lowered into its cavern in enormous chunks. At the same time, the colossal ATLAS experiment was growing deep underground, soon to be enveloped by the magnetic field generated by its ambitious system of eight air–core superconducting barrel loops, two end-caps and an inner solenoid. A thrilling moment for both experiments came on 10 September 2008, when protons first splashed off beam stoppers and across the detectors in a flurry of tracks. Ludovico Pontecorvo, ATLAS’s technical coordinator since 2015, remembers “first beam day” as a new beginning. “It was absolutely stunning,” he says. “There were hundreds of people in the control room. It was the birth of the detector.” But the mood was fleeting. On 19 September a faulty electrical connection in the LHC caused a hundred or so magnets to quench, and six tonnes of liquid helium to escape into the tunnel, knocking the LHC out for more than a year.

You have this monster and suddenly it turns into this?

Werner Riegler

The experimentalists didn’t waste a moment. “We would have had a whole series of problems if we hadn’t had that extra time,” says Ball. The collaborations fixed niggling issues, installed missing detector parts and automated operations to ease pressure on the experts. “Those were great days,” agrees Richard Jacobsson, commissioning and run coordinator of the LHCb experiment from 2008 to 2015. “We ate pizza, stayed up nights and slept in the car. In the end I installed a control monitor at home, visible from the kitchen, the living room and the dining room, with four screens – a convenient way to avoid going to the pit every time there was a problem!” The hard work paid off as the detectors came to life once again. For ALICE, the iconic moment was the first low-energy collisions in December 2009. “We were installing the detector for 10 years, and then suddenly you see these tracks on the event display…” reminisces Werner Riegler, longtime technical coordinator for the collaboration. “I bet then-spokesperson Jürgen Schukraft three bottles of Talisker whisky that they couldn’t possibly be real. You have this monster and suddenly it turns into this? Everybody was cheering. I lost the bet.”

The first high-energy collisions took place on 30 March 2010, at a centre-of-mass energy of 7 TeV, three-and-a-half times higher than the Tevatron, and a leap into terra incognita, in the words of ATLAS’s Pontecorvo. The next signal moment came on 8 November with the first heavy-ion collisions, and almost immediate insights into the quark–gluon plasma.

ALICE in wonderland

For a few weeks each year, the LHC ditches its signature proton collisions at the energy frontier to collide heavy ions such as lead nuclei, creating globules of quark–gluon plasma in the heart of the detectors. For the past 10 years, ALICE has been the best-equipped detector in the world to record the myriad tracks that spring from these hot and dense collisions of up to 416 nucleons at a time.

ALICE’s magnet

Like LHCb, ALICE is installed in a cavern that previously housed a LEP detector – in ALICE’s case the L3 experiment. Its tracking and particle-identification subdetectors are mostly housed within that detector’s magnet, fixed in place and still going strong since 1989, the only worry a milli-Amp leak current, present since L3 days, which shifters monitor watchfully. Its relatively low field is not a limitation as ALICE’s specialist subject is low-momentum tracks – a specialty made possible by displacing the beams at the interaction point to suppress the luminosity. “The fact that we have a much lower radiation load than ATLAS, CMS and LHCb allows us to use technologies that are very good for low-momentum measurements, which the other experiments cannot use because their radiation-hardness requirements are much higher,” says Riegler, noting that the design of ALICE requires less power, less cooling and a lower material budget. “This also presents an additional challenge in data processing and analysis in terms of reconstructing all these low-momentum particles, whereas for the other experiments, this is background that you can cut away.” The star performer in ALICE has been the time-projection chamber (TPC), he counsels me, describing a detector capable of reconstructing the 8000 tracks per rapidity unit that were forecast when the detector was designed.

But nature had a surprise in store when the LHC began running with heavy ions. The number of tracks produced was a factor three lower than expected, allowing ALICE to push the TPC to higher rates and collect more data. By the end of Run 2, a detector designed to collect “minimum- bias” events at 50 Hz was able to operate at 1 kHz – a factor 20 larger than the initial design.

The discovery of jet quenching came simply by looking at event displays in the control room

Ludovico Pontecorvo

The lower-than-expected track multiplicities also had a wider effect among the LHC experiments, making ATLAS, CMS and LHCb highly competitive for certain heavy-ion measurements, and creating a dynamic atmosphere in which insights into the quark–gluon plasma came thick and fast. Even independently of the less-taxing-than-expected tracking requirements, top-notch calorimetry allowed immediate insights. “The discovery of jet quenching came simply by looking at event displays in the control room,” confirms Pontecorvo of ATLAS. “You would see a big jet that wasn’t counterbalanced on the other side of the detector. This excitement was transmitted across the world.”

Keeping cool

Despite the exceptional and expectation-busting performance of the experiments, the first few years were testing times for the physicists and engineers tasked with keeping the detectors in rude health. “Every year we had some crisis in cooling the calorimeters,” recalls Pontecorvo. Fortunately, he says, ATLAS opted for “under-pressure” cooling, which prevents water spilling in the event of a leak, but still requires a big chunk of the calorimeter to be switched off. The collaboration had to carry out spectacular interventions, and put people in places that no one would have guessed would be possible, he says. “I remember crawling five metres on top of the end-cap calorimeter to arrive at the barrel calorimeter to search for a leak, and using 24 clamps to find which one of 12 cooling loops had the problem – a very awkward situation!” Ball recalls experiencing similar difficulties with CMS. There are 11,000 joints in the copper circuits of the CMS cooling system, and a leak in any one is enough to cause a serious problem. “The first we encountered leaked into the high-voltage system of the muon chambers, down into the vacuum tank containing the solenoid, right through the detector, which like the LHC itself is on a slope, and out the end as a small waterfall,” says Ball.

The ATLAS cavern

The arresting modularity of CMS, and the relative ease of opening the detector – admittedly an odd way to describe sliding a 1500-tonne object along the axis of a 0.8 mm thick beam pipe – proved to be the solution to many problems. “We have exploited it relentlessly from day one,” says Ball. “The ability to access the pixel tracker, which is really the heart of CMS, with the highest density of sensitive channels, was absolutely vital – crucial for repairing faults as well as radiation damage. Over the course of five or six years we became very efficient at accessing it. The performance of the whole silicon tracking system has been outstanding.”

The early days were also challenging for LHCb, which is set up to reconstruct the decays of beauty hadrons in detail. The dawning realisation that the LHC would run optimally with fewer but brighter proton bunches than originally envisaged set stern tests from the start. From LHCb’s conception to first running, all of the collaboration’s discussions were based on the assumption that the detector would veto any crossing of protons where there would be more than one interaction. In the end, faced with a typical “pile-up” of three, the collaboration had to reschedule its physics priorities and make pragmatic decisions about the division of bandwidth in the high-level trigger. “We were faced with enormous problems: synchronisation crashes, event processing that was taking seconds and getting stuck…,” recalls Jacobsson. “Some run numbers, such as 1179, still send shivers down the back of my spine.” By September, however, they had demonstrated that LHCb was capable of running with much higher pile-up than anybody had thought possible.

No machine has ever been so stable in its operational mode

Rolf Lindner

Necessity was the mother of invention. In 2011 and 2012 LHCb introduced a feedback system that maintains a manageable luminosity during each fill by increasing the overlap between the colliding beams as protons “burn out” in collisions, and the brightness of the bunches decreases. When Jacobsson and his colleagues mentioned it to the CERN management in September 2010, the then director of accelerators, Steve Myers, read the riot act, warning of risks to beam stability, recalls Jacobsson. “But since I had a few good friends at the controls of the LHC, we could carefully and quietly test this, and show that it produced stable beams. This changed life on LHCb completely. The effect was that we would have one stable condition throughout every fill for the whole year – perfect for precision physics.”

Initially, LHCb had planned to write events at 200 Hz, recalls Rolf Lindner, the experiment’s longtime technical coordinator, but by the end of Run 1, LHCb was collecting data at up to 10 kHz, turning offline storage, processing and “physics stripping” into an endless fire fight. Squeezing every ounce of performance out of the LHC generated greater data volumes than anticipated by any of the experiments, and even stories (probably apocryphal) of shifters running down to local electronics stores to buy data discs because they were running out of storage. “The LHC would run for several months with stable beams for 60% of every 24 hours in a day,” says Lindner. “No machine has ever been so stable in its operational mode.”

Engineering all-stars

The eyes of the world turned to ATLAS and CMS on 4 July 2012 as the collaborations announced the discovery of a new boson – an iconic moment to validate countless hours of painstaking work by innumerable physicists, engineers and computer scientists, which is nevertheless representative of just one of a multitude of physics insights made possible by the LHC experiments (see LHC at 10: the physics legacy). The period running up to the euphoric Higgs discovery had been smooth for all except LHCb, who had to scramble to disprove unfounded suggestions that their dipole magnet, occasionally reversed in field to reduce systematic uncertainties, was causing beam instabilities. But new challenges would shortly follow. Chief among several hair-raising moments in CMS was the pollution of the magnet cryogenic system in 2015 and 2016, which caused instability in the detector’s cold box and threatened the reliable operation of the superconducting solenoid surrounding the tracker and calorimeters. The culprit turned out to be superfluous lubricant – a mere half a litre of oil, now in a bottle in Ball’s office – which clogged filters and tiny orifices crucial to the cyclical expansion cycle used to cool the helium. “By the time we caught on to it, we hadn’t just polluted the cold box, we had polluted the whole of the distribution from upstairs to downstairs,” he recalls, launching into a vivid account of seat-of-the-pants interventions, and also noting that the team turned their predicament into an opportunity. “With characteristic physics ingenuity, and faced with spoof versions of the CMS logo with straightened tracks, we exploited data with the magnet off to calibrate the calorimeters and understand a puzzling 750 GeV excess in the diphoton invariant mass distribution,” he says.

Now I look back on the cryogenic crisis as the best project I ever worked on at CERN

Austin Ball
LHCb’s dipole magnet

With resolute support from CERN, bold steps were taken to fix the problem. It transpired that slightly-undersized replaceable filter cartridges were failing to remove the oil after it was mixed with the helium to lubricate screw-turbine compressors in the surface installation. “Now I look back on the cryogenic crisis as the best project I ever worked on at CERN, because we were allowed to assemble this cross-departmental superstar engineering team,” says Ball. “You could ask for anyone and get them. Cryogenics experts, chemists and mechanical engineers… even Rolf Heuer, then the Director-General, showed up frequently. The best welders basically lived in our underground area – you could normally only see their feet sticking out from massive pipework. If you looked carefully you might spot a boot. It’s a complete labyrinth. That one will stick with me for a long time. A crisis can be memorable and satisfying if you solve it.”

Heroic efforts

During the long shutdown that followed, the main task for LHCb was to exchange a section of beryllium beam pipe in which holes had been discovered and meticulously varnished over in haste before being used in Run 1. At the same time, right at the end of an ambitious and successful consolidation and improvement programme, CMS suffered the perils of extraordinarily dense circuit design when humid air condensed onto cold silicon sensor modules that had temporarily been moved to a surface clean room. 10% of the pixels short-circuited when it was powered up again, and heroic efforts were needed to re-manufacture replacements and install them in time for the returning LHC beams. Meanwhile, wary of deteriorating optical readout, ATLAS refurbished their pixel-detector cabling, taking electronics out of the detector to make it serviceable and inserting a further inner pixel layer just 33 mm from the beam pipe to up their b-tagging game. The bigger problem was mechanical shearing of the bellows that connect the cryostat of one of the end-cap toroids to the vacuum system – the only problem experienced so far with ATLAS’s ambitious magnet system. “At the beginning people speculated that with eight superconducting coils, each independent from the others, we would experience one quench after another, but they have been perfect really,” confirms Pontecorvo. Combined with the 50-micron alignment of the 45 m-long muon detector, ATLAS has exceeded the design specifications for resolving the momentum of high-momentum muons – just one example of a pattern repeated across all the LHC detectors.

As the decade wore on, the experiments streamlined operations to reach unparalleled performance levels, and took full advantage of technical and end-of-year stops to keep their detectors healthy. Despite their very high-luminosity environments, ATLAS and CMS pushed already world-beating initial data-taking efficiencies of around 90% beyond the 95% mark. “ATLAS and CMS were designed to run with an average pile-up of 20, but are now running with a pile-up of 60. This is remarkable,” states Pontecorvo.

Accelerator rising

At 10, with thousands of physics papers behind them and many more stories to tell, the LHC experiments are as busy as ever, using the second long shutdown, which is currently underway, to install upgrades, many of which are geared to the high-luminosity LHC (HL-LHC) due to operate later this decade. Many parts are being recycled, for example with ALICE’s top-performing TPC chambers donated to Fermilab for the near detector of the DUNE long-baseline neutrino-oscillation experiment. And major engineering challenges remain. A vivid example is that the LHC tunnel, carved out of water-laden rock 30 years ago, is rising up, while the experiments – particularly the very compact CMS, which has a density almost the same as rock – remain fixed in place, counterbalancing upthrust due to the removed rock with their weight. CMS faces the greatest challenge due to the geology of the region, explains Ball. “The LHC can use a corrector magnet to adjust the level of the beam, but there is a risk of running out of magnetic power if the shifts are big. Just a few weeks ago they connected a parallel underground structure for HL-LHC equipment, and the whole tunnel went up 3 mm almost overnight. We haven’t solved that one yet.”

Most of all, it is important to acknowledge the dedication of the people who run the experiments

Ludovico Pontecorvo

Everyone I interviewed agrees wholeheartedly on one crucial point. “Most of all, it is important to acknowledge the dedication of the people who run the experiments,” explains Pontecorvo of ATLAS, expressing a sentiment emphasised by his peers on all the experiments. “These people are absolutely stunning. They devote their life to this work. This is something that we have to keep and which it is not easy to keep. Unfortunately, many feel that this work is undervalued by selection committees for academic positions. This is something that must change, or our work will finish – as simple as that.”

Pontecorvo hurries out of the door at the end of our early-morning interview, hastily squeezed into a punishing schedule. None of the physicists I interviewed show even a smidgen of complacency. Ten years in, the engineering and technological marvels that are the four biggest LHC experiments are just getting started.

Crystal calorimeter hones Higgs mass https://cerncourier.com/a/crystal-calorimeter-hones-higgs-mass/ Fri, 10 Jan 2020 09:44:18 +0000 https://preview-courier.web.cern.ch/?p=86025 This is the most precise measurement so far of a parameter with implications for the stability of the vacuum.

Figure 1

Though a free parameter in the Standard Model, the mass of the Higgs boson is important for both theoretical and experimental reasons. Most peculiarly from a theoretical standpoint, our current knowledge of the masses of the Higgs boson and the top quark implies that the quartic coupling of the Higgs vanishes and becomes negative tantalisingly close to, but just before, the Planck scale. There is no established reason for the Standard Model to perch near to this boundary. The implication is that the vacuum is almost but not quite stable, and that on a timescale substantially longer than the age of the universe, some point in space will tunnel to a lower energy state and a bubble of true vacuum will expand to fill the universe. Meanwhile, from an experimental perspective, it is important to continually improve measurements so that the uncertainty on the mass of the Higgs boson eventually rivals the value of its width. At that point, measuring the Higgs-boson mass can provide an independent method to determine the Higgs-boson width. The Higgs-boson width is sensitive to the existence of possible undiscovered particles and is expected to be a few MeV according to the Standard Model.
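The connection between the measured mass and the coupling whose running drives this near-criticality can be made explicit with a back-of-the-envelope tree-level relation (quoted here only for orientation; it is not part of the CMS analysis described below):

\[ \lambda = \frac{m_H^2}{2v^2} \approx \frac{(125\,\mathrm{GeV})^2}{2\,(246\,\mathrm{GeV})^2} \approx 0.13 , \]

where v ≈ 246 GeV is the Higgs vacuum expectation value. It is the renormalisation-group evolution of this coupling – pulled downwards mainly by the large top-quark Yukawa coupling – that drives it towards zero in the vicinity of the Planck scale.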

The CMS collaboration recently announced the most precise measurement of the Higgs-boson mass achieved thus far, at 125.35 ± 0.15 GeV – a precision of roughly 0.1%. This very high precision was achieved thanks to an enormous amount of work over many years to carefully calibrate and model the CMS detector when it measures the energy and momenta of the electrons, muons and photons necessary for the measurement.

The most recent contribution to this work was a measurement of the mass in the di-photon channel using data collected at the LHC by the CMS collaboration in 2016 (figure 1). This measurement was made using the lead–tungstate crystal calorimeter, which uses approximately 76,000 crystals, each weighing about 1.1 kg, to measure the energy of the photons. A critical step of this analysis was a precise calibration of each crystal’s response using electrons from Z-boson decay, and accounting for the tiny difference between the electron and photon showers in the crystals.

Figure 2

This new result was combined with earlier results obtained with data collected between 2011 and 2016. One measurement was in the decay channel to two Z bosons, which subsequently decay into electron or muon pairs, and another was a measurement in the di-photon channel made with earlier data. The 2011 and 2012 data combined yield 125.06 ± 0.29 GeV. The 2016 data yield 125.46 ± 0.17 GeV. Combining these yields CMS’s current best precision of 125.35 ± 0.15 GeV (figure 2). This new precise measurement of the Higgs-boson mass will not, at least not on its own, lead us in a new direction of physics, but it is an indispensable piece of the puzzle of the Standard Model – and one fruit of the increasing technical mastery of the LHC detectors.
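The arithmetic behind such a combination can be illustrated with an inverse-variance-weighted average – a simplified sketch that ignores the correlated systematic uncertainties treated in full by the CMS combination:

\[ \bar{m} = \frac{\sum_i m_i/\sigma_i^2}{\sum_i 1/\sigma_i^2} , \qquad \sigma_{\bar{m}} = \Bigl( \sum_i 1/\sigma_i^2 \Bigr)^{-1/2} . \]

Feeding in 125.06 ± 0.29 GeV and 125.46 ± 0.17 GeV gives roughly 125.36 ± 0.15 GeV, within about 0.01 GeV of the published value of 125.35 ± 0.15 GeV.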

Astronomers scale new summit https://cerncourier.com/a/astronomers-scale-new-summit/ Fri, 29 Nov 2019 14:44:05 +0000 https://preview-courier.web.cern.ch/?p=85128 The world’s largest optical/near-infrared telescope, the Extremely Large Telescope, under construction in Chile, will bring mysteries such as dark energy into focus.

The foundations of ESO’s Extremely Large Telescope

The 3 km-high summit of Cerro Armazones, located in the Atacama desert of northern Chile, is a construction site for one of the most ambitious projects ever mounted by astronomers: the Extremely Large Telescope (ELT). Scheduled for first light in 2025, the ELT is centred around a 39 m-diameter main mirror that will gather 250 times more light than the Hubble Space Telescope and use advanced corrective optics to obtain exceptional image quality. It is the latest major facility of the European Southern Observatory (ESO), which has been surveying the southern skies for almost 60 years.
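The factor of 250 follows from simple geometry: light-gathering power scales with collecting area, and hence with the square of the mirror diameter. Neglecting the central obstruction and the gaps between segments,

\[ \left( \frac{D_{\mathrm{ELT}}}{D_{\mathrm{Hubble}}} \right)^{2} = \left( \frac{39\,\mathrm{m}}{2.4\,\mathrm{m}} \right)^{2} \approx 260 , \]

consistent with the quoted figure once the non-collecting parts of the aperture are taken into account.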

The science goals of the ELT are vast and diverse. Its sheer size will enable the observation of distant objects that are currently beyond reach, allowing astronomers to better understand the formation of the first stars, galaxies and even black holes. The sharpness of its images will also enable a deeper study of extrasolar planets, possibly even the characterisation of their atmospheres. “One new direction may become possible through very high precision spectroscopy – direct detection of the expansion rate of the universe, which would be an amazing feat,” explains Pat Roche of the University of Oxford and former president of the ESO council. “But almost certainly the most exciting results will be from unexpected discoveries.”

Technical challenges

The ELT was approved in 2006 and civil engineering began in 2014. Construction of the 74 m-high, 86 m-diameter dome and the 3400-tonne main structure began in 2019. In January 2018 the first segments of the main mirror were successfully cast, marking the first step of a challenging five-mirror system that goes beyond the traditional two-mirror “Gregorian” design. The introduction of a third powered mirror delivers a focal plane that remains un-aberrated at all field locations, while a fourth and a fifth mirror correct, in real time, distortions due to the Earth’s atmosphere or other external factors. This novel arrangement, combined with the sheer size of the ELT, makes almost every aspect of the design particularly challenging.

Concepts of the ELT at work

The main mirror is itself a monumental enterprise; it consists of 798 hexagonal segments, each measuring approximately 1.4 m across and 50 mm thick. To keep the surface unchanged by external factors such as temperature or wind, each segment has edge sensors measuring its location within a few nanometres – the most accurate ever used in a telescope. The construction and polishing of the segments, as well as the edge sensors, is a demanding task and only possible thanks to the collaboration with industry; at least seven private companies are working on the main mirror alone. The size of the mirror was originally 42 m, but it was later reduced to 39 m, mainly for cost reasons, while still allowing the ELT to fulfil its main scientific goals. “The ELT is ESO’s largest project and we have to ensure that it can be constructed and operated within the available budget,” says Roche. “A great deal of careful planning and design, most of it with input from industry, was undertaken to understand the costs and the cost drivers, and the choice of primary mirror diameter emerged from these analyses.”

The task is not much easier for the other mirrors. The secondary mirror, measuring 4 m across, is highly convex and will be the largest secondary mirror ever employed on a telescope and the largest convex mirror ever produced. The ELT’s tertiary mirror also has a curved surface, contrary to more traditional designs. The fourth mirror will be the largest adaptive mirror ever made, supported by more than 5000 actuators that will deform and adjust its shape in real-time to achieve a factor-500 improvement in resolution. 

Currently 28 companies are actively collaborating on different parts of the ELT design; most are European, but there are also contracts with the Chilean companies ICAFAL, for the road and platform construction, and Abengoa, for the ELT technical facility. Among the European contracts, the construction of the telescope dome and main structure by the Italian ACe consortium of Astaldi and Cimolai is the largest in ESO’s history. The total cost estimate for the baseline design of the ELT is €1.174 billion, while the running cost is estimated to be around €50 million per year. Since the approval of the ELT, ESO has increased its number of member states from 14 to 16, with Poland and Ireland joining in 2015 and 2018, respectively. Chile is a host state and Australia a strategic partner.

European Southern Observatory’s particle-physics roots

ESO’s Telescope Project Division and Sky Atlas Laboratory in the 1970s

The ELT’s success lies in ESO’s vast experience in the construction of innovative telescopes. The idea for ESO, a 16-nation intergovernmental organisation for research in ground-based astronomy, was conceived in 1954 with the aim of creating a European observatory dedicated to observations of the southern sky. At the time, the largest such facilities had an aperture of about 2 m; more than 50 years later, ESO is responsible for a variety of observatories, including its first telescope at La Silla, not far from Cerro Armazones (home of the ELT).

Like CERN, ESO was born in the aftermath of the war to allow European countries to develop scientific projects that nations were unable to do on their own. The similarities are by no means a mere coincidence. From the beginning, CERN served as a model regarding important administrative aspects of the organisation, such as the council delegate structure, the finance base or personnel regulations. A stronger collaboration ensued in 1969, when ESO approached CERN to assist with the powerful and sophisticated instrumentation of its 3.6 m telescope and other challenges ESO was facing, both administrative and technological. This collaboration saw ESO facilities established at CERN: the Telescope Project Division and, a few years later, ESO’s Sky Atlas Laboratory. A similar collaboration has since been organised for EMBL and, more recently for a new hadron-therapy facility in Southeast Europe.

Unprecedented view

A telescope of this scale has never been attempted before in astronomy. Not only must the ELT be constructed and operated within the available budget, but it should not impact the operation of ESO’s current flagship facilities (such as the VLT, the VLT interferometer and the ALMA observatory).

The amount of data produced by the ELT is estimated to be around 1–2 TB per night, including scientific observations plus calibration observations. The data will be analysed automatically, and users have the option to download the processed data or, if needed, download the original data and process it in their own research centres. To secure observation time with the facility, ESO makes a call for proposals once or twice a year, through which researchers propose desired observations in their own fields. “A committee of astronomers then evaluates the proposals and ranks them according to their relevance and potential scientific impact; the highest-ranked ones are then chosen to be followed,” explains project scientist Miguel Pereira of the University of Oxford.

Currently, 28 companies are actively collaborating on different parts of the ELT design, mostly from Europe

In addition to its astronomical goals, the ELT will contribute to the growing confluence of cosmology and fundamental physics. Specifically, it will help elucidate the nature of dark energy by identifying distant type Ia supernovae, which serve as excellent markers of the universe’s expansion history. The ELT will also measure the change in redshift with time of distant objects – a feat that is beyond the capabilities of current telescopes – to indicate the rate of expansion. Possible variations over time of fundamental physics constants, such as the fine-structure constant and the strong coupling constant, will also be targeted. Such measurements are very challenging because the strength of the constraint on the variability depends critically on the accuracy of the wavelength calibration. The ELT’s ultra-stable high-resolution spectrograph aims to remove the systematic uncertainty currently present in the wavelength calibration measurements, offering the possibility to make an unambiguous detection of such variations.
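The redshift-drift measurement mentioned above – sometimes called the Sandage–Loeb test – rests on a compact relation: for a source at redshift z, the drift observed today is

\[ \dot{z} = (1+z)\,H_0 - H(z) , \]

where H_0 is the present-day Hubble parameter and H(z) its value at the epoch of the source. In realistic cosmologies this amounts to apparent velocity shifts of only a few centimetres per second accumulated over a decade or more, which is why an ultra-stable, exquisitely calibrated spectrograph is essential.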

The ELT construction is on schedule for completion, and first light is expected in 2025. “In the end, projects succeed because of the people who design, build and support them,” Roche says, attributing the success of the ELT to rigorous attention to design and analysis across all aspects of the project. The road ahead is still challenging and full of obstacles, but, as the former director of the Paris Observatory André Danjon wrote to his counterpart at the Leiden Observatory, Jan Oort, in 1962: “L’astronomie est bien l’école de la patience” – astronomy is truly the school of patience. No doubt the ELT will pay extraordinary scientific rewards.

Taking the temperature of collider calorimetry https://cerncourier.com/a/calorimetry-for-collider-physics-an-introduction/ Tue, 26 Nov 2019 21:59:59 +0000 https://preview-courier.web.cern.ch/?p=85577 Michele Livan and Richard Wigmans have written an up-to-date introduction to both the fundamental physics and the technical parameters.

Concise and accessible, Calorimetry for Collider Physics is a reference book worthy of the name. Well known experts Michele Livan and Richard Wigmans have written an up-to-date introduction to both the fundamental physics and the technical parameters that determine the performance of calorimeters. Students and senior experts alike will be inspired to deepen their study of the characteristics of these instruments – instruments that have become crucial to most contemporary experiments in particle physics.

Following a light and attractive introductory chapter, the reader is invited to refresh his or her knowledge of the interactions of particles with matter. Key topics such as shower development, containment and profile, linearity and energy resolution are discussed for both electromagnetic and hadronic components. The authors provide illustrations with test-beam results and detailed Monte Carlo simulations. Practical and numerical examples help the reader to understand even counterintuitive effects, stimulating critical thinking in detector designers, and helping the reader develop a feeling for the importance of the various parameters that affect calorimetry.

The authors do not shy away from criticising calorimetric approaches

An important part of the book is devoted to hadron calorimetry. The authors have made a remarkably strong impact in understanding the fundamental problems with large set-ups in test beams, for example the famous lead-fibre sampling spaghetti calorimeter SPACAL. Among other issues, they correct “myths” as to which processes really cause compensation, and discuss quantities that correlate to the invisible energy fraction from hadrons involved in the shower process, for example, to measure the electromagnetic shower fraction event-by-event. The topical development of the dual-readout calorimeter concept follows logically from there – a very promising future direction for this central detector component, as the book discusses in considerable detail. This technology would avoid the question of longitudinal segmentation, which has a particular impact on linearity and calibration.

Calorimetry Wigmans Livan

Livan and Wigmans’ book also gives a valuable historical overview of the field, and corrects several erroneous interpretations of past experimental results. The authors do not shy away from criticising calorimetric approaches in former, present and planned experiments, making the book “juicy” reading for experts. The reader will not be surprised that the authors are, for example, rather critical about highly segmented calorimeters aiming at particle flow approaches.

There is only limited discussion about other aspects of calorimetry, such as triggering, measuring jets and integrating calorimeters into an overall detector concept, which may impose many constraints on their mechanical construction. These aspects were obviously considered beyond the scope of the book, and indeed one cannot pack everything into a single compact textbook, though the authors do include a very handy appendix with tables of parameters relevant to calorimetry.

By addressing the fundamentals of calorimetry, Livan and Wigmans have provided an outstanding reference book. I recommend it highly to everybody interested in basic detector aspects of experimental physics. It is pleasant and stimulating to read, and if in addition it triggers critical thinking, so much the better!

Tricky component? Use 3D printing… https://cerncourier.com/a/tricky-component-use-3d-printing/ Thu, 11 Jul 2019 08:43:10 +0000 https://preview-courier.web.cern.ch?p=83626 Embedded radio-frequency (RF) cavities such as those featured in spiral-shaped cooling channels are one example.


Some 120 physicists gathered in Orsay on 13–14 December 2018 for a workshop on additive manufacturing – popularly called 3D printing – with metals. The goal was to review the work being done in Europe (particularly at CERN, CEA and CNRS) on the application of the technique to high-energy physics and astrophysics.

3D printing makes possible novel and optimised designs that would be difficult to create with conventional methods. Embedded radio-frequency (RF) cavities such as those featured in spiral-shaped cooling channels are one example. Another comes from detector design: mesh structures, as required for many gas-filled ionisation tracking detectors, are often difficult to manufacture with traditional methods as the removal of material in one part of the mesh may destroy another part of it; but they are easy to build with additive manufacturing.

Despite the remaining challenges, which relate to ultra-high-vacuum properties, mechanical strength, electrical conductivity, new alloys and post-processing, the technique is beginning to be used for working accelerator components. Participants at the Orsay event heard about the beam test, at an accelerator (LAL’s photoinjector PHIL), of a beam-position monitor, and about the performance of RF antennas designed at the Université de Rennes for future space missions. Plans for employing additive manufacturing at future accelerators and HEP experiments were also discussed.

Although metal additive manufacturing is currently limited to a few applications, the workshop, which was the first of its kind, showed that there is strong potential for it to play a larger role in the coming years.

Cross-fertilisation in detector development https://cerncourier.com/a/cross-fertilisation-in-detector-development/ Wed, 08 May 2019 09:30:24 +0000 https://preview-courier.web.cern.ch?p=83050 More than 300 experts convened from 18-22 February for the 15th Vienna Conference on Instrumentation.


More than 300 experts convened from 18-22 February for the 15th Vienna Conference on Instrumentation to discuss ongoing R&D efforts and set future roadmaps for collaboration. “In 1978 we discussed wire chambers as the first electronic detectors, and now we have a large number of very different detector types with performances unimaginable at that time,” said Manfred Krammer, head of CERN’s experimental physics department, recalling the first conference of the triennial series. “In the long history of the field we have seen the importance of cross-fertilisation as developments for one specific experiment can catalyse progress in many fronts.”

Following this strong tradition, the conference covered fundamental and technological issues associated with the most advanced detector technologies as well as the value of knowledge transfer to other domains. Over five days, participants covered topics ranging from sensor types and fast and efficient electronics to cooling technologies and their mechanical structures.

Contributors highlighted experiments proposed in laboratories around the world, spanning gravitational-wave detectors, colliders, fixed-target experiments, dark-matter searches, and neutrino and astroparticle experiments. A number of talks covered upgrade activities for the LHC experiments ahead of LHC Run 3 and for the high-luminosity LHC. An overview of LIGO called for serious planning to ensure that future ground-based gravitational-wave detectors can be operational in the 2030s. Drawing a comparison between the observation of gravitational waves and the discovery of the Higgs boson, Christian Joram of CERN noted: “Progress in experimental physics often relies on breakthroughs in instrumentation that lead to substantial gains in measurement accuracy, efficiency and speed, or even open completely new approaches.”

Beyond innovative ideas and cross-disciplinary collaboration, the development of new detector technologies calls for good planning of resources and time. The R&D programme for the current LHC upgrades was set out in 2006, and it is already time to start preparing for the third long shutdown in 2023 and the High-Luminosity LHC. Meanwhile, the CLIC and Future Circular Collider studies are developing clear ideas of the future experimental challenges in tackling the next exploration frontier.

Deciphering elementary particles https://cerncourier.com/a/deciphering-elementary-particles/ Fri, 22 Mar 2019 14:01:04 +0000 https://preview-courier.web.cern.ch?p=13671 Former CERN physicist Christian Fabjan takes a whirlwind tour of 60 years of innovation in particle-detection technology at CERN and beyond.

Particle physics began more than a century ago with the discoveries of radioactivity, the electron and cosmic rays. Photographic plates, gas-filled counters and scintillating substances were the early tools of the trade. Studying cloud formation in moist air led to the invention of the cloud chamber, which, in 1932, enabled the discovery of the positron. The photographic plate soon morphed into nuclear-emulsion stacks, and the Geiger tube of the Geiger–Marsden–Rutherford experiments developed into the workhorse for cosmic-ray studies. The bubble chamber, invented in 1952, represented the culmination of these “imaging detectors”, using film as the recording medium. Meanwhile, in the 1940s, the advent of photomultipliers had opened the way to crystal-based photon and electron energy measurements and Cherenkov detectors. This was the toolbox of the first half of the 20th century, credited with a number of groundbreaking discoveries that earned the toolmakers and their artisans more than 10 Nobel Prizes.

extraction of the ALICE time projection chamber

Game changer

The invention of the Multi Wire Proportional Chamber (MWPC) by Georges Charpak in 1968 was a game changer, earning him the 1992 Nobel Prize in Physics. Suddenly, experimenters had access to large-area charged-particle detectors with millimetre spatial resolution and staggering MHz-rate capability. Crucially, the emerging integrated-circuit technology could deliver amplifiers small and cheap enough to equip many thousands of proportional wires. This ingenious and deceptively simple detector is relatively easy to construct. The workshops of many university physics departments could master the technology, attracting students and “democratising” particle physics. So compelling was experimentation with MWPCs that within a few years, large detector facilities with tens of thousands of wires were constructed – witness the Split Field Magnet at CERN’s Intersecting Storage Rings (ISR). Its rise to prominence was unstoppable: it became the detector of choice for the Proton Synchrotron, Super Proton Synchrotron (SPS) and ISR programmes. An extension of this technique is the drift chamber, an MWPC-type geometry with which the time difference between the passage of the particle and the onset of the wire signal is recorded, providing a measure of position with 100 µm-level resolution. The MWPC concept lends itself to a multitude of geometries and has found its “purest” application as the readout of time projection chambers (TPCs). Modern derivatives replace the wire planes with metallised foils with holes in a sub-millimetre pattern, amplifying the ionisation signals.

The ambition, style and success of these large, global collaborations was contagious

The ISR was a hotbed for accelerator and detector inventions. The world’s first proton–proton collider, an audacious project, was clearly ahead of its time and the initial experiments could not fully exploit its discovery potential. It prompted, however, the concept of multi-purpose facilities capable of obtaining “complete” collision information. For the first time, a group developed and used transition-radiation detectors for electron detection and liquid-argon calorimetry. The ISR’s Axial Field Spectrometer (AFS) provided high-quality hadron calorimetry with close to 4π coverage. These technologies are now widely used at accelerators and for non-accelerator experiments. The stringent performance requirements for experiments at the ISR encouraged the detector developers to explore and reach a measurement quality only limited by the laws of detector physics: science-based procedures had replaced the “black magic” of detector construction. With collision rates in the 10 MHz range, these experiments (and the ISR) were forerunners of today’s Large Hadron Collider (LHC) experiments. Of course, the ISR is most famous for its seminal accelerator developments, in particular the invention of stochastic cooling, which was the enabling technology for converting the SPS into a proton–antiproton collider.

The SPS marked another moment of glory for CERN. In 1976 first beams were accelerated to 400 GeV, initiating a diverse physics programme and motivating a host of detector developments. Advances in semiconductor technology led to the silicon-strip detector. With the experiments barely started, Carlo Rubbia and collaborators launched the idea, as ingenious as it was audacious, to convert the SPS into a proton–antiproton collider. The goal was clear: orchestrate quickly and rather cheaply a machine with enough collision energy to produce the putative W and Z bosons. Simon van der Meer’s stochastic-cooling scheme had to deliver the required beam intensity and lifetime, and two experimental teams were charged with the conception and construction of the equally novel detectors. The centrepiece of the UA1 detector was a 6 m-long and 2 m-diameter “electronic bubble chamber”, which adapted the drift-chamber concept to the event topology and collision rate, combined with state-of-the-art electronic readout. The electronic images were of such illuminating quality that “event scanning”, the venerable bubble-chamber technique, was again a key tool in data analysis. The UA2 team pushed calorimetry and silicon detectors to new levels of performance, provided healthy competition and independent discoveries. The discovery of the W and Z bosons was achieved in 1983 and, the following year, Rubbia and van der Meer became Nobel Laureates.

Laying foundations

In 1981, with the approval of the Large Electron–Positron (LEP) collider, the community laid the foundation for decades of research at CERN. Mastering the new scale of the accelerator dimension also brought a new approach to managing the larger experimental collaborations and to meeting their more stringent experimental requirements. For the first time, mostly outside collaborators developed and built the experimental apparatus, a non-trivial but necessary success in technology transfer. The detection techniques reached a new level of maturity. Silicon-strip detectors became ubiquitous. Gaseous tracking in a variety of forms, such as TPCs and jet chambers, reached new levels of size and performance. There were also some notable firsts. The DELPHI collaboration developed the Ring Imaging Cherenkov Counter, a delicate technology in which the distribution of Cherenkov photons, imaged with mirrors onto photon-sensitive MWPC-type detectors, provides a measure of the particle’s velocity. The L3 collaboration aimed at ultimate-precision energy measurements of muons, photons and electrons, and put its money on a recently discovered scintillating crystal, bismuth germanate. Particle physicists, material scientists and crystallographers from academia and industry transformed this laboratory curiosity into mass-producible technology: ultimately, 12,000 crystals were grown, cut to size as truncated pyramids and assembled into the calorimeter, a pioneering trendsetter.

the multi-wire proportional chamber

The ambition, style and success of these large, global collaborations was contagious. It gave the cosmic-ray community a new lease of life. The Pierre Auger Observatory, one of whose initiators was particle physicist and Nobel Laureate James Cronin, explores cosmic rays at extreme energies with close to 2000 detector stations spread over an area of 3000 km². The IceCube collaboration has instrumented around a cubic kilometre of Antarctic ice to detect neutrinos. One of the most ambitious experiments is the Alpha Magnetic Spectrometer, hosted by the International Space Station – again with a particle physicist and Nobel Prize winner, Samuel Ting, as a prime mover and shaker.

These decade-long efforts in experimentation find their present culmination at the LHC. Experimenters had to innovate on several fronts: all detector systems were designed for and had to achieve ultimate performance, limited only by the laws of physics; the detectors must operate at collision rates of a GHz or more, generating some 100 billion particles per second. “Impossible” was many an expert’s verdict in the early 1990s. The successful collaboration with industry giants in the IT and electronics sectors was a life-saver; and achieving all this – fraught with difficulties, technical and sociological – in international collaborations of several thousand scientists and engineers was an immense achievement. All existing detection technologies – ranging from silicon tracking to transition-radiation and RICH detectors, liquid-argon, scintillator and crystal calorimeters, and 10,000 m³-scale muon spectrometers – needed novel ideas, major improvements and daring extrapolations. The success of the LHC experiments is beyond the wildest dreams: hundreds of measurements achieve a precision previously considered possible only at electron–positron colliders. The Higgs boson, discovered in 2012, will be part of the research agenda for most of the 21st century, and CERN is in the starting blocks with ambitious plans.

Sharing with society

Worldwide, more than 30,000 accelerators are in operation. Particle and nuclear physics research uses barely more than 100 of them. Society is the principal client, and many of the accelerator innovations and particle detectors have found their way into industry, biology and health applications. A class of accelerators, to which CERN has contributed significantly, is specifically dedicated to tumour therapy. Particle detectors have made a particular impact on medical imaging, such as positron emission tomography (PET), whose origin dates back to CERN with a MWPC-based detector in the 1970s. Today’s clinical PETs use crystals, very similar to those used in the discovery of the Higgs boson.

Possibly the most important benefit of particle physics to society is the collaborative approach developed by the community, which underpins the incredible success that has led us to the LHC experiments today. There are no signs that the rate of innovation in detectors and instrumentation is slowing. Currently the LHC experiments are undergoing major upgrades and plans for the next generation of experiments and colliders are already well under way. These collaborations succeed in being united and driven by a common goal, bridging cultural and political divides. 

Physics Beyond Colliders initiative presents main findings https://cerncourier.com/a/physics-beyond-colliders-initiative-presents-main-findings/ Mon, 11 Mar 2019 17:01:26 +0000 https://preview-courier.web.cern.ch?p=13565 Experiments have the potential to explore open questions in QCD and to search for hidden-sectors in which dark matter does not couple directly to Standard Model particles.


In a workshop held at CERN on 16–17 January, researchers presented the findings of the Physics Beyond Colliders (PBC) initiative, which was launched in 2016 to explore the opportunities at CERN via projects complementary to the LHC and future colliders (CERN Courier November 2016 p28). PBC members have weighed up the potential for such experiments to explore open questions in QCD and the existence of physics beyond the Standard Model (BSM), in particular including searches for signatures of hidden-sector models in which the conjectured dark matter does not couple directly to Standard Model particles.

The BSM and QCD groups of the PBC initiative have developed detailed studies of CERN’s options and compared them to other worldwide possibilities. The results show the international competitiveness of the PBC options.

The Super Proton Synchrotron (SPS) remains a clear attraction, offering the world’s highest-energy beams to fixed-target experiments in the North Area (see Fixed target, striking physics). The SPS high-intensity muon beam could allow a better understanding of the theoretical prediction of the muon anomalous magnetic moment (MUonE project), and a significant contribution to the resolution of the proton radius puzzle by COMPASS(Rp). The NA61 experiment could explore QCD in the interesting region of “criticality”, while upgrades of NA64 and a few months of NA62 operation in beam-dump mode (whereby a target absorbs most of the incident protons and contains most of the particles generated by the primary beam interactions) would explore the hidden-sector parameter space. In the longer term, the KLEVER experiment could probe rare decays of neutral kaons, and NA60 and DIRAC could enhance our understanding of QCD.

A novel North Area proposal is the SPS Beam Dump Facility (BDF). Such a facility could, in the first instance, serve the SHiP experiment, which would perform a comprehensive investigation of the hidden sector with discovery potential in the MeV–GeV mass range, and the TauFV experiment, which would search for forbidden τ decays. The BDF team has made excellent progress with the facility design and is preparing a comprehensive design study report. Options for more novel exploitation of the SPS have also been considered: proton-driven plasma-wakefield acceleration of electrons for a dark-matter experiment (AWAKE++); the acceleration and slow extraction of electrons to light dark-matter experiments (eSPS); and the production of well-calibrated neutrinos via a muon decay ring (nuSTORM).

Fixed-target studies at the LHC are also considered within PBC, and these could improve our understanding of QCD in regions where it is relevant for new-physics searches at the high-luminosity LHC upgrade. The LHC could also be supplemented with new experiments to search for long-lived particles, and PBC support for a small experiment called FASER has helped pave the way for its installation in the ongoing long shutdown of CERN’s accelerator complex.

2018 was a notable year for the gamma factory, a novel concept that would use the LHC to produce intense gamma-ray beams for precision measurements and searches (CERN Courier November 2017 p7). The team has already demonstrated the acceleration of partially stripped ions in the LHC, and is now working towards a proof-of-principle experiment in the SPS. Meanwhile, the Charged Particle Electric Dipole Moment (CPEDM) collaboration has continued studies, supported by experiments at the COSY synchrotron in Germany (CERN Courier September 2016 p27), towards a prototype storage ring to measure the proton EDM.

The PBC technology team has also been working to apply CERN’s skills base to novel experiments, for example by exploring synergies across experiments and collaboration in technologies – in particular, concerning light-shining-through-walls experiments and QED vacuum-birefringence measurements.

Finally, some PBC projects are likely to flourish outside CERN: the IAXO axion helioscope, now under consideration at DESY; the proton EDM ring, which could be prototyped at the Jülich laboratory, also in Germany; and the REDTOP experiment devoted to η meson rare decays, for which Fermilab in the US seems better suited.

The PBC groups have submitted their full findings to the European Particle Physics Strategy Update (http://pbc.web.cern.ch/).

Fixed target, striking physics https://cerncourier.com/a/fixed-target-striking-physics/ Mon, 11 Mar 2019 16:00:54 +0000 https://preview-courier.web.cern.ch?p=13522 A strong tradition of innovation and ingenuity shows that, for CERN’s North Area, life really does begin at 40.


As generations of particle colliders have come and gone, CERN’s fixed-target experiments have remained a backbone of the lab’s physics activities. Notable among them are those fed by the Super Proton Synchrotron (SPS). Throughout its long service to CERN’s accelerator complex, the 7 km-circumference SPS has provided a steady stream of high-energy proton beams to the North Area at the Prévessin site, feeding a wide variety of experiments. Sequentially named, they range from the pioneering NA1, which measured the photoproduction of vector and scalar bosons, to today’s NA64, which studies the dark sector. As the North Area marks 40 years since its first physics result, this hub of experiments large and small is as lively and productive as ever. Its users continue to drive developments in detector design, while reaping a rich harvest of fundamental physics results.

Specialised and precise

In fixed-target experiments, a particle beam collides with a target that is stationary in the laboratory frame, in most cases producing secondary particles for specific studies. High-energy machines like the SPS, which produces proton beams with a momentum up to 450 GeV/c, give the secondary products a large forward boost, providing intense sources of secondary and tertiary particles such as electrons, muons and hadrons. With respect to collider experiments, fixed-target experiments tend to be more specialised and focus on precision measurements that demand very high statistics, such as those involving ultra-rare decays.
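The price of that forward boost is a much lower centre-of-mass energy than a collider delivers with the same beam. For a beam of energy E striking a stationary target of mass m (with E ≫ m),

\[ \sqrt{s} \simeq \sqrt{2\,E\,m} , \]

so a 450 GeV proton hitting a proton at rest yields √s ≈ 29 GeV, compared with 900 GeV if two such beams collided head-on. This is why colliders command the energy frontier, while fixed-target experiments trade energy for intensity.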

Fixed-target experiments have a long history at CERN, forming essential building blocks in the physics landscape in parallel to collider facilities. Among these were the first studies of the quark–gluon plasma, the first evidence of direct CP violation and a detailed understanding of how nucleon spin arises from quarks and gluons. The first muons in CERN’s North Area were reported at the start of the commissioning run in March 1978, and the first physics publication – a measurement of the production rate of muon pairs by quark–antiquark annihilation as predicted by Drell and Yan – was published in 1979 by the NA3 experiment. Today, the North Area’s physics programme is as vibrant as ever.

The longevity of the North Area programme is explained by the unique complex of proton accelerators at CERN, where each machine is not only used to inject the protons into the next one but also serves its own research programme (for example, the Proton Synchrotron Booster serves the ISOLDE facility, while the Proton Synchrotron serves the Antiproton Decelerator and the n_TOF experiment). Fixed-target experiments using protons from the SPS started taking data while the ISR collider was already in operation in the late 1970s, continued during SPS operation as a proton–antiproton collider in the early 1980s, and again during the LEP and now LHC eras. As has been the case with collider experiments, physics puzzles and unexpected results were often at the origin of unique collaborations and experiments, pushing limits in several technology areas such as the first use of silicon-microstrip detectors.

The initial experimental programme in the North Area involved two large experimental halls: EHN1 for hadronic studies and EHN2 for muon experiments. The first round of experiments in EHN1 concerned studies of: meson photoproduction (NA1); electromagnetic form factors of pions and kaons (NA7); hadronic production of particles with large transverse momentum (NA3); inelastic hadron scattering (NA5); and neutron scattering (NA6). In EHN2 there were experiments devoted to studies with high-intensity muon beams (NA2 and NA4). A third, underground, area called ECN3 was added in 1980 to host experiments requiring primary proton beams and secondary beams of the highest intensity (up to 1010 particles per cycle).

Experiments in the North Area started a bit later than those in CERN’s West Area, which started operation in 1971 with 28 GeV/c protons supplied by the PS. Built to serve the last stage of the PS neutrino programme and the Omega spectrometer, the West Area zone was transformed into an SPS area in 1975 and is best known for seminal neutrino experiments (by the CDHS and CHARM collaborations, later CHORUS and NOMAD) and hadron-spectroscopy experiments with Omega. We are now used to identifying experimental collaborations by means of fancy acronyms such as ATLAS or ALICE, to mention two of the large LHC collaborations. But in the 1970s and the 1980s, one could distinguish between the experiments (identified by a sequential number) and the collaborations (identified by the list of the cities hosting the collaborating institutes). For instance CDHS stood for the CERN–Dortmund–Heidelberg–Saclay collaboration that operated the WA1 experiment in the West Area.

Los Alamos, SLAC, Fermilab and Brookhaven National Laboratory in the US, JINR and the Institute for High Energy Physics in Russia, and KEK in Japan, for example, also all had fixed-target programmes, some of which date back to the 1960s. As fixed-target programmes got into their stride, however, colliders were commanding the energy frontier. In 1980 the CERN North Area experimental programme was reviewed in a special meeting held in Cogne, Italy, and it was not completely obvious that there was a compelling physics case ahead. But the review also led to highly optimised installations thanks to strong collaborations and continuous support from the CERN management. Advances in detectors and innovations such as silicon detectors and aerogel Cherenkov counters, plus the hybrid integration of bubble chambers with electronic detectors, led to a revamp in the study of hadron interactions at fixed-target experiments, especially for charmed mesons.

Physics landscape

Experiments at CERN’s North Area began shortly after the Standard Model had been established, when the scale of experiments was smaller than it is today. According to the 1979 CERN annual report, there were 34 active experiments at the SPS (West and North areas combined) and 14 were completed in 1978. This article cannot do justice to all of them, not even to those in the North Area. But over the past 40 years the experimental programme has clearly evolved into at least four main themes: probing nucleon structure with high-energy muons; hadroproduction and photoproduction at high energy; CP violation in very rare decays; and heavy-ion experiments (see “Forty years of fixed-target physics at CERN’s North Area”).

Aside from seminal physics results, fixed-target experiments at the North Area have driven numerous detector innovations. This is largely a result of their simple geometry and ease of access, which allows more adventurous technical solutions than might be possible with collider experiments. Examples of detector technologies perfected at the North Area include: silicon microstrips and active targets (NA11, NA14); rapid-cycling bubble chambers (NA27); holographic bubble chambers (NA25); Cherenkov detectors (CEDAR, RICH); liquid-krypton calorimeters (NA48); micromegas gas detectors (COMPASS); silicon pixels with 100 ps time resolution (NA62); time-projection chambers with dE/dx measurement (ISIS, NA49); and many more. The sheer amount of data to be recorded in these experiments also led to the very early adoption of PC farms for the online systems of the NA48 and COMPASS experiments.

Another key function of the North Area has been to test and calibrate detectors. These range from the fixed-target experiments themselves to experiments at colliders (such as LHC, ILC and CLIC), space and balloon experiments, and bent-crystal applications (such as UA9 and NA63). New detector concepts such as dual-readout calorimetry (DREAM) and particle-flow calorimetry (CALICE) have also been developed and optimised. Recently the huge EHN1 hall was extended by 60 m to house two very large liquid-argon prototype detectors to be tested for the Deep Underground Neutrino Experiment under construction in the US.

If there is an overall theme concerning the development of the fixed-target programme in the North Area, one could say that it was to be able to quickly evolve and adapt to address the compelling questions of the day. This looks set to remain true, with many proposals for new experiments appearing on the horizon, ranging from the study of very rare decays and light dark matter to the study of QCD with hadron and heavy-ion beams. There is even a study under way to possibly extend the North Area with an additional very-high-intensity proton beam serving a beam dump facility. These initiatives are being investigated by the Physics Beyond Collider study (see p20), and many of the proposals explore the high-intensity frontier complementary to the high-energy frontier at large colliders. Here’s to the next 40 years of North Area physics!

Forty years of fixed-target physics at CERN's North Area

Probing nucleon structure with high-energy muons

High-energy muons are excellent probes with which to investigate the structure of the nucleon. The North Area’s EHN2 hall was built to house two sets of muon experiments: the sequential NA2/NA9/NA28 (also known as the European Muon Collaboration, EMC), which made the observation that nucleons bound in nuclei are different from free nucleons; and NA4 (pictured), which confirmed electroweak interference between the weak and electromagnetic interactions. A particular success of the North Area’s muon experiments concerned the famous “proton spin crisis”. In the late 1980s, contrary to the expectation of the otherwise successful quark–parton model, data showed that the proton’s spin is not carried by the quark spins. This puzzle interested the community for decades, compelling CERN to further investigate by building the NA47 Spin Muon collaboration experiment in the early 1990s (which established the same result for the neutron) and, subsequently, the COMPASS experiment (which studied the contribution of the gluon spins to the nucleon spin). A second phase of COMPASS, still ongoing today, is devoted to nucleon tomography using deeply virtual Compton scattering and, for the first time, polarised Drell–Yan reactions. Hadron spectroscopy is another area of research at the North Area, and among recent important results from COMPASS is the measurement of pion polarisability, which is an important test of low-energy QCD.

Hadroproduction and photoproduction at high energy

Following the first experiment to publish data in the North Area (NA3) concerning the production of μ⁺μ⁻ pairs from hadron collisions, the ingenuity to combine bubble chambers and electronic detectors led to a series of experiments. The European Hybrid Spectrometer facility housed NA13, NA16, NA22, NA23 and NA27, and studied charm production and many aspects of hadronic physics, while photoproduction of heavy bosons was the primary aim of NA1. A measurement of the charm lifetime using the first ever microstrip silicon detectors was pioneered by the ACCMOR collaboration (NA11/NA32; see image of Robert Klanner next to the ACCMOR spectrometer in 1977), and hadron spectroscopy with neutral final states was studied by NA12 (GAMS), which employed a large array of lead-glass counters, in particular in a search for glueballs. To study μ⁺μ⁻ pairs from pion interactions at the highest possible intensities, the toroidal spectrometer NA10 was housed in the ECN3 underground cavern. Nearby in the same cavern, NA14 used a silicon active target and the first big microstrip silicon detectors (10,000 channels) to study charm photoproduction at high intensity. Later, experiment NA30 enabled a direct measurement of the π⁰ lifetime by employing thin gold foils to convert the photons from the π⁰ decays. Today, electron beams are used by NA64 to look for dark photons while hadron spectroscopy is still actively pursued, in particular at COMPASS.

CP violation and very rare decays

The discovery of CP violation in the decay of the long-lived neutral kaon to two pions at Brookhaven National Laboratory in 1964 was unexpected. To understand its origin, physicists needed to make a subtle comparison (in the form of a double ratio) between long- and short-lived neutral kaon decays into pairs of neutral and of charged pions. In 1987 an ambitious experiment (NA31) showed a deviation of the double ratio from unity, providing the first evidence of direct CP violation (that is, it happens in the decay of the neutral mesons, not only in the mixing between neutral kaons). A second-generation experiment (NA48, pictured in 1996), located in ECN3 to accept a much higher primary-proton intensity, was able to measure the four decay modes concurrently thanks to the deflection of a tiny fraction of the primary proton beam into a downstream target via channelling in a “bent” crystal. NA48 was approved in 1991 when it became evident that more precision was needed to confirm the original observation (a competing programme at Fermilab called E731 did not find a significant deviation from the unity of the double ratio). Both KTeV (the follow-up Fermilab experiment) and NA48 confirmed NA31’s results, firmly establishing direct CP violation. Continuations of the NA48 experiments studied rare decays of the short-lived neutral kaon and searched for direct CP violation in charged kaons. Nowadays the kaon programme continues with NA62, which is dedicated to the study of very rare K⁺ → π⁺νν̄ decays and is complementary to the B-meson studies performed by the LHCb experiment.
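For reference, the double ratio at the heart of these measurements compares the two-pion decay rates of the long- and short-lived neutral kaons and is related, to first order in the small parameters, to the direct-CP-violation quantity ε′/ε:

\[ R = \frac{\Gamma(K_L \to \pi^0\pi^0)\,/\,\Gamma(K_S \to \pi^0\pi^0)}{\Gamma(K_L \to \pi^+\pi^-)\,/\,\Gamma(K_S \to \pi^+\pi^-)} \simeq 1 - 6\,\mathrm{Re}(\varepsilon'/\varepsilon) , \]

so a double ratio measurably below unity signals direct CP violation.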

Heavy-ion experiments

In the mid-1980s, with a view to reproduce in the laboratory the plasma of free quarks and gluons predicted by QCD and believed to have existed in the early universe, the SPS was modified to accelerate beams of heavy ions and collide them with nuclei. The lack of a single striking signature of the formation of the plasma demands that researchers look for as many final states as possible, exploiting the evolution of standard observables (such as the yield of muon pairs from the Drell–Yan process or the production rate of strange quarks) as a function of the degree of overlap of the nuclei that participate in the collision (centrality). By 2000 several experiments had, according to CERN Courier in March that year, found “tantalising glimpses of mechanisms that shaped our universe”. The experiments included NA44, NA45, NA49, NA50, NA52 and NA57, as well as WA97 and WA98 in the West Area. Among the most popular signatures observed was the suppression of the J/ψ yield in ion–nucleus collisions with respect to proton–proton collisions, which was seen by NA50. Improved sensitivity to muon pairs was provided by the successor experiment NA60. The current heavy-ion programme at the North Area includes NA61/SHINE (see image), the successor of NA49, which is studying the onset of phase transitions in dense quark–gluon matter at different beam energies and for different beam species. Studies of the quark–gluon plasma continue today, in particular at the LHC and at RHIC in the US. At the same time, NA61/SHINE is measuring the yield of mesons from replica targets for neutrino experiments worldwide and particle production for cosmic-ray studies.

German–Japanese centre to focus on precision physics https://cerncourier.com/a/german-japanese-centre-to-focus-on-precision-physics/ Thu, 24 Jan 2019 09:00:55 +0000 https://preview-courier.web.cern.ch/?p=13081 The Centre for Time, Constants and Fundamental Symmetries will offer access to ultra-sensitive equipment for atomic and nuclear physics, antimatter research, quantum optics and metrology.

On time

On 1 January a new virtual centre devoted to some of the most precise measurements in science was established by researchers in Germany and Japan. The Centre for Time, Constants and Fundamental Symmetries will offer access to ultra-sensitive equipment to allow experimental groups in atomic and nuclear physics, antimatter research, quantum optics and metrology to collaborate closely on fundamental measurements. Three partners – the Max Planck Institutes for nuclear physics (MPI-K) and for quantum optics (MPQ), the National Metrology Institute of Germany (PTB) and RIKEN in Japan – agreed to fund the centre in equal amounts with a total of around €7.5 million for five years, and scientific activities will be coordinated at MPI-K.

A major physics target of the German–Japanese centre is to investigate whether the fundamental constants really are constant or whether they change in time by tiny amounts. Another goal concerns subtle differences between the properties of matter and antimatter, probed through tests of C, P and T invariance; such differences have not yet shown up, even though they must exist: otherwise the universe would consist of almost pure radiation. Closely related to these tests of fundamental symmetries is the search for physics beyond the Standard Model. The broad research portfolio also includes the development of novel optical clocks based on atoms, nuclei and highly charged ions.

“It is fascinating that nowadays manageable laboratory experiments make it possible to investigate such fundamental questions in physics and cosmology by means of their high precision”, says Klaus Blaum of MPI-K.

Stringent tests of fundamental interactions and symmetries using the protons and antiprotons available at the BASE experiment at CERN are another key aspect of the German–Japanese initiative, explains Stefan Ulmer, co-director of the centre, chief scientist at RIKEN, and spokesperson of the BASE experiment: “This centre will strongly promote fundamental physics in general, in addition to the research goals of BASE. Given this support we are developing new equipment to improve both the precision of the proton-to-antiproton charge-to-mass ratio as well as the proton/antiproton magnetic moment comparison by factors of 10 to 100.”

To reach these goals, the researchers intend to develop novel experimental techniques – such as transportable antiproton traps, sympathetic cooling of antiprotons by laser-cooled beryllium ions, and optical clocks based on highly charged ions and thorium nuclei – which will outperform contemporary methods and enable measurements at even shorter time scales and with improved sensitivity. “The combined precision-physics expertise of the individual groups with their complementary approaches and different methods using traps and lasers has the potential for substantial progress,” says Ulmer. “The low-energy, ultra-high-precision investigations for physics beyond the Standard Model will complement studies in particle physics.”

Large Hadron Collider: the experiments strike back https://cerncourier.com/a/large-hadron-collider-the-experiments-strike-back/ Thu, 24 Jan 2019 09:00:44 +0000 https://preview-courier.web.cern.ch/?p=13034 During the next two years of long-shutdown two (LS2), the LHC and its injectors will be tuned up for high-luminosity operations.

Forging ahead

The features in this first issue of 2019 bring you all the shutdown news from the seven LHC experiments, and what to expect when the souped-up detectors come back online in 2021.

During the next two years of long-shutdown two (LS2), the LHC and its injectors will be tuned up for high-luminosity operations: Linac2 will leave the floor to Linac4 to enable more intense beams; the Proton Synchrotron Booster will be equipped with completely new injection and acceleration systems; and the Super Proton Synchrotron will have new radio-frequency power. The LHC is also being tested for operation at its design energy of 14 TeV, while, in the background, civil-engineering works for the high-luminosity upgrade (HL-LHC), due to enter service in 2026, are proceeding apace.

The past three years of Run 2 at a proton–proton collision energy of 13 TeV have seen the LHC achieve record peak and integrated luminosities, forcing the detectors to operate at their limits. Now, the four main experiments ALICE, ATLAS, CMS and LHCb, and the three smaller experiments LHCf, MoEDAL and TOTEM, are gearing up for the extreme conditions of Run 3 and beyond.

At the limits

From the beginning of the LHC programme, it was clear that the original detectors would last for only approximately a decade due to radiation damage. That time has now come. Improvements, repairs and upgrades have been taking place in the LHC detectors throughout the past decade, but significant activities will take place during LS2 (and LS3, beginning 2024), capitalising on technology advances and the ingenuity of thousands of people over a period of several years. Combined, the technical design reports for the LHC experiment upgrades number some 20 volumes, each containing hundreds of pages.

Wired

For LHCb, the term “upgrade” hardly does it justice, since large sections of the detector are to be completely replaced and a new trigger system is to be installed (LHCb’s momentous metamorphosis). ALICE too is undergoing major interventions to its inner detectors during LS2 (ALICE revitalised), and both collaborations are installing new data centres to deal with the higher data rate from future LHC runs. ATLAS and CMS are upgrading numerous aspects of their detectors while at the same time preparing for major installations during LS3 for HL-LHC operations (CMS has high luminosity in sight and ATLAS upgrades in LS2). At the HL-LHC, one year of collisions is equivalent to 10 years of LHC operations in terms of radiation damage. Even more challenging, HL-LHC will deliver a mean event pileup of up to 200 interactions per beam crossing – several times greater than today – requiring totally new trigger and other capabilities.
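
As a rough guide to where such pileup figures come from, the mean number of interactions per bunch crossing is the instantaneous luminosity multiplied by the inelastic proton–proton cross-section and divided by the colliding-bunch crossing rate. A minimal sketch, in which the cross-section (about 80 mb) and the bunch-filling numbers are assumed, representative values rather than official machine parameters:

```python
def mean_pileup(lumi_cm2_s, sigma_inel_cm2=80e-27, n_bunches=2556, f_rev_hz=11245.0):
    """Average number of pp interactions per bunch crossing."""
    crossing_rate = n_bunches * f_rev_hz   # colliding-bunch crossings per second
    return lumi_cm2_s * sigma_inel_cm2 / crossing_rate

print(round(mean_pileup(2.0e34)))   # Run-2 peak luminosity -> roughly 55
print(round(mean_pileup(7.5e34)))   # HL-LHC ultimate scenario -> roughly 200
```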

Three smaller experiments at the LHC are also taking advantage of LS2. TOTEM, which comprises two detectors located 220 m either side of CMS to measure elastic proton–proton collisions (see “Forging ahead” image), aims to perform total-cross-section measurements at maximal LHC energies. For this, the collaboration is building a new scintillator detector to be integrated in CMS, in addition to service work on its silicon-strip and spectrometer detectors.

Forward physics

Another “forward” experiment called LHCf, made up of two detectors 140 m either side of ATLAS, uses forward particles produced by the LHC collisions to improve our knowledge about how cosmic-ray showers develop in Earth’s atmosphere. Currently, the LHCf detectors are being prepared for 14 TeV proton–proton operations, higher luminosities and also for the possibility of colliding protons with light nuclei such as oxygen, requiring a completely renewed data-acquisition system. Finally, physicists at MoEDAL, a detector deployed around the same intersection region as LHCb to look for magnetic monopoles and other signs of new physics, are preparing a request to take data during Run 3. For this, among other improvements, a new sub-detector called MAPP will be installed to extend MoEDAL’s physics reach to long-lived and fractionally charged particles.

The seven LHC experiments are also using LS2 to extend and deepen their analyses of the Run-2 data. Depending on what lies there, the collaborations could have more than just shiny new detectors on their hands by the time they come back online in the spring of 2021.

Actinide series shown to end with lawrencium https://cerncourier.com/a/actinide-series-shown-to-end-with-lawrencium/ Thu, 24 Jan 2019 09:00:29 +0000 https://preview-courier.web.cern.ch/?p=13074 Scientists have determined the ionisation potentials from fermium to lawrencium, confirming the filling of the 5f shell in the heavy actinides.

Heavy elements

One hundred and fifty years since Dmitri Mendeleev revolutionised chemistry with the periodic table of the elements, an international team of researchers has resolved a longstanding question about one of its more mysterious regions – the actinide series (or actinoids, as adopted by the International Union of Pure and Applied Chemistry, IUPAC).

The periodic table’s neat arrangement of rows, columns and groups is a consequence of the electronic structures of the chemical elements. The actinide series has long been identified as a group of heavy elements starting with atomic number Z = 89 (actinium) and extending up to Z = 103 (lawrencium), each of which is characterised by a stabilised 7s2 outer electron shell. But the electron configurations of the heaviest elements of this sequence, from Z = 100 (fermium) onwards, have been difficult to measure, preventing confirmation of the series. The reason for the difficulty is that elements heavier than fermium can be produced only one atom at a time in nuclear reactions at heavy-ion accelerators.

Confirmation

Now, Tetsuya Sato at the Japan Atomic Energy Agency (JAEA) and colleagues have used a surface ion source and isotope mass-separation technique at the tandem accelerator facility at JAEA in Tokai to show that the actinide series ends with lawrencium. “This result, which would confirm the present representation of the actinide series in the periodic table, is a serious input to the IUPAC working group, which is evaluating if lawrencium is indeed the last actinide,” says team member Thierry Stora of CERN.

Using the same technique, Sato and co-workers measured the first ionisation potential of lawrencium back in 2015. Since this is the energy required to remove the most weakly bound electron from a neutral atom and is a fundamental property of every chemical element, it was a key step towards mapping lawrencium’s electron configuration. The result suggested that lawrencium has the lowest first ionisation potential of all actinides, as expected owing to its weakly bound electron in the 7p1/2 valence orbital. But with only this value the team couldn’t confirm the expected increase of the ionisation values of the heavy actinides up to nobelium (Z = 102). This increase accompanies the filling of the 5f electron shell, in a manner similar to the filling of the 4f electron shell up to ytterbium in the lanthanides.

In their latest study, Sato and colleagues have determined the successive first ionisation potentials from fermium to lawrencium, which is essential to confirm the filling of the 5f shell in the heavy actinides (see figure). The results agree well with those predicted by state-of-the-art relativistic calculations in the framework of QED and confirm that the ionisation values of the heavy actinides increase up to nobelium, while that of lawrencium is the lowest among the series.

The results demonstrate that the 5f orbital is fully filled at nobelium (with the [Rn] 5f¹⁴ 7s² electron configuration, where [Rn] is the radon configuration) and that lawrencium has a weakly bound electron, confirming that the actinides end with lawrencium. The nobelium measurement also agrees well with laser spectroscopy measurements made at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Germany.
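
In standard spectroscopic notation (with [Rn] the radon core), the picture these measurements support is

```latex
\mathrm{No}\ (Z=102):\ [\mathrm{Rn}]\,5f^{14}\,7s^{2}, \qquad \mathrm{Lr}\ (Z=103):\ [\mathrm{Rn}]\,5f^{14}\,7s^{2}\,7p^{1},
```

with the 5f shell complete at nobelium and lawrencium’s extra, weakly bound electron occupying the 7p1/2 orbital referred to above.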

“The experiments conducted by Sato et al. constitute an outstanding piece of work at the top level of science,” says Andreas Türler, a chemist from the University of Bern, Switzerland. “As the authors state, these measurements provide unequivocal proof that the actinide series ends with lawrencium (Z = 103), as the filling of the 5f orbital proceeds in a very similar way to lanthanides, where the 4f orbital is filled. I am already eagerly looking forward to an experimental determination of the ionisation potential of rutherfordium (Z = 104) using the same experimental approach.”

CMS has high luminosity in sight https://cerncourier.com/a/cms-has-high-luminosity-in-sight/ Thu, 24 Jan 2019 09:00:27 +0000 https://preview-courier.web.cern.ch/?p=13045 One of the biggest challenges for the CMS collaboration during LS2 is to prepare its detector for the HL-LHC.

Detector focus

The CMS detector has performed better than was thought possible when it was conceived. Combined with advances in analysis techniques, this has allowed the collaboration to make measurements – such as the coupling between the Higgs boson and bottom quarks – that were once deemed impossible. Indeed, together with its sister experiment ATLAS, CMS has turned the traditional view of hadron colliders as “hammers” rather than “scalpels” on its head.

In exploiting the LHC and its high-luminosity upgrade (HL-LHC) to maximum effect in the coming years, the CMS collaboration has to battle higher overall particle rates, higher “pileup” of superimposed proton–proton collision events per LHC bunch crossing, and higher instantaneous and integrated radiation doses to the detector elements. In the collaboration’s arsenal to combat this assault are silicon sensors able to withstand the levels of irradiation expected, a new high-rate trigger, and detectors with higher granularity or precision timing capabilities to help disentangle piled-up events.

The majority of CMS detector upgrades for the HL-LHC will be installed and commissioned during long-shutdown three (LS3). However, the planned 30-month duration of LS3 imposes logistical constraints that result in a large part of the muon-system upgrade and many ancillary systems (such as cooling, power and environmental control) needing to be installed substantially beforehand. This makes the CMS work plan for LS2 extremely complex, dividing it into three classes of activity: the five-yearly maintenance of the existing detectors and services; the completion of the so-called “phase-1” upgrades necessary for CMS to continue to operate until LS3; and the initial upgrades to detectors, infrastructure or ancillary systems necessary for the HL-LHC. “The challenge of LS2 is to prepare CMS for Run 3 while not neglecting the work needed now to prepare for Run 4,” says technical coordinator Austin Ball.

A dedicated CMS upgrade programme has been planned since the LHC switched on in 2008. It is being carried out in two phases: the first, which started in 2014 during LS1, concerns improvements to deal with a factor-of-two increase over the design instantaneous luminosity delivered in Run 2; and the second relates to the upgrades necessary for the HL-LHC. The phase-1 upgrade is almost complete, thanks to works carried out during LS1 and regular end-of-year technical stops. This included the replacement of the three-layer barrel (two-disk forward) pixel detector with a four-layer barrel (three-disk forward) version, the replacement of photosensors and front-end electronics for some of the hadron calorimeters, and the introduction of a more powerful, FPGA-based, level-1 hardware trigger. LS2 will conclude phase 1 by replacing the photosensors (hybrid photodiodes) of the barrel hadron calorimeter with silicon photomultipliers and by replacing the innermost pixel barrel layer.

Phase-2 activities

But LS2 also sees the start of the phase-2 CMS upgrade, the first step of which is a new beampipe. The collaboration already replaced the beampipe during LS1 with a narrower one to allow the phase-1 pixel detector to reach closer to the interaction point. Now, the plan is to extend the cylindrical section of the beampipe further to provide space for the phase-2 pixel detector with enlarged pseudo-rapidity coverage, to be installed in LS3. In addition, for the muon detectors CMS will install a new gas electron multiplier (GEM) layer in the inner ring of the first endcap disk, upgrade the on-detector electronics of the cathode strip chambers, and lay services for a future GEM layer and improved resistive plate chambers. Several other preparations of the detector infrastructure and services will take place in LS2 to be ready for the major installations in LS3.

Assembly

Work plan

Key elements of the LS2 work plan include: constructing major new surface facilities; modifying the internal structure of the underground cavern to accommodate new detector services (especially CO2 cooling); replacing the beampipe for compatibility with the upgraded tracking system; and improving the powering system of the 3.8 T solenoid to increase its longevity through the HL-LHC era. In addition, the system for opening and closing the magnet yoke for detector access will be modified to accommodate future tolerance requirements and service volumes, and the shielding system protecting detectors from background radiation will be reinforced. Significant upgrades of electrical power, gas distribution and the cooling plant also have to take place during LS2.

The CMS LS2 schedule is now fully established, with a critical path starting with the pixel-detector and beampipe removal and extending through the muon system upgrade and maintenance, installation of the phase-2 beampipe plus the revised phase-1 pixel innermost layer, and, after closing the magnet yoke, re-commissioning of the magnet with the upgraded powering system. The other LS2 activities, including the barrel hadron calorimeter work, will take place in the shadow of this critical path.

Pixel renewal

“The timely completion of the intense LS2 programme, including the construction of the on-site surface infrastructures necessary for the construction, assembly or refurbishment activities of the phase-2 detectors, is critical for a successful CMS phase-2 upgrade,” explains upgrade coordinator Frank Hartmann. “Although still far away, LS3 activities are already being planned in detail.” The future LS3 shutdown will see the CMS tracker completely replaced with a new outer tracker that can provide tracks at 40 MHz to the upgraded level-1 trigger, and with a new inner tracker with extended pseudo-rapidity coverage. The 36 modules of the barrel electromagnetic calorimeter will be removed and their on-detector electronics upgraded to enable the high readout rate, while both current hadron and electromagnetic endcap calorimeters will be replaced with a brand-new system (see “A new era in calorimetry” box). The addition of timing detectors in the barrel and endcaps will allow a 4D reconstruction of collision vertices and, together with the other new and upgraded detectors, reduce the effective event pile-up at the HL-LHC to a level comparable to that already seen.

“The upgraded CMS detector will be even more powerful and able to make even more precise measurements of the properties of the Higgs boson as well as extending the searches for new physics in the unprecedented conditions of the HL-LHC,” says CMS spokesperson Roberto Carlin. 

ATLAS upgrades in LS2 https://cerncourier.com/a/atlas-upgrades-in-ls2/ Thu, 24 Jan 2019 09:00:27 +0000 https://preview-courier.web.cern.ch/?p=13051 New wheel-shaped detectors that allow a better trigger and measurement capability for muons are among numerous transformations taking place.

Iron support

To precisely study the Higgs boson and extend our sensitivity to new physics in the coming years of LHC operations, the ATLAS experiment has a clear upgrade plan in place. Ageing of the inner tracker due to radiation exposure, data volumes that would saturate the readout links, obsolescence of electronics, and a collision environment swamped by up to 200 interactions per bunch crossing are some of the headline challenges facing the 3000-strong collaboration. While many installations will take place during long-shutdown three (LS3), beginning in 2024, much activity is taking place during the current LS2 – including major interventions to the giant muon spectrometer at the outermost reaches of the detector.

The main ATLAS upgrade activities during LS2 are aimed at increasing the trigger efficiency for leptonic and hadronic signatures, especially for electrons and muons with a transverse momentum of at least 20 GeV. To improve the selectivity of the electron trigger, the amount of information used for the trigger decision will be drastically increased: until now, the very fine-grained information produced by the electromagnetic calorimeter has been grouped into “trigger towers” to limit the number and hence the cost of trigger channels, but advances in electronics and the use of optical fibres now allow the transmission of a much larger amount of information at a reasonable cost. By replacing some of the components of the front-end electronics of the electromagnetic calorimeter, the level of segmentation available at the trigger level will be increased fourfold, improving the ability to reject jets and preserve electrons and photons. The ATLAS trigger and data-acquisition systems will also be upgraded during LS2 by introducing new electronics boards that can deal with the more granular trigger information coming from the detector.

New small wheels

Since 2013, ATLAS has been working on a replacement for its “small wheel” forward-muon endcap systems so that they can operate under the much harsher background conditions of the future LHC. The new small wheel (NSW) detectors employ two detector technologies: small-strip thin gap chambers (sTGC) and Micromegas (MM). Both technologies are able to withstand the higher flux of neutrons and photons expected in future LHC interactions, which will produce counting rates as high as 20 kHz cm–2 in the inner part of the NSW, while delivering information for the first-level trigger and muon measurement. The main aim of the NSW is to reduce the fake muon triggers in the forward region and improve the sharpness of the trigger threshold drastically, allowing the same selection power as the present high-level trigger.

Extreme pile up

The first NSW started to take shape at CERN last year. The iron shielding disks (see “Iron support” image), which serve as the support for the NSW detectors in addition to shielding the endcap muon chambers from hadrons, have been assembled, while the services team is installing numerous cables and pipes on the disks. Only a few millimetres of space is available between the disk and the chambers for the cables on one side, and between the disk and the calorimeter on the other side, and the task is made even more difficult by having to work from an elevated platform. In a nearby building, the sTGC chambers coming from the different construction sites are being integrated into full wedges and, later this year, the Micromegas wedges will be integrated and tested at a separate integration site. The construction of the sTGC chambers is taking place in Canada, Chile, China, Israel and Russia, while the Micromegas are being constructed in France, Germany, Greece, Italy and Russia. On a daily basis, cables arrive to be assembled with connectors and tested; piping is cut to length, cleaned and protected until installation; and gas-leak and high-voltage test stations are employed for quality control.

The organisation of LS2 activities is a complex exercise in which the maintenance needs of the detectors have to be addressed in parallel with installation schedules. After a first period devoted to the opening of the detector and the maintenance of the forward muon spectrometer, the first major non-standard operation (scheduled for January) will be to bring the first small wheel to the surface. Having the detector fully open on one side will also allow very important tests for the installation of the new all-silicon inner tracker, which is scheduled to be installed during LS3. The upgrade of the electromagnetic-calorimeter electronics will start in February and continue for about one year, requiring all front-end boards to be dismounted from their crates, modifications to both the boards and the crates, and reinstallation of the modified boards in their original positions. Maintenance of the ATLAS tile calorimeter and inner detector will take place in parallel, a very important aspect of which will be the search for leaks in the front-end cooling system.

Endcap petals

Delicate operation

In August, the first small wheel will be lowered again, allowing the second small wheel to be brought to the surface to make space for the NSW installation foreseen in April 2020. In the same period, all the optical transmission boards of the pixel detector will have to be changed. Following these installations, there will be a long period of commissioning of all the upgraded detectors and the preparation for the installation of the second NSW in the autumn of 2020. At that moment the closing process will start and will last for about three months, including the bake-out of the beam pipe, which is a very delicate and dangerous operation for the pixel detectors of the inner tracker.

A coherent upgrade programme for ATLAS is now fully underway to enable the experiment to fully exploit the physics potential of the LHC in the coming years of high-luminosity operations. Thousands of people around the world in more than 200 institutes are involved, and the technical design reports alone for the upgrade so far number six volumes, each containing several hundred pages. At the end of LS2, ATLAS will be ready to take data in Run 3 with a renewed and better performing detector.

LHCb’s momentous metamorphosis https://cerncourier.com/a/lhcbs-momentous-metamorphosis/ Thu, 24 Jan 2019 09:00:07 +0000 https://preview-courier.web.cern.ch/?p=13056 The LHCb detector is to be totally rebuilt in time for the restart of LHC operations.

Tender loving care

In November 2018 the LHC brilliantly fulfilled its promise to the LHCb experiment, delivering a total integrated proton–proton luminosity of 10 fb–1 from Run 1 and Run 2 combined. This is what LHCb was designed for, and more than 450 physics papers have come from the adventure so far. Having recently finished swallowing these exquisite data, however, the LHCb detector is due some tender loving care.

In fact, during the next 24 months of long-shutdown two (LS2), the 4500 tonne detector will be almost entirely rebuilt. When it emerges from this metamorphosis, LHCb will be able to collect physics events at a rate 10 times higher than today. This will be achieved by installing new detectors capable of sustaining up to five times the instantaneous luminosity seen at Run 2, and by implementing a revolutionary software-only trigger that will enable LHCb to process signal data in an upgraded CPU farm at the frenetic rate of 40 MHz – a pioneering step among the LHC experiments.

Subdetector structure

LHCb is unique among the LHC experiments in that it is asymmetric, covering only one forward region. That reflects its physics focus: B mesons, which, rather than flying out uniformly in all directions, are preferentially produced at small angles (i.e. close to the beam direction) in the LHC’s proton collisions. The detector stretches for 20 m along the beam pipe, with its sub-detectors stacked behind each other like books on a shelf, from the vertex locator (VELO) to a ring-imaging Cherenkov detector (RICH1), the silicon upstream tracker (UT), the scintillating fibre tracker (SciFi), a second RICH (RICH2), the calorimeters and, finally, the muon detector.

The LHCb upgrade was first outlined in 2008, proposed in 2011 and approved the following year at a cost of about 57 million Swiss francs. The collaboration started dismantling the current detector just before the end of 2018 and the first elements of the upgrade are about to be moved underground.

Physics boost

The LHCb collaboration has so far made numerous important measurements in the heavy-flavour sector, such as the first observation of the rare decay B⁰s → µ⁺µ⁻, precise measurement of quark-mixing parameters and the observation of new baryonic and pentaquark states. However, many crucial measurements are currently statistically limited. The LHCb upgrade will boost the experiment’s physics reach by allowing the software trigger to handle an input rate around 30 times higher than before, bringing greater precision to theoretically clean observables.

Under construction

Flowing at an immense rate of 4 TB/s, data will travel from the cavern, straight from the detector electronics via some 9000 optical fibres, each 300 m long, into front-end computers located in a brand-new data centre that is currently nearing completion. There, around 500 powerful custom-made boards will receive the data and transfer it to thousands of processing cores. Current trigger-hardware equipment will be removed, and new front-end electronics have been designed for all the experiment’s sub-detectors to cope with the substantially higher readout rates.
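
A back-of-envelope check of these readout numbers, assuming an average raw event size of roughly 100 kB (an illustrative figure rather than an official LHCb specification):

```python
crossing_rate_hz = 40e6     # bunch-crossing rate read out by the upgraded detector
event_size_bytes = 100e3    # assumed average raw event size (~100 kB, illustrative)
n_fibres = 9000             # optical fibres from the detector to the data centre

throughput_bytes_per_s = crossing_rate_hz * event_size_bytes
print(throughput_bytes_per_s / 1e12, "TB/s")                  # 4.0 TB/s, as quoted above

per_fibre_gbps = throughput_bytes_per_s * 8 / n_fibres / 1e9
print(round(per_fibre_gbps, 1), "Gb/s per fibre on average")  # a few Gb/s per optical link
```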

For the largest and heaviest LHCb devices, namely the calorimeters and muon stations, the detector elements will remain mostly in place. All the other LHCb detector systems are to be entirely replaced, apart from a few structural frames, the dipole magnet, shielding elements and gas or vacuum enclosures.

Development

Subdetector activities

The VELO at the heart of LHCb, which allows precise measurements of primary and displaced vertices of short-lived particles, is one of the key detectors to be upgraded during LS2. Replacing the current system based on silicon microstrip modules, the new VELO consists of 26 tracking layers made from 55 × 55 µm2 pixel technology, which offers better hit resolution and simpler track reconstruction. The new VELO will also be closer to the beam axis, which poses significant design challenges. A new chip, the VELOPIX, capable of collecting signal hits from 256 × 256 pixels and sending data at a rate of up to 15 Gb/s, was developed for this purpose. Pixel modules include a cutting-edge cooling substrate based on an array of microchannels trenched out of a 260 µm-thick silicon wafer that carry liquid carbon dioxide to keep the silicon at a temperature of –20 °C. This is vital to prevent thermal run-away, since these sensors will receive the heaviest irradiation of all LHC detectors. Prototype modules have recently been assembled and characterised in tests with high-energy particles at the Super Proton Synchrotron.

The RICH detector will still be composed of two systems: RICH1, which discriminates kaons from pions in the low-momentum range, and RICH2, which performs this task in the high-momentum range. The RICH mirror system, which is required to deflect and focus Cherenkov photons onto photodetector planes, will be replaced with a new one that has been optimised for the much increased particle densities of future LHC runs. RICH detector columns are composed of six photodetector modules (PDMs), each containing four elementary cells hosting the multi-anode photomultiplier tubes. A full PDM was successfully operated during 2018, providing first particle signals.

Mounted just between RICH1 and the dipole magnet, the upstream tracker (UT) consists of four planes of silicon microstrip detectors. To counter the effects of irradiation, the detector is contained in a thermal enclosure and cooled to approximately –5 °C using a CO2 evaporative cooling system. Lightweight staves, with a carbon foam back-plane and embedded cooling pipe, are dressed with flex cables and instrumented with 14 modules, each composed of a polyimide hybrid circuit, a boron nitride stiffener and a silicon microstrip sensor.

VELO upgrade

Further downstream, nestled between the RICH2 and the magnet, will sit the SciFi – a new tracker based on scintillating fibres and silicon photomultiplier (SiPM) arrays, which replaces the drift straw detectors and silicon microstrip sensors used by the current three tracking stations. The SciFi represents a major challenge for the collaboration, not only due to its complexity, but also because the technology has never been used for such a large area in such a harsh radiation environment. More than 11,000 km of fibre was ordered, meticulously verified and even cured from a few rare and local imperfections. From this, about 1400 mats of fibre layers were recently fabricated in four institutes and assembled into 140 rigid 5 × 0.5 m2 modules. In parallel, SiPMs were assembled on flex cables and joined in groups of 16 with a 3D-printed titanium cooling tube to form sophisticated photodetection units for the modules, which will be operated at about –40 °C.

As this brief overview demonstrates, the LHCb detector is undergoing a complete overhaul during LS2 – with large parts being totally replaced – to allow this unique LHC experiment to deepen and broaden its exploration programme. CERN support teams and the LHCb technical crew are now busily working in the cavern, and many of the 79 institutes involved in the LHCb collaboration from around the world have shifted their focus to this herculean task. The entire installation will have to be ready for the commissioning of the new detector by mid-2020 so that it is ready for the start of Run 3 in 2021.

ALICE revitalised https://cerncourier.com/a/alice-revitalised/ Thu, 24 Jan 2019 09:00:02 +0000 https://preview-courier.web.cern.ch/?p=13040 The ALICE experiment is being upgraded to make even more precise measurements of extreme nuclear matter

ALICE (A Large Ion Collider Experiment) will soon have enhanced physics capabilities thanks to a major upgrade of the detectors, data-taking and data-processing systems. These upgrades will improve the precision on measurements of the high-density, high-temperature phase of strongly interacting matter, the quark–gluon plasma (QGP), together with the exploration of new phenomena in quantum chromodynamics (QCD). Since the start of the LHC programme, ALICE has been participating in all data runs, with the main emphasis on heavy-ion collisions, such as lead–lead, proton–lead, and xenon–xenon collisions. The collaboration has been making major inroads into the understanding of the dynamics of the QGP – a state of matter that prevailed in the first instants of the universe and is recreated in droplets at the LHC.

To perform precision measurements of strongly interacting matter, ALICE must focus on rare probes – such as heavy-flavour particles, quarkonium states, real and virtual photons, and low-mass dileptons – as well as the study of jet quenching and exotic nuclear states. Observing rare phenomena requires very large data samples, which is why ALICE is looking forward to the increased luminosity provided by the LHC in the coming years. The interaction rate of lead ions during the LHC Run 3 is foreseen to reach around 50 kHz, corresponding to an instantaneous luminosity of 6 × 10²⁷ cm⁻² s⁻¹. This will enable ALICE to accumulate 10 times more integrated luminosity (more than 10 nb⁻¹) and a data sample 100 times larger than what has been obtained so far. In addition, the upgraded detector system will have better efficiency for the detection of short-lived particles containing heavy-flavour quarks thanks to the improved precision of the tracking detectors.
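
These figures are consistent with each other: the interaction rate is simply the luminosity multiplied by the hadronic lead–lead cross-section, taken here to be roughly 8 b (an assumed round number):

```python
lumi = 6e27          # cm^-2 s^-1, the instantaneous luminosity quoted above
sigma_pbpb = 8e-24   # cm^2: assumed Pb-Pb hadronic cross-section of ~8 b (1 b = 1e-24 cm^2)

rate_hz = lumi * sigma_pbpb
print(rate_hz / 1e3, "kHz")   # ~48 kHz, in line with the 50 kHz interaction rate quoted
```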

During long-shutdown two (LS2), several major upgrades to the ALICE detector will take place. These include: a new inner tracking system (ITS) with a new high-resolution, low-material-budget silicon tracker, which extends to the forward rapidities with the new muon forward tracker (MFT); an upgraded time projection chamber (TPC) with gas electron multiplier (GEM) detectors, along with a new readout chip for faster readout; a new fast interaction trigger (FIT) detector and forward diffraction detector. New readout electronics will be installed in multiple subdetectors (the muon spectrometer, time-of-flight detector, transition radiation detector, electromagnetic calorimeter, photon spectrometer and zero-degree calorimeter) and an integrated online–offline (O2) computing system will be installed to process and store the large data volumes.

Detector upgrades

A new all-pixel silicon inner tracker based on CMOS monolithic active pixel sensor (MAPS) technology will be installed covering the mid-rapidity (|η| < 1.5) region of the ITS as well as the forward rapidity (–3.6 < η < –2.45) of the MFT. In MAPS technology, both the sensor for charge collection and the readout circuit for digitisation are hosted in the same piece of silicon instead of being bump-bonded together. The chip developed by ALICE is called ALPIDE, and uses a 180 nm CMOS process provided by TowerJazz. With this chip, the silicon material budget per layer is reduced by a factor of seven compared to the present ITS. The ALPIDE chip is 15 × 30 mm2 in area and contains more than half a million pixels organised in 1024 columns and 512 rows. Its low power consumption (< 40 mW/cm2) and excellent spatial resolution (~5 μm) are perfect for the inner tracker of ALICE.

Inner tracker

The ITS consists of seven cylindrical layers of ALPIDE chips, summing up to 12.5 billion pixels and a total area of 10 m2. The pixel chips are installed on staves with radial distances 22–400 mm away from the interaction point (IP). The beam pipe has also been redesigned with a smaller outer radius of 19 mm, allowing the first detection layer to be placed closer to the IP at a radius of 22.4 mm compared to 39 mm at present. The brand-new ITS detector will improve the impact parameter resolution by a factor of three in the transverse plane and by a factor of five along the beam axis. It will extend the tracking capabilities to much lower pT, allowing ALICE to perform measurements of heavy-flavour hadrons with unprecedented precision and down to zero pT.
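
Combining the ALPIDE chip geometry given above with these totals provides a quick consistency check (illustrative arithmetic only):

```python
pixels_per_chip = 1024 * 512   # columns x rows -> 524,288 ("more than half a million")
chip_area_cm2 = 1.5 * 3.0      # 15 x 30 mm^2 = 4.5 cm^2
total_pixels = 12.5e9          # quoted total for the seven ITS layers

n_chips = total_pixels / pixels_per_chip
total_area_m2 = n_chips * chip_area_cm2 / 1e4   # 1 m^2 = 1e4 cm^2

print(int(n_chips), "ALPIDE chips")               # roughly 24,000 chips
print(round(total_area_m2, 1), "m^2 of silicon")  # ~10.7 m^2, consistent with the ~10 m^2 quoted
```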

In the forward-rapidity region, ALICE detects muons using the muon spectrometer. The new MFT detector is designed to add vertexing capabilities to the muon spectrometer and will enable a number of new measurements that are currently beyond reach. As an example, it will allow us to distinguish J/ψ mesons that are produced directly in the collision from those that come from decays of mesons that contain a beauty quark. The MFT consists of five disks, each composed of two MAPS detection planes, placed perpendicular to the beam axis between the IP and the hadron absorber of the muon spectrometer.

The TPC is the main device for tracking and charged-particle identification in ALICE. The readout rate of the TPC in its present form is limited by its readout chambers, which are based on multi-wire proportional chambers. In order to avoid drift-field distortions produced by ions from the amplification region, the present readout chambers feature a charge-gating scheme to collect back-drifting ions, which limits the readout rate to 3.5 kHz. To overcome this limitation, new readout chambers employing a novel configuration of stacks of four GEMs have been developed during an extensive R&D programme. This arrangement allows for continuous readout at 50 kHz with lead–lead collisions, at no cost to detector performance. The production of the 72 inner (one GEM stack each) and outer (three GEM stacks each) chambers is now practically completed and certified. The replacement of the chambers in the TPC will take place in summer 2019, once the TPC is extracted from the experimental cavern and transported to the surface.

The new fast interaction trigger, FIT, comprises two arrays of Cherenkov radiators with MCP–PMT sensors and a single, large-size scintillator ring. The arrays will be placed on both sides of the IP. It will be the primary trigger, luminosity and collision-time-measurement detector in ALICE. The detector will be capable of triggering at an interaction rate of 50 kHz, with a time resolution better than 30 ps and an efficiency of 99%.

The newly designed ALICE readout system presents a change in approach, as all lead–lead collisions that are produced in the accelerator, at a rate of 50 kHz, will be read out in a continuous stream. However, triggered readout will be used by some detectors and for commissioning and calibration runs and the central trigger processor is being upgraded to accommodate the higher interaction rate. The readout of the TPC and muon chambers will be performed by SAMPA, a newly developed, 32-channel front-end analogue-to-digital converter with integrated digital signal processor.

Performance boost

The significantly improved ALICE detector will allow the collaboration to collect 100 times more events during LHC Run 3 compared to Run 1 and Run 2, which requires the development and implementation of a completely new readout and computing system. The O2 system is designed to combine all the computing functionalities needed in the experiment: detector readout, event building, data recording, detector calibration, data reconstruction, physics simulation and analysis. The total data volume produced by the front-end cards of the detectors will increase significantly, reaching a sustained data throughput of up to 3 TB/s. To minimise the requirements of the computing system for data processing and storage, the ALICE computing model is designed for a maximal reduction in the data volume read out from the detectors as early as possible during the data processing. This is achieved by online processing of the data, including detector calibration and reconstruction of events in several steps synchronously with data taking. At its peak, the estimated data throughput to mass storage is 90 GB/s.
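
The scale of the online reduction that the O2 system must achieve follows directly from the two throughput figures quoted in this paragraph (a simple ratio, not a model of the compression chain):

```python
raw_throughput_Bps = 3e12   # up to 3 TB/s from the detector front-ends
storage_rate_Bps = 90e9     # 90 GB/s peak written to mass storage

print(round(raw_throughput_Bps / storage_rate_Bps), "x online data-volume reduction")   # ~33x
```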

Enhancements

A new computing facility for the O2 system is being installed on the surface, near the experiment. It will have a data-storage system with enough capacity to accommodate a large fraction of a full year’s data taking, and will provide the interface to permanent data storage at the tier-0 Grid computing centre at CERN, as well as other data centres.

ALICE upgrade activities are proceeding at a frenetic pace. Soon after the machine stopped in December, experts entered the cavern to open the massive doors of the magnet and started dismounting the detector in order to prepare for the upgrade. Detailed planning and organisation of the work are mandatory to stay on schedule, as Arturo Tauro, the deputy technical coordinator of ALICE explains: “Apart from the new detectors, which require dedicated infrastructure and procedures, we have to install a huge number of services (for example, cables and optical fibres) and perform regular maintenance of the existing apparatus. We have an ambitious plan and a tight schedule ahead of us.”

When the ALICE detector emerges revitalised from the two busy and challenging years of work ahead, it will be ready to enter into a new era of high-precision measurements that will expand and deepen our understanding of the physics of hot and dense QCD matter and the quark–gluon plasma. 

Plasma lenses promise smaller accelerators https://cerncourier.com/a/plasma-lenses-promise-smaller-accelerators/ Fri, 30 Nov 2018 09:00:04 +0000 https://preview-courier.web.cern.ch/?p=12937 An international team has made an advance towards more compact particle accelerators, demonstrating that beams can be focused via a technique called active plasma lensing without reducing the beam quality. Building smaller particle accelerators has been a goal of the particle accelerator community for decades, both for basic research and applications such as radiotherapy. In […]

An international team has made an advance towards more compact particle accelerators, demonstrating that beams can be focused via a technique called active plasma lensing without reducing the beam quality.

Building smaller particle accelerators has been a goal of the particle accelerator community for decades, both for basic research and applications such as radiotherapy. In addition to new accelerating mechanisms, smaller accelerators require novel ways to focus particle beams.

Active plasma lensing uses a large electric current to set up strong magnetic fields in a plasma that can focus high-energy beams over distances of centimetres, rather than metres as is the case for conventional magnet-based techniques. However, the large current also heats the plasma, preferentially heating the centre of the lens. This temperature gradient leads to a nonlinear magnetic field, an aberration, which degrades the particle-beam quality.
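
For a capillary of radius R carrying a current I with uniform current density (the ideal, aberration-free case), the azimuthal field inside grows linearly with radius, B(r) = μ0 I r/(2π R²), giving a constant focusing gradient g = μ0 I/(2π R²). A minimal sketch with illustrative numbers (the 500 A current and 0.5 mm radius are assumptions, not the parameters of the CLEAR measurement):

```python
import math

def gradient_T_per_m(current_A, radius_m):
    """Focusing gradient g = mu0 * I / (2 * pi * R^2) for a uniform current density."""
    mu0 = 4e-7 * math.pi   # vacuum permeability in T m / A
    return mu0 * current_A / (2 * math.pi * radius_m ** 2)

# Illustrative numbers only:
print(round(gradient_T_per_m(500.0, 0.5e-3)), "T/m")   # ~400 T/m for 500 A in a 0.5 mm-radius capillary
```

Non-uniform heating distorts this linear field, and it is precisely that distortion (the aberration) that was measured and then suppressed in the study described here.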

Using a high-quality 200 MeV electron beam at the CLEAR user facility at CERN, Carl A Lindstrøm of the University of Oslo, Norway, and collaborators recently made the first direct measurement of this aberration in an active plasma lens, finding it to be consistent with theory. More importantly, they discovered that this aberration can be suppressed by simply changing the gas used to make the plasma from a light gas (helium) to a heavier gas (argon). Changing the gas slows down the heat transfer so that the aberration does not have time to form, resulting in ideal, degradation-free focusing. It represents a significant step towards making active plasma lenses a standard accelerator component in the future, says the team.

CLEAR evolved from a test facility for the Compact Linear Collider (CLIC) called CTF3, which ended a successful programme in 2016. CLEAR offers general accelerator R&D and component studies for existing and possible future accelerator applications, such as high-gradient “X-band” acceleration methods (CERN Courier April 2018 p32), as well as prototyping and validation of accelerator components for the High-Luminosity LHC upgrade.

“Working at CLEAR was very efficient and fast-paced – not always the case in large-scale accelerator facilities,” says Lindstrøm. “Naturally, we hope to continue our plasma lens research at CLEAR. One exciting direction is probing the limits of how strong these lenses can be. This is clearly the lens of the future.”

Cosmic research poles apart https://cerncourier.com/a/cosmic-research-poles-apart/ Fri, 30 Nov 2018 09:00:03 +0000 https://preview-courier.web.cern.ch/?p=12965 Two independent groups are going to Earth’s extremes to make unprecedented measurements for physics, education and the environment.

Every second, each square metre of the Earth is struck by thousands of charged particles travelling from deep space. It is now more than a century since cosmic rays were discovered, yet still they present major challenges to physics. The origin of high-energy cosmic rays is the biggest mystery, their energy too high to have been generated by astrophysical sources such as supernovae, pulsars or even black holes. But cosmic rays are also of interest beyond astrophysics. Recent studies at CERN’s CLOUD experiment, for example, suggest that cosmic rays may influence cloud cover through the formation of new aerosols, with important implications for the evolution of Earth’s climate.

This year, two independent missions were mounted in the Arctic and in Antarctica – Polarquest2018 and Clean2Antarctica – to understand more about the physics of high-energy cosmic rays. Both projects have a strong educational and environmental dimension, and are among the first to measure cosmic rays at such high latitudes.

Geomagnetic focus

Due to the shape of the geomagnetic field, the intensity of the charged cosmic radiation is higher at the poles than it is in equatorial regions. At the end of the 1920s it was commonly believed that cosmic rays were high-energy neutral particles (i.e. gamma rays), implying that the Earth’s magnetic field would not affect cosmic-ray intensity. However, early observations of the dependence of the cosmic-ray intensity on latitude rejected this hypothesis, showing that cosmic rays mainly consist of charged particles and leading to the first quantitative calculations of their composition.

The interest in measuring the cosmic-ray flux close to the poles is related to the fact that the geomagnetic field shields the Earth from low-energy charged cosmic rays, with an energy threshold (geomagnetic cut-off) depending on latitude, explains Mario Nicola Mazziotta, an INFN researcher and member of the Polarquest2018 team. “Although the geomagnetic cut-off decreases with increasing latitude, the cosmic-ray intensity at Earth reaches its maximum at latitudes of about 50–60°, where the cut-off is of a few GeV or less, and then seems not to grow anymore with latitude. This indicates that cosmic-ray intensity below a given energy is suppressed, due to solar effects, and makes the study of cosmic rays near the polar regions a very useful probe of solar activity.”
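
A simple dipole (Störmer-type) estimate of the vertical cut-off rigidity illustrates this latitude dependence; the 14.5 GV normalisation used below is a commonly quoted approximate value, not a precise field model:

```python
import math

def vertical_cutoff_GV(geomagnetic_latitude_deg, c_gv=14.5):
    """Stormer-type dipole estimate of the vertical geomagnetic cut-off rigidity (GV)."""
    lam = math.radians(geomagnetic_latitude_deg)
    return c_gv * math.cos(lam) ** 4

for lat in (0, 30, 50, 60, 80):
    print(lat, "deg:", round(vertical_cutoff_GV(lat), 2), "GV")
# ~14.5 GV at the equator, a few GV at 50 deg, below 1 GV at 60 deg, essentially zero towards the poles
```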

Polarquest2018 is a small cosmic-ray experiment that recently completed a six-week-long expedition to the Arctic Circle, on board an 18 m-long boat called Nanuq designed for sailing in extreme regions. The boat set out from Isafjordur, in north-west Iceland, on 22 July, circumnavigating the Svalbard archipelago in August and arriving in Tromsø on 4 September. The Polarquest2018 detectors reached 82 degrees north, shedding light on the soft component of cosmic rays trapped at the poles by Earth’s magnetic field.

Polarquest2018 is the result of the hard work of a team of a dozen people for more than a year, in addition to enthusiastic support from many other collaborators. Built at CERN by school students from Switzerland, Italy and Norway, Polarquest2018 encompasses three scintillator detectors to measure the cosmic-ray flux at different latitudes: one mounted on the Nanuq’s deck and two others installed in schools in Italy and Norway. The detectors had to operate with the limited electric power (12 W) that was available on board, both recording impinging cosmic rays and receiving GPS signals to timestamp each event with a precision of a few tens of nanoseconds. The detectors also had to be mechanically robust to resist the stresses from rough seas.

The three Polarquest2018 detectors join a network of around 60 others in Italy called the Extreme Energy Events – Science Inside Schools (EEE) experiment, proposed by Antonino Zichichi in 2004 and presently co-ordinated by the Italian research institute Centro Fermi in Rome, with collaborators including CERN, INFN and various universities. The detectors (each made of three multigap resistive plate chambers of about 2 m2 area) were built at CERN by high-school students and the large area of the EEE enables searches for very-long-distance correlations between cosmic-ray showers.

A pivotal moment in the arctic expedition came when the Nanuq arrived close to the south coast of the Svalbard archipelago and was sailing in the uncharted waters of the Recherche Fjord. While the crew admired a large school of belugas, the boat struck the shallow seabed, damaging its right dagger board and leaving the craft perched at a 45° incline. The crew fought to get the Nanuq free, but in the end had to wait almost 12 hours for the tide to rise again. Amazingly, explains Polarquest2018 project leader Paola Catapano of CERN, the incident had its advantages. “It allowed the team to check the algorithms used to correct the raw data on cosmic rays for the inclination and rolling of the boat, since the data clearly showed a decrease in the number of muons due to a reduced acceptance.”

Analysis of the Polarquest2018 data will take a few months, but preliminary results show no significant increase in the cosmic-ray flux, even at high latitudes. This is contrary to what one could naively expect considering the high density of the Earth’s magnetic field lines close to the pole, explains Luisa Cifarelli, president of Centro Fermi in Rome. “The lack of increase in the cosmic flux confirms the hypothesis formulated by Lemaître in 1932, with much stronger experimental evidence than was available up to now, and with data collected at latitudes where no published results exist,” she says. The Polarquest2018 detector has also since embarked on a road trip to measure cosmic rays all along the Italian peninsula, collecting data over a huge latitude interval.

Heading south

Meanwhile, 20,000 km south, a Dutch expedition to the South Pole called Clean2Antarctica has just got under way, carrying a small cosmic-ray experiment from Nikhef on board a vehicle called Solar Voyager. The solar-powered cart, built from recycled 3D-printed household plastics, will make the first ground measurements in Antarctica of the muon decay rate and of charged particles from cosmic-ray extensive air showers. Cosmic rays will be measured by a roof-mounted scintillation device as the cart makes a 1200 km, six-week-long journey from the edge of the Antarctic icefields to the geographic South Pole.

The team taking the equipment across the Antarctic to the South Pole comprises mechanical engineer Ter Velde and his wife Liesbeth, who initiated the Clean2Antarctica project and are both active ocean sailors. Back in the warmer climes of the Netherlands, researchers from Nikhef will remotely monitor for any gradients in the incoming particle fluxes as the magnetic field lines are converging closer to the pole. In theory, the magnetic field will funnel charged particles from the high atmosphere to the Earth’s surface, leading to higher fluxes near the pole. But the incoming muon signal should not be affected, as this is produced by high-energy particles producing air showers of charged particles, explains Nikhef project scientist Bob van Eijk. “But this is experimental physics and a first, so we will just do the measurements and see what comes out,” he says.

The scintillation panel used is adapted from the HiSPARC rooftop cosmic-ray detectors that Nikhef has been providing in high schools in the Netherlands, the UK and Denmark for the past 15 years. Under professional supervision, students and teachers build these roof-box-sized detectors themselves and run the detection programme and data-analysis in their science classes. Some 140 rooftop stations are online and many thousands of pupils have been involved over the years, stimulating interest in science and research.

Pristine backdrop

The panel being taken to Antarctica is a doubled-up version that is half the usual area of the HiSPARC panels due to strict space restrictions. Two gyroscope systems will correct for any changes in the level of the panel while traversing the Antarctic landscape. All the instruments are solar powered, with the power coming from photovoltaic panels on two additional carts pulled by the main electric vehicle. The double detection depth of the panels will allow photomultiplier tubes to detect muon decays as well as regular cosmic-ray particles such as electrons and photons. Data from the experiment will be relayed regularly by satellite from the Solar Voyager vehicle so that analysis can take place in parallel, and will be made public through a dedicated website.

The Clean2Antarctica expedition set off in mid-November from Union Glacier Camp station near the Antarctic Peninsula. It is sponsored by Dutch companies and by crowdfunding, and has benefitted from extensive press and television coverage. The trip will take the team across bleak snow plains and altitudes up to 2835 m and, despite it being the height of the Antarctic summer, temperatures could fall to –30 °C. The mission aims to use the pristine backdrop of Antarctica to raise public awareness about waste reduction and recycling.

“This is one of the rare occasions that a scientific outreach programme, with genuine scientific questions targeting high-school students as prime investigators, teams up with an idealist group that tries to raise awareness on environmental issues regarding circular economy,” says van Eijk. “The plastic for the vehicles was collected by primary-school kids, while three groups of young researchers formed ‘think tanks’ to generate solutions to questions about environmental issues that industrial sponsors/partners have raised.” Polarquest2018 had a similar goal, and its MantaNet project became the first to assess the presence and distribution of microplastics in the Arctic waters north of Svalbard at a record latitude of 82.7° north. According to MantaNet project leader Stefano Alliani: “One of the conclusions already drawn by sheer observation is that even at such high latitudes the quantity of macro plastic loitering in the most remote and wildest beaches of our planet is astonishing.”

The post Cosmic research poles apart appeared first on CERN Courier.

]]>
Feature Two independent groups are going to Earth’s extremes to make unprecedented measurements for physics, education and the environment. https://cerncourier.com/wp-content/uploads/2018/11/CCDec18_Polar_frontis.png
Nobel work shines a light on particle physics https://cerncourier.com/a/nobel-work-shines-a-light-on-particle-physics/ Mon, 29 Oct 2018 09:00:32 +0000 https://preview-courier.web.cern.ch/?p=12833 This year’s Nobel Prize in Physics was shared between three researchers for groundbreaking inventions in laser physics. Half the prize went to Arthur Ashkin of Bell Laboratories in the US for his work on optical tweezers, while the other half was awarded jointly to Gérard Mourou of the École Polytechnique in Palaiseau, France, and Donna […]

The post Nobel work shines a light on particle physics appeared first on CERN Courier.

]]>
Chirped-pulse amplification

This year’s Nobel Prize in Physics was shared between three researchers for groundbreaking inventions in laser physics. Half the prize went to Arthur Ashkin of Bell Laboratories in the US for his work on optical tweezers, while the other half was awarded jointly to Gérard Mourou of the École Polytechnique in Palaiseau, France, and Donna Strickland of the University of Waterloo in Canada “for their method of generating high-intensity, ultra-short optical pulses”.

Mourou and Strickland’s technique, called chirped-pulse amplification (CPA), opens new perspectives in particle physics. Proposed in 1985, and forming the foundation of Strickland’s doctoral thesis, CPA uses a strongly dispersive medium to temporally stretch (“chirp”) laser pulses to reduce their peak power, then amplifies and, finally, compresses them – boosting the intensity of the output pulse dramatically without damaging the optical medium. The technique underpins today’s high-power lasers and is used worldwide for applications such as eye surgery and micro-machining.
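
As an illustration of the bookkeeping behind CPA, the sketch below (Python, with purely illustrative numbers that are not taken from the article) shows how, for a fixed pulse energy, the peak power seen by the amplifier drops by the stretch factor and is recovered only after recompression.

    # Bookkeeping behind chirped-pulse amplification (illustrative numbers only).
    pulse_energy_in = 1e-9      # J, weak ultrashort seed pulse (assumed)
    duration_short = 100e-15    # s, seed-pulse duration (assumed)
    stretch_factor = 1e4        # temporal stretch applied before amplification (assumed)
    gain = 1e9                  # energy gain of the amplifier chain (assumed)

    def peak_power(energy, duration):
        """Peak power in watts, in a simple rectangular-pulse approximation."""
        return energy / duration

    stretched_duration = duration_short * stretch_factor
    energy_out = pulse_energy_in * gain

    print(peak_power(energy_out, stretched_duration))   # what the amplifier optics must survive (~1 GW)
    print(peak_power(energy_out, duration_short))       # after recompression, on target (~10 TW)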

Surfing the waves

But CPA’s potential for particle physics was clear from the beginning. In particular, high-power ultra-short laser pulses can drive advanced plasma-wakefield accelerators in which charged particles are brought to high energies over very short distances by surfing longitudinal plasma waves.

“After we invented laser-wakefield acceleration back in 1979, I was acutely aware that the laser community at that time did not have the specification that we needed to drive wakefields, which needed ultrafast and ultra-intense pulses,” explains Toshi Tajima of the University of California at Irvine, a long-time collaborator of Mourou. Tajima became aware of CPA in 1989 and first met Mourou in 1993 at a workshop at the University of Texas at Austin devoted to the future of accelerator physics upon the demise of the Superconducting Super Collider. “Ever since then, Gérard and I have formed a strong scientific and personal bond to promote ultra-intense lasers and their applications to accelerators and other important societal applications such as medical accelerators, transmutation and intense X-rays,” he says.

Today, acceleration gradients two-to-three orders of magnitude higher than existing radio-frequency (RF) techniques are possible at state-of-the-art laser-driven plasma-wakefield experiments, promising more compact and potentially cheaper particle accelerators. Though not yet able to match the quality and reliability of conventional acceleration techniques, plasma accelerators might one day be able to overcome the limitations of today’s RF technology, thinks Constantin Haefner, program director for advanced photon technologies at Lawrence Livermore National Laboratory in the US. “The race has started,” he says. “The ability to amplify lasers to extreme powers enabled the discovery of new physics, and even more exciting, some of the early envisioned applications such as laser plasma accelerators are on the verge of moving from proof-of-principle to real machines.”
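
To see roughly where the "two-to-three orders of magnitude" comes from, one can compare a representative RF gradient with the cold, non-relativistic wave-breaking field of a plasma wave, E0 [V/m] ≈ 96 √(n_e [cm⁻³]) (the Tajima–Dawson estimate). In the sketch below (Python) the plasma density and RF gradient are assumed, typical values, not figures quoted in the article.

    import math

    def wave_breaking_field(n_e_cm3):
        """Cold wave-breaking field in V/m for a plasma of electron density n_e (cm^-3)."""
        return 96.0 * math.sqrt(n_e_cm3)

    n_e = 1e18            # cm^-3, a typical density for laser-driven wakefields (assumed)
    E_plasma = wave_breaking_field(n_e)   # ~1e11 V/m, i.e. ~100 GV/m
    E_rf = 50e6           # V/m, representative conventional RF gradient (assumed)

    print(f"plasma: {E_plasma/1e9:.0f} GV/m, about {E_plasma/E_rf:.0f}x a 50 MV/m RF cavity")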

Electrons can also be used to drive plasma accelerators, as is being explored at SLAC and in European labs such as LNF in Italy and DESY in Germany. Meanwhile, the AWAKE experiment at CERN has recently demonstrated the first proton-driven plasma-wakefield acceleration (CERN Courier October 2018 p7). Although AWAKE does not use a laser to drive the plasma, it employs a high-power laser to generate the plasma from a gas, at the same time seeding the proton self-modulation process that allows charged particles to be accelerated. CERN is also a partner in a recent project called the International Coherent Amplification Network, led by Mourou and funded by the European Union, to explore advanced wakefield drivers based on the coherent combination of multiple high-intensity fibre lasers that can run at high repetition rates and efficiencies.

“We have a long way to go, but plasma accelerators have game-changing potential for high-energy physics,” says Wim Leemans, director of the accelerator technology and applied physics division and Berkeley Lab Laser Accelerator Center (BELLA) at Lawrence Berkeley National Laboratory. “Other applications already being explored include free-electron lasers, a quasi-monoenergetic gamma-ray source for nonproliferation and nuclear security purposes, and a miniaturised method for brachytherapy, a cancer-treatment modality in which radiation is delivered directly to the site of a tumour.”

Beyond accelerators, the enormous intensity of single-shot pulses enabled by CPA offers new types of experiments in high-energy physics. In 2005, Mourou initiated the Extreme Light Infrastructure (ELI), nearing completion in the Czech Republic, Hungary and Romania, to explore the use of high-power petawatt (PW) lasers such as Livermore Lab’s HAPLS facility. Going beyond ELI is the International Center for Zetta- and Exawatt Science and Technology (IZEST), established in France in 2011 to develop and build a community around the emerging field of laser-based particle physics. Under Mourou and Tajima’s direction, IZEST will extend existing laser facilities (such as PETAL at the Megajoule Laser facility in France) to the exa- and zettawatt scale, opening studies including “searches for dark matter and energy and probes of the nonlinearity of the vacuum via zeptosecond dynamical spectroscopy.”

The post Nobel work shines a light on particle physics appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/10/CCNov18_News-HAPLS.jpg
Satellite premieres in CERN irradiation facility https://cerncourier.com/a/satellite-premieres-in-cern-irradiation-facility/ Mon, 29 Oct 2018 09:00:32 +0000 https://preview-courier.web.cern.ch/?p=12842 CELESTA’s main goal is to enable a space version of an existing CERN technology called RadMon, which was developed to monitor radiation levels in the LHC.

The post Satellite premieres in CERN irradiation facility appeared first on CERN Courier.

]]>
The CELESTA micro satellite, carrying a space version of the radiation-monitoring system RadMon.

CHARM, a unique facility at CERN to test electronics in complex radiation environments, has been used to test its first full space system: a micro-satellite called CELESTA, developed by CERN in collaboration with the University of Montpellier and the European Space Agency. Built to monitor radiation levels in low-Earth orbit, CELESTA was successfully tested and qualified during July under a range of radiation conditions that it can be expected to encounter in space. It serves as an important validation of CHARM’s potential value for aerospace applications.

CELESTA’s main goal is to enable a space version of an existing CERN technology called RadMon, which was developed to monitor radiation levels in the Large Hadron Collider (LHC). RadMon also has potential applications in space missions that are sensitive to the radiation environment, ranging from telecom satellites to navigation and Earth-observation systems.

The CELESTA cubesat, a technological demonstrator and educational project made possible with funding from the CERN Knowledge Transfer fund, will play a key role in validating potential space applications by using RadMon sensors to measure radiation levels in low-Earth orbit. An additional goal of CELESTA is to demonstrate that the CHARM facility is capable of reproducing the low-Earth orbit radiation environment. “CHARM benefits from CERN’s unique accelerator facilities and was originally created to answer a specific need for radiation testing of CERN’s electronic equipment,” explains Markus Brugger, deputy head of the engineering department and initiator of both the CHARM and CELESTA projects in the frame of the R2E (Radiation to Electronics) initiative. The radiation field at CHARM is generated through the interaction of a 24 GeV/c proton beam extracted from the Proton Synchrotron with a cylindrical copper or aluminium target. Different shielding configurations and testing positions allow controlled tests with the desired particle types, energies and fluences.

It is the use of mixed fields that makes CHARM unique compared to other test facilities, which typically use mono-energetic particle beams or sources. For the latter, only one or a few discrete energies can be tested, which is usually not representative of the authentic and complex radiation environments encountered in aerospace missions. Most testing facilities also use focused beams, limiting tests to individual components, whereas CHARM has a homogeneous field extending over an area of at least one square metre, which allows complete and complex satellites and other systems to be tested.

CELESTA is now fully calibrated and will be launched as soon as a launch window is provided. When in orbit, in-flight data from CELESTA will be used to validate the CHARM test results for authentic space conditions. “This is a very important milestone for the CELESTA project, as well as an historical validation of the CHARM test facility for satellites,” says Enrico Chesta, CERN’s aerospace applications coordinator. 

The post Satellite premieres in CERN irradiation facility appeared first on CERN Courier.

]]>
News CELESTA’s main goal is to enable a space version of an existing CERN technology called RadMon, which was developed to monitor radiation levels in the LHC. https://cerncourier.com/wp-content/uploads/2018/10/CCNov18_News-Celesta.jpg
Beam tests bring ProtoDUNE to life https://cerncourier.com/a/beam-tests-bring-protodune-to-life/ Mon, 29 Oct 2018 09:00:29 +0000 https://preview-courier.web.cern.ch/?p=12838 The world’s largest liquid-argon neutrino detector has recorded its first particle tracks in tests at CERN.

The post Beam tests bring ProtoDUNE to life appeared first on CERN Courier.

]]>
Cosmic-muon tracks

The world’s largest liquid-argon neutrino detector has recorded its first particle tracks in tests at CERN, marking an important step towards the international Deep Underground Neutrino Experiment (DUNE) under preparation in the US. The enormous ProtoDUNE detector, designed and built at CERN’s neutrino platform, is the first of two prototypes for what will be a much larger DUNE detector. Situated deep beneath the Sanford Underground Research Facility in South Dakota, four final DUNE detector modules (each 20 times larger than the current prototypes and containing a total of 70,000 tonnes of liquid argon) will record neutrinos sent from Fermilab’s Long Baseline Neutrino Facility some 1300 km away.

DUNE’s scientific targets include CP violation in the neutrino sector, studies of astrophysical neutrino sources, and searches for proton decay. When neutrinos enter the detector and strike argon nuclei they produce charged particles, which leave ionisation traces in the liquid from which a 3D event can be reconstructed. The first ProtoDUNE detector took two years to build and eight weeks to fill with 800 tonnes of liquid argon, which needs to be cooled to a temperature below –184 °C. It adopts a single-phase architecture, which is an evolution from the 170 tonne MicroBooNE detector at Fermilab’s short-baseline neutrino facility. The second ProtoDUNE module adopts a different, dual-phase, scheme with a second detection chamber.

The construction and operation of ProtoDUNE will allow researchers to validate the membrane cryostat technology and associated cryogenics for the final detector, in addition to the networking and computing infrastructure. Now that the first tracks have been seen, from beam tests involving cosmic rays and charged-particle beams from CERN’s SPS, ProtoDUNE’s operation will be studied in greater depth. The charged-particle beam test enables critical calibration measurements necessary for precise calorimetry, and will also produce valuable data for optimising event-reconstruction algorithms. These and other measurements will help quantify and reduce systematic uncertainties for the DUNE far detector and significantly improve the physics reach of the experiment. “Seeing the first particle tracks is a major success for the entire DUNE collaboration,” said DUNE co-spokesperson Stefan Soldner-Rembold of the University of Manchester, UK.

More than 1000 scientists and engineers from 32 countries in five continents are working on the development, design and construction of the DUNE detectors. For CERN, it is the first time the European lab has invested in infrastructure and detector development for a particle-physics project in the US. “Only two years ago we completed the new building at CERN to house two large-scale prototype detectors that form the building blocks for DUNE,” said Marzio Nessi, head of the neutrino platform at CERN. “Now we have the first detector taking beautiful data, and the second detector, which uses a different approach to liquid-argon technology, will be online in a few months.”

In July, the US Department of Energy also formally approved PIP-II, an accelerator upgrade project at Fermilab required to deliver the high-power neutrino beam required for DUNE. First data at DUNE is expected in 2026. Meanwhile, in Japan, an experiment with similar scientific goals and also with scientific links to the CERN neutrino platform – Hyper-Kamiokande – has recently been granted seed funding for construction to begin in 2020 (CERN Courier October 2018 p11). Together with several other experiments such as KATRIN in Germany, physicists are closing in on the neutrino’s mysteries two decades after the discovery of neutrino oscillations (CERN Courier July/August 2018 p5).

The post Beam tests bring ProtoDUNE to life appeared first on CERN Courier.

]]>
News The world’s largest liquid-argon neutrino detector has recorded its first particle tracks in tests at CERN. https://cerncourier.com/wp-content/uploads/2018/10/CCNov18_Viewpoint-relabel.png
J-PET’s plastic revolution https://cerncourier.com/a/j-pets-plastic-revolution/ Mon, 29 Oct 2018 09:00:03 +0000 https://preview-courier.web.cern.ch/?p=12858 A PET detector based on plastic scintillators offers whole-body imaging in addition to precision tests of fundamental symmetries.

The post J-PET’s plastic revolution appeared first on CERN Courier.

]]>
The J-PET detector

It is some 60 years since the conception of positron emission tomography (PET), which revolutionised the imaging of physiological and biochemical processes. Today, PET scanners are used around the world, in particular providing quantitative and 3D images for early-stage cancer detection and for maximising the effectiveness of radiation therapies. Some of the first PET images were recorded at CERN in the late 1970s, when physicists Alan Jeavons and David Townsend used the technique to image a mouse. While the principle of PET already existed, the detectors and algorithms developed at CERN made a major contribution to its development. Techniques from high-energy physics could now be about to enable another leap in PET technology.

In a typical PET scan, a patient is administered with a radioactive solution that concentrates in malignant cancers. Positrons from β+ decay annihilate with electrons from the body, resulting in the back-to-back emission of two 511 keV gamma rays that are registered in a crystal via the photoelectric effect. These signals are then used to reconstruct an image. Significant advances in PET imaging have taken place in the past few decades, and the vast majority of existing scanners use inorganic crystals – usually bismuth germanium oxide (BGO) or lutetium yttrium orthosilicate (LYSO) – organised in a ring to detect the emitted PET photons.

The main advantage of crystal detectors is their large stopping power, high probability of photoelectric conversion and good energy resolution. However, the use of inorganic crystals is expensive, limiting the number of medical facilities equipped with PET scanners. Moreover, conventional detectors are limited in their axial field of view: currently a distance of only about 20 cm along the body can be simultaneously examined from a single-bed position, meaning that several overlapping bed positions are needed to carry out a whole-body scan, and only 1% of quanta emitted from a patient’s body are collected. Extension of the scanned region from around 20 to 200 cm would not only improve the sensitivity and signal-to-noise ratio, but also reduce the radiation dose needed for a whole-body scan.

To address this challenge, several different designs for whole-body scanners have been introduced based on resistive-plate chambers, straw tubes and alternative crystal scintillators. In 2009, particle physicist Paweł Moskal of Jagiellonian University in Kraków, Poland, introduced a system that uses inexpensive plastic scintillators instead of inorganic ones for detecting photons in PET systems. Called the Jagiellonian PET (J-PET) detector, and based on technologies already employed in the ATLAS, LHCb, KLOE, COSY-11 and other particle-physics experiments, the aim is to allow cost effective whole-body PET imaging.

Whole-body imaging

The current J-PET setup comprises a ring of 192 detection modules axially arranged in three layers as a barrel-shaped detector and the construction is based on 17 patent-protected solutions. Each module consists of a 500 × 19 × 7 mm³ scintillator strip made of a commercially available material called EJ-230, with a photomultiplier tube (PMT) connected at each side. Photons are registered via the Compton effect and each analog signal from the PMTs is sampled in the voltage domain at four thresholds by dedicated field-programmable gate arrays.

In addition to recording the location and time of the electron–positron annihilation, J-PET determines the energy deposited by annihilation photons. The 2D position of a hit is known from the scintillator position, while the third space component is calculated from the time difference of signals arriving at both ends of the scintillator, enabling direct 3D image reconstruction. PMTs connected to both sides of the scintillator strips compensate for the low detection efficiency of plastic compared to crystal scintillators and enable multi-layer detection. A modular and relatively easy to transport PET scanner with a non-magnetic and low density central part can be used as a magnetic resonance imaging (MRI) or computed-tomography compatible insert. Furthermore, since plastic scintillators are produced in various shapes, the J-PET approach can also be introduced for positron emission mammography (PEM) and as a range monitor for hadron therapy.
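
A minimal sketch of that axial-position calculation (Python); the effective signal propagation speed in the plastic is an assumed, representative value rather than a J-PET specification.

    # Axial hit position in a scintillator strip read out by PMTs at both ends.
    V_EFF = 1.26e8       # m/s, assumed effective signal speed in plastic scintillator
    STRIP_LENGTH = 0.50  # m, length of a J-PET strip (from the text)

    def hit_position(t_a, t_b):
        """Hit coordinate along the strip (m), measured from its centre,
        given signal arrival times (s) at ends A and B."""
        return 0.5 * V_EFF * (t_b - t_a)

    # A signal arriving 1 ns earlier at end A places the hit ~6 cm from the centre, towards A.
    print(hit_position(0.0, 1.0e-9))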

The J-PET detector offers a powerful new tool to test fundamental symmetries

J-PET can also build images from positronium (a bound state of electron and positron) that gets trapped in intermolecular voids. In about 40% of cases, positrons injected into the human body create positronium with a certain lifetime and other environmentally sensitive properties. Currently this information is neither recorded nor used for PET imaging, but recent J-PET measurements of the positronium lifetime in normal and cancer skin cells indicate that the properties of positronium may be used as diagnostic indicators for cancer therapy. Medical doctors are excited by the avenues opened by J-PET. These include a larger axial view (e.g. to check correlations between organs separated by more than 20 cm in the axial direction), the possibility of performing combined PET-MRI imaging at the same time and place, and the possibility of simultaneous PET and positronium (morphometric) imaging, paving the way for in vivo determination of cancer malignancy.

Such a large detector is not only potentially useful for medical applications. It can also be used in materials science, where positron annihilation lifetime spectroscopy (PALS) enables the study of voids and defects in solids, while precise measurements of positronium atoms lead to morphometric imaging and physics studies. In this latter regard, the J-PET detector offers a powerful new tool to test fundamental symmetries.

Combinations of discrete symmetries (charge conjugation C, parity P, and time reversal T) play a key role in explaining the observed matter–antimatter asymmetry in the universe (CP violation) and are the starting point for all quantum field theories preserving Lorentz invariance, unitarity and locality (CPT symmetry). Positronium is a good system enabling a search for C, T, CP and CPT violation via angular correlations of annihilation quanta, while the positronium lifetime measurement can be used to separate the ortho- and para-positronium states (o-Ps and p-Ps). Such decays also offer the potential observation of gravitational quantum states, and are used to test Lorentz and CPT symmetry in the framework of the Standard Model Extension.

At J-PET, the following reaction chain is predominantly considered: 22Na → 22Ne* + e+ + νe, 22Ne* → 22Ne + γ, and e+e− → o-Ps → 3γ annihilation. The detection of the 1274 keV prompt γ emission from the 22Ne* de-excitation is the start signal for the positronium-lifetime measurement. Currently, tests of discrete symmetries and quantum entanglement of photons originating from the decay of positronium atoms are the main physics topics investigated by the J-PET group. The first data taking was conducted in 2016 and six data-taking campaigns have concluded with almost 1 PB of data. Physics studies are based on data collected with a point-like source placed in the centre of the detector and covered by a porous polymer to increase the probability of positronium formation. A test measurement with a source surrounded by an aluminium cylinder was also performed. The use of a cylindrical target (figure 1, left) allows researchers to separate in space the positronium formation and annihilation (cylinder wall) from the positron emission (source). Most recently, measurements by J-PET were also performed with a cylinder with the inner wall covered by the porous material.
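
Schematically, the lifetime measurement amounts to histogramming the delay between the prompt 1274 keV start photon and the annihilation photons. The toy sketch below (Python) generates fake delays using the well-known 142 ns o-Ps vacuum lifetime purely for illustration; it is not J-PET data or code.

    import random

    TAU_OPS = 142e-9   # s, o-Ps lifetime in vacuum, used here only to generate toy data

    def toy_delay():
        """Delay between the prompt 1274 keV gamma (start) and the annihilation photons (stop)."""
        return random.expovariate(1.0 / TAU_OPS)

    delays = [toy_delay() for _ in range(100_000)]
    mean_lifetime = sum(delays) / len(delays)   # simple estimator for a pure exponential
    print(f"estimated lifetime: {mean_lifetime * 1e9:.1f} ns")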

Figure 1

The J-PET programme aims to beat the precision of previous measurements for C, CP and CPT symmetry tests in positronium, and to be the first to observe a potential T-symmetry violation. Tests of C symmetry, on the other hand, are conducted via searches for forbidden decays of the positronium triplet state (o-Ps) to 4γ and the singlet state (p-Ps) to 3γ. Tests of the other fundamental symmetries and their combinations will be performed by measuring the expectation values of symmetry-odd operators constructed from the spin of o-Ps and the momenta and polarisation vectors of photons originating from its annihilation (figure 1, right). The physical limit of such tests is expected at the level of about 10⁻⁹ due to photon–photon interactions, which is six orders of magnitude below the present experimental limits (e.g. at the University of Tokyo and by the Gammasphere experiment).
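
One concrete example of such a symmetry-odd operator is the triple correlation between the o-Ps spin direction and the momenta of the two most energetic annihilation photons, S · (k1 × k2), whose non-zero mean would signal CPT violation. The sketch below (Python/NumPy) shows how its expectation value would be accumulated; the event list and format are placeholders, not the J-PET data model.

    import numpy as np

    def cpt_odd_observable(spin, k1, k2):
        """S . (k1 x k2) for unit vectors spin, k1, k2 (k1, k2: the two most energetic photons)."""
        return float(np.dot(spin, np.cross(k1, k2)))

    # Placeholder events: (o-Ps spin direction, photon-1 direction, photon-2 direction).
    events = [
        (np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
        (np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0])),
    ]
    values = [cpt_odd_observable(*ev) for ev in events]
    print("mean of symmetry-odd operator:", np.mean(values))  # compatible with zero if CPT holds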

Since J-PET is built of plastic scintillators, it provides an opportunity to determine the photon’s polarisation through the registration of primary and secondary Compton scatterings in the detector. This, in turn, enables the study of multi-partite entanglement of photons originating from the decays of positronium atoms. The survival of particular entanglement properties in the mixing scenario may make it possible to extract quantum information in the form of distinct entanglement features, e.g. from metabolic processes in human bodies.

Currently a new, fourth J-PET layer is under construction (figure 2), with a single unit of the layer comprising 13 plastic-scintillator strips. With a mass of about 2 kg per single detection unit, it is easy to transport and to build on-site a portable tomographic chamber whose radius can be adjusted for different purposes by using a given number of such units.

Figure 2

The J-PET group is a collaboration between several Polish institutions – Jagiellonian University, the National Centre for Nuclear Research Świerk and Maria Curie-Skłodowska University – as well as the University of Vienna and the National Laboratory in Frascati. The research is funded by the Polish National Centre for Research and Development, by the Polish Ministry of Science and Higher Education and by the Foundation for Polish Science. Although the general interest in improved quality of medical diagnosis was the first step towards this new detector for positron annihilation, today the basic-research programme is equally advanced. The only open question at J-PET is whether a high-resolution full human body tomographic image will be presented before the most precise test of one of nature’s fundamental symmetries.

The post J-PET’s plastic revolution appeared first on CERN Courier.

]]>
Feature A PET detector based on plastic scintillators offers whole-body imaging in addition to precision tests of fundamental symmetries. https://cerncourier.com/wp-content/uploads/2018/10/CCNov18_J-PET-frontisHR-1.png
Europe calls for advanced detector and imaging ideas https://cerncourier.com/a/europe-calls-for-advanced-detector-and-imaging-ideas/ Fri, 28 Sep 2018 13:28:51 +0000 https://preview-courier.web.cern.ch/?p=12709 The European Union (EU) has committed €17 million to help bring a total of 170 breakthrough detection and imaging ideas to market. Led by CERN and funded by the EU’s Horizon 2020 programme, the ATTRACT initiative involves several other European research infrastructures and institutes: the European Molecular Biology Laboratory, European Southern Observatory, European Synchrotron Radiation Facility, […]

The post Europe calls for advanced detector and imaging ideas appeared first on CERN Courier.

]]>
The European Union (EU) has committed €17 million to help bring a total of 170 breakthrough detection and imaging ideas to market. Led by CERN and funded by the EU’s Horizon 2020 programme, the ATTRACT initiative involves several other European research infrastructures and institutes: the European Molecular Biology Laboratory, European Southern Observatory, European Synchrotron Radiation Facility, European XFEL, Institut Laue-Langevin, Aalto University, the European Industrial Research Management Association (EIRMA) and ESADE. It will focus on the development of new radiation sensor and imaging technologies both for scientific purposes and to address broader challenges in the domains of health, sustainable materials and information, and communication technologies.

Markus Nordberg of the CERN-IPT development and innovation unit laid the foundations for ATTRACT back in 2013, observing then how detector developers found it difficult to find suitable programmes to facilitate the wider use of generic detector R&D. “The detector R&D community, for example regarding the LHC upgrades and beyond, has ideas of the potential suitability of its technologies in other fields, but limited contacts, mechanisms or resources available to follow these ideas further or to make a case,” he says. “ATTRACT builds upon the collaborative spirit of open science and co-innovation, where the experience and available infrastructure at laboratories such as CERN could turn out to be useful.”

The ATTRACT seed fund (www.attract-eu.com) is open to researchers and entrepreneurs from organisations all over Europe. The call for proposals for CERN users and other outside laboratories working on detection and imaging technologies will close on 31 October, and the successful proposals will be announced in early 2019. The 170 projects funded by ATTRACT will have one year to develop their ideas, during which business and innovation experts from Aalto University, EIRMA and ESADE Business School will help project teams transform their technology into products, services, companies and jobs.

The post Europe calls for advanced detector and imaging ideas appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/10/CCOct18News-attract.jpg
Thin silicon sharpens STAR imaging https://cerncourier.com/a/thin-silicon-sharpens-star-imaging/ Fri, 28 Sep 2018 13:24:48 +0000 https://preview-courier.web.cern.ch/?p=12719 A new technology has enabled the STAR collaboration at Brookhaven National Laboratory’s Relativistic Heavy-Ion Collider (RHIC) to greatly expand its ability to reconstruct short-lived charm hadron decays, even in collisions containing thousands of tracks. A group of STAR collaborators, led by Lawrence Berkeley National Laboratory, used 400 Monolithic Active Pixel Sensor (MAPS) chips in its […]

The post Thin silicon sharpens STAR imaging appeared first on CERN Courier.

]]>
Gold–gold collision

A new technology has enabled the STAR collaboration at Brookhaven National Laboratory’s Relativistic Heavy-Ion Collider (RHIC) to greatly expand its ability to reconstruct short-lived charm hadron decays, even in collisions containing thousands of tracks. A group of STAR collaborators, led by Lawrence Berkeley National Laboratory, used 400 Monolithic Active Pixel Sensor (MAPS) chips in its new vertex detector, called the heavy-flavour tracker (HFT), representing the first application of this technology in a collider experiment.

The HFT reconstructs charmed hadrons over a broad momentum range by identifying their secondary decay vertices, which are a few tens to hundreds of micrometres away from the collision vertex. The charmed hadrons are used to study heavy-quark energy loss in a quark–gluon plasma (QGP) and to determine emergent QGP-medium transport parameters.

The MAPS sensor is based on the same commercial CMOS technology that is widely used in digital cameras. It comprises an array of 928 × 960 square pixels, each 20.7 × 20.7 μm², providing a single-hit resolution of <6 μm. The sensors are thinned to a thickness of 50 μm and mounted on a carbon-fibre mechanical support, and their relatively low power consumption (170 mW/cm²) allows the detector to be air-cooled. The thinness is important to minimise multiple scattering in the HFT, allowing for good pointing resolution even for low transverse-momentum charged tracks.
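
The quoted single-hit resolution is close to the naive binary-readout expectation of pitch/√12 (charge sharing between pixels and cluster centroiding can improve on this). A quick check, assuming nothing beyond the 20.7 μm pitch quoted above:

    import math

    pitch_um = 20.7
    print(pitch_um / math.sqrt(12))   # ~6.0 um RMS for single-pixel (binary) hits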

The heavy-flavour physics programme enabled by the HFT has been one of the driving forces for RHIC runs from 2014 to 2016. The first measurement with the HFT, of D⁰ elliptic collective flow, shows that D⁰ mesons have significant hydrodynamic flow in gold–gold collisions, and the HFT pointing resolution also enabled the first measurement of charmed-baryon production in heavy-ion collisions.

Building on the success of the STAR HFT, the ALICE collaboration at CERN’s Large Hadron Collider is now building its own MAPS-based vertex detector – the ITS upgrade – and the sPHENIX collaboration at RHIC is also planning a MAPS-based detector. These next-generation detectors will have much faster event readout, by a factor of 20, to reduce event pileup and therefore allow physicists to reconstruct bottom hadrons more efficiently in high-luminosity, heavy-ion collision environments.

The post Thin silicon sharpens STAR imaging appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/10/CCOct18News-star.jpg
Defeating the background in the search for dark matter https://cerncourier.com/a/defeating-the-background-in-the-search-for-dark-matter/ Fri, 28 Sep 2018 10:00:15 +0000 https://preview-courier.web.cern.ch/?p=12744 A global effort is under way to carry out a complete search for high-mass dark-matter particles using an experiment called DarkSide-20k.

The post Defeating the background in the search for dark matter appeared first on CERN Courier.

]]>
Inspecting photomultiplier tubes

Compelling cosmological and astrophysical evidence for the existence of dark matter suggests that there is a new world beyond the Standard Model of particle physics still to be discovered and explored. Yet, despite decades of effort, direct searches for dark matter at particle accelerators and underground laboratories alike have so far come up empty handed. This calls for new and improved methods to spot the mysterious substance thought to make up most of the matter in the universe.

Dark-matter searches using detectors based on liquefied noble gases such as xenon and argon have long demonstrated great discovery potential and continue to play a major role in the field. Such experiments use a large volume of material in which nuclei struck by a dark-matter particle would create a tiny burst of scintillation light, and the very low expected event rate requires that backgrounds are kept to a minimum. Searches employing argon detectors have a particular advantage because they can significantly reduce events from background sources such as the abundant radioactive decays of detector materials and electron scattering by solar neutrinos. That leaves the low-rate nuclear recoils induced by coherent scattering of atmospheric neutrinos as the sole residual background – the so-called “neutrino floor”.

Enter the Global Argon Dark Matter Collaboration (GADMC), which was formed in September 2017. Comprising more than 300 scientists from 15 countries and 60 institutions involved in four first-generation dark-matter experiments – ArDM at Laboratorio Subterráneo de Canfranc in Spain, DarkSide-50 at INFN’s Laboratori Nazionali del Gran Sasso (LNGS) in Italy, DEAP-3600 and MiniCLEAN at SNOLAB in Canada – GADMC is working towards the immediate deployment of a dark-matter detector called DarkSide-20k. The experiment would accumulate an exposure of 100 tonne × year and be followed by a much larger detector to collect more than 1000 tonne × year, both potentially with no instrumental background. These experiments promise the most complete exploration of the mass/parameter range of the present dark-matter paradigm.

Direct detection with liquid argon

One well-considered form of dark matter that matches astronomical measurements is weakly interacting massive particles (WIMPs), which would exist in our galaxy with defined numbers and velocities. In a dark-matter experiment employing a liquid-argon detector, such particles would collide with argon nuclei, causing them to recoil. These nuclear recoils produce ionised and excited argon atoms which, after a series of reactions, form short-lived argon dimers (weakly bonded molecules) that decay and emit scintillation light. The time profile of the scintillation light is significantly different from that created by argon-ionising events associated with radioactivity in the detector material, and has been shown to enable a strong rejection of background sources through a technique known as pulse-shape discrimination.
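
In practice this pulse-shape discrimination is usually expressed through a prompt-light fraction (often called f_prompt or f90): nuclear recoils produce a larger share of the fast singlet light than electron recoils do. The sketch below (Python) is a generic illustration; the 90 ns window and the cut value are assumed, representative choices rather than DarkSide or DEAP parameters.

    PROMPT_WINDOW = 90e-9   # s, assumed prompt-light integration window

    def f_prompt(hit_times, photoelectrons):
        """Fraction of detected scintillation light arriving within the prompt window."""
        total = sum(photoelectrons)
        if total == 0:
            return 0.0
        prompt = sum(pe for t, pe in zip(hit_times, photoelectrons) if t <= PROMPT_WINDOW)
        return prompt / total

    def looks_like_nuclear_recoil(hit_times, photoelectrons, cut=0.6):
        # Nuclear recoils cluster at high f_prompt, electron-recoil backgrounds at low f_prompt.
        return f_prompt(hit_times, photoelectrons) > cut

    print(looks_like_nuclear_recoil([10e-9, 50e-9, 400e-9], [30, 20, 10]))  # True: 50 of 60 pe prompt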

Fig. 1.

Located at LNGS, DarkSide-50 is the first physics detector of the DarkSide programme for dark-matter detection, with a fiducial mass of 50 kg. The experiment produced its first WIMP search results in December 2014 using argon harvested from the atmosphere and, in October the following year, reported the first ever WIMP search results using lower-radioactivity underground argon.

DarkSide-50 uses a detection scheme based on a dual-phase time projection chamber (TPC), which contains a small region of gaseous argon above a larger region of liquid argon (figure 1, left). In this configuration, secondary scintillation light, generated by ionisation electrons that drift up through the liquid region and are accelerated into the gaseous one, is used together with the primary scintillation light to look for a signal. Compared to single-phase detectors using only the pulse-shape discrimination technique, this search method requires even greater care in restricting the radioactive background through detector design and fabrication but provides excellent position resolution. For low-mass (<10 GeV/c²) WIMPs, the primary scintillation light is nearly absent, but the detectors remain sensitive to dark matter through the observation of the secondary scintillation light.
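
In such a dual-phase TPC the depth of an interaction follows directly from the delay between the primary (S1) and secondary (S2) light signals, while the S2 light pattern on the top photosensors gives the transverse coordinates. A minimal sketch (Python), using a drift velocity consistent with the "1 m of drift in 1 ms" figure quoted later in this article:

    V_DRIFT = 1.0e-3   # m per microsecond (1 m of drift in 1 ms, as quoted in the article)

    def interaction_depth(t_s1_us, t_s2_us):
        """Depth of the interaction below the liquid surface (m), from S1 and S2 times in microseconds."""
        return V_DRIFT * (t_s2_us - t_s1_us)

    print(interaction_depth(0.0, 300.0))   # 300 us of drift corresponds to ~0.3 m below the surface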

Fig. 2.

Argon-based dark-matter searches have had a number of successes in the past two years (figure 2). DarkSide-50 established the availability of an underground source of argon strongly depleted in the radioactive isotope 39Ar, while DEAP-3600 (figure 3), the largest running single-phase liquid-argon experiment (3.3 tonnes), demonstrated the best pulse-shape discrimination for scintillation light to date, better than 1 part in 10⁹. In terms of measurements, DarkSide-50 released results from a 500-day detector exposure completely free of instrumental background and set the best exclusion limit yet for interactions of WIMPs with masses between 1.8 and 6 GeV/c². Similar results to those from DarkSide-50 for the mass region above 40 GeV/c² were reported in the first paper from DEAP-3600, and results from a one-year exposure of DEAP-3600 with a fiducial mass of about 1000 kg are expected to be released in the near future.

High-sensitivity searches for WIMPs using noble-gas dual-phase TPC detectors are complementary to searches conducted at the Large Hadron Collider (LHC) in the mass region accessible at the current LHC energy of 13 TeV (which is limited to masses of a few TeV/c²) and can reach masses of 100 TeV/c² and beyond with very good sensitivity.

Leading limits

The best limits to date on high-mass WIMPs have been provided by xenon-based dual-phase TPCs – the leading result given by the recently released XENON1T exposure of 1 tonne × year (figure 2). In spite of a small residual background, they were able to exclude WIMP–nucleon spin-independent elastic-scattering cross-sections above 4.1 × 10⁻⁴⁷ cm² at 30 GeV/c² at 90% confidence level (CERN Courier July/August 2018 p9). Larger xenon detectors (XENONnT and DARWIN) are also planned by the same collaboration (CERN Courier March 2017 p35).

Fig. 3.

The next generation of xenon and argon detectors has the potential to extend the present sensitivity by about a factor of 10. But a further factor of 10 must still be gained before one reaches the neutrino floor – the ultimate level at which interactions of solar and atmospheric neutrinos with the detector material become the limiting background. This is where the GADMC liquid-argon detectors, which are designed to have pulse-shape discrimination capable of eliminating the background from electron scatters of solar neutrinos and internal radioactive decays, can provide an advantage.

GADMC envisages a two-step programme to explore high-mass dark matter. The first step, DarkSide-20k, has been approved for construction at LNGS by Italy’s National Institute for Nuclear Physics (INFN) and by the US National Science Foundation, with present and potentially future funding from Canada. Also a recognised experiment at CERN called RE-37, DarkSide-20k is designed to collect an exposure of 100 tonne × year in a period of five years (to be possibly extended to 200 tonne × year in 10 years), completely free of any instrumental background. The start of data taking is foreseen for 2022–2023. The second step of the programme will involve building an argon detector that is able to collect an exposure of more than 1000 tonne × year. SNOLAB in Canada is a strong candidate to host this second-stage experiment.

Argon can deliver the ultimate background-free search for dark matter, but that comes with extensive technological development. First and foremost, researchers need to extract and distill large volumes of the gas from underground deposits, as argon in the Earth’s atmosphere is unsuitable owing to its high content of the radioactive isotope 39Ar. Second, the scintillation light has to be efficiently detected, requiring innovative photodetector R&D.

Sourcing pure argon

Focusing on the first need, atmospheric argon has a radioactivity of 1 Bq/kg, which is entirely caused by the activation of 40Ar by cosmic rays. Given that the drift time of ionisation electrons over a length of 1 m is 1 ms, a dual-phase TPC detector reaches a complete pile-up condition (i.e. when the event rate exceeds the detector’s ability to read out the information) at a mass of about 1 tonne. Scintillation-only detectors do not fare much better: given that the scintillation lifetime is 10 μs, they are limited to fiducial masses of a few tonnes. The argon road to dark matter has thus required early concentration on solving the problem of procuring large batches of argon that are much more depleted in 39Ar than atmospheric argon is. The solution came through an unlikely path: the discovery that underground sources of CO2 originating from Earth’s mantle carry sizable quantities of noble gases, in reservoirs where secondary production of 39Ar is significantly suppressed.
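
That pile-up argument can be made explicit with the numbers quoted in the paragraph above: the mean number of 39Ar decays per maximum drift window grows linearly with the target mass and reaches one at roughly a tonne.

    # Pile-up estimate for atmospheric argon, using the figures quoted in the article.
    activity_per_kg = 1.0   # Bq/kg of 39Ar in atmospheric argon
    drift_time = 1e-3       # s, maximum drift time over 1 m

    for mass_kg in (100, 1000, 10000):
        rate = activity_per_kg * mass_kg          # 39Ar decays per second in the whole target
        occupancy = rate * drift_time             # mean decays per drift window
        print(f"{mass_kg:>6} kg: {occupancy:.2f} decays per drift window")
    # At ~1000 kg the occupancy reaches ~1: essentially every event overlaps with an
    # unrelated 39Ar decay, the "complete pile-up" condition described above.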

As part of a project called Urania, funded by INFN, GADMC will soon deploy a plant that is able to extract underground argon at a rate of 250 kg per day from the same site in Colorado, US, where argon for DarkSide-50 was extracted. Argon from this underground source is more depleted in 39Ar than atmospheric argon by a factor of at least 1400, making detectors of hundreds of tonnes possible for high-mass WIMP searches.

Not content with this gift of nature, another project called ARIA, also funded by INFN, by the Italian Ministry of University and Research (MIUR), and by the local government of the Sardinia region, is developing a further innovative plant to actively increase the depletion in 39Ar. The plant will consist of a 350 m-tall cryogenic-distillation tower called Seruci-I, which is under construction in the Monte Sinni coal mine in Sardinia operated by the Carbosulcis mining company. Seruci-I will study the active depletion of 39Ar by cryogenic distillation, which exploits the tiny dependence of the vapour pressure upon the atomic number. Seruci-I is expected to reach a production capacity of 10 kg of argon per day with a factor of 10 of 39Ar depletion per pass. This is more than sufficient to deliver – starting from the gas extracted with the Urania underground source – a one-tonne ultra-depleted-argon target that could enable a leading programme of searches for low-mass dark matter. Seruci-I is also expected to perform strong chemical purification at the rate of several tonnes per day and will be used to perform the final stage of purification for the 50 tonne underground argon batch for DarkSide-20k as well as for GADMC’s final detector.

Fig. 4.

CERN plays an important role in DarkSide-20k by carrying out vacuum tests of the 30 modules for the Seruci-I column (figure 4) and by hosting the construction of the cryogenics for DarkSide-20k. At the time of its approval in 2017, DarkSide-20k was set to be deployed within a very efficient system of neutron and cosmic-ray rejection, based on that used for DarkSide-50 and featuring a large organic liquid scintillator detector hosted within a tank of ultrapure deionised water. But with the deployment of new organic scintillator detectors now discouraged at LNGS due to tightening environmental regulations, GADMC is completing the design of a large, and more environmentally friendly, liquid-argon detector for neutron and cosmic-ray rejection based on the cryostat technology developed at CERN to support prototype detector modules for the future Deep Underground Neutrino Experiment (DUNE) in the US.

Turning now to the second need of a background-free search for dark matter – the efficient detection of the scintillation light – researchers are focusing on perfecting existing technology to make low-radioactivity silicon photomultipliers (SiPMs) and using them to build large-area photosensors that are capable of replacing traditional 3-inch cryogenic photomultipliers. Plans for DarkSide-20k settled on the use of so-called NUV-HD-TripleDose SiPMs, designed by Fondazione Bruno Kessler of Trento, Italy, and produced by LFoundry of Avezzano, also in Italy. In the meantime, researchers at LNGS and other institutions succeeded in overcoming the large capacitance per unit area (50 pF/mm²) of these devices, building photosensors with an area of 25 cm² that deliver a signal-to-noise ratio of 15 or larger. A new INFN facility, the Nuova Officina Assergi, was designed to enable the high-throughput production of SiPMs to make such photosensors for DarkSide-20k and future detectors, and it is now under construction.

GADMC’s programme is complemented by a world-class effort to calibrate noble-liquid detectors for low-energy nuclear recoils created by low-mass dark matter. On the heels of the SCENE programme that took place at the University of Notre Dame Tandem accelerator in 2013–2015, the R&D programme, developed at the University of Naples Federico II and now installed at the INFN Laboratori Nazionali del Sud, plans to improve the characterisation of the argon response to nuclear recoils. Of special interest is the extension of measurements to 1 keV, in support of searches for low-mass dark matter, and the verification of the possible dependence of the nuclear-recoil signals upon the direction of the initial recoil momentum relative to the drift electric field, which would enable measurements below the neutrino floor. Directionality in argon has already been established for alpha particles, protons and deuterons, and its presence for nuclear recoils was hinted at by the last results of the SCENE experiment.

Although only recently established, GADMC is enthusiastically pursuing this long-term, staged approach to dark-matter detection in a background-free mode, which has great discovery potential extending all the way to the neutrino floor and perhaps beyond.

The post Defeating the background in the search for dark matter appeared first on CERN Courier.

]]>
Feature A global effort is under way to carry out a complete search for high-mass dark-matter particles using an experiment called DarkSide-20k. https://cerncourier.com/wp-content/uploads/2018/09/CCOct18Dark-frontis.png
First human 3D X-ray in colour https://cerncourier.com/a/first-human-3d-x-ray-in-colour/ Fri, 31 Aug 2018 08:00:33 +0000 https://preview-courier.web.cern.ch/?p=12575 New-Zealand company MARS Bioimaging Ltd has used technology developed at CERN to perform the first colour 3D X-ray of a human body, offering more accurate medical diagnoses.

The post First human 3D X-ray in colour appeared first on CERN Courier.

]]>
3D colour x-ray image

New-Zealand company MARS Bioimaging Ltd has used technology developed at CERN to perform the first colour 3D X-ray of a human body, offering more accurate medical diagnoses. Father and son researchers Phil and Anthony Butler from Canterbury and Otago universities in New Zealand spent a decade building their product using Medipix read-out chips, which were initially developed to address the needs of particle tracking in experiments at the Large Hadron Collider.

The CMOS-based Medipix read-out chip works like a camera, detecting and counting each individual particle hitting the pixels when its shutter is open. The resulting high-resolution, high-contrast images make it unique for medical-imaging applications. Successive generations of chips have been developed during the past 20 years with many applications outside high-energy physics. The latest, Medipix3, is the third generation of the technology, developed by a collaboration of more than 20 research institutes – including the University of Canterbury.

MARS Bioimaging Ltd was established in 2007 to commercialise Medipix3 technology. The firm’s product combines spectroscopic information generated by a Medipix3-enabled X-ray detector with powerful algorithms to generate 3D images. The colours represent different energy levels of the X-ray photons as recorded by the detector, hence identifying different components of body parts such as fat, water, calcium and disease markers.
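
The reason energy binning translates into material identification can be illustrated with a toy two-bin, two-material decomposition: the measured attenuation in each bin is a linear combination of the basis materials along the ray, so the material content follows from solving a small linear system. The coefficients below are placeholder assumptions, not MARS or Medipix3 calibration values.

    import numpy as np

    # Rows: energy bins (low, high); columns: basis materials (e.g. soft tissue, calcium).
    # Placeholder mass-attenuation coefficients; assumptions for illustration only.
    A = np.array([[0.20, 0.80],
                  [0.15, 0.30]])

    measured = np.array([0.45, 0.24])        # -ln(I/I0) in each energy bin along one ray
    thicknesses = np.linalg.solve(A, measured)
    print("equivalent material thicknesses along the ray:", thicknesses)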

So far, researchers have been using a small version of the MARS scanner to study cancer, bone and joint health, and vascular diseases that cause heart attacks and strokes. In the coming months, however, orthopaedic and rheumatology patients in New Zealand will be scanned by the new apparatus in a world-first clinical trial. “In all of these studies, promising early results suggest that when spectral imaging is routinely used in clinics it will enable more accurate diagnosis and personalisation of treatment,” said Anthony Butler.

The post First human 3D X-ray in colour appeared first on CERN Courier.

]]>
News New-Zealand company MARS Bioimaging Ltd has used technology developed at CERN to perform the first colour 3D X-ray of a human body, offering more accurate medical diagnoses. https://cerncourier.com/wp-content/uploads/2018/08/CCSep18News-xray-1.png
Higgs centre opens for business https://cerncourier.com/a/higgs-centre-opens-for-business/ Mon, 09 Jul 2018 10:56:38 +0000 https://preview-courier.web.cern.ch/?p=12360 A new facility called the Higgs Centre for Innovation opened at the Royal Observatory in Edinburgh on 25 May as part of the UK government’s efforts to boost productivity and innovation. The centre, named after Peter Higgs of the University of Edinburgh, who shared the 2013 Nobel Prize in physics for his theoretical work on the […]

The post Higgs centre opens for business appeared first on CERN Courier.

]]>
Higgs Centre for Innovation

A new facility called the Higgs Centre for Innovation opened at the Royal Observatory in Edinburgh on 25 May as part of the UK government’s efforts to boost productivity and innovation. The centre, named after Peter Higgs of the University of Edinburgh, who shared the 2013 Nobel Prize in physics for his theoretical work on the Higgs mechanism, will offer start-up companies direct access to academics and industry experts. Space-related technology and big-data analytics are the intended focus, and up to 12 companies will be based there at any one time. According to a press release from the UK Science and Technology Facilities Council (STFC), the facility incorporates laboratories and working spaces for researchers, and includes a business incubation centre based on the successful European Space Agency model already in operation in the UK.

“Professor Higgs’ theoretical work could only be proven by collaboration in different scientific fields, using technology built through joint international ventures,” said principal and vice-chancellor of the University of Edinburgh Peter Mathieson. “This reflects the aims and values of the Higgs Centre for Innovation, which bring scientists, engineers and students together under one roof to work together for the purpose of bettering our understanding of space-related science and driving technological advancement forward.”

The Higgs Centre for Innovation was funded through a £10.7 million investment from the UK government via STFC, which is also investing £2 million over the next five years to operate the centre.

The post Higgs centre opens for business appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/07/CCJulAug18_News-Higgs.jpg
ISOLDE mints chromium for structure studies https://cerncourier.com/a/isolde-mints-chromium-for-structure-studies/ Mon, 09 Jul 2018 10:54:22 +0000 https://preview-courier.web.cern.ch/?p=12365 CERN’s radioactive ion-beam facility ISOLDE has stamped a new coin in its impressive collection. Long considered the domain of high-energy, in-flight rare-isotope facilities, chromium has now been produced at ISOLDE in prodigious quantities, thanks to a new resonant ionisation laser-ion source (RILIS) scheme. Together with the latest calculations based on chiral effective field theory, the […]

The post ISOLDE mints chromium for structure studies appeared first on CERN Courier.

]]>
Resonant ionisation laser-ion source

CERN’s radioactive ion-beam facility ISOLDE has stamped a new coin in its impressive collection. Long considered the domain of high-energy, in-flight rare-isotope facilities, chromium has now been produced at ISOLDE in prodigious quantities, thanks to a new resonant ionisation laser-ion source (RILIS) scheme. Together with the latest calculations based on chiral effective field theory, the result provides important guidance for improving theoretical approaches that bridge the gap between nuclear matter and the low-energy extension of quantum chromodynamics (QCD).

Certain configurations of protons and neutrons are more bound than others, revealing so-called magic numbers. Chromium has 24 protons, situating it squarely between magic calcium (with 20 protons) and nickel (with 28). Of particular interest to nuclear physics are isotopes with a large excess of neutrons.

The RILIS is a chemically selective ion source which relies on resonant excitation of atomic transitions using a tunable laser. In the new ISOLDE experiment, Maxime Mougeot of CSNSM/Université Paris-Saclay and collaborators used RILIS to venture 10 neutrons further on the nuclear chart to 63Cr. With a total of 39 neutrons, 63Cr lies exactly between the magic neutron numbers 28 and 50 and has a half-life of just 130 ms.

The masses of the newly forged chromium isotopes, as measured by ISOLDE’s precision Penning-trap mass spectrometer ISOLTRAP, offer insights into their shape and structure. Magic-number nuclides have filled orbitals that favour spherical shapes, but not so the chromium nuclides weighed by ISOLTRAP, which are deformed. Whereas in some areas of the nuclear chart deformation sets in very suddenly with the addition of a further neutron, the remarkably smooth neutron binding energies of chromium show that deformation sets in very gradually – contrary to previous conclusions.

The ISOLDE measurements were compared with different theoretical results, including a very first attempt by a new ab-initio approach called valence-space in-medium similarity renormalization group (VS-IMSRG). While several ab-initio approaches exist, until now they have been restricted to the near-spherical cases that have very few valence protons and neutrons. The latest VS-IMSRG results are the first for such open-shell nuclides.

“It turns out that the ab-initio VS-IMSRG, an interaction derived from chiral effective field theory which reduces QCD to its relevant degrees of freedom at the nuclear scale, failed to predict these results,” explains Mougeot. “So the recent chromium measurements are constructive and important for advancing this promising technique, which bridges the gap between first-principle calculations and the structure of nuclei at the extremes of the nuclear landscape.”

The post ISOLDE mints chromium for structure studies appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/07/CCJulAug18_News-IsoldeHR.jpg
Trigger-level searches for low-mass dijet resonances https://cerncourier.com/a/trigger-level-searches-for-low-mass-dijet-resonances/ Fri, 01 Jun 2018 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/trigger-level-searches-for-low-mass-dijet-resonances/ Dijet searches look for a resonance in the two-jet invariant mass spectrum.

The post Trigger-level searches for low-mass dijet resonances appeared first on CERN Courier.

]]>

The LHC is not only the highest-energy collider ever built, it also delivers proton–proton collisions at a much higher rate than any machine before. The LHC detectors measure each of these events in unprecedented detail, generating enormous volumes of data. To cope, the experiments apply tight online filters (triggers) that identify events of interest for subsequent analysis. Despite careful trigger design, however, it is inevitable that some potentially interesting events are discarded.

The LHC-experiment collaborations have devised strategies to get around this, allowing them to record much larger event samples for certain physics channels. One such strategy is the ATLAS trigger-object level analysis (TLA), which is used to search for new particles with masses below the TeV scale decaying to a pair of quarks or gluons. The analysis uses selective readout to reduce the event size and therefore allows more events to be recorded, increasing the sensitivity to new physics in domains where rates of Standard Model (SM) background processes are very large.

Dijet searches look for a resonance in the two-jet invariant mass spectrum. The strong-interaction multi-jet background is expected to be smoothly falling, thus a bump-like structure would be a clear sign of a deviation from the SM prediction. As the invariant mass decreases, the rate of multi-jet events increases steeply – to the point where, in the sub-TeV mass range, the data-taking system of ATLAS cannot handle the full rate due to limited data-storage resources. Instead, the ATLAS trigger system discards most of the events in this mass range, reducing the sensitivity to low-mass dijet resonances.
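
For reference, the quantity in which a bump is sought is the invariant mass of the two leading jets. In the approximation of massless jets it can be written in terms of their transverse momenta and angular separation – a textbook relation, not one specific to this analysis:

\[
m_{jj}^{2} = (E_1 + E_2)^2 - \left|\vec{p}_1 + \vec{p}_2\right|^2 \;\approx\; 2\, p_{\mathrm{T}1}\, p_{\mathrm{T}2} \left[\cosh(\eta_1 - \eta_2) - \cos(\phi_1 - \phi_2)\right].
\]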

By recording only the final-state objects used to make the trigger decision, however, this limitation can be bypassed. For a dijet-resonance search, the only necessary ATLAS detector signals are the calorimeter information used to reconstruct the jets. This compact data format records far less information for each event, about 1% of the usual amount, allowing ATLAS to record dijet events at a rate 20 times larger than what is possible with standard data-taking (figure, left).
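
As a back-of-the-envelope check of the storage argument, the two figures quoted above – events roughly 1% of the usual size, recorded at roughly 20 times the usual rate – imply that the TLA stream adds only a modest fraction of the standard data volume. A minimal sketch using only those approximate numbers:

```python
# Illustrative arithmetic only: the ratios below are the approximate values
# quoted in the text, not official ATLAS parameters.
tla_event_size = 0.01   # "about 1% of the usual amount" (relative to a full event)
rate_factor = 20        # "a rate 20 times larger" than standard data-taking

relative_volume = tla_event_size * rate_factor
print(f"TLA stream volume ~ {relative_volume:.0%} of the standard full-readout stream")
# -> TLA stream volume ~ 20% of the standard full-readout stream
```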

While the TLA technique gives access to physics at lower thresholds, the ATLAS detector information for these events is incomplete. Dedicated reconstruction and calibration techniques had to be developed to deal with the partial event information and, as a result, the invariant mass computed from TLA jets agrees with that obtained from jets reconstructed with the full detector readout to within 0.05%.

The data recorded by ATLAS in 2015 and 2016 at a centre-of-mass energy of 13 TeV did not reveal any bump-like structure in the TLA dijet spectrum. The unprecedented statistical precision allowed ATLAS to set its strongest limits on resonances decaying to quarks in the mass range between 450 GeV and 1 TeV (figure, right). The analysis is sensitive to new particles that could mediate interactions between the SM particles and a dark sector, and to other new resonances at the electroweak scale. This analysis probes an important mass region that could not otherwise be explored in this final state with comparable sensitivity.

ATLAS joins CMS and LHCb with an analysis technique that requires fewer storage resources to collect more LHC data. The technique will be extended in the future, with upgraded trigger farms and detectors making tracking information available at early trigger levels. It will thus play an important role at LHC Run 3 and at the high-luminosity LHC upgrade.

The post Trigger-level searches for low-mass dijet resonances appeared first on CERN Courier.

]]>
News Dijet searches look for a resonance in the two-jet invariant mass spectrum. https://cerncourier.com/wp-content/uploads/2018/06/CCJune18_News-Atlas.jpg
CERN’s prowess in nothingness https://cerncourier.com/a/cerns-prowess-in-nothingness/ Fri, 01 Jun 2018 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cerns-prowess-in-nothingness/ CERN is a world-leading centre for extreme vacuum technology, thanks to a wealth of in-house expertise and a constant flow of challenging projects

The post CERN’s prowess in nothingness appeared first on CERN Courier.

]]>

From freeze-dried foods to flat-panel displays and space simulation, vacuum technology is essential in many fields of research and industry. Globally, vacuum technologies represent a multi-billion-dollar, and growing, market. However, it is only when vacuum is applied to particle accelerators for high-energy physics that the technology displays its full complexity and multidisciplinary nature – which bears little resemblance to the common perception of vacuum as being just about pumps and valves.

Particle beams require extremely low pressure in the pipes in which they travel to ensure that their lifetime is not limited by interactions with residual gas molecules and to minimise backgrounds in the physics detectors. The peculiarity of particle accelerators is that the particle beam itself is the main source of gas: ions, protons and electrons interact with the wall of the vacuum vessels and extract gas molecules, either due to direct beam losses or mediated by photons (synchrotron radiation) and electrons (for example by “multipacting”).
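
The resulting “dynamic” pressure rise can be captured by a simple balance between beam-induced desorption and the installed pumping speed – a generic textbook estimate, not a formula taken from this article:

\[
\Delta P \;\simeq\; \frac{\eta\, \Gamma\, k_B T}{S_{\mathrm{eff}}},
\]

where η is the desorption yield (molecules released per incident ion, electron or photon), Γ the flux of those particles striking the wall, T the wall temperature and S_eff the effective pumping speed. Reducing η through surface treatment and coatings is precisely the mitigation strategy described in the rest of this article.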

Nowadays, vacuum technology for particle accelerators is focused on one key challenge: understanding, simulating, controlling and mitigating the direct and indirect effects of particle beams on material surfaces. It is thanks to major advances made at CERN and elsewhere in this area that machines such as the LHC are able to achieve the high beam stability that they do.

Since it is in the few-nanometre-thick top slice of materials that vacuum technology concentrates most effort, CERN has brought together in a single group surface-physics specialists, thin-film coating experts and galvanic-treatment professionals, together with teams of designers and colleagues dedicated to the operation of large vacuum equipment. Bringing this expertise together “under one roof” makes CERN one of the world’s leading R&D centres for extreme vacuum technology, contributing to major existing and future accelerator projects at CERN and beyond.

Intersecting history

Vacuum technology for particle accelerators has been pioneered by CERN since its early days, with the Intersecting Storage Rings (ISR) bringing the most important breakthroughs. At the turn of the 1960s and 1970s, this technological marvel – the world’s first hadron collider – required proton beams of unprecedented intensity (of the order of 10 A) and extremely low pressures in the interaction areas (below 10⁻¹¹ mbar). The former challenge stimulated studies of ion instabilities and led to innovative surface treatments – for instance glow-discharge cleaning – to mitigate the effects. The low-pressure requirement, on the other hand, drove the development of materials and their treatments – both chemical and thermal – in addition to novel high-performance cryogenic pumps and vacuum gauges that are still in use today. The technological successes of the ISR also allowed a direct laboratory measurement of the lowest pressure ever achieved at room temperature, 2 × 10⁻¹⁴ mbar, a record that still stands today.
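
To put the ISR record in perspective, the ideal-gas law converts that pressure into a molecular number density. A minimal sketch, in which the room-temperature value is an assumption rather than a figure from the article:

```python
# Number density from the ideal-gas law, n = P / (k_B * T).
# Illustrative only: 296 K is an assumed room temperature.
k_B = 1.380649e-23      # Boltzmann constant, J/K
P = 2e-14 * 100.0       # 2 x 10^-14 mbar converted to Pa (1 mbar = 100 Pa)
T = 296.0               # assumed room temperature, K

n = P / (k_B * T)       # molecules per cubic metre
print(f"{n:.1e} molecules per m^3, i.e. roughly {n * 1e-6:.0f} per cm^3")
# -> about 5e8 molecules per m^3, or a few hundred per cubic centimetre
```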

The Large Electron Positron collider (LEP) inspired the next chapter in CERN’s vacuum story. Even though LEP’s residual gas density and current intensities were less demanding than those of the ISR, the exceptional length and the intense synchrotron-light power distributed along its 27 km ring triggered the need for unconventional solutions at reasonable cost. Responding to this challenge, the LEP vacuum team developed extruded aluminium vacuum chambers and introduced, for the first time, linear pumping by non-evaporable getter (NEG) strips.

In parallel, LEP project leader Emilio Picasso launched another fruitful development that led to the production of the first superconducting radio-frequency (RF) cavities based on niobium thin-film coating on copper substrates. The ability to attain very low pressures, gained with the ISR, the knowledge acquired in film deposition and the impressive results obtained in surface treatments of copper were the ingredients for success. The present accelerating RF cavities of the LHC and HIE-ISOLDE (figure 1) are essentially based on the expertise assimilated for LEP (CERN Courier May 2018 p26).

The coexistence in the same team of both NEG and thin-film expertise was the seed for another breakthrough in vacuum technology: NEG thin-film coatings, driven by the LHC project requirements and the vision of LHC project leader Lyn Evans. The NEG material, a micron-thick coating made of a mixture of titanium, zirconium and vanadium, is deposited onto the inner wall of vacuum chambers and, after activation by heating in the accelerator, provides pumping for most of the gas species present in accelerators. The Low Energy Ion Ring (LEIR) was the first CERN accelerator to implement extensive NEG coating in around 2006. For the LHC, one of the technology’s key benefits is its low secondary-electron emission, which suppresses the growth of electron clouds in the room-temperature part of the machine (figure 2).

Studying clouds

Electron clouds had to be studied in depth for the LHC. CERN’s vacuum experts provided direct measurements of the effect in the Super Proton Synchrotron (SPS) with LHC beams, contributing to a deeper understanding of electron emission from technical surfaces over a large range of temperatures. New concepts for vacuum systems at cryogenic temperatures were invented, in particular the beam screen. Conceived at BINP (Russia) and further developed at CERN, this key technology is essential for keeping the gas density stable and for reducing the heat load on the 1.9 K cold mass of the magnets. This non-exhaustive series of advancements is another example of how CERN’s vacuum success is driven by the often daunting requirements of new projects to pursue fundamental research.

Preparing for the HL-LHC

As the LHC restarts this year for the final stage of Run 2 at a collision energy of 13 TeV, preparations for the high-luminosity LHC (HL-LHC) upgrade are getting under way. The more intense beams of HL-LHC will amplify the effect of electron clouds on both the beam stability and the thermal load to the cryogenic systems. While NEG coatings are very effective in eradicating electron multipacting, their application is limited to room-temperature beam pipes that can be heated (“bakeable” in vacuum jargon) to around 200 °C to activate the coating. Therefore, an alternative strategy has to be found for the parts of the accelerators that cannot be heated, for example those in the superconducting magnets of the LHC and the vacuum chambers in the SPS.

Thin-film coatings made from carbon offer a solution. The idea originated at CERN in 2006 following the observation that beam-scrubbed surfaces – those that have been cleared of trapped gas molecules which increase electron-cloud effects – are enriched in graphite-like carbon. During the past 10 years, this material has been the subject of intense study at CERN. Carbon’s characteristics at cryogenic temperatures are extremely interesting in terms of gas adsorption and electron emission, and the material has already been deposited on tens of SPS vacuum chambers within the LHC Injectors Upgrade project (CERN Courier October 2017 p32). By far the most challenging activity in the coming years is for the HL-LHC project, namely the coating of the beam screens inserted in the triplet magnets, to be situated on both sides of the four LHC experiments to squeeze the proton beams to a smaller size at the collision points. A dedicated sputtering source has been developed that allows alternate deposition of titanium, to improve adherence, and carbon. At the end of the process, the latter layer will be just 50 nm thick.

Another idea to fight electron clouds for the HL-LHC, originally proposed by researchers at the STFC Accelerator Science and Technology Centre (ASTeC) and the University of Dundee in the UK, involves laser-treating surfaces to make them rougher: secondary electrons are intercepted by the surrounding surfaces and cannot be accelerated by the beam. In collaboration with UK researchers and GE Inspection Robotics, CERN’s vacuum team has recently developed a miniature robot that can direct the laser onto the LHC beam screen (“Miniature robot” image). The possibility of in situ surface treatments by lasers opens new perspectives for vacuum technology in the next decades, including studies for future circular colliders.

An additional drawback of the HL-LHC’s intense beams is the higher rate of induced radioactivity in certain locations: the extremities of the detectors, owing to the higher flux of interaction debris, and the collimation areas due to the increased proton losses. To minimise the integrated radioactive dose received by personnel during interventions, it is necessary to properly design all components and define a layout that facilitates and accelerates all manual operations. Since a large fraction of the intervention time is taken up by connecting pieces of equipment, remote assembling and disassembling of flanges is a key area for potential improvements.

One interesting idea that is being developed by CERN’s vacuum team, in collaboration with the University of Calabria (Italy), concerns shape-memory alloys. Given appropriate thermomechanical pre-treatment, a ring of such materials delivers radial forces that tighten the connection between two metallic pipes: heating clamps the joint, while cooling releases it. Both actions can be easily implemented remotely, reducing human intervention significantly. Although the invention was motivated by the HL-LHC, it has other applications that are not yet fully exploited, such as flanges for radioactive-beam accelerators and, more generally, the coupling of pipes made of different materials.

Synchrotron applications

Technology development sometimes diverges from its initial goals, and this phenomenon is clearly illustrated by one of our most recent innovations. In the main linac of the Compact Linear Collider (CLIC), which envisages a high-energy linear electron-positron collider, the quadrupole magnets need a beam pipe with a very small diameter (about 8 mm) and pressures in the ultra-high vacuum range. The vacuum requirement can be obtained by NEG-coating the vacuum vessel, but the coating process in such a high aspect-ratio geometry is not easy due to the very small space available for the material source and the plasma needed for its sputtering.

This troublesome issue has been solved by a complete change of the production process: the NEG material is no longer directly coated on the wall of the tiny pipe, but instead is coated on the external wall of a sacrificial mandrel made of high-purity aluminium (figure 3). On top of the coated mandrel, the beam pipe is made by copper electroforming, a well-known electrolytic technique, and in the last production step the mandrel is dissolved chemically in a caustic soda solution. This production process has no limitations on the diameter of the coated beam pipe, and even non-cylindrical geometries can be conceived. The flanges can be assembled during electroforming so that welding or brazing is no longer necessary.

It turns out that the CLIC requirement is shared by next-generation synchrotron-light sources. For these accelerators, future constraints for vacuum technology are quite clear: very compact magnets with magnetic poles as close as possible to the beam – to reduce costs and improve beam performance – call for very-small-diameter vacuum pipes (less than 5 mm in diameter and more than 2 m long). CERN has already produced prototypes that should meet these requirements. Indeed, the collaboration between the CERN vacuum group and vacuum experts of light sources has a long history. It started with the need for photon beams for the study of vacuum chambers for LEP and beam screens for the LHC, and continued with NEG coating as an efficient choice for reducing residual gas density – a typical example is MAX IV, in which CERN was closely involved (CERN Courier September 2017 p38). The new way to produce small-diameter beam pipes represents another step in this fruitful collaboration.

Further technology transfer has come from the sophisticated simulations necessary for the HL-LHC and the Future Circular Collider study. A typical example is the integration of electromagnetic and thermomechanical phenomena during a magnet quench to assess the integrity of the vacuum vessel. Another example is the simulation of gas-density and photon-impingement profiles by Monte Carlo methods. These simulation codes have found a large variety of applications well beyond the accelerator field, from the coating of electronic devices to space simulation. For the latter, codes have been used to model the random motion and migration of any chemical species present on the surfaces of satellites at the time of their launch, which is a critical step for future missions to Mars looking for traces of organic compounds.
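
To give a flavour of what such test-particle Monte Carlo codes do – this is a deliberately minimal sketch with an arbitrary geometry, not a production code – the snippet below estimates the molecular-flow transmission probability of a cylindrical tube. Molecules enter at one end, fly in straight lines and are re-emitted diffusely (following the cosine law) at every wall collision until they escape through either end; the same bookkeeping, extended to real geometries, yields the gas-density and impingement profiles mentioned above:

```python
# Minimal test-particle Monte Carlo for free molecular flow (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def cosine_direction(normal):
    """Sample a unit direction from the cosine (Lambert) law about `normal`."""
    u1, u2 = rng.random(), rng.random()
    ct, st = np.sqrt(u1), np.sqrt(1.0 - u1)          # cos(theta), sin(theta)
    phi = 2.0 * np.pi * u2
    # orthonormal basis (t1, t2, normal)
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    return st * np.cos(phi) * t1 + st * np.sin(phi) * t2 + ct * normal

def transmitted(radius=1.0, length=10.0):
    """Trace one molecule; return True if it exits at z = length."""
    r, phi = radius * np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
    pos = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])
    d = cosine_direction(np.array([0.0, 0.0, 1.0]))
    while True:
        # flight distance to the cylindrical wall: |(x,y) + t (dx,dy)| = radius
        a = d[0] ** 2 + d[1] ** 2
        t_wall = np.inf
        if a > 1e-12:
            b = pos[0] * d[0] + pos[1] * d[1]
            c = pos[0] ** 2 + pos[1] ** 2 - radius ** 2
            disc = b * b - a * c
            if disc > 0.0:
                t = (-b + np.sqrt(disc)) / a
                if t > 1e-9:
                    t_wall = t
        t_out = (length - pos[2]) / d[2] if d[2] > 0 else np.inf   # exit far end
        t_back = -pos[2] / d[2] if d[2] < 0 else np.inf            # back out entrance
        t = min(t_wall, t_out, t_back)
        if t == t_out:
            return True
        if t == t_back:
            return False
        pos = pos + t * d                                  # diffuse wall re-emission
        d = cosine_direction(np.array([-pos[0], -pos[1], 0.0]) / radius)

n_molecules = 5000
n_through = sum(transmitted() for _ in range(n_molecules))
print(f"transmission probability ~ {n_through / n_molecules:.3f}")  # ~0.2 for length/radius = 10
```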

Of course, the main objective of the CERN vacuum group is the operation of CERN’s accelerators, in particular those in the LHC chain. Here, the relationship with industry is key because the vacuum industry across CERN’s Member and Associate Member states provides us with state-of-the-art components, valves, pumps, gauges and control equipment that have contributed to the high reliability of our vacuum systems. On the other hand, the LHC gives high visibility to industrial products that, in turn, can be beneficial for the image of our industrial partners. Collaborating with industry is a win–win situation.

The variety of projects and activities performed at CERN provides us with continuous stimulation to improve and extend our competences in vacuum technology. The fervour of new collider concepts and experimental approaches in the physics community drives us towards innovation. Other typical examples are antimatter physics, which requires very low gas density (figure 4), and radioactive-beam physics, which imposes severe controls on contamination and gas exhausting. New challenges are already visible on the horizon, for example physics with gas targets, higher-energy beams in the LHC, and coating beam pipes with high-temperature superconductors to reduce beam impedance.

An orthogonal driver of innovation is reducing the costs and operational downtime of CERN’s accelerators. In the long term, our dream is to avoid bakeout of vacuum systems so that very low pressure can be attained without the heavy operation of heating the vacuum vessels in situ, principally to remove water vapour. Such advances are possible only if the puzzling interaction between water molecules and technical materials is understood, where again only a very thin layer on top of material surfaces makes the difference. Achieving ultra-high vacuum in a matter of a few hours at a reduced cost would also have an impact well beyond the high-energy physics community. This and other challenges at CERN will guarantee that we continue to push the limits of vacuum technology well into the 21st century.

The post CERN’s prowess in nothingness appeared first on CERN Courier.

]]>
Feature CERN is a world-leading centre for extreme vacuum technology, thanks to a wealth of in-house expertise and a constant flow of challenging projects https://cerncourier.com/wp-content/uploads/2018/06/CCJune18_Vacuum-1.jpg
Industry rises to FCC conductor challenge https://cerncourier.com/a/industry-rises-to-fcc-conductor-challenge/ Thu, 19 Apr 2018 15:37:57 +0000 https://preview-courier.web.cern.ch?p=13366 Future Circular Collider (FCC) development workshop at CERN

The post Industry rises to FCC conductor challenge appeared first on CERN Courier.

]]>
Superconductivity underpins large particle accelerators such as the LHC. It is also a key enabling technology for a future circular proton–proton collider reaching energies of 100 TeV, as is currently being explored by the Future Circular Collider (FCC) study. To address the considerable challenges of this project, a conductor development workshop was held at CERN on 5 and 6 March to create momentum for the FCC study and bring together industrial and academic partners.

The alloy niobium titanium is the most successful practical superconductor to date, and has been used in all superconducting particle accelerators and detectors. But the higher magnetic fields required for the high-luminosity LHC (11 T) and FCC (16 T) call for new materials. A potential superconducting technology suitable for accelerator magnets beyond fields of 10 T is the compound niobium tin (Nb3Sn), which is the workhorse of the 16 T magnet-development programme at CERN.

The FCC conductor programme aims to develop Nb3Sn multi-filamentary wires with a critical current-density performance of at least 1500 A/mm² at 16 T and at a temperature of 4.2 K. This is 30 to 50% higher than the conductor for the HL-LHC, and a significant R&D effort – including fundamental research on superconductors – is needed to meet the magnet requirements of future higher-energy accelerators. The FCC magnets will also require thousands of tonnes of superconductor, calling for a wire design suitable for industrial-scale production at a considerably lower cost than current high-field conductors.

CERN is engaged in collaborative conductor development activities with a number of industrial and academic partners to achieve these challenging targets, and the initial phase of the programme will last for four years. Representatives from five research institutes and seven companies from the US, Japan, Korea, Russia, China and Europe attended the March meeting to discuss progress and opportunities. Firms already producing Nb3Sn superconducting wire for the FCC programme are Kiswire Advanced Technology (KAT); the TVEL Fuel Company working with the Bochvar Institute (JSC VNIINM); and, from Japan, Furukawa Electric and Japan Superconductor Technology (JASTEC), both coordinated by the KEK laboratory. Columbus Superconductor SpA is participating in the programme for other superconducting materials, while two additional companies – Luvata and Western Superconducting Technologies (WST) – expressed their interest in the CERN conductor programme and attended the workshop.

The early involvement of industry is crucial and the event provided an environment in which industrial partners were free to discuss their proposed technical solutions openly. In the past, most companies produced a bronze-route Nb3Sn superconductor, which has no potential to reach the target for FCC. Thanks to their commitment to the programme, and with CERN’s support, companies are now investing in a transition to internal tin processes. Innovative approaches for characterising superconducting wires are also coming out of academia. Developments include the correlation of microstructures, compositional variations and superconducting properties at TU Wien, research into promising internal-oxidation routes at the University of Geneva, phase transformation studies at TU Bergakademie Freiberg and research on novel superconductors for high fields at SPIN in Genova.

The FCC initiative is of key importance for future high-energy accelerators. Participants agreed that this could result in a new class of high-performance Nb3Sn material suitable not only for accelerator magnets, but also for other large-scale applications such as high-field NMR and laboratory solenoids.

Panos Charitos, CERN.

The post Industry rises to FCC conductor challenge appeared first on CERN Courier.

]]>
Meeting report Future Circular Collider (FCC) development workshop at CERN https://cerncourier.com/wp-content/uploads/2018/04/CCMay18_FP-fcc-1.jpg
The long march of niobium on copper https://cerncourier.com/a/the-long-march-of-niobium-on-copper/ Thu, 19 Apr 2018 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-long-march-of-niobium-on-copper/ Niobium–copper accelerating cavities are beginning to challenge their bulk-niobium counterparts

The post The long march of niobium on copper appeared first on CERN Courier.

]]>

Superconductors are poor thermal conductors. Whenever a superconducting state is established in a given material, a large fraction of the conduction electrons are frozen in Cooper pairs and become unavailable for heat transport. This can have serious practical implications: at low temperatures, even a small amount of localised heating can drive the material into a normal conducting state, triggering an avalanche process (or quench) that destroys superconductivity in the whole device. It is therefore good practice in applied superconductivity to stabilise superconductors with a high-thermal-conductivity metal. The superconducting niobium–titanium filaments in strands used for accelerator magnets such as those in the LHC, for example, are usually embedded in a copper matrix to spread out any small fluctuation in temperature.

In the world of superconducting radio-frequency (SRF) cavities, which are used to accelerate charged particles in accelerators around the globe, this technique is the exception rather than the rule. Today, mainstream SRF technology makes use of bulk niobium sheets to build the entire resonator structure, circumventing the problem of poor thermal conductivity by using material of very high purity. In the past 40 years, great leaps forward have brought bulk-niobium cavity performances close to what is considered the intrinsic limit of the material, with accelerating fields of the order of 50 MV/m in elliptical structures.

However, things were not always this way. In the 1960s, lead (which is a type-I superconductor) was electroplated on copper RF resonators used for beam acceleration at several high-energy physics facilities around the world. As the RF currents only penetrate a few tens of nanometres in the cavity wall, a few-micron-thin superconducting layer on a high-thermal-conductivity copper substrate provided an elegant solution to the problem of thermal stabilisation, in perfect analogy to what happens in superconducting strands for magnets.
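
The “few tens of nanometres” quoted above is set by the magnetic penetration depth of the superconductor. In the simple London picture – a textbook expression, quoted here only for orientation –

\[
\lambda_L = \sqrt{\frac{m_e}{\mu_0\, n_s e^2}},
\]

where n_s is the density of superconducting electrons; for niobium λ_L is of order 40 nm. This is why a film only a few microns thick already looks, to the RF fields, like bulk superconductor, while the copper substrate underneath handles the heat.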

Coated RF cavities (figure 1) offered another advantage: they allowed the function of producing high electric fields to be easily decoupled from that of giving enough mechanical stability, which is required to control the field amplitude and phase sufficiently for beam acceleration. The main drawback of lead-plated cavities was their relatively low accelerating field, limited by the critical magnetic field of lead. A natural step forward was to use niobium, whose critical field is about 2.5 times higher. Unfortunately, the synthesis of good-quality niobium films on copper was much more difficult, and in the 1970s the research quickly turned to using bulk niobium as a cavity material.

Niobium-coated cavities

In 1984, Cristoforo Benvenuti, Nadia Circelli and Max Hauer from CERN (figure 2) published a seminal paper on niobium films for superconducting accelerating cavities. These films were deposited on copper cavity substrates by sputtering, which was reported to give encouraging results on real cavities. It was the start of a successful development, which in only a few years led to the greatest achievement of niobium–copper technology: the SRF system of the upgraded Large Electron Positron collider (LEP), which operated at CERN in the mid- to late-1990s. This consisted of 288 four-cell elliptical cavities working at a frequency of 352 MHz, the vast majority of which were produced with niobium–copper technology by three European firms. In the early days of LEP, niobium–copper cavities could outperform their bulk niobium counterparts at LEP’s nominal fields. Besides being cheaper and free from quenches, they also revealed unexpected insensitivity to trapped magnetic flux. This is a peculiar problem of superconducting cavities that can spoil their performance unless great care is used to shield the cavity from any magnetic fields when it undergoes the superconducting transition. The need for magnetic shielding added to the cost of the bulk niobium systems, whereas LEP’s niobium–copper cavities could operate without shielding.

In the meantime, as coated-cavity technology took off at CERN, bulk niobium technology was progressing fast: all over the world in national laboratories and in industry, the SRF community removed all of the obstacles one after the other in the quest for higher accelerating fields and lower power dissipation (expressed by the unloaded quality factor, Q). Nowadays, state-of-the-art bulk niobium cavities are cutting-edge technology objects made from high residual-resistivity-ratio (RRR) niobium sheets that are shaped and electron-beam welded with the utmost precision. They must be assembled according to high cleanliness standards borrowed from the semiconductor industry to prevent electron-field emission. They need to be carefully shielded from the Earth’s and other parasitic magnetic fields, operated in superfluid helium, and employ complex feedback systems and high installed RF power to combat the effects of microphonics (vibrations that detune the cavities). The result is outstanding in terms of accelerating field and Q, now approaching the theoretical limits. Bulk niobium cavities can be produced in industry and are operating reliably in accelerators at several facilities worldwide, such as the European X-ray free electron laser (XFEL) in Hamburg (CERN Courier July/August 2017 p25).
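
For orientation, the unloaded quality factor mentioned above compares the energy stored in the cavity with the power dissipated in its walls, and for a given cavity shape it is fixed by the surface resistance – standard SRF relations rather than numbers for any particular cavity:

\[
Q_0 = \frac{\omega\, U}{P_{\mathrm{diss}}} = \frac{G}{R_s},
\]

where U is the stored electromagnetic energy, P_diss the wall dissipation, G a geometry factor of a few hundred ohms for elliptical cavities and R_s the surface resistance of the superconductor. The “Q-slope” discussed below corresponds, in this language, to a surface resistance that grows with the accelerating field.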

In contrast, niobium–copper cavities suffered from a problem that had been present since the start: at high accelerating fields, the cavity Q value decreased faster than it did in the case of bulk niobium. This phenomenon is still not fully explained today. During and after the LEP era, a great deal of research was carried out at CERN on 1.5 GHz niobium–copper cavities to tackle the issue. Despite significant progress in understanding and remarkable cavity results, the gap in performance compared with bulk niobium was not bridged. This prevented niobium-coated copper cavities from being considered for the high-energy linear colliders under study at the time – namely TESLA, the technology of which has since morphed into that underpinning the European XFEL and the International Linear Collider proposal (CERN Courier September 2017 p27).

At medium accelerating fields, like those required for circular hadron machines and many other applications, the drop in Q was not a showstopper. For the LHC, the niobium–copper technology was applied to build the 16 single-cell 400.8 MHz elliptical cavities required in the machine’s accelerating sections. The LHC cavities, like their LEP ancestors, also work at a temperature of 4.5 K – the cryogenics for which is cheaper and more robust than that used for bulk niobium devices.

In the 1990s, niobium–copper cavities of another shape, adapted for heavy-ion acceleration, were employed for the ALPI linac in INFN Legnaro, Italy. Here, Vincenzo Palmieri and collaborators developed niobium-sputtered quarter-wave resonators (QWRs) on copper substrates. A total of 58 niobium–copper cavities are operational today in ALPI, replacing old lead-plated cavities and considerably extending the energy reach of the machine.

Revival at ISOLDE

During the construction of the LHC in the early 2000s, the SRF activities and infrastructures at CERN were down-sized as resources were focused on the production of the LHC’s superconducting magnets. However, the need for a new SRF system returned in 2009, when a proposal was approved for a high-energy upgrade of the ISOLDE facility using a superconducting linac booster for the radioactive ion beams. For this application, niobium–copper technology was considered particularly well suited because the absence of beam loading allows the stiff, niobium-coated copper cavities to be operated at very narrow RF bandwidths, leading to significant savings in installed RF power. To support the high-energy HIE-ISOLDE upgrade, CERN invested in rebuilding its SRF infrastructure and expertise.
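
The power saving follows from the general relation between a cavity’s loaded quality factor and its bandwidth – quoted here only to make the argument concrete:

\[
\Delta f_{\mathrm{BW}} = \frac{f_0}{Q_L},
\]

so a mechanically stiff cavity, whose resonant frequency barely moves under vibrations and helium-pressure fluctuations, can be operated at a high loaded Q and hence a narrow bandwidth. With no beam current to feed, the RF generator then only has to cover the small wall losses and the residual detuning, which keeps the installed RF power modest.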

Today, thanks to the collective effort of several CERN teams, the high-beta section of the HIE-ISOLDE linac is complete (figure 3). The work for HIE-ISOLDE also offered an opportunity to advance understanding of the limitations to niobium–copper cavity performance. One particular issue was the frequent appearance of defects on the copper substrate, especially close to the electron-beam weld. To overcome this problem, towards the end of the production, a new design for the RF cavity was proposed, which made it possible to machine the whole resonator out of a copper billet and thus avoid any weld (figure 1). The results of the change were very encouraging.

The first two cavities, manufactured with this technique in industry and coated at CERN, were tested at the end of 2017. Their RF performance was the best of a series of 20 units. Even more strikingly, when cooled down close to superfluid helium temperatures with active shielding of the ambient field, a cavity reached unprecedented peak fields for niobium–copper technology (figure 4). Incredibly, the RF performance of this cavity is comparable to bulk niobium cavities with the same shape, at least as far as the Q-slope is concerned. This result is now the basis for exploring possible new applications, notably for the acceleration of higher-beta beams like those required in spallation sources for accelerator-driven systems.

At about the same time, another excellent result was achieved in the context of the LHC spare-cavities programme. Newly coated cavities are giving results that lie on an upward learning curve, and have already surpassed the LHC specifications. Achieving such good RF performances is only possible if high quality standards are maintained along the whole production chain, from manufacturing of the copper substrate, to chemical polishing, ultra-pure water rinsing, cleanroom assembly and coating, to the final RF test at cryogenic temperatures. This requires a close collaboration of various teams of specialists, and CERN is an ideal place for that.

These two recent achievements are proof that the potential of niobium–copper technology is not exhausted, and that these cavities could be as high performing as their bulk niobium counterparts. Indeed, this technology is already being considered within the Future Circular Collider study, led by CERN to explore the feasibility of a 100 km-circumference machine. Clearly, the long march of niobium on copper is far from over.

The post The long march of niobium on copper appeared first on CERN Courier.

]]>
Feature Niobium–copper accelerating cavities are beginning to challenge their bulk-niobium counterparts https://cerncourier.com/wp-content/uploads/2018/06/CCMay18-coating-frontis.jpg
Big science meets industry in Copenhagen https://cerncourier.com/a/big-science-meets-industry-in-copenhagen/ Fri, 23 Mar 2018 15:42:19 +0000 https://preview-courier.web.cern.ch?p=13368 The Big Science Business Forum (BSBF), held in Copenhagen, Denmark, saw delegates discuss opportunities in the current big-science landscape

The post Big science meets industry in Copenhagen appeared first on CERN Courier.

]]>

Big science equals big business, whether it is manufacturing giant superconducting magnets for particle colliders or perfecting mirror coatings for space telescopes. The Big Science Business Forum (BSBF), held in Copenhagen, Denmark, on 26–28 February, saw more than 1000 delegates from more than 500 companies and organisations spanning 30 countries discuss opportunities in the current big-science landscape.

Nine of the world’s largest research facilities – CERN, EMBL, ESA, ESO, ESRF, ESS, European XFEL, F4E and ILL – offered insights into procurement opportunities and orders totalling more than €12 billion for European companies in the coming years. These range from advisory engineering work and architectural tasks to advanced technical equipment, construction projects and radiation-resistant materials. A further nine organisations also joined the conference programme: ALBA, DESY, ELI-NP, ENEA, FAIR, MAX IV, SCK•CEN – MYRRHA, PSI and SKA, thereby gathering 18 of the world’s most advanced big-science organisations under one roof.

The big-science market is currently fragmented by the varying quality standards and procurement procedures of the different laboratories, delegates heard. BSBF aspired to offer a space to discuss the entry challenges for businesses and suppliers – including small- and medium-sized enterprises – who can be valuable business partners for big-science projects.

“The vision behind BSBF is to provide an important stepping stone towards establishing a stronger, more transparent and efficient big-science market in Europe and we hope that this will be the first of a series of BSBFs in different European cities,” said Agnete Gersing of the Danish ministry for higher education and science during the opening address.

Around 700 one-to-one business meetings took place, and delegates also visited the European Spallation Source and MAX IV facility just across the border in Lund, Sweden. Parallel sessions covered big science as a business area, addressing topics such as the investment potential and best practices of Europe’s big-science market.

“Much of the most advanced research takes place at big-science facilities, and their need for high-tech solutions provides great innovation and growth opportunities for private companies,” said Danish minister for higher education and science, Søren Pind.

The post Big science meets industry in Copenhagen appeared first on CERN Courier.

]]>
Meeting report The Big Science Business Forum (BSBF), held in Copenhagen, Denmark, saw delegates discuss opportunities in the current big-science landscape https://cerncourier.com/wp-content/uploads/2018/03/CCApr18_FP-bigscience.jpg
High-gradient X-band technology: from TeV colliders to light sources and more https://cerncourier.com/a/high-gradient-x-band-technology-from-tev-colliders-to-light-sources-and-more/ Fri, 23 Mar 2018 11:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/high-gradient-x-band-technology-from-tev-colliders-to-light-sources-and-more/ Powerful linear-accelerator technology developed for fundamental exploration is being transferred to applications beyond high-energy physics

The post High-gradient X-band technology: from TeV colliders to light sources and more appeared first on CERN Courier.

]]>

The demanding and creative environment of fundamental science is a fertile breeding ground for new technologies, especially unexpected ones. Many significant technological advances, from X-rays to nuclear magnetic resonance and the Web, were not themselves a direct objective of the underlying research, and particle accelerators exemplify this dynamic transfer from the fundamental to the practical. From isotope separation and X-ray radiotherapy to, more recently, hadron therapy, there are now many categories of accelerators dedicated to diverse user communities across the sciences, academia and industry. These include synchrotron light sources, X-ray free-electron lasers (XFELs) and neutron spallation sources, and enable research that often has direct societal and economic implications.

During the past decade or so, high-gradient linear accelerator technology developed for fundamental exploration has matured to the point where it is being transferred to applications beyond high-energy physics. Specifically, the unique requirements for the Compact Linear Collider (CLIC) project at CERN have led to a new high-gradient “X-band” accelerator technology that is attracting the interest of light-source and medical communities, and which would have been difficult for those communities to advance themselves due to their diverse nature.

Set to operate until the mid-2030s, the Large Hadron Collider (LHC) collides protons at an energy of 13 TeV. One possible path forward for particle physics in the post-LHC, “beyond the Standard Model”, era is a high-energy linear electron–positron collider. CLIC envisions an initial stage with a centre-of-mass energy of 380 GeV, focused on precision measurements of the Higgs boson and the top quark, which are promising targets to search for deviations from the Standard Model (CERN Courier November 2016 p20). The machine could then, guided by the results from the LHC and the initial-stage linear collider, be lengthened to reach energies up to 3 TeV for detailed studies of this high-energy regime. CLIC is overseen by the Linear Collider Collaboration along with the International Linear Collider (ILC), a lower energy electron–positron machine envisaged to operate initially at 250 GeV (CERN Courier January/February 2018 p7).

The accelerator technology required by CLIC has been under development for around 30 years and the project’s current goals are to provide a robust and detailed design for the update of the European Strategy for Particle Physics, with a technical design report by 2026 if resources permit. One of the main challenges in making CLIC’s 380 GeV initial energy stage cost effective, while guaranteeing its reach to 3 TeV, is generating very high accelerating gradients. The gradient needed for the high-energy stage of CLIC is 100 MV/m, which equates to 30 km of active acceleration. For this reason, the CLIC project has made a major investment in developing high-­gradient radio-frequency (RF) technology that is feasible, reliable and cheap.

Evading obstacles

Maximising the accelerating gradient leads to a shorter linac and thus a less expensive facility. But there are two main limiting factors: the increasing need of peak RF power and the limitation of accelerating-structure surfaces to support increasingly strong electromagnetic fields. Circumventing these obstacles has been the focus of CLIC activities for several years.

One way to mitigate the increasing demand for peak power is to use higher frequency accelerating structures (figure 1), since the power needed for a fixed beam energy goes up linearly with gradient but goes down approximately with the inverse square root of the RF frequency. The latest XFELs, SACLA in Japan and SwissFEL in Switzerland, operate at “C-band” frequencies of 5.7 GHz, which enables a gradient of around 30 MV/m and a peak power requirement of around 12 MW/m in the case of SwissFEL. This increase in frequency required a significant technological investment, but CLIC’s demand for 3 TeV energies and high beam current requires a peak power per metre of 200 MW/m! This challenge has been under study since the late 1980s, with CLIC first focusing on 30 GHz structures and the Next Linear Collider/Joint Linear Collider community developing 11.4 GHz “X-band” technology. The twists and turns of these projects are many, but the NLC/JLC project ceased in 2005 and CLIC shifted to X-band technology in 2007. CLIC also generates high peak power using a two-beam scheme in which RF power is locally produced by transferring energy from a low-energy, high-current beam to a high-energy, low-current beam. In contrast to the ILC, CLIC adopts normal-conducting RF technology to go beyond the approximately 50 MV/m theoretical limit of existing superconducting cavity geometries.
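
The frequency argument can be put into symbols. For a fixed final beam energy E = G L_active, the scaling quoted above for the required peak RF power reads – as an approximate rule of thumb that ignores beam loading and the details of structure design –

\[
P_{\mathrm{tot}} \;\propto\; \frac{G}{\sqrt{f}} \qquad\Rightarrow\qquad \frac{P}{L_{\mathrm{active}}} \;\propto\; \frac{G^{2}}{\sqrt{f}},
\]

since the active length shrinks as 1/G. Doubling the gradient therefore roughly quadruples the peak power per metre unless the frequency is raised – one reason CLIC adopted X-band – while the 200 MW/m figure above also reflects CLIC’s high beam current, which this simple scaling ignores.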

The second main challenge when generating high gradients is more fundamental than the practical peak-power requirements. A number of phenomena come to life when the metal surfaces of accelerating structures are subject to very high electromagnetic fields, the most prominent being vacuum arcing or breakdown, which induces kicks to the beam that result in a loss of luminosity. A CLIC accelerating structure operating at 100 MV/m will have surface electric fields in excess of 200 MV/m, sometimes leading to the formation of a highly conductive plasma directly above the surface of the metal. Significant progress has been made in understanding how to maximise gradient despite this effect, and a key insight has been the identification of the role of local power flow. Pulsed surface heating is another troubling high-field phenomenon faced by CLIC, where ohmic losses associated with surface currents result in fatigue damage to the outer cavity wall and reduced performance. Understanding these phenomena has been essential to guide the development of an effective design and technology methodology for achieving gradients in excess of 100 MV/m.

Test-stand physics

Critical to CLIC’s development of high-gradient X-band technology has been an investment in four test stands, which allowed investigations of the complex, multi-physics effects that affect high-power behaviour in operational structures (figure 2). The test stands provided the RF klystron power, dedicated instrumentation and diagnostics to operate, measure and optimise prototype RF components. In addition, to investigate beam-related effects, one of the stands was fed by a beam of electrons from the former “CTF3” facility. This has since been replaced by the CLEAR test facility, at which experiments will come on line again next year (CERN Courier November 2017 p8).

While the initial motivation for the CLIC test stands was to test prototype components, high-gradient accelerating structures and high-power waveguides, the stands are themselves prototype RF units for linacs – the basic repeatable unit that contains all the equipment necessary to accelerate the beam. A full linac, of course, needs many other subsystems such as focusing magnets and beam monitors, but the existence of four operating units that can be easily visited at CERN has made high-gradient and X-band technology serious options for a number of linac applications in the broader accelerator community. An X-band test stand at KEK has also been operational for many years and the group there has built and tested many CLIC prototype structures.

With CLIC’s primary objective being to provide practical technology for a particle-physics facility in the multi-TeV range, it is rather astonishing that an application requiring a mere 45 MeV beam finds itself benefiting from the same technology. This small-scale project, called Smart*Light, is developing a compact X-ray source for a wide range of applications including cultural heritage, metallurgy, geology and medicine, providing a practical local alternative to a beamline at a large synchrotron light source. Led by the University of Eindhoven in the Netherlands, Smart*Light produces monochromatic X-rays via inverse Compton scattering, in which X-rays are produced by “bouncing” a laser pulse off an electron beam. The project team aims to make the equipment small and inexpensive enough to be able to integrate it in a museum or university setting, and is addressing this objective with a 50 MV/m-range linac powered by one of the two standard CLIC test-stand configurations (a 6 MW Toshiba klystron). Funding has been awarded to construct the first prototype system and, once operational, Smart*Light will pursue commercial production.
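
A quick worked example shows why a linac of only a few tens of MeV suffices for hard X-rays. For head-on inverse Compton scattering in the low-recoil (Thomson) limit, the back-scattered photon energy is roughly 4γ² times the laser photon energy. The laser wavelength below is an assumed, typical value, not a Smart*Light specification:

```python
# Back-scattered photon energy for head-on inverse Compton scattering,
# E_x ~ 4 * gamma^2 * E_laser (Thomson limit, electron recoil neglected).
# The 1030 nm wavelength is an assumed, illustrative laser choice.
E_beam_MeV = 45.0                            # electron-beam energy from the article
m_e_MeV = 0.511                              # electron rest mass
gamma = E_beam_MeV / m_e_MeV

laser_wavelength_nm = 1030.0                 # assumed infrared laser
E_laser_eV = 1239.84 / laser_wavelength_nm   # photon energy, E = hc / lambda

E_x_keV = 4.0 * gamma**2 * E_laser_eV / 1e3
print(f"gamma ~ {gamma:.0f}, scattered photon energy ~ {E_x_keV:.0f} keV")
# -> gamma ~ 88, scattered photon energy ~ 37 keV (hard X-rays)
```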

Another Compton-source application is the TTX facility at Tsinghua University in China, which is based on a 45 MeV beam. The Tsinghua group plans to increase the energy of the X-rays by upgrading the energy of their electron linac, which must be done by increasing the accelerating gradient because the facility is housed in an existing radiation-shielded building. The energy increase will occur in two steps: the first will raise the accelerating gradient by upgrading parts of the existing S-band 3 GHz RF system, and the second will be to replace sections with an X-band system to increase the gradient up to 70 MV/m. The Tsinghua X-band power source will also implement a novel “corrector cavity” system to flatten the power compressed pulse that is also now part of the 380 GeV CLIC baseline design. Tsinghua has successfully tested a standard CLIC structure to more than 100 MV/m at KEK, demonstrating that high-gradient technology can be transferred, and has taken delivery of a 50 MW X-band klystron for use in a test stand.

Perhaps the most significant X-band application is XFELs, which produce intense and short X-ray bursts by passing a very low-emittance electron beam through an undulator magnet. The electron linac represents a substantial fraction of the total facility cost and the number of XFELs is presently quite limited. Demand for facilities also exceeds the available beam time. Operational facilities include LCLS at SLAC, FERMI at Trieste and SACLA at RIKEN, while the European XFEL in Germany, PAL-XFEL in Korea and SwissFEL are being commissioned (CERN Courier July/August 2017 p18), and it is expected that further facilities will be built in the coming years.

XFEL applications

CLIC technology, both the high-frequency and high-gradient aspects, has the potential to significantly reduce the cost of such X-ray facilities, allowing them to be funded at the regional and possibly even university scale. In combination with other recent advances in injectors and undulators, the European Union project CompactLight has recently received a design study grant to examine the benefits of CLIC technology and to prepare a complete technical design report for a small-scale facility (CERN Courier December 2017 p8).

A similar type of electron linac, in the 0.5–1 GeV range, is being proposed by Frascati Laboratory in Italy for XFEL development, in addition to the study of advanced plasma-acceleration techniques. To fit the accelerator in a building on the Frascati campus, the group has decided to use high-gradient X-band technology for its linac and has joined forces with CLIC to develop it. The cooperation includes Frascati staff visiting CERN to help run the high-gradient test facilities and the construction of their own test stand at Frascati, which is an important step in establishing its capability to use CLIC technology.

In addition to providing a high-performance technology for acceleration, high-gradient X-band technology is the basis for two important devices that manipulate the beam in low-emittance and short-bunch electron linacs, as used in XFELs and advanced development linacs. The first is the energy-spread lineariser, which uses a harmonic of the accelerating frequency to correct the energy spread along the bunch and enable shorter bunches. A few years ago a collaboration between Trieste, PSI and CERN placed a joint order for the first 50 MW klystrons at the European X-band frequency (11.994 GHz) from SLAC, and jointly designed and built the lineariser structures, which have significantly improved the performance of the Elettra light source in Trieste and become an essential element of SwissFEL.
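
In the simplest picture, the lineariser works because a short cavity running at the nth harmonic of the main linac frequency, operated in deceleration near its crest, can cancel the curvature that the main RF imprints along the bunch. Expanding both voltages around the bunch centre – a back-of-the-envelope argument, not a design formula for any particular machine –

\[
V_1 \cos(\omega t) - V_n \cos(n\omega t) \;\approx\; (V_1 - V_n) - \tfrac{1}{2}\left(V_1 - n^2 V_n\right)(\omega t)^2 ,
\]

so the quadratic term vanishes when V_n = V_1 / n². A high harmonic therefore needs only a modest voltage, which is why a compact X-band structure is enough to linearise an S- or C-band linac.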

Following the CLIC test stand and lineariser developments, a new commercial X-band klystron has become available, this time at the lower power of 6 MW and supplied by Canon (formerly Toshiba). This new klystron is ideally suited for lineariser systems and one has recently been constructed at the soft X-ray XFEL at SINAP in Shanghai, which has a long-standing collaboration with CLIC on high-gradient and X-band technology. Back in Europe, Daresbury Laboratory has decided to invest in a lineariser system to provide the exceptional control of the electron bunch characteristics needed for its XFEL programme, which is being developed at its CLARA test facility. Daresbury has been working with CLIC to define the system, and is now procuring an RF power system based on the 6 MW Toshiba klystron and pulse compressor. This will certainly be a major step in the ease of adoption of X-band technology.

The second major high-gradient X-band beam manipulation application is the RF deflector, which is used at the end of an XFEL to measure the bunch characteristics as a function of position along the bunch. High-gradient X-band technology is well suited to this application and there is now widespread interest to implement such systems. Teams at FLASH2, FLASH-Forward and SINBAD at DESY, SwissFEL and CLIC are collaborating to define common hardware, including a variable polarisation deflector to allow a full 6D characterisation of the electron bunch. SINAP is also active in this domain. The facility is awaiting delivery of three 50 MW CPI klystrons to power the deflectors and will build a standard CLIC test structure for tests at CERN in addition to a prototype X-band XFEL structure in the context of CompactLight.

The rich exchange between different projects in the high-gradient community is typified by PSI and in particular the SwissFEL. Many essential features of the SwissFEL have a linear-collider heritage, such as the micron-precision diamond machining of the accelerating structures, and SwissFEL is now returning the favour. For example, a pair of CLIC X-band test accelerating structures are being tested at CERN to examine the high-gradient potential of PSI’s fabrication technology, showing excellent results: both structures can operate at more than 115 MV/m and demonstrate potential cost savings for CLIC. In addition, the SwissFEL structures have been successfully manufactured to micron precision in a large production series – a level of tolerance that has always been an important concern for CLIC. Now that the PSI fabrication technology is established, the laboratory is building high-gradient structures for other projects such as Elettra, which wishes to increase its X-ray energy and flux but has performance limitations with its 3 GHz linac.

Beyond light sources

High-gradient technology is now working its way beyond electron linacs, particularly in the treatment of cancer. The most common accelerator-based cancer treatment is X-rays, but protons and heavy ions offer many potential advantages. One drawback of hadron therapy is the high cost of the accelerators, which are currently circular. A new generation of linacs offer the potential for smaller, lower cost facilities with additional flexibility.

The TERA foundation has studied such linac-based solutions and a firm called ADAM is now commercialising a version with a view to building a compact hadron-therapy centre (CERN Courier January/February 2018 p25). To demonstrate the potential of high gradients in this domain, members of CLIC received support from the CERN knowledge transfer fund to adapt CLIC technology to accelerate protons in the relevant energy range, and the first of two structures is now under test. The predicted gradient was 50 MV/m, but the structure has exceeded 55 MV/m and also behaves consistently when compared with the almost 20 CLIC structures tested so far. We now know that it is possible to reach high accelerating gradients even for protons, and projects based on compact linacs can now move forward with confidence.

Collaboration has driven the wider adoption of CLIC’s high-gradient technology. A key event took place in 2005 when CERN management gave CLIC a clear directive that, with LHC construction limiting available resources, the study must find outside collaborators. This was achieved thanks to a strong effort by CLIC researchers, also accompanied by a great deal of activity in electron linacs in the accelerator community.

We should not forget that the wider adoption of X-band and high-gradient technology is extremely important for CLIC itself. First, it enlarges the commercial base, driving costs down and reliability up, and making firms more likely to invest. Another benefit is the improved understanding of the technology and its operability by accelerator experts, with a broadened user base bringing new ideas. Harnessing the creative energy of a larger group has already yielded returns to the CLIC study, for instance addressing important industrialisation and cost-reduction issues.

The role of high-gradient and X-band technology is expanding steadily, with applications at a surprisingly wide range of scales. Despite having started in large linear colliders, the use of the technology now starts to be dominated by a proliferation of small-scale applications. Few of these were envisaged when CLIC was formulated in the late 1980s – XFELs were in their infancy at the time. As the technology is applied further, its performance will rise even more, perhaps even leading to the use of smaller applications to build a higher energy collider. The interplay of different communities can make advances beyond what any could on their own, and it is an exciting time to be part of this field.

The post High-gradient X-band technology: from TeV colliders to light sources and more appeared first on CERN Courier.

]]>
Feature Powerful linear-accelerator technology developed for fundamental exploration is being transferred to applications beyond high-energy physics https://cerncourier.com/wp-content/uploads/2018/06/CCApr18_XBAND-frontis.jpg
SwissFEL carries out first experiment https://cerncourier.com/a/while-swissfel-carries-out-first-experiment/ Mon, 15 Jan 2018 09:15:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/while-swissfel-carries-out-first-experiment/ The free-electron X-ray laser SwissFEL at the Paul Scherrer Institute (PSI) in Switzerland has hosted its inaugural experiment, marking the facility’s first science result and demonstrating that its many complex components are working as expected. Construction of 740m-long SwissFEL began in April 2013, with the aim of producing extremely short X-ray laser pulses for the […]

The post SwissFEL carries out first experiment appeared first on CERN Courier.

]]>

The free-electron X-ray laser SwissFEL at the Paul Scherrer Institute (PSI) in Switzerland has hosted its inaugural experiment, marking the facility’s first science result and demonstrating that its many complex components are working as expected. Construction of the 740 m-long SwissFEL began in April 2013, with the aim of producing extremely short X-ray laser pulses for the study of ultrafast reactions and processes.

Between 27 November and 4 December 2017, PSI researchers and a research group from the University of Rennes in France conducted the first in a series of pilot experiments.

The high-energy X-ray light pulses enabled the team to investigate the electrical and magnetic properties of titanium pentoxide nanocrystals, which have potential applications in high-density data storage. This and further pilot experiments will help hone SwissFEL operations before regular user operations begin in January 2019.

The post SwissFEL carries out first experiment appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/06/CCnew7_01_18.jpg
Baby MIND takes first steps https://cerncourier.com/a/baby-mind-takes-first-steps/ Fri, 10 Nov 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/baby-mind-takes-first-steps/ In mid-October, a neutrino detector that was designed, built and tested at CERN was loaded onto four trucks to begin a month-long journey to Japan. Once safely installed at the J-PARC laboratory in Tokai, the “Baby MIND” detector will record muon neutrinos generated by beams from J-PARC and play an important role in understanding neutrino […]

The post Baby MIND takes first steps appeared first on CERN Courier.

]]>

In mid-October, a neutrino detector that was designed, built and tested at CERN was loaded onto four trucks to begin a month-long journey to Japan. Once safely installed at the J-PARC laboratory in Tokai, the “Baby MIND” detector will record muon neutrinos generated by beams from J-PARC and play an important role in understanding neutrino oscillations at the T2K experiment.

Weighing 75 tonnes, Baby MIND (Magnetised Iron Neutrino Detector) is bigger than its name suggests. It was initiated in 2015 as part of the CERN Neutrino Platform (CERN Courier July/August 2016 p21) and was originally conceived as a prototype for a 100 kt detector for a neutrino factory, specifically for muon-track reconstruction and charge-identification efficiency studies on a beamline at CERN (a task defined within the earlier AIDA project). Early in the design process, however, it was realised that Baby MIND was just the right size to be installed alongside the WAGASCI experiment located next to the near detectors for the T2K experiment, 280 m downstream from the proton target at J-PARC.

T2K studies the oscillation of muon (anti)neutrinos, especially their transformation into electron (anti)neutrinos, on their 295 km-long journey from J-PARC on the east coast of Japan to Kamioka on the other side of the island. The experiment discovered electron-neutrino appearance in a muon-neutrino beam in 2013 and earlier this year reported a two-sigma hint of CP violation by neutrinos, which will be explored further during the next eight years. Another major current target is to remove the ambiguity affecting the measurement of the neutrino mixing angle θ23.

Baby MIND will help in this regard by precisely tracking and identifying muons produced when muon neutrinos from the T2K beamline interact with the WAGASCI detector. This will allow the ratio of cross-sections in water and plastic scintillator (the active material in WAGASCI) to be determined, helping researchers understand energy-reconstruction biases in the target-nucleus-dependent neutrino fluxes and cross-sections. “Besides the water-to-scintillator ratio, the interest of the experiment is to measure a slightly higher-energy beam and compare the energy distribution (simply reconstructed from the muon angle and momentum, that Baby MIND measures) for the various off-axis positions relevant to the T2K and NOVA beams,” says Baby MIND spokesperson Alain Blondel.
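
For illustration, the “simple” reconstruction referred to here can be written, in the usual quasi-elastic approximation (neglecting nuclear binding energy and the neutron–proton mass difference, simplifications assumed for this sketch rather than taken from the experiment), as a closed-form function of the measured muon momentum p_\mu and angle \theta_\mu:

E_\nu \approx \frac{m_N E_\mu - m_\mu^{2}/2}{m_N - E_\mu + p_\mu\cos\theta_\mu}, \qquad E_\mu = \sqrt{p_\mu^{2} + m_\mu^{2}},

where m_N is the nucleon mass, so the precision of the muon momentum and angle measurements feeds directly into the reconstructed neutrino-energy distribution.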

Since its approval in December 2015, the Baby MIND collaboration – comprising CERN, the Institute for Nuclear Research of the Russian Academy of Sciences, and the universities of Geneva, Glasgow, Kyoto, Sofia, Tokyo, Uppsala, Valencia and Yokohama – has designed, prototyped, constructed and tested the Baby MIND apparatus, which includes custom designed magnet modules, electronics, scintillator sensors and support mechanics.

Significant departure

The magnet modules were the responsibility of CERN, and mark a significant departure from traditional magnetised-iron neutrino detectors, which have large coils threaded through the entire iron mass. Each of the 33 two-tonne Baby MIND iron plates is magnetised by its own aluminium coil, a feature imposed by access constraints in the shaft at J-PARC and resulting in a highly optimised magnetic field in the tracking volume. Between them, plastic scintillator slabs embedded with wavelength-shifting fibres transmit light produced by the interactions of ionising particles to silicon photomultipliers.

The fully assembled Baby MIND detector was qualified with cosmic rays prior to tests on a beamline at the experimental zone of CERN’s Proton Synchrotron in the East Area during the summer of this year, and analyses showed the detector to be working as expected. First physics data from Baby MIND are expected in 2018. “That new systems for the Baby MIND were designed, assembled and tested on a beamline in a relatively short period of time (around two years) is a great example of people coming together and optimising the detector by using the latest design tools and benefiting from the pool of experience and infrastructures available at CERN,” says Baby MIND technical co-ordinator Etam Noah.

The post Baby MIND takes first steps appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/06/CCnew1_10_17.jpg
EU project lights up X-band technology https://cerncourier.com/a/eu-project-lights-up-x-band-technology/ Fri, 10 Nov 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/eu-project-lights-up-x-band-technology/ Advanced linear-accelerator (linac) technology developed at CERN and elsewhere will be used to develop a new generation of compact X-ray free-electron lasers (XFELs), thanks to a €3 million project funded by the European Commission’s Horizon 2020 programme. Beginning in January 2018, “CompactLight” aims to design the first hard XFEL based on 12 GHz X-band technology, which originated […]

The post EU project lights up X-band technology appeared first on CERN Courier.

]]>

Advanced linear-accelerator (linac) technology developed at CERN and elsewhere will be used to develop a new generation of compact X-ray free-electron lasers (XFELs), thanks to a €3 million project funded by the European Commission’s Horizon 2020 programme. Beginning in January 2018, “CompactLight” aims to design the first hard XFEL based on 12 GHz X-band technology, which originated from research for a high-energy linear collider. A consortium of 21 leading European institutions, including Elettra, CERN, PSI, KIT and INFN, in addition to seven universities and two industry partners (Kyma and VDL), are partnering to achieve this ambitious goal within the three-year duration of the recently awarded grant.

X-band technology, which provides accelerating gradients of 100 MV/m and above in a highly compact device, is now a reality. This is the result of many years of intense R&D carried out at SLAC (US) and KEK (Japan), for the former NLC and JLC projects, and at CERN in the context of the Compact Linear Collider (CLIC). This pioneering technology has also been validated at the Elettra and PSI laboratories.

XFELs, the latest generation of light sources based on linacs, are particularly suitable applications for high-gradient X-band technology. Following decades of growth in the use of synchrotron X-ray facilities to study materials across a wide spectrum of sciences, technologies and applications, XFELs (as opposed to circular light sources) are capable of delivering high-intensity photon beams of unprecedented brilliance and quality. This provides novel ways to probe matter and allows researchers to make “movies” of ultrafast biological processes. Currently, three XFELs are up and running in Europe – FERMI@Elettra in Italy and FLASH and FLASH II in Germany, which operate in the soft X-ray range – while two are under commissioning: SwissFEL at PSI and the European XFEL in Germany (CERN Courier July/August 2017 p18), which operates in the hard X-ray region. Yet, the demand for such high-quality X-rays is large, as the field still has great and largely unexplored potential for science and innovation – potential that can be unlocked if the linacs that drive the X-ray generation can be made smaller and cheaper.

This is where CompactLight steps in. While most of the existing XFELs worldwide use conventional 3 GHz S-band technology (e.g. LCLS in the US and PAL in South Korea) or superconducting 1.3 GHz structures (e.g. European XFEL and LCLS-II), others use newer designs based on 6 GHz C-band technology (e.g. SACLA in Japan), which increases the accelerating gradient while reducing the linac’s length and cost. CompactLight gathers leading experts to design a hard-X-ray facility beyond today’s state of the art, using the latest concepts for bright electron photo-injectors, very-high-gradient X-band structures operating at frequencies of 12 GHz, and innovative compact short-period undulators (long devices that produce an alternating magnetic field along which relativistic electrons are deflected to produce synchrotron X-rays). Compared with existing XFELs, the proposed facility will benefit from a lower electron-beam energy (due to the enhanced undulator performance), be significantly more compact (as a consequence both of the lower energy and of the high-gradient X-band structures), have lower electrical power demand and a smaller footprint.

Success for CompactLight will have a much wider impact: not just affirming X-band technology as a new standard for accelerator-based facilities, but advancing undulators to the next generation of compact photon sources. This will facilitate the widespread distribution of a new generation of compact X-band-based accelerators and light sources, with a large range of applications including medical use, and enable the development of compact cost-effective X-ray facilities at national or even university level across and beyond Europe.

The post EU project lights up X-band technology appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/06/CCnew3_10_17.jpg
Construction of protoDUNE detector begins https://cerncourier.com/a/construction-of-protodune-detector-begins/ Fri, 22 Sep 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/construction-of-protodune-detector-begins/ The Deep Underground Neutrino Experiment (DUNE) in the US, the cavern for which entered construction this summer, will make precision studies of neutrinos produced 1300 km away at Fermilab as part of the international Long-Baseline Neutrino Facility. The DUNE far detector will be the largest liquid-argon (LAr) neutrino detector ever built, comprising four cryostats holding 68,000 […]

The post Construction of protoDUNE detector begins appeared first on CERN Courier.

]]>

The Deep Underground Neutrino Experiment (DUNE) in the US, the cavern for which entered construction this summer, will make precision studies of neutrinos produced 1300 km away at Fermilab as part of the international Long-Baseline Neutrino Facility. The DUNE far detector will be the largest liquid-argon (LAr) neutrino detector ever built, comprising four cryostats holding 68,000 tonnes of liquid, and prototype detectors called protoDUNE are being built at CERN.

Each protoDUNE detector comprises a 10 × 10 × 10 m LAr time projection chamber with a single-phase (SP) or dual-phase (DP) configuration, containing about 800 tonnes of LAr. While the two big cryostats housing the detectors are about to be completed, the construction of the protoDUNE detectors themselves has just started. The first of six anode-plane-assembly modules for the protoDUNE-SP detector, which will detect electrons produced by ionising particles passing through the detector (pictured), recently arrived at CERN. The module will be tested, together with its electronics, and then installed in its final position inside the cryostat.

In parallel with the anode-plane assemblies, other parts of the protoDUNE-SP detector are being assembled at CERN, including the field cage, which keeps the electric field uniform inside the volume of the detector. Around a quarter of the 28 field-cage modules have already been assembled and are stored in CERN’s EHN1 hall, ready to be installed. The assembly and installation of the detector parts is expected to be completed by spring next year, in order for protoDUNE-SP to take data in autumn 2018.

The protoDUNE detectors are among several major activities taking place at the CERN neutrino platform, which was initiated in 2013 to develop detector technology for neutrino experiments in the US and Japan.

The post Construction of protoDUNE detector begins appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/06/CCnew1_08_17-1.jpg
Powering the field forward https://cerncourier.com/a/powering-the-field-forward/ Fri, 11 Aug 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/powering-the-field-forward/ Particle physicists try to understand the environment that existed fractions of a second after the Big Bang by studying the behaviour of particles at high energies. Early studies relied on cosmic rays emanating from extraterrestrial sources, but the invention of the circular accelerator by Ernest Lawrence in 1931 revolutionised the field. Further advances in accelerator […]

The post Powering the field forward appeared first on CERN Courier.

]]>

Particle physicists try to understand the environment that existed fractions of a second after the Big Bang by studying the behaviour of particles at high energies. Early studies relied on cosmic rays emanating from extraterrestrial sources, but the invention of the circular accelerator by Ernest Lawrence in 1931 revolutionised the field. Further advances in accelerator technology gave physicists more control over their experiments, in particular thanks to the invention of the synchrotron and the development of storage rings. By confining particles in a ring of magnets and accelerating them with radio-frequency cavities, these facilities finally reached energies of a few hundred GeV. But storage rings are limited by the maximum magnetic field achievable with resistive magnets, which is around 2 T. To go further into the heart of matter, particle physicists required higher energies and a new technology to get them there.

The maximum field of an electromagnet is roughly determined by the amount of current in a conductor multiplied by the number of turns the conductor makes around its support structure. Over the years, the growing scale of accelerators and the large number of magnets needed to reach the highest energies demanded compact and affordable magnets. Conventional electromagnets, which are usually based on a copper conductor, are limited by two main factors: the amount of power required to operate them due to resistive losses and the size of the conductor. Typical conventional-magnet windings therefore tended to use conductors with a cross-sectional area of the order of a few square centimetres, which is not optimal for generating high magnetic fields.
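
To put rough numbers on this (using an illustrative 5 cm pole gap, an assumed value rather than one from the text), Ampère's law for an iron-dominated dipole near its roughly 2 T saturation limit gives

NI \approx \frac{B\,g}{\mu_0} = \frac{2\ \mathrm{T}\times 0.05\ \mathrm{m}}{4\pi\times 10^{-7}\ \mathrm{T\,m/A}} \approx 8\times 10^{4}\ \text{ampere-turns},

which must be supplied either by many turns of thin, resistive wire or by a massive conductor carrying tens of kiloamperes, with substantial resistive losses in either case for a conventional copper winding.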

Superconductivity, which allows certain materials at low temperatures to carry very high currents without any resistive loss, was just the transformational technology needed. It powered the Tevatron collider at Fermilab in the US to produce the top quark, and CERN’s Large Hadron Collider (LHC) to unearth the Higgs boson. Advanced superconducting magnets are already being developed for future collider projects that will take physicists into a new phase of subatomic exploration beyond the LHC (figure 1).

Maintaining the state

Discovered in 1911, superconductivity didn’t immediately lead to broad applications, particularly not high-field accelerator magnets. As far as accelerators were concerned, the possibility of using superconducting magnets to produce higher fields started to take root in the mid-1960s. The big challenge was to maintain the superconducting state in a bulk object in which tremendous forces are at work: the slightest microscopic movement of the conductor would cause it to transition to the normal state (a “quench”) and result in burn-up, unless the fault was detected quickly and the current turned off.

Early superconductors were mostly formed into high-aspect-ratio tapes measuring a few tenths of a millimetre thick and around 10 mm wide. These are not particularly useful for making magnets because precise geometry and current distribution are necessary to achieve a good field quality. Intense studies led to the development of multi-filamentary niobium-zirconium (NbZr), niobium-titanium (Nb-Ti) and niobium-tin (Nb3Sn) wires, propelling interest in superconducting technology. In 1961, Kunzler and colleagues at Bell Labs produced a 7 T field in a solenoid, a relatively simple coil geometry compared with the dipoles or quadrupoles needed for accelerators. This swiftly led to higher-field solenoids, and a number of efforts to utilise the benefits of superconductivity for magnets began. But it was only in the early 1970s that the first prototypes of superconducting dipoles and quadrupoles demonstrated the potential of superconducting magnet technology for accelerators.

A turning point came during a six-week-long study group at Brookhaven National Laboratory (BNL) in the US in the summer of 1968, during which 200 physicists and engineers from around the world discussed the application of superconductivity to accelerators (figure 2). Considerable focus was directed towards the possibility of using superconducting beam-handling magnets (such as dipoles and quadrupoles for transporting beams from accelerators to experimental areas) for the new 200–400 GeV accelerator being constructed at Fermilab. By that time, several high-field superconducting alloys and compounds had been produced.

Hitting the mainstream

It could be argued that the unofficial kick-off for superconducting magnets in accelerators was a panel discussion at the 1971 Particle Accelerator Conference held in Chicago, although there was a clear geographical divide on key issues. The European contingent was reluctant to delve into higher-risk technology when it was clear that conventional technology could meet their needs, while the Americans argued for the substantial cost savings promised by superconducting machines: they claimed that a 100 GeV superconducting synchrotron could be built in five or six years, while the Europeans estimated a more conservative seven to 10 years.

In the US, work on furthering the development of superconducting magnets for accelerators was concentrated in a few main laboratories: Fermilab, the Lawrence Radiation Laboratory, Brookhaven National Laboratory (BNL) and Argonne National Laboratory. In Europe, a consortium of three laboratories – CEA Saclay in France, Rutherford Appleton Laboratory in the UK and the Nuclear Research Center at Karlsruhe – was formed to enable future conversion of the recently approved 300 GeV accelerator, to become CERN’s Super Proton Synchrotron (SPS), to higher energies using superconducting magnets. Of particular historical note, a short paper written at this time referred to a “compacted fully transposed cable” produced at the Rutherford Lab, and the “Rutherford cable” has since become the standard conductor configuration for all accelerator magnets (figure 3).

Rapid progress followed, reaching a tipping point in the 1970s with the launch of several accelerator projects based on superconducting magnets and a rapidly growing R&D community worldwide. These included: the Fermilab Energy Doubler; Interaction Region (IR) quadrupoles (used to bring particles into collision for the experiments) for the Intersecting Storage Rings at CERN; and IR quadrupoles for TRISTAN at KEK in Japan and UNK in the former USSR. The UNK magnets were ambitious for their time, with a desired operating field of 5 T, but the project was cancelled in the years following the breakup of the USSR.

Although superconducting magnet technology was one of the initial options for the SPS, it was rapidly discarded in favour of resistive magnets. This was not the case at Fermilab, which at that time was pursuing a project to upgrade its Main Ring beyond 500 GeV. The project was initially presented as an Energy Doubler, but rapidly became known by the very modern name of Energy Saver, and is now known as the Tevatron collider for protons and antiprotons, which shut down in 2011. The Tevatron arc magnets were the result of years of intense and extremely effective R&D, and it was their success that triggered the application of superconductivity for accelerators.

As superconducting technology matured during the 1980s, its applications expanded. The electron–proton collider HERA was getting under way at DESY in Germany, while ISABELLE was reborn as the Relativistic Heavy Ion Collider (RHIC) at BNL. Thanks to intensive development by high-energy physics, Nb-Ti was readily available from industry. This allowed the construction of magnets with fields in the 5 T range, while multi-filamentary conductors made from niobium-titanium-tantalum (Nb-Ti-Ta) and Nb3Sn were being pursued for fields up to 10 T. The first papers on the proposed Superconducting Super Collider (SSC) in the US were published in the mid-1980s, with R&D for the SSC ramping up substantially by the start of the 1990s. Then, in 1991, the first papers on R&D for the LHC were presented. The LHC’s 8 T Nb-Ti dipole magnets operate close to the practical limit of the conductor, and the collider now represents the largest and most sophisticated use of superconducting magnets in an accelerator.

The niobium-tin challenge

With the success of the LHC, the international high-energy physics community has again turned its attention to further exploration of the energy frontier. CERN has launched a Future Circular Collider (FCC) study that envisages a 100 TeV proton–proton collider as the next step for particle physics, which would require a 100 km-circumference ring of superconducting magnets with operating fields of 16 T. This will be an unprecedented challenge for the magnet community, but one that they are eager to take on. Other future machines are based on linear accelerators that do not require magnets to keep the beams on track, but demand advanced superconducting radio-frequency structures to accelerate them over short distances.
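
These headline numbers hang together through the usual bending relation. Assuming a dipole filling factor of roughly 70% of the circumference (an illustrative value, not an FCC design parameter), a 100 km ring gives a bending radius \rho \approx 0.7 \times (100\ \mathrm{km}/2\pi) \approx 11\ \mathrm{km}, and

p\,[\mathrm{TeV}/c] \approx 0.3\,B\,[\mathrm{T}]\,\rho\,[\mathrm{km}] \approx 0.3 \times 16 \times 11 \approx 53\ \mathrm{TeV\ per\ beam},

or just over 100 TeV in the centre of mass for proton–proton collisions.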

Thanks to superconducting accelerator magnets wound with strands and cables made of Cu/Nb-Ti composites, the energy reach of particle colliders has steadily increased. After nearly half a century of dominance by Nb-Ti, however, other superconducting materials are finally making their way into accelerator magnets. Quadrupoles and dipoles using Nb3Sn will be installed as part of the high-luminosity upgrade for the LHC (the HL-LHC) in the next few years, for example, and the high-temperature superconductor Bi2Sr2CaCu2O8 (BSCCO), iron-based superconductors and rare-earth barium copper oxide (REBCO) have recently been added to the list of candidate materials. Proposals for new large circular colliders have boosted interest in high-field dipole magnets but, despite the tantalising potential for achieving dipole fields more than twice that of Nb-Ti, there are many problems that still need to be overcome.

Although Nb3Sn was one of the early candidates for high-field magnets, and has much better performance at high fields than Nb-Ti, its processing requirements, mechanical properties and costs present difficulties when building practical magnets. Nb3Sn comes as a round wire from industry vendors, which is excellent for making multi-wire cables but requires the reaction of a copper, niobium and tin composite at 650 °C to develop the superconducting Nb3Sn cable. Unfortunately, Nb3Sn is a brittle ceramic, unlike Nb-Ti, which requires only modest heat treatment and drawing steps and is mechanically very strong. Years of effort worldwide have overcome these limitations and fields in the range of 16 T have recently been achieved – first in 2004 by a US R&D programme and more recently at CERN – and this is close to the practical limit for this conductor. In addition to the near-term use in the HL-LHC, and despite currently costing 10 times more than Nb-Ti, it is the material of choice for a future high-energy hadron collider, and is also being used in enormous quantities for the toroidal-field magnets and central solenoid of the ITER fusion experiment (see “ITER’s massive magnets enter production”).

High-temperature superconductors represent a further leap in magnet performance, but they also raise major difficulties and could cost an additional factor of 10 more than Nb3Sn. For fields above 16 T there are currently only two choices for accelerator magnets: BSCCO and REBCO. Although these materials become superconductors at a higher temperature than niobium-based materials, their maximum current density is achieved at low temperatures (in the vicinity of 4.2 K). BSCCO has the advantage of being obtainable in round wire, which is perfect for making high-current cables but requires a fairly precise heat treatment at close to 900 °C in oxygen at high pressures. This is not a simple engineering task, especially when dealing with large coils. Much progress has been made recently, however, and there is a vibrant programme in industry and academia to tackle these challenges. REBCO has excellent high-field performance, high current density and requires no heat treatment, but it only comes in tape form, presenting difficulties in winding the required coil shapes and producing acceptable field quality. Nevertheless, the performance of this high-temperature superconductor is too tantalising to abandon it, and many people are working on it. Even after half a century, progress in high-field accelerator-magnet R&D continues, and indeed is critical for future discoveries in particle physics.

CERN breaks records with high-field magnets for High-Luminosity LHC

To keep the protons on a circular track at the record-breaking luminosities planned for the LHC upgrade (the HL-LHC) and achieve higher collision energies in future circular colliders, particle physicists need to design and demonstrate the most powerful accelerator magnets ever. The development of the niobium-titanium LHC magnets, currently the highest-field dipole magnets used in a particle accelerator, followed a long road that offered valuable lessons. The HL-LHC is about to change this landscape by relying on niobium tin (Nb3Sn) to build new high-field magnets for the interaction regions of the ATLAS and CMS experiments. New quadrupoles (called MQXF) and two-in-one dipoles with fields of 11 T will replace the LHC’s existing 8 T magnets in these regions. The main challenge that has prevented the use of Nb3Sn in accelerator magnets is its brittleness, which can cause permanent degradation under very low intrinsic strain. The tremendous progress of this technology in the past decade led to the successful tests of a full-length 4.5 m-long coil that reached a record nominal field value of 13.4 T at BNL. Meanwhile at CERN, the winding of 7.15 m-long coils has begun. Several challenges are still to be faced, however, and the next few years will be decisive for declaring production readiness of the MQXF and 11 T magnets. R&D is also ongoing for the development of a Nb3Sn wire with improved performance that would allow fields beyond 11 T. It is foreseen that a 14–15 T magnet with real physical aperture will be tested in the US, and this could drive technology for a 16 T magnet for a future circular collider. Based on current experience from the LHC and HL-LHC, we know that the performance requirements for Nb3Sn for a future circular collider require a large industrial effort to make very large-scale production viable.
• Panagiotis Charitos, CERN.

The post Powering the field forward appeared first on CERN Courier.

]]>
Feature https://cerncourier.com/wp-content/uploads/2018/06/CChig1_07_17.jpg
Unique magnets https://cerncourier.com/a/unique-magnets/ Fri, 11 Aug 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/unique-magnets/ To identify particles emerging from high-energy interactions between a beam and a fixed target, or between two counter-rotating beams, experimental physicists need to measure the particle tracks with high precision. Since charged particles are deflected in a magnetic field, incorporating a magnet in the detector system serves to determine both the charge and momentum of […]

The post Unique magnets appeared first on CERN Courier.

]]>

To identify particles emerging from high-energy interactions between a beam and a fixed target, or between two counter-rotating beams, experimental physicists need to measure the particle tracks with high precision. Since charged particles are deflected in a magnetic field, incorporating a magnet in the detector system serves to determine both the charge and momentum of a particle. The precision of the momentum measurement is governed by the sagitta of the detected track, which is proportional to the magnetic field and to the square of the length of the track, so larger magnets and stronger fields tend to deliver better performance. While being as large and as strong as possible, however, the magnet should not get in the way of the active detector materials.
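
In practical units (the textbook expression, rather than anything specific to a particular detector), the sagitta of a track of momentum p over a length L in a field B is

s = \frac{0.3\,B\,L^{2}}{8\,p}, \qquad s\ [\mathrm{m}],\ B\ [\mathrm{T}],\ L\ [\mathrm{m}],\ p\ [\mathrm{GeV}/c],

so for a fixed position resolution the relative momentum uncertainty scales as \delta p/p \propto p/(B L^{2}): doubling the track length buys four times as much as doubling the field.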

These general constraints in high-energy physics experiments point to a need for more compact superconducting devices. But additional constraints such as cost, complexity and experiment schedules can lead to the choice of a conventional “warm” magnet if sufficient field and volume can be provided for acceptable power consumption. A detector magnet is one of a kind, and a field accuracy of one part in 1000 is usually sufficient. In contrast, accelerator magnets are typically many of a kind, and are required to deliver the highest possible field with an accuracy of one part in 10,000 or better in a long and narrow aperture. This leads to substantially different technological choices.

Following the discovery of superconductivity, people immediately thought of using it to produce magnetic fields. But the pure materials concerned (later to be called type-I superconductors) only worked up to a critical field of about 0.1 T. The discovery in 1961 of more practical (type-II) superconductivity in certain alloys and compounds which, unlike type-I, allow penetration of magnetic flux but exhibit critical fields of 10–20 T, immediately led to renewed interest. Physics laboratories in Europe and the US started R&D programmes to understand how to make superconducting magnets and to explore possible applications.

The first four years were difficult: small magnets were built but it was not possible to get scaled-up versions to operate at currents anywhere close to the level obtained for short samples of the superconducting wire available at the time. A breakthrough was presented at the first Particle Accelerator Conference in 1965, in a seminal paper by Stekly and Zar on cryogenic stability. Cryogenic stability ensures that, if a superconductor becomes normal due to coil motion or a flux jump (when magnetic flux penetrates a thick type-II material leading to instability, resistance and increased temperature), it will recover its superconductivity provided enough heat can be conducted away to the coolant for the material to drop back below its critical temperature in the region where superconductivity was lost. Several laboratories immediately started to build large helium-bath-cooled bubble-chamber magnets.

The bubble chamber, invented by Donald Glaser in 1952, consists of a tank of liquid hydrogen surrounded by a pair of Helmholtz coils: particles leave tracks in the superheated liquid and the curvature of each track reveals the particle’s momentum. The first large (72 inch) bubble-chamber magnet at the University of California Radiation Laboratory was equipped with a 1.8 T water-cooled copper coil weighing 20 tonnes and dissipating a power of 2.5 MW. Larger magnets were desirable for improved resolution, but were clearly unrealistic with room-temperature copper coils due to the costs involved. This was therefore an obvious application for superconductivity, and the concept of cryogenic stability allowed large magnets to be built using a superconductor that was otherwise inherently unstable.

Recall that this was before seminal work at the Rutherford Appleton Laboratory (RAL) had revealed the need for fine filaments and twisting to ensure stability, and before we knew that practical superconductors had to be made in that way. Indeed, it is striking to observe the audacity of high-energy physicists in the late 1960s and the early 1970s in embarking on the construction of such large and costly devices so rapidly, based on so little experience and knowledge.

Thick filaments of niobium-titanium in a copper matrix were the superconducting material of choice at the time, with coils being cooled in a bath of liquid helium. Achievements included: the 1.8 T magnet at Argonne National Laboratory for its bubble-chamber facility; a 3 T magnet for a facility at Fermilab; and the 3.5 T Big European Bubble Chamber (BEBC) magnet at CERN. The stored energy of the BEBC magnet was almost 800 MJ – a level not exceeded for a large magnet until the Large Helical Device came on stream in Japan (for fusion experiments) in the late 1990s. This use of superconducting magnets for experiments preceded by several years their practical application to accelerators.

Discoveries

Following early experiments at CERN’s Intersecting Storage Rings, which were not well equipped to observe particles having large transverse momentum, the importance of detecting all of the particles produced in beam collisions in colliders was recognised, and a need emerged for magnets covering close to a full 4π solid angle. To improve momentum resolution it was also desirable to extend the measurement of tracks beyond the magnet winding, calling for thin coils. The goal was less than one radiation length in thickness, for which a high-performance superconductor with intrinsic stability was needed. This pointed towards a design based on the type of superconducting wire that had been developed in the accelerator community and had by now become a commodity for making MRI magnets (an industry that now consumes more than 90% of the superconductors produced), with the attendant reduction in cost.

Therefore by the early 1980s the development of detector magnets had shifted to conductors made of by then standard superconducting wires consisting of twisted fine filaments in a copper matrix, single or cabled, co-extruded with ultra-pure aluminium to provide stabilization, and wound in solenoidal coils inside a hard aluminium alloy mandrel for support. Pure aluminium is an excellent conductor at low temperature, and far more transparent than the copper that had been used previously. Moreover, rather than being bath cooled, these constant field magnets were indirectly cooled to about 5 K with helium flowing in pipes in good thermal contact with the mandrel. This allowed the 1–2 T detector solenoids to become larger, without power dissipation in the winding and with a low inventory of liquid helium. In this way the coils can be made thin and relatively transparent to certain classes of particles such as muons, so that detectors can be located both inside and outside. Examples of these magnets are those used for the ALEPH and DELPHI experiments at CERN’s Large Electron–Positron (LEP) collider, the D0 experiment at Fermilab and the BELLE experiment at KEK. Other prominent experiments over the years based on superconducting magnets include VENUS at KEK, ZEUS at DESY, and BaBAR at SLAC.
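
For reference (values from standard particle-physics tables rather than from this article), the radiation lengths of the two stabiliser materials are roughly

X_{0}(\mathrm{Al}) \approx 8.9\ \mathrm{cm}, \qquad X_{0}(\mathrm{Cu}) \approx 1.4\ \mathrm{cm},

so a few-centimetre-thick aluminium-stabilised winding can stay well under one radiation length, whereas an equivalent copper-stabilised coil could not.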

To the Higgs boson and beyond

While this had become the standard approach to detector magnet design, the magnets in the ATLAS and CMS experiments at the LHC occupy new territory. ATLAS uses a large toroidal coil structure surrounding a thin 2 T solenoid, and the solenoid for CMS delivers an unprecedented 3.8 T (but is not required to be very thin). While both the CMS and ATLAS solenoids use the now-traditional technology based on niobium-titanium superconductor co-extruded with aluminium, the pure aluminium stabiliser is reinforced to allow the structure to withstand the substantial forces. This is done either by welding aluminium-alloy flanges to the pure aluminium (CMS) or by strengthening the pure aluminium with a precipitate that improves its strength without inordinately increasing its resistivity (ATLAS solenoid).

The next generation of magnets planned for the Compact Linear Collider (CLIC), the International Linear Collider (ILC) and Future Circular Colliders (FCC) will be larger, and may require more technological development to reach the desired magnetic fields. Based on a single detector at the interaction point, a new unified detector model has been developed for CLIC and the concepts explored for this detector are also of interest for the high-luminosity LHC, as well as for a future circular electron–positron collider. Like the LHC with ATLAS and CMS, a future circular collider requires a “general-purpose” detector. Previous studies for a detector for a 100 TeV circular hadron collider were based on a twin solenoid paired with two forward dipoles, but these have now been dropped in favour of a simpler system comprising one main solenoid enclosed by an active shielding coil. This design achieves a similar performance while being much lighter and more compact, resulting in a significant scaling down in the stored energy of the magnet from 65 GJ to 11 GJ. The total diameter of the magnet is around 18 m, and the new design could benefit from the important lessons from the construction and installation of the LHC detectors.
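
The quoted stored energies follow from integrating the magnetic energy density over the field volume. As a rough cross-check with assumed numbers (a 4 T field filling a 10 m-diameter, 20 m-long volume; illustrative figures, not the actual design parameters),

E \approx \frac{B^{2}}{2\mu_{0}}\,V = \frac{(4\ \mathrm{T})^{2}}{2\times 4\pi\times 10^{-7}}\ \times\ \pi\,(5\ \mathrm{m})^{2}\,(20\ \mathrm{m}) \approx 10\ \mathrm{GJ},

which is indeed the order of magnitude of the 11 GJ quoted for the simplified solenoid-plus-shield design.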

Key to the choice of such magnets, in addition to their cost and complexity, is their ability to allow high-quality muon tracking. This is crucial for studying the properties of the Higgs boson, for example, and any additional new fundamental particles that await discovery. If the lengthy discussions surrounding the design of the ATLAS and CMS magnets many years ago are anything to go by we can look forward to intense and interesting debates about how to push these one-off magnet designs to the next level.

The post Unique magnets appeared first on CERN Courier.

]]>
Feature https://cerncourier.com/wp-content/uploads/2018/06/CCdet1_0717.jpg
Get on board with EASITrain https://cerncourier.com/a/get-on-board-with-easitrain/ Fri, 11 Aug 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/get-on-board-with-easitrain/ Heike Kamerlingh Onnes won his Nobel prize back in 1913 two years after the discovery of superconductivity; Georg Bednorz and Alexander Müller won theirs in 1987, just a year after discovering high-temperature superconductors. Putting these major discoveries into use, however, has been a lengthy affair, and it is only in the past 30 years or […]

The post Get on board with EASITrain appeared first on CERN Courier.

]]>

Heike Kamerlingh Onnes won his Nobel prize back in 1913 two years after the discovery of superconductivity; Georg Bednorz and Alexander Müller won theirs in 1987, just a year after discovering high-temperature superconductors. Putting these major discoveries into use, however, has been a lengthy affair, and it is only in the past 30 years or so that demand has emerged. Today, superconductors represent an annual market of around $1.5 billion, with a high growth rate, yet a plethora of opportunities remains untapped.

Developing new superconducting materials is essential for a possible successor to the LHC currently being explored by the Future Circular Collider (FCC) study, which is driving a considerable effort to improve the performance and feasibility of large-scale magnet production. Beyond fundamental research, superconducting materials are the natural choice for any application where strong magnetic fields are needed. They are used in applications as diverse as magnetic resonance imaging (MRI), the magnetic separation of minerals in the mining industry and efficient power transmission across long distances (currently being explored by the LIPA project in the US and AmpaCity in Germany).

The promise for future technologies is even greater, and overcoming our limited understanding of the fundamental principles of superconductivity and enabling large-quantity production of high-quality conductors at affordable prices will open new business opportunities. To help bring this future closer, CERN has initiated the European Advanced Superconductivity Innovation and Training project (EASITrain) to prepare the next generation of researchers, develop innovative materials and improve large-scale cryogenics (easitrain.web.cern.ch). From January next year, 15 early stage researchers will work on the project for three years, with the CERN-coordinated FCC study providing the necessary research infrastructure.

Global network

EASITrain establishes a global network of research institutes and industrial partners, transferring the latest knowledge while also equipping participants with business skills. The network will join forces with other EU projects such as ARIES, EUROTAPES (superconductors), INNWIND (a 10–20 MW wind turbine), EcoSWING (superconducting wind generator), S-PULSE (superconducting electronics) and FuSuMaTech (a working group approved in June devoted to the high-impact potential of R&D for the HL-LHC and FCC), and aims to profit from the well-established Test Infrastructure and Accelerator Research Area Preparatory Phase (TIARA) platform. EASITrain also links with the Marie Curie training networks STREAM and RADSAGA, both hosted by CERN.

Operating within the EU’s H2020 framework, one of EASITrain’s targets is energy sustainability. Performance and efficiency increases in the production and operation of superconductors could lead to 10–20 MW wind turbines, for example, while new efficient cryogenics could reduce the carbon footprint of industries, gas production and transport. EASITrain will also explore the use of novel superconductors, including high-temperature superconductors, in advanced materials for power-grid and medical applications, and bring together technical experts, industrial representatives and specialists in business and marketing to identify new superconductor applications. Following an extensive study, three specific application areas have been identified: uninterruptible power supplies; sorting machines for the fruit industry; and large loudspeaker systems. These will be further explored during a three-day “superconductivity hackathon” satellite event at EUCAS17, organised jointly with CERN’s KT group, IdeaSquare, WU Vienna and the Fraunhofer Institute.

Together with the impact that superconductors have had on fundamental research, these examples show the unexpected transformative potential of these still mysterious materials and emphasise the importance of preparing the next generation for the challenges ahead.

Hackathon application destinations

Uninterruptible power supply (UPS). UPS systems are energy-storage technologies that can take on and deliver power when necessary. Cloud-based applications are leading to soaring data volumes and an increasing need for secure storage, driving growth among large data centres and a shift towards more efficient UPS solutions that are expected to carve a slice of an almost $1 billion and growing market. Current versions are based on batteries with a maximum efficiency of 90%, but superconductor-based implementations based on flywheels will ensure a continuous and longer-lived power supply, minimising data loss and maximising server stability.

Sorting machines for the fruit industry. Tonnes of fruit have to be disposed of worldwide because current technologies based on spectroscopy are not able to determine the maturity level of fruit sufficiently accurately, with techniques also offering limited information about small-sized fruit. Superconductors would enable NMR-based scanning systems that allow producers to accurately and non-destructively determine valuable properties such as ripeness, absence of seeds and, crucially, the maturity of fruit. In 2016, sorting-machine manufacturers made profits of $360 million selling products analysing apples, pears and citrus fruit, and the market has experienced a growth of about 20% per year.

Large loudspeaker systems. The sound quality of powerful loudspeakers, particularly PA systems for music festivals and stadiums, could enter new dimensions by using superconductors. Higher electrical resistance leads to poorer sound quality, since speakers need to modify the strength of a magnetic field rapidly to adapt to different frequency ranges. Superconductivity also allows smaller magnets to be used, making them more compact and transportable. A major concern among European manufacturers has been the search for the next big step in loudspeaker evolution, to defend against competition from Asia, and the size and quality of large speakers is now a major driver of the $500 million industry.

The post Get on board with EASITrain appeared first on CERN Courier.

]]>
Feature https://cerncourier.com/wp-content/uploads/2018/06/CCeas1_07_17-1.jpg
Superconductors and particle physics entwined https://cerncourier.com/a/superconductors-and-particle-physics-entwined/ Fri, 11 Aug 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/superconductors-and-particle-physics-entwined/ Superconductivity is a mischievous phenomenon. Countless superconducting materials were discovered following Onnes’ 1911 breakthrough, but none with the right engineering properties. Even today, more than a century later, the basic underlying superconducting material from which magnet coils are made is a bespoke product that has to be developed for specific applications. This presents both a […]

The post Superconductors and particle physics entwined appeared first on CERN Courier.

]]>

Superconductivity is a mischievous phenomenon. Countless superconducting materials were discovered following Onnes’ 1911 breakthrough, but none with the right engineering properties. Even today, more than a century later, the basic underlying superconducting material from which magnet coils are made is a bespoke product that has to be developed for specific applications. This presents both a challenge and an opportunity for consumers and producers of superconducting materials.

According to trade statistics from 2013, the global market for superconducting products is dominated by the demands of magnetic resonance imaging (MRI) to the tune of approximately €3.5 bn per year, all of which is based on low-temperature superconductors such as niobium-titanium. Large laboratory facilities make up just under €1 bn of global demand, and there is a hint of a demand for high-temperature superconductors at around €0.3 bn.

Understanding the relationship between industry and big science, in particular large particle accelerators, is vital for such projects to succeed. When the first superconducting accelerator – the Tevatron proton–antiproton collider at Fermilab in the US, employing 774 dipole magnets to bend the beams and 216 quadrupoles to focus them – was constructed in the early 1980s, it is said to have consumed somewhere between 80–90% of all the niobium-titanium superconductor ever made. CERN’s Large Hadron Collider (LHC), by far the largest superconducting device ever built, also had a significant impact on industry: its construction in the early 2000s doubled the world output of niobium-titanium for a period of five to six years. The learning curve of high-field superconducting magnet production has been one of the core drivers of progress in high-energy physics (HEP) for the past few decades, and future collider projects are going to test the HEP–industry model to its limits.

The first manufacturers

About a month after the publication of the Bell Laboratories work on high-field superconductivity at the end of January 1961 describing the properties of niobium-tin, it was realised that the experimental conductor – despite being a very small coil consisting of merely a few centimetres of wire – could, with a lot of imagination, be described as an engineering material. The discovery catalysed research into other superconducting metallic alloys and compounds. Just four years later, in 1965, Avco-Everett in co-operation with 14 other companies built a 10 foot, 4 T superconducting magnet using a niobium-zirconium conductor embedded in a copper strip.

By the end of 1966, an improved material consisting of niobium-titanium was offered at $9 per foot bare and $13 when insulated. That same year, RCA also announced with great fanfare its entry into commercial high-field superconducting magnet manufacture using the newly developed niobium-tin “Vapodep” ribbon at $4.40 per metre. General Electric was not far behind, offering unvarnished “22CY030” tape at $2.90 per foot in quantities up to 10,000 feet. Kawecki Chemical Company, now Kawecki-Berylco, advertised “superconductive columbium-tin tape in an economical, usable form” in varied widths and minimum unit lengths of 200 m, while in Europe the former French firm CSF marketed the Kawecki product. In the US, Airco claimed the “Kryoconductor” to be pioneering the development of multi-strand fine-filament superconductors for use primarily in low- or medium-field superconducting magnets. Intermagnetics General (IGC) and Supercon were the two other companies with resources adequate to fulfil reasonably sized orders, the latter in particular providing 47,800 kg of copper-clad niobium-titanium conductor for the Argonne National Laboratory’s 12 foot-diameter hydrogen bubble chamber. The industrialisation of superconductor production was in full swing.

Niobium-tin in tape form was the first true engineering superconducting material, and was extensively used by the research community to build and experiment with superconducting magnets. With adequate funds, it was even possible to purchase a magnet built to one’s specifications. One interesting application, which did not see the light of day until many years later, was the use of superconducting tape to exclude magnetic fields from those regions in a beamline through which particle beams had to pass undeviated. As a footnote to this exciting period, in 1962 Martin Wood and his wife founded Oxford Instruments, and four years later delivered the first nuclear magnetic resonance spectroscopy system. In November last year, the firm sold its superconducting wire business to Bruker Energy and Supercon Technologies, a subsidiary of Bruker Corporation, for $17.5 m.

Beginning of a new industry

One might trace the beginning of the superconducting-magnet revolution to a five-week-long “summer study” at Brookhaven National Laboratory in 1968. Bringing the who’s who in the world of superconductivity together resulted not only in a burst of understanding of the many failures experienced in prior years by magnet builders, but also a deeper appreciation of the arcana of superconducting materials. Researchers at Rutherford Laboratory in the UK, in a series of seminal papers, sufficiently explained the underlying properties and proposed a collaboration with the laboratories at Karlsruhe and Saclay to develop superconducting accelerator magnets. The GESSS (Group for European Superconducting Synchrotron Studies) was to make the Super Proton Synchrotron (SPS) at CERN a superconducting machine, and this project was large enough to attract the interest of industry – in particular IMI in England. Although GESSS achieved many advances in filamentary conductors and magnet design, the SPS went ahead as a conventional warm-magnet machine. IMI stopped all wire production, but in the US the number of small wire entrepreneurs grew. Niobium-tin tape products gradually disappeared from the market as this superconductor was deemed to be unsuitable for all magnets and especially for accelerator magnet use.

In 1972 the 400 GeV synchrotron at Fermilab, constructed with standard copper-based magnets, became operational, and almost immediately there were plans for an upgrade – this time with superconducting magnets. This project changed the industrial scale, requiring a major effort from manufacturers. To work around the proprietary alloys and processing techniques developed by strand manufacturers, Fermilab settled on an Nb46.5Ti alloy, which was an arithmetic average of existing commercial alloys. This enabled the lab to save around one year in its project schedule.

At the same time, the Stanford Linear Accelerator Center was building a large superconducting solenoid for a meson detector, while CERN was undertaking the Big European Bubble Chamber (BEBC) and the Omega Project. This gave industry a reliable view of the future. Numerous large magnets were planned by the various research arms of governments and diverse industry. For example, under the leadership of the Oak Ridge National Laboratory a consortium of six firms constructed a large-scale model of a tokamak reactor magnet assembly using six differently designed coils, each with different superconducting materials: five with niobium-titanium and one with niobium-tin. At the Lawrence Livermore National Laboratory work was in progress to develop a tokamak-like fusion device whose coils were again made from niobium-titanium conductor. The US Navy had major plans for electric ship drives, while the Department of Defense was funding the exploration of isotope separation by means of cyclotron resonance, which required superconducting solenoids of substantial size.

It appeared that there would be no dearth of succulent orders from the HEP community, with the result that even more companies around the world ventured into the manufacture of superconductors. When the Tevatron was commissioned in 1984, two manufacturers were involved: Intermagnetics General Corporation (IGC) and Magnetic Corporation of America (MCA), in an 80/20 per cent proportion. As is common in particle physics, no sooner had the machine become operational than the need for an upgrade became obvious. However, the planning for such a new larger and more complex device took considerable time, during which the superconductor manufacturers effectively made no sales and hence no profits. This led to the disappearance of less well capitalised companies, unless they had other products to market, as did Supercon and Oxford Instruments. The latter expanded into MRI, and its first prototype MRI magnet built in 1979 became the foundation of a current annual world production that totals around 3500 units. MRI production ramped up as the Tevatron demand declined, and the correspondingly large amount of niobium-titanium conductor that MRI requires has remained stable ever since.

The demise of ISABELLE, a 400 GeV proton–proton collider at Brookhaven, in 1983, and then the Superconducting Super Collider a decade later, resulted in a further retrenchment of the superconductor industry, with a number of pioneering establishments either disappearing or being bought out. The industrial involvement in the construction of the superconducting machines HERA at DESY and RHIC at BNL somewhat alleviated the situation. The discovery of high-temperature superconductivity (HTS) in 1986 also helped, although it is not clear that great profits, if any, have been made so far in the HTS arena.

A cloudy crystal ball

The superconducting wire business in the Western world has undergone significant consolidation in recent years. Niobium-titanium wire is now a commodity with a very low profit margin because it has become a standard, off-the-shelf product used primarily for MRI applications. There are now more companies than the market can support for this conductor, but for HEP and other research applications the market is shifting to its higher-performing cousin: niobium-tin.

Following the completion of the LHC in the early 2000s, the US Department of Energy looked toward the next generation of accelerator magnets. LHC technology had pushed the performance of niobium-titanium to its limits, so investment was directed towards niobium-tin. This conductor was also being developed for the fusion community’s ITER project (“ITER’s massive magnets enter production”), but HEP required higher performance for use in accelerators. Over a period of a few years, the critical-current performance of niobium-tin almost doubled and the conductor is now a technological basis of the High Luminosity LHC (see “Powering the field forward”). Although this major upgrade is proceeding as planned, as always all eyes are on the next step – perhaps an even larger machine based on even more innovative magnet technology. For example, a 100 TeV proton collider under consideration by the Future Circular Collider study, co-ordinated by CERN, will require global-scale procurement of niobium-tin strands and cable similar in scale to the demands of ITER.

Beyond that, the superconductor industry is looking into a cloudy crystal ball. The current political and economic environment gives little ground for hope, at least in the Western world, that a major superconducting project will be built in the near future. More generally, commercial applications of superconductivity other than MRI have not caught on, because customers perceive added complexity and risk for only marginal gains in performance. The industry must also contend with the consequences of the cost challenges that ITER has faced, which can foster the undeserved impression that scientists cannot manage large projects.

One facet of the superconductor industry that does seem to be thriving is the small-venture establishment, sometimes a university department, carrying out superconductor R&D quasi-independently of the major industrial concerns. These establishments sustain themselves through various forms of government-sponsored support, such as the SBIR and STTR programmes in the US, and, step by step and without much fanfare, they are responsible for the improvement of today’s superconductors, be they low- or high-temperature. As long as such arrangements are maintained, healthy progress in the science is assured, and the results feed directly to industry. And as far as HEP is concerned, as long as there are beams to guide, bend and focus, we will continue to need manufacturers to make the wires and fabricate the superconducting magnet coils.

Snapshot: manufacturing the LHC magnets

The production of the niobium-titanium conductor for the LHC’s 1800 or so superconducting magnets was of the highest standard, involving hundreds of individual superconducting strands assembled into a cable that had to be shaped to accommodate the geometry of the magnet coil. Three firms manufactured the 1232 main dipole magnets (each 15 m long and weighing 35 tonnes): the French consortium Alstom MSA–Jeumont Industries; Ansaldo Superconduttori in Italy; and Babcock Noell Nuclear in Germany. For the 400 main quadrupoles, full-length prototyping was developed in the laboratory (CEA–CERN) and the tender assigned to Accel in Germany. Once LHC construction was completed, the superconductor market dropped back to meet the base demands of MRI. There has been a similar experience with the niobium-tin conductor used for the ITER fusion experiment under construction in France: more than six companies worldwide made the strands before the procurement was over, after which demand dropped back to pre-project levels.

Transforming brittle conductors into high-performance coils at CERN

The manufacture of superconductors for HEP applications is in many ways a standard industrial flow process with specialised steps. The superconductor in round rod form is inserted into copper tubes, which have a round inside and a hexagonal outside perimeter (the image inset shows such a “billet” for the former HERA electron–proton collider at DESY). A number of these units are then stacked into a copper can that is vacuum sealed and extruded in a hydraulic press, and this extrusion is processed on a draw bench where it is progressively reduced in diameter.

The greatly reduced product is then drawn through a series of dies until the desired wire diameter is reached, and a number of these wires are formed into cables ready for use. The overall process is highly complex and often involves several countries and dozens of specialised industries before the reel of wire or cable arrives at the magnet factory. Each step must ultimately be accounted for and any sudden change to a customer’s source of funds can land the manufacturer with unsaleable stock. Superconductors are specified precisely for their intended end use, and only in rare instances is a stocked product applicable to another application.
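
To give a feel for the scale of that reduction, here is a minimal sketch; the starting diameter after extrusion and the roughly 20% area reduction per die pass are assumptions chosen purely for illustration, not figures from the article.

```python
import math

# Rough estimate of how many die passes a composite billet needs to reach
# final strand size. Starting diameter and per-pass reduction are assumed
# values for illustration only.
D_START_MM = 60.0                 # assumed diameter after hydraulic extrusion
D_FINAL_MM = 1.0                  # typical accelerator-strand diameter
AREA_REDUCTION_PER_PASS = 0.20    # assumed ~20% cross-section reduction per die

total_area_ratio = (D_START_MM / D_FINAL_MM) ** 2
passes = math.ceil(math.log(total_area_ratio) / -math.log(1.0 - AREA_REDUCTION_PER_PASS))

print(f"Total reduction in cross-section: ~{total_area_ratio:.0f}x")
print(f"Die passes needed: ~{passes}")
```

A few tens of passes, each with its own handling and quality control, is one reason the flow can involve so many specialised plants before a reel of wire or cable reaches the magnet factory.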

 

The post Superconductors and particle physics entwined appeared first on CERN Courier.

]]>
Feature https://cerncourier.com/wp-content/uploads/2018/06/CChep1_07_17.jpg
Taming high-temperature superconductivity https://cerncourier.com/a/taming-high-temperature-superconductivity/ Fri, 11 Aug 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/taming-high-temperature-superconductivity/ After 30 years, a theory is within reach for high-T superconductors.

The post Taming high-temperature superconductivity appeared first on CERN Courier.

]]>

Superconductivity is perhaps the most remarkable manifestation of quantum physics on the macroscopic scale. Discovered in 1911 by Kamerlingh Onnes, it preoccupied the most prominent physicists of the 20th century and remains at the forefront of condensed-matter physics today. The interest is partly driven by potential applications – superconductivity at room temperature would surely revolutionise technology – but to a large extent it reflects an intellectual fascination. Many ideas that emerged from the study of superconductivity, such as the generation of a photon mass in a superconductor, were later extended to other fields of physics, famously serving as paradigms to explain the generation of a Higgs mass of the electroweak W and Z gauge bosons in particle physics.

Put simply, superconductivity is the ability of a system of fermions to carry electric current without dissipation. Normally, fermions such as electrons scatter off any obstacle, including each other. But if they find a way to form bound pairs, these pairs may condense into a macroscopic state with a non-dissipative current. Quantum mechanics is the only way to explain this phenomenon, but it took 46 years after the discovery of superconductivity for Bardeen, Cooper and Schrieffer (BCS) to develop a verifiable theory. Winning the 1972 Nobel Prize in Physics for their efforts, they figured out that the exchange of phonons leads to an effective attraction between pairs of electrons of opposite momentum if the electron energy is less than the characteristic phonon energy (figure 1). Although electrons still repel each other, the effective Coulomb interaction becomes smaller at such frequencies (in a manner opposite to asymptotic freedom in high-energy physics). If the reduction is strong enough, the phonon-induced electron–electron attraction wins over Coulomb repulsion and the total interaction becomes attractive. There is no threshold for the magnitude of the attraction because low-energy fermions live at the boundary of the Fermi sea, in which case an arbitrarily weak attraction is enough to create bound states of fermions at some critical temperature, Tc.
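
The weak-coupling BCS result makes this point explicit. As a textbook sketch (standard notation, not taken from the article): for an attraction V acting on states within the characteristic phonon (Debye) energy ħωD of the Fermi surface, with density of states N(0),

```latex
% Weak-coupling BCS estimates (standard textbook results): N(0)V is the
% dimensionless coupling, \hbar\omega_D the Debye (phonon) energy.
\[
  k_B T_c \simeq 1.13\,\hbar\omega_D\,e^{-1/N(0)V},
  \qquad
  \Delta(0) \simeq 1.76\,k_B T_c .
\]
```

Tc is finite for any V > 0, but it depends on the coupling non-analytically, which is why no finite order of perturbation theory in V can capture the pairing instability.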

The formation of bound states, called Cooper pairs, is one necessary ingredient for superconductivity. The other is for the pairs to condense, or more specifically to acquire a common phase corresponding to a single macroscopic wave function. Within BCS theory, pair formation and locking of the phases of the pairs occur simultaneously at the same Tc, while in more recent strong-coupling theories bound pairs exist above this temperature. The common phase of the pairs can have an arbitrary value, and the fact that the system chooses a particular one below Tc is a manifestation of spontaneous symmetry breaking. The phase coherence throughout the sample is the most important physical aspect of the superconducting state below Tc, as it can give rise to a “supercurrent” that flows without resistance. Superconductivity can also be viewed as an emergent phenomenon.

While BCS theory was a big success, it is a mean-field theory, which neglects fluctuations. To really trust that the electron–phonon mechanism was correct, it was necessary to develop theoretical tools based on Green functions and field-theory methods, and to move beyond weak coupling. The BCS electron–phonon mechanism of superconductivity has since been successfully applied to explain pairing in a large variety of materials (figure 2), from simple mercury and aluminium to the niobium-titanium and niobium-tin alloys used in the magnets for the Large Hadron Collider (LHC), in addition to the recently discovered sulphur hydrides, which become superconductors at a temperature of around 200 K under high pressure. But the discovery of high-temperature superconductors drove condensed-matter theorists to explore new explanations for the superconducting state.

Unconventional superconductors

In the early 1980s, when the record critical temperature for superconductors was of the order 20 K, the dream of a superconductor that works at liquid-nitrogen temperatures (77 K) seemed far off. In 1986, however, Bednorz and Müller made the breakthrough discovery of superconductivity in La1−xBaxCuO4 with Tc of around 40 K. Shortly after, a material with a similar copper-oxide-based structure with Tc of 92 K was discovered. These copper-based superconductors, known as cuprates, have a distinctive structure comprising weakly coupled layers made of copper and oxygen. In all the cuprates, the building blocks for superconductivity are the CuO2 planes, with the other atoms providing a charge reservoir that either supplies additional electrons to the layers or takes electrons out to leave additional hole states (figure 3).

From a theoretical perspective, the high Tc of the cuprates is only one important aspect of their behaviour. More intriguing is what mechanism binds the fermions into pairs. The vast majority of researchers working in this area think that, unlike low-temperature superconductors, phonons are not responsible. The most compelling reason is that the cuprates possess “unconventional” symmetry of the pair wave function. Namely, in all known phonon-mediated superconductors, the pair wave function has s-wave symmetry, or in other words, its angular dependence is isotropic. For the cuprates, it was proven in the early 1990s that the pair wave function changes sign under rotation by 90°, leading to an excitation spectrum that has zeros at particular points on the Fermi surface. Such symmetry is often called “d-wave”. This is the first symmetry beyond s-wave that is allowed by the antisymmetric nature of the electron wave functions when the total spin of the pair is zero. The observation of a d-wave symmetry in the cuprates was extremely surprising because, unlike s-wave pairs, d-wave Cooper pairs can potentially be broken by impurities.
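
A standard way to express the difference (a textbook parametrisation, not taken from the article) is through the momentum dependence of the gap on a square lattice with spacing a:

```latex
% Isotropic s-wave gap versus the d_{x^2-y^2} gap observed in the cuprates
% (square lattice of spacing a).
\[
  \Delta_s(\mathbf{k}) = \Delta_0,
  \qquad
  \Delta_{d_{x^2-y^2}}(\mathbf{k})
    = \tfrac{1}{2}\,\Delta_0\bigl[\cos(k_x a) - \cos(k_y a)\bigr].
\]
```

A 90° rotation exchanges kx and ky and flips the sign of the d-wave gap, which also vanishes along the zone diagonals |kx| = |ky| – the zeros in the excitation spectrum mentioned above.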

The cuprates hold the record for the highest Tc for materials with an unconventional pair wave-function symmetry: 133 K in mercury-based HgBa2Ca2Cu3O8 at ambient pressure. They were not, however, the first materials of this kind: a “heavy fermion” superconductor CeCu2Si2 discovered in 1979 by Steglich, and an organic superconductor discovered by Jerome the following year, also had an unconventional pair symmetry. After the discovery of cuprates, a set of unconventional iron-based superconductors was discovered with Tc up to 60 K in bulk systems, followed by the discovery of superconductivity with an even higher Tc in a monolayer of FeSe. But even low-Tc, unconventional materials can be interesting. For example, some experiments suggest that Cooper pairs in Sr2RuO4 have total spin-one and p-wave symmetry, leading to the intriguing possibility that they can support edge modes that are Majorana particles, which have potential applications in quantum computing.

If phonon-mediated electron–electron interactions are ineffective for the pairing in unconventional superconductors, then what binds fermions together? The only other possibility is a nominally repulsive electron–electron interaction, but for this to allow pairing, the electrons must screen their own Coulomb repulsion to make it effectively attractive in at least one pairing channel (e.g. d-wave). Interestingly, quantum mechanics actually allows such schizophrenic behaviour of electrons: a d-wave component of a screened Coulomb interaction becomes attractive in certain cases.

Cuprate conundrums

There are several families of high-temperature cuprate superconductors. Some, like LaSrCuO, YBaCuO and BSCCO, show superconductivity upon hole doping; others, like NdCeCuO, show superconductivity upon electron doping. The phase diagram of a representative cuprate contains regions of superconductivity, regions of magnetic order, and a region (called the pseudogap) where Tc decreases but the system’s behaviour above Tc is qualitatively different from that in an ordinary metal (figure 4). At zero doping, standard solid-state physics says that the system should be a metal, but experiments show that it is an insulator. This is taken as an indication that the effective interaction between electrons is large, and such an interaction-driven insulator is called a Mott insulator. Upon doping, some states become empty and the system eventually recovers metallic behaviour. A Mott insulator at zero doping has another interesting property: spins of localised electrons order antiferromagnetically. Upon doping, the long-range antiferromagnetic order quickly disappears, while short-range magnetic correlations survive.

Since the superconducting region of the phase diagram is sandwiched between the Mott and metallic regimes, there are two ways to think about HTS: either it emerges upon doping of a Mott insulator (if one departs from zero doping), or it emerges from a metal with increased antiferromagnetic correlations if one departs from larger dopings. Even though it was known before the discovery of high-temperature superconductors that an antiferromagnetically mediated interaction is attractive in the d-wave channel, it took time to develop various computational approaches, and today the computed value of Tc is in the range consistent with experiments. At smaller dopings, a more reliable approach is to start from a Mott insulator. This approach also gives d-wave superconductivity, with the value of Tc most likely determined by phase fluctuations and decreasing as a function of decreased doping. Because both approaches give d-wave superconductivity with comparable values of Tc, the majority of researchers believe that the mechanism of superconductivity in the cuprates is understood, at least qualitatively.

A more subtle issue is how to explain the so-called pseudogap phase in hole-doped cuprates (figure 4). Here, the system is neither magnetic nor superconducting, yet it displays properties that clearly distinguish it from a normal, even strongly correlated metal. One natural idea, pioneered by Philip Anderson, is that the pseudogap phase is a precursor to a Mott insulator that contains a soup of local singlet pairs of fermions: superconductivity arises if the phases of all singlet pairs are ordered, whereas antiferromagnetism arises if the system develops a mixture of spin singlets and spin triplets. Several theoretical approaches, most notably dynamical mean-field theory, have been developed to quantitatively describe the precursors to a Mott insulator.

The understanding of the pseudogap as the phase where electron states progressively get localised, leading to a reduction of Tc, is accepted by many in the HTS community. Yet, new experimental results show that the pseudogap phase in hole-doped cuprates may actually be a state with a broken symmetry, or at least becomes unstable to such a state at a lower temperature. Evidence has been reported for the breaking of time-reversal, inversion and lattice rotational symmetry. Improved instrumentation in recent years also led to the discovery of a charge-density wave and pair-density wave order in the phase diagram and perhaps even loop-current order. Many of us believe that the additional orders observed in the pseudogap phase are relevant to the understanding of the full phase diagram, but that these do not change the two key pillars of our understanding: superconductivity is mediated by short-range magnetic excitations, and the reduction of Tc at smaller dopings is due to the existence of a Mott insulator near zero doping.

Woodstock physics

Participants at a special session of the 1987 March meeting of the American Physical Society in New York devoted to the newly discovered high-temperature superconductors. The hastily organised session, which later became known as the “Woodstock of Physics”, lasted from the early evening to 3.30 a.m. the following morning, with 51 presenters and more than 1800 physicists in attendance. Bednorz and Müller received the Nobel prize in December 1987, one year after the discovery – the fastest award in the history of the Nobel prize.

Why cuprates still matter

The cuprates have motivated incredible advances in instrumentation and experimental techniques, with 1000-fold increases in accuracy in many cases. On the theoretical side, they have also led to the development of new methods to deal with strong interactions – dynamical mean-field theory and various metallic quantum-critical theories are examples. These experimental and theoretical methods have found their way into the study of other materials and are adding new chapters to standard solid-state physics books. Some of them may even one day find their way into other fields, such as strongly interacting quark–gluon matter. We can now theoretically understand a host of the phenomena in high-temperature superconductors, but there are still some important points to clarify, such as the mysterious linear temperature dependence of the resistivity.

The community is coming together to solve these remaining issues. Yet, the cynical view of the cuprate problem is that it lacks an obvious small parameter, and hence a universally accepted theory – the analogue of BCS – will never be developed. While it is true that serendipity will always have its place in science, we believe that the key criterion for “the theory” of the cuprates should not be a perfect quantitative agreement with experiments (even though this is still a desirable objective). Rather, a theory of cuprates should be judged by its ability to explain both superconductivity and a host of concomitant phenomena, such as the pseudogap, and its ability to provide design principles for new superconductors. Indeed, this is precisely the approach that allowed the recent discovery of the highest-Tc superconductor to date: hydrogen sulphide. At present, powerful algorithms and supercomputers allow us to predict quite accurately the properties of materials before they are synthesised. For strongly correlated materials such as the cuprates, these calculations profit from physical insight and vice versa.

From a broader perspective, studies of HTS have led to renewed thinking about perturbative and non-perturbative approaches to physics. Physicists like to understand particles or waves and how they interact with each other, like we do in classical mechanics, and perturbation theory is the tool that takes us there – QED is a great example that works because the fine-structure constant is small. In a single-band solid where interactions are not too strong, it is natural to think of superconductivity as being mediated by, for example, the exchange of antiferromagnetic spin fluctuations. When interactions are so strong that the wave functions become extremely entangled, it still makes sense to look at the internal dynamics of a Cooper pair to check whether one can detect traces of spin, charge or even orbital fluctuations. At the same time, perturbation theory in the usual sense does not work. Instead, we have to rely more heavily on large-scale computer calculations, variational approaches and effective theories. The question of what “binds” fermions into a Cooper pair still makes sense in this new paradigm, but the answer is often more nuanced than in a weak coupling limit.

Many challenges are left in the HTS field, but progress is rapid and there is much more consensus now than there was even a few years ago. Finally, after 30 years, it seems we are closing in on a theoretical understanding of this both useful and fascinating macroscopic quantum state.

CERN puts high-temperature superconductors to use

A few years ago, triggered by conceptual studies for a post-LHC collider, CERN launched a collaboration to explore the use of high-temperature superconductors (HTS) for accelerator magnets. In 2013 CERN partnered with a European particle accelerator R&D project called EuCARD-2 to develop a HTS insert for a 20 T magnet. The project came to an end in April this year, with CERN having built an HTS demonstration magnet based on an “aligned-block” concept for which coil-winding and quench-detection technology had to be developed. Called Feather2, the magnet has a field of 3 T based on low-performance REBCO (rare-earth barium-copper-oxide) tape. The next magnet, based on high-performance REBCO tape, will approach a stand-alone field of 8 T. Then, once it is placed inside the aperture of the 13 T “Fresca2” magnet, the field should go beyond 20 T.

Now the collaborative European spirit of EuCARD-2 lives on in the ARIES project (Accelerator Research and Innovation for European Science and Society), which kicked off at CERN in May. ARIES brings together 41 participants from 18 European countries, including seven industrial partners, to help bring down the cost of the conductor, and is co-funded via a contribution of €10 million from the European Commission. 

In addition, CERN is developing HTS-based transfer lines to feed the new superconducting magnets of the High Luminosity LHC based on magnesium diboride (MgB2), which can be operated in helium gas at temperatures of up to around 30 K and must be flexible enough to allow the power converters to be installed hundreds of metres away from the accelerator. The relatively low cost of MgB2 led CERN’s Amalia Ballarino to enter a collaboration with industry, which resulted in a method to produce MgB2 in wire form for the first time. The team has since achieved record currents that reached 20 kA at a temperature above 20 K, thereby proving that MgB2 technology is a viable solution for long-distance power transmission. The new superconducting lines could also find applications in the Future Circular Collider initiative.

• Matthew Chalmers, CERN

The post Taming high-temperature superconductivity appeared first on CERN Courier.

]]>
Feature After 30 years, a theory is within reach for high-T superconductors. https://cerncourier.com/wp-content/uploads/2018/06/CChts1_07_17.jpg
Celebrating a super partnership https://cerncourier.com/a/viewpoint-celebrating-a-super-partnership/ Fri, 11 Aug 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/viewpoint-celebrating-a-super-partnership/ The virtuous spiral between high-energy physics and superconductivity is never ending, says Lucio Rossi.

The post Celebrating a super partnership appeared first on CERN Courier.

]]>

This month more than 1000 scientists and engineers are gathering in Geneva to attend the biennial European Conference on Applied Superconductivity (EUCAS 2017). This international event covers all aspects of the field, from electronics and large-scale devices to basic superconducting materials and cables. The organisation has been assigned to CERN, home to the largest superconducting system in operation (the Large Hadron Collider, LHC) and where next-generation superconductors are being developed for the high-luminosity LHC upgrade (HL-LHC) and Future Circular Collider (FCC) projects.

When Heike Kamerlingh Onnes discovered superconductivity in 1911, Ernest Rutherford was just publishing his famous paper unveiling the structure of the atom. But superconductivity and nuclear physics, both with their own harvests of Nobel prizes, were unconnected for many years. Accelerators have brought the fields together, as this issue of CERN Courier demonstrates.

The constant evolution of high-voltage radio-frequency (RF) cavities and powerful magnets to accelerate and guide particles around accelerators drove a transformation of our understanding of fundamental physics. But by the 1970s, the limit of RF power and magnetic-field strength had nearly been reached and gigantism seemed the only option to reach higher energies. In the meantime, a few practical superconductors had become available: niobium-zirconium alloy, niobium-tin compound (Nb3Sn) and niobium-titanium alloy (Nb-Ti). Its reliability in processing and uniformity of production made Nb-Ti the superconductor of choice for all projects.

The first large application of Nb-Ti was for high-energy physics, driving the bubble-chamber solenoids for Argonne National Laboratory in the US (see “Unique magnets”). But it was accelerators, even more than detectors or fusion applications, that drove the development of technical superconductors. Following the birth of the modern Nb-Ti superconductor in 1968, rapid R&D took place for large high-energy physics projects such as the proposed but never born Superconducting SPS at CERN, the ill-fated Isabelle/CBA collider at BNL and the Tevatron at Fermilab (see “Powering the field forwards”). By the end of the 1980s, superconductors had to be produced on industrial scales, as did the niobium RF accelerating cavities (see “Souped up RF”) for LEPII and other projects. MRI, based on 0.5–3 T superconducting magnets, also took off at that time, today dominating the market with around 3000 items built per year.

The LHC is the summit of 30 years of improvement in Nb-Ti-based conductors. Its 8.3 T dipole fields are generated by 10 km-long, 1 mm-diameter wires containing 6000 well-separated Nb-Ti filaments, each 6 μm thick and protected by a thin Nb barrier, all embedded in pure copper and then coated with a film of oxidised tin-silver alloy. The LHC contains 1200 tonnes of this material, made by six companies worldwide, and five years ago it powered the LHC to produce the Higgs boson.

But the story is not finished. The increased collision rate of the HL-LHC requires us to go beyond the 10 T wall and, despite its brittleness, we are now able to exploit the superior intrinsic properties of Nb3Sn to reach 11 T in a dipole and almost 12 T peak field in a quadrupole. Wire developed for the LHC upgrade is also being used for high-resolution NMR spectroscopy and advanced proton therapy, and Nb3Sn is being used in vast quantities for the ITER fusion project (see “ITER’s massive magnets enter production”). Testing the Nb3Sn technology for the HL-LHC is also critical for the next jump in energy: 100 TeV, as envisaged by the CERN-coordinated FCC study. This requires a dipole field of 16 T, pushing Nb3Sn beyond its present limits, but the superconducting industry has taken up the challenge. Training young researchers will further boost this technology – for example, via the CERN-coordinated EASITrain network on advanced superconductivity for PhD students, due to begin in October this year (see “Get on board with EASITrain”).

The virtuous spiral between high-energy physics and superconductivity is never ending (see “Superconductors and particle physics entwined”), with pioneering research also taking place at CERN to test the practicalities of high-temperature superconductors (see “Taming high-temperature superconductivity”) based on yttrium or iron. This may lead us to dream about a 20–25 T dipole magnet – an immense challenge that will not only give us access to unconquered lands of particle physics but expand the use of superconductors in medicine, energy and other areas of our daily lives.

The post Celebrating a super partnership appeared first on CERN Courier.

]]>
Opinion The virtuous spiral between high-energy physics and superconductivity is never ending, says Lucio Rossi. https://cerncourier.com/wp-content/uploads/2018/06/CCvie1_07_17.jpg
Discovering diamonds https://cerncourier.com/a/discovering-diamonds/ Mon, 10 Jul 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/discovering-diamonds/ Natural diamonds are old, almost as old as the planet itself. They mostly originated in the Earth’s mantle around 1 to 3.5 billion years ago and typically were brought to the surface during deep and violent volcanic eruptions some tens of millions of years ago. Diamonds have been sought after for millennia and still hold […]

The post Discovering diamonds appeared first on CERN Courier.

]]>

Natural diamonds are old, almost as old as the planet itself. They mostly originated in the Earth’s mantle around 1 to 3.5 billion years ago and typically were brought to the surface during deep and violent volcanic eruptions some tens of millions of years ago. Diamonds have been sought after for millennia and still hold status. They are also one of our best windows into our planet’s dynamics and can, in what is essentially a galactic narrative, convey a rich story of planetary science. Each diamond is unique in its chemical and crystallographic detail, with micro-inclusions and impurities within them having been protected over vast timescales.

Diamonds are usually found in or near the volcanic pipe that brought them to the surface. It was at one of these, in 1871 near Kimberley, South Africa, where the diamond rush first began – and where the mineral that hosts most diamonds got its name: kimberlite. Many diamond sources have since been discovered and there are now more than 6000 known kimberlite pipes (figure 1 overleaf). However, with current mining extraction technology, which generally involves breaking up raw kimberlite to see what’s inside, diamonds are often damaged and are steadily becoming mined out. Today, a diamond mine typically lasts for a few decades, and it costs around $10–26 to process each tonne of rock. With the number of new, economically viable diamond sources declining – combined with high rates of diamonds being extracted, ageing mines and increasing costs – most forecasts predict a decline in rough diamond production compared to demand, starting as soon as 2020.

A new diamond-discovery technology called MinPET (mineral positron emission tomography) could help to ensure that precious sources of natural diamonds last for much longer. Inspired by the same principles adopted in modern, high-rate, high-granularity detectors commonly found in high-energy physics experiments, MinPET uses a high-energy photon beam and PET imaging to scan mined kimberlite for large diamonds, before the rocks are smashed to pieces.

From eagle eyes to camera vision

Over millennia, humans have invented numerous ways to look for diamonds. Early techniques to recover loose diamonds used the principle that diamonds are hydrophobic, so resist water but stick readily to grease or fat. Some stories even tell of eagles recovering diamonds from deep, inaccessible valleys, when fatty meat thrown onto a valley floor might stick to a gem: a bird would fly down, devour the meat, and return to its nest, where the diamond could be recovered from its droppings. Today, technology hasn’t evolved much. Grease tables are still used to sort diamond from rock, and the current most popular technique for recovering diamonds (a process called dense media separation) relies on the principle that kimberlite particles float in a special slurry while diamonds sink. The excessive processing required with these older technologies wastes water, takes up huge amounts of land, releases dust into the surrounding atmosphere, and also leads to severe diamond breakage.    

Just 1% of the world’s diamond sources have economically viable grades of diamond and are worth mining. At most sites the gemstones are hidden within the kimberlite, so diamond-recovery techniques must first crush each rock into gravel. The more barren rock there is compared to diamonds, the more sorting has to be done. This varies from mine to mine, but typically is under one carat per tonne – more dilute than gold ores. Global production was around 127 million carats in 2015, meaning that mines are wasting millions of dollars crushing and processing about 100 million tonnes of kimberlite per year that contains no diamonds. We therefore have an extreme case of a very high value particle within a large amount of worthless material – making it an excellent candidate for sensor-based sorting.
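
As a back-of-envelope check on those numbers (taking the article's round figures at face value and combining them purely for illustration):

```python
# Back-of-envelope estimate of annual kimberlite processing, using the
# article's rough figures (illustrative only).
ANNUAL_PRODUCTION_CARATS = 127e6   # global rough-diamond production, 2015
GRADE_CARATS_PER_TONNE   = 1.0     # "typically under one carat per tonne"
COST_PER_TONNE_USD       = (10, 26)  # processing cost range quoted above

tonnes_processed = ANNUAL_PRODUCTION_CARATS / GRADE_CARATS_PER_TONNE
cost_low  = tonnes_processed * COST_PER_TONNE_USD[0]
cost_high = tonnes_processed * COST_PER_TONNE_USD[1]

print(f"Kimberlite processed: ~{tonnes_processed/1e6:.0f} million tonnes/year")
print(f"Processing cost:      ~${cost_low/1e9:.1f}-{cost_high/1e9:.1f} billion/year")
# Almost all of that tonnage contains no diamond at all, which is what
# makes sensor-based sorting of intact rock so attractive.
```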

Early forms of sensor-based sorting, which have only been in use since 2010, use a technique called X-ray stimulated optical fluorescence, which essentially targets the micro impurities and imperfections in each diamond (figure 2). Using this method, the mined rocks are dropped during the extraction process at the plant, and the curtain of falling rock is illuminated by X-rays, allowing a proportion of liberated or exposed diamonds to fluoresce and then be automatically extracted. The transparency of diamond makes this approach quite effective. When Petra Diamonds Ltd introduced this technique with several X-ray sorting machines costing around $6 million, the apparatus paid for itself in just a few months when the firm recovered four large diamonds worth around $43 million. These diamonds, presumed to be fragments of a larger single one, were 508, 168, 58 and 53 carats, in comparison to the average one-carat engagement ring.

Very pure diamonds that do not fluoresce, and gems completely surrounded by rock, can remain hidden to these sensors. As such, a newer sensor-based sorting technique that uses an enhanced form of dual-energy X-ray transmission (XRT), similar to the technology for screening baggage in airports, has been invented to get around this problem. It can recover liberated diamonds down to 5 mm diameter, where 1 mm is usually the smallest size recovered commercially, and, unlike the fluorescing technique, can detect some locked diamonds. These two techniques have brought the benefits of sensor-based sorting into sharp focus for more efficient, greener mines and for reducing breakage.

Recent innovations in particle-accelerator and particle-detector technology, in conjunction with high-throughput electronics, image-processing algorithms and high-performance computing, have greatly enhanced the economic viability of a new diamond-sensing technology using PET imaging. PET, which has strongly benefitted from many innovations in detector development at CERN, such as BGO scintillating crystals for the LEP experiments, has traditionally been used to observe processes inside the body. A patient must first absorb a small amount of a positron-emitting isotope; the ensuing annihilations produce patterns of gamma rays that can be reconstructed to build a 3D picture of metabolic activity. Since a rock cannot be injected with such a tracer, MinPET requires us to irradiate rocks with a high-energy photon beam and generate the positron emitter via transmutation.

The birth of MinPET

The idea to apply PET imaging to mining began in 1988, in Johannesburg, South Africa, where our small research group of physicists used PET emitters and positron spectroscopy to study the crystal lattice of diamonds. We learnt of the need for intelligent sensor-based sorting from colleagues in the diamond mining industry and naturally began discussing how to create an integrated positron-emitting source.

Advances in PET imaging over the next two decades led to increased interest from industry, and in 2007 MinPET achieved its first major success in an experiment at Karolinska hospital in Stockholm, Sweden. With a kimberlite rock playing the role of a patient, irradiation was performed at the hospital’s photon-based cancer therapy facility and the kimberlite was then imaged at the small-animal PET facility in the same hospital. The images clearly revealed the diamond within, with PET imaging of diamond in kimberlite reaching an activity contrast of more than 50 (figure 3). This result led to a working technology demonstrator involving a conveyor belt that presented phantoms (rocks doped with a sodium PET-emitter were used to represent the kimberlite, some of which contained a sodium hotspot to represent a hidden diamond) to a PET camera. These promising results attracted funding, staff and students, enabling the team to develop a MinPET research laboratory at iThemba LABS in Johannesburg. The work also provided an important early contribution to South Africa’s involvement in the ATLAS experiment at CERN’s Large Hadron Collider.

By 2015 the technology was ready to move out of the lab and into a diamond mine. The MinPET process (figure 4) involves using a high-energy photon beam of some tens of MeV to irradiate a kimberlite rock stream, turning some of the light stable isotopes within the kimberlite into transient positron emitters, or PET isotopes, which can be imaged in a similar way to PET imaging for medical diagnostics. The rock stream is buffered for a period of 20 minutes before imaging, because by then carbon (as the positron emitter carbon-11) is the dominant PET isotope. Since non-diamond sources of carbon have a much lower carbon concentration than diamond, or are diluted and finely dispersed within the kimberlite, diamonds show up on the image as a carbon-concentration hotspot.
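
The choice of a roughly 20-minute buffer can be illustrated with the half-lives involved. In the sketch below it is assumed – as is typical for photon activation of oxygen-rich rock – that the main short-lived positron emitters are oxygen-15 (half-life about 2 minutes) and carbon-11 (about 20 minutes); the equal initial activities are arbitrary and serve only to show the trend.

```python
# Illustrative decay of positron-emitting activity after photonuclear
# activation. Half-lives are physical constants; the equal initial
# activities are placeholders chosen only to show why waiting ~20 minutes
# leaves carbon-11 as the dominant emitter.
HALF_LIFE_MIN = {"O-15": 2.04, "C-11": 20.4}
initial_activity = {"O-15": 1.0, "C-11": 1.0}   # arbitrary relative units

def activity(isotope: str, t_min: float) -> float:
    """Remaining activity of one isotope after t_min minutes."""
    return initial_activity[isotope] * 0.5 ** (t_min / HALF_LIFE_MIN[isotope])

for t in (0, 10, 20, 30):
    o15, c11 = activity("O-15", t), activity("C-11", t)
    print(f"t = {t:2d} min: O-15 = {o15:.4f}, C-11 = {c11:.2f}, "
          f"C-11 share = {c11 / (o15 + c11):.3f}")
```

After the buffer period the oxygen activity has fallen by roughly three orders of magnitude, so any remaining hotspot of positron activity is, to a good approximation, a carbon hotspot.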

The speed of imaging is crucial to the viability of MinPET. The detector system must process up to 1000 tonnes of rock per hour to meet the rate of commercial rock processing, with PET images acquired in just two seconds and image processing taking just five seconds. This is far in excess of medical-imaging needs and required the development of a very high-rate PET camera, which was optimised, designed and manufactured in a joint collaboration between the present authors and a nuclear electronic technology start-up called NeT Instruments. MinPET must also take into account rate capacity, granularity, power consumption, thermal footprints and improvements in photon detectors. The technology demonstrator is therefore still used to continually improve MinPET’s performance, from the camera to raw data event building and fast-imaging algorithms.
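
For a sense of what those rates imply, the following sketch treats the quoted throughput as if a single camera had to cover it; a real plant could of course split the stream across several imaging stations.

```python
# What the quoted plant throughput implies per PET image (illustrative).
TONNES_PER_HOUR = 1000.0
ACQUISITION_S   = 2.0    # image-acquisition time quoted above
PROCESSING_S    = 5.0    # image-processing time quoted above

kg_per_second = TONNES_PER_HOUR * 1000.0 / 3600.0
print(f"Rock flow: {kg_per_second:.0f} kg/s")
print(f"Rock passing during one 2 s acquisition: {kg_per_second * ACQUISITION_S:.0f} kg")
print(f"Images being processed concurrently if each takes {PROCESSING_S:.0f} s: "
      f"{PROCESSING_S / ACQUISITION_S:.1f}")
# Since processing takes longer than acquisition, successive images must be
# analysed in parallel rather than strictly one after the other.
```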

An important consideration when dealing with PET technology is that radiation remains within safe limits. If diamonds are exposed to extremely high doses of radiation, their colour can change – something that can be done deliberately to alter the gems, but which reduces customer confidence in a gem’s history. Despite being irradiated, the dose exposure to the diamonds during the MinPET activation process is well below the level they would receive from nature’s own background. It has turned out, quite amazingly, that MinPET offers a uniquely radiologically clean scenario. The carbon PET activity and a small amount of sodium activity are the only significant activations, and these have relatively short half-lives of 20 minutes and 15 hours, respectively. The irradiated kimberlite stream soon becomes indistinguishable from non-irradiated kimberlite, and therefore has a low activity and allows normal mine operation.

Currently, XRT imaging techniques require each particle of kimberlite rock being processed to be isolated and smaller than 75 mm; within this stream only liberated diamonds that are at least 5 mm wide can be detected and XRT can only provide 2D images. MinPET is far more efficient because it is currently able to image locked diamonds with a width of 4 mm within a 100 mm particle of rock, with full 3D imaging. The size of diamonds MinPET detects means it is currently ideally suited for mines that make their revenue predominantly from large diamonds (in some mines breakage is thought to cause up to a 50% drop in revenue). There is no upper limit for finding a liberated diamond particle using MinPET, and it is expected that larger diamonds could be detected in up to 160 mm-diameter kimberlite particles.

To crumble or shine

MinPET has now evolved from a small-scale university experiment to a novel commercial technology, and negotiations with a major financial partner are currently at an advanced stage. Discussions are also under way with several accelerator manufacturers to produce a 40 MeV beam of electrons with a power of 40–200 kW, which is needed to produce the original photon beam that kick-starts the MinPET detection system.

Although the MinPET detection system costs slightly more than other sorting techniques, overall expenditure is less because processing costs are reduced. Envisaged MinPET improvements over the next year are expected to take the lower limit of discovery down to as little as 1.5 mm for locked diamonds. The ability to reveal entire diamonds in 3D, and locating them before the rocks are crushed, means that MinPET also eliminates much of the breakage and damage that occurs to large diamonds. The technique also requires less plant, energy and water – all without causing any impact on normal mine activity.

The world’s diamond mines are increasingly required to be greener and more efficient. But the industry is also under pressure to become safer, and the ethics of mining operations are a growing concern among consumers. In a world increasingly favouring transparency and disclosure, the future of diamond mining has to be in using intelligent, sensor-based sorting that can separate diamonds from rock. MinPET is the obvious solution – eventually allowing marginal mines to become profitable and the lifetime of existing mines to be extended. And although today’s synthetic diamonds offer serious competition, natural stones are unique, billions of years old, and came to the surface in a violent fiery eruption as part of a galactic narrative. They will always hold their romantic appeal, and so will always be sought after.

The post Discovering diamonds appeared first on CERN Courier.

]]>
Feature https://cerncourier.com/wp-content/uploads/2018/06/CCdia1_06_17.jpg
CMS undergoes tracker transplant https://cerncourier.com/a/cms-undergoes-tracker-transplant/ Fri, 17 Mar 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-undergoes-tracker-transplant/ At the beginning of March, the CMS collaboration successfully replaced the heart of its detector: the pixel tracker. This innermost layer of the CMS detector, a cylindrical device containing 124 million sensitive silicon sensors that record the trajectories of charged particles, is the first to be encountered by debris from the LHC’s collisions. The original […]

The post CMS undergoes tracker transplant appeared first on CERN Courier.

]]>

At the beginning of March, the CMS collaboration successfully replaced the heart of its detector: the pixel tracker. This innermost layer of the CMS detector, a cylindrical device containing 124 million silicon pixels that record the trajectories of charged particles, is the first to be encountered by debris from the LHC’s collisions.

CMS

The original three-layer 64 Mpix tracker, which has been in place since the LHC started operations in 2008, was designed for a lower collision rate than the LHC will deliver in the coming years. Its replacement contains an additional layer and has its first layer placed closer to the interaction point. This will enable CMS to cope with the harsher collision environment of future LHC runs, for which the detector has to simultaneously handle the products from a large number of simultaneous collisions. The new pixel detector will also be better at pinpointing where individual collisions occurred and will therefore enhance the precision with which predictions of the Standard Model can be tested.


After a week of intense activity, and a few frayed nerves, the new subdetector was safely in place by 8 March. After testing is complete, CMS will be closed ready for the LHC to return to action in May.

The post CMS undergoes tracker transplant appeared first on CERN Courier.

]]>
News https://cerncourier.com/wp-content/uploads/2018/06/CCnew6_03_17.jpg
Looking forward to photon–photon physics https://cerncourier.com/a/looking-forward-to-photon-photon-physics/ Fri, 17 Mar 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/looking-forward-to-photon-photon-physics/ As its name suggests, the Large Hadron Collider (LHC) at CERN smashes hadrons into one another – protons, to be precise. The energy from these collisions gets converted into matter, producing new particles that allow us to explore matter at the smallest scales. The LHC does not fire protons into one another individually; instead, they […]

The post Looking forward to photon–photon physics appeared first on CERN Courier.

]]>

As its name suggests, the Large Hadron Collider (LHC) at CERN smashes hadrons into one another – protons, to be precise. The energy from these collisions gets converted into matter, producing new particles that allow us to explore matter at the smallest scales. The LHC does not fire protons into one another individually; instead, they are circulated in approximately 2000 bunches each containing around 100 billion protons. When two bunches are focused magnetically to cross each other in the centre of detectors such as CMS and ATLAS, only 30 or so protons actually collide. The rest continue to fly through the LHC unimpeded until the next time that two bunches cross.
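
A rough bookkeeping exercise, using the round numbers above together with the LHC revolution frequency of about 11.2 kHz, shows just how small a fraction of the protons is used up at each crossing:

```python
# Illustrative bookkeeping for the bunch parameters quoted above.
BUNCHES                 = 2000
PROTONS_PER_BUNCH       = 1e11      # "around 100 billion"
COLLIDING_PER_CROSSING  = 30        # "only 30 or so protons actually collide"
REVOLUTION_FREQUENCY_HZ = 11_245    # standard LHC revolution frequency

crossings_per_second = BUNCHES * REVOLUTION_FREQUENCY_HZ
print(f"Bunch crossings per second at one interaction point: {crossings_per_second:.1e}")
print(f"Fraction of protons in the two colliding bunches that interact: "
      f"{COLLIDING_PER_CROSSING / (2 * PROTONS_PER_BUNCH):.1e}")
```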

Occasionally, something very different happens. If two protons travelling in opposite directions pass very close to one another, photons radiated from each proton can collide and produce new particles. The two parent protons remain completely intact, continuing their path in the LHC, but the photon–photon interaction removes a fraction of their initial energy and causes them to be slightly deflected from their original trajectories. By identifying the deflected protons, one can determine whether such photon interactions took place and effectively turn the LHC into a photon collider. It is also possible for the two protons to exchange pairs of gluons, which is another interesting process.

The idea of tagging deflected protons has been pursued at previous colliders, and also at the LHC back in 2012 and 2015 using only low-intensity beams. The proposal to pursue this type of physics with the LHC’s CMS and/or ATLAS experiments was first presented many years ago, but the project (under the name FP420) did not materialise.

A new project called the CMS-TOTEM Precision Proton Spectrometer (CT-PPS) has now taken up the challenge of making photon–photon physics possible at the LHC when operating at nominal luminosity. While CMS is a general-purpose detector for LHC physics, CT-PPS uses two sets of detectors placed 200 m either side of the CMS interaction point to measure protons in the forward direction. A parallel project called ATLAS Forward Physics (AFP) is also being developed by ATLAS, and both experiments aim to be in operation throughout this year’s LHC proton–proton run.

Light collisions

Despite photons being electrically neutral, the Standard Model (SM) allows two photons to interact via the exchange of virtual charged particles. Several final states are possible (figure 1), including a pair of photons. The latter process (γγ → γγ, or “light-by-light scattering”) has been known since the development of quantum electrodynamics (QED) and tested indirectly in several experiments, but the first direct evidence came last year from ATLAS in low-luminosity measurements of lead–lead collisions (CERN Courier December 2016 p9). Since the probability of emitting photons scales with the square of the electrical charge, the cross-section for lead–lead collisions is significantly higher than for proton–proton collisions. By searching for two photons and nothing else in the central detector, and applying kinematic cuts to suppress backgrounds, ATLAS probed diphoton invariant masses in the region of 10 GeV. The measured cross-section was compatible with the QED prediction and, since no deviations are expected in this low-mass range, the ATLAS result was interesting but somewhat expected.

In forward experiments such as CT-PPS and AFP, however, the high-luminosity proton collisions allow a much higher mass region to be probed – between 300 GeV and 2 TeV in the case of CT-PPS. Proton tagging is possible because centrally produced high-mass systems cause the protons to lose enough energy to be deflected into the CT-PPS detectors. The study of photon interactions in this region could therefore provide new insights about the electroweak interaction, in particular the quartic gauge couplings predicted by the SM. These are interactions where two photons annihilate upon collision to produce two W bosons, implying four particles at the same vertex in a Feynman diagram (figure 1). Deviations from the SM prediction would point to new physics, in the same way that observed deviations from the quartic (four-fermion) coupling of Fermi’s beta-decay theory in the 1930s were the forerunner to the discovery of the W boson 50 years later.

If there are new particles with masses above 300 GeV, CT-PPS could also improve CMS’s general discovery potential. For example, diphoton resonances at high mass have a very clean signature almost free of any background. Thus, in addition to precision electroweak tests, forward experiments such as CT-PPS provide an important cross-check of “bumps” in invariant mass distributions by offering complementary information about the production mechanism, coupling and quantum numbers of a possible new resonance. An example of this complementarity concerns the now-infamous 750 GeV bump in the diphoton invariant-mass distributions from the LHC’s 2015 data set. Although the bump turned out to be a statistical effect, it provided strong motivation to advance the CT-PPS physics programme at the time. Were similar bumps to be observed by CMS and ATLAS in future, CT-PPS and AFP will play an important role in determining whether a real resonance is responsible for the excesses seen in the data.

Forward thinking

Given their potential for revealing new physics, photon–photon collisions have been a topic of some interest for many decades. For example, photon–photon collisions were studied at CERN’s Large Electron–Positron collider (LEP), while studies at DESY’s HERA and Fermilab’s Tevatron colliders concentrated on interactions of protons through the exchange of gluons to probe quantum chromodynamics in the non-perturbative regime. The LHC achieves a much higher energy and luminosity than LEP, but at the price of colliding particles that are not elementary. Therefore, the elementary interactions between gluons and quarks do not have well-defined energies and the interaction products include the remnants of the two protons, making physics analyses more difficult in general.

Proton-tagged photon collisions at the LHC, on the other hand, are very clean. Since photons are elementary particles and there are no proton remnants, the photon–photon collision energy at the LHC is precisely defined by the kinematics of the two tagged protons. In conjunction with CT-PPS, CMS can therefore probe anomalous quartic couplings with much better sensitivity than before.

The physics we are interested in corresponds to the process pp → ppX, where the “pp” part is measured by the CT-PPS detectors and the system “X” is measured in the other CMS sub-detectors. In the case of the quartic coupling γγWW, for instance, the process is pp → ppWW. The two photons that merge into the two W bosons are not measured directly, but energy-momentum conservation allows all of the kinematic properties of the WW pair to be deduced much more precisely from the CT-PPS proton measurements than could be achieved from the measurements of W decay products with the CMS detector alone.
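
The reconstruction rests on standard two-photon kinematics for forward proton spectrometers (conventional notation, not taken from the article): if the two tagged protons lose fractions ξ₁ and ξ₂ of their beam momentum, then, neglecting their small transverse momenta,

```latex
% Central system X reconstructed from the fractional momentum losses
% \xi_1, \xi_2 of the two tagged protons (transverse momenta neglected).
\[
  M_X = \sqrt{\xi_1\,\xi_2\,s},
  \qquad
  y_X = \tfrac{1}{2}\ln\frac{\xi_1}{\xi_2}.
\]
```

With √s = 13 TeV, the quoted 300 GeV–2 TeV mass window corresponds to fractional momentum losses of very roughly 2–15% when ξ₁ ≈ ξ₂.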

The CT-PPS detectors are located on either side of CMS, 200 m from the interaction point. They rely on objects called Roman Pots (RP), which are cylinders that allow small detectors to be moved into the LHC beam pipe so that they sit a mere few mm from the beam. The RPs of TOTEM are designed to operate under special LHC runs with a small number of collisions per second. However, the physics goals of CT-PPS require the RPs to operate during normal CMS data-taking, when the LHC provides a much higher number of collisions per second. The first and most important goal of the CT-PPS project was therefore to demonstrate that the detectors could operate successfully only a few millimetres from the LHC’s high-intensity beams. The final demonstration happened between April and May 2016, and the green light for CT-PPS operation in regular high-luminosity LHC running was given the following month.

Success so far

The CT-PPS project redesigned the RPs to suit these harsh operating conditions. In collaboration with LHC teams, they also conducted a thorough programme of RP insertions at increasingly close distances to the beam, measuring the impact on beam monitors. Great care must be taken not to disrupt the beam, since if the protons start to scrape the RPs there would be an increase in secondary particles that would trigger a beam dump. In 2016, CT-PPS used non-final detectors to collect 15.2 fb⁻¹ of data integrated in the CMS data set. CT-PPS has proven for the first time the feasibility of operating a near-beam proton spectrometer at high luminosity on a regular basis and has paved the way for other such spectrometers.

CT-PPS is also facing big challenges in the development of the final detectors. The tracking detectors have a surface area of just 2 cm² and reside in two RPs located 10 m apart on either side of the collision point (for a total of four stations). Six planes of silicon pixels on each station will detect the track of the flying protons to provide direction information, and the magnetic field of the LHC’s magnets will serve as the proton-deflecting field. The devices themselves have to sustain exceedingly high radiation fluxes given their proximity to the beam: a proton fluence in excess of 5 × 10¹⁵ particles/cm² is expected after an integrated luminosity of 100 fb⁻¹. CMS’s own tracker will not face these radiation conditions until the HL-LHC enters operation in the mid 2020s.

From 2017 onwards, CT-PPS will be using new 3D pixel technology that has been developed in view of upgrades to the CMS tracker and will therefore provide valuable experience with the new sensors. The project also relies on high-precision timing detectors. CT-PPS matches the primary vertex of the collision measured in the central detector with the vertex position obtained from the difference in arrival times of the two protons, so that it can reject the background from spurious collisions piling up in the same bunch crossing. A time precision of 20 ps makes it possible to estimate the z-vertex with the 3 mm accuracy needed to reduce the background sufficiently. The timing detectors used diamond sensors in 2016 and will add silicon low-gain avalanche diodes this year. Again, the experience acquired in a high-rate and high-radiation environment will be most valuable for CMS upgrades for HL-LHC.
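
The quoted 20 ps and 3 mm figures are related by simple time-of-flight geometry. The sketch below assumes the two protons are detected symmetrically on either side of CMS, with t₊ and t₋ their arrival times:

```latex
% Longitudinal vertex position from the proton arrival-time difference,
% and the resolution implied by a combined 20 ps timing precision.
\[
  z_{pp} = \frac{c}{2}\,(t_{+} - t_{-}),
  \qquad
  \sigma_z \approx \frac{c}{2}\,\sigma_{\Delta t}
           \approx \tfrac{1}{2}\,(3\times 10^{8}\ \mathrm{m/s})(20\ \mathrm{ps})
           \approx 3\ \mathrm{mm}.
\]
```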

Meanwhile, the ATLAS collaboration installed one arm of the AFP experiment in early 2016 and has taken data in special low-luminosity runs to study diffraction. The second AFP arm, with horizontal RP stations similar to those of CT-PPS, has also since been installed and its four-layer 3D silicon pixel detectors and new Cherenkov-based time-of-flight detectors are being assembled. They will be installed and commissioned before the LHC restarts in May this year. Like CT-PPS, AFP aims to participate in high-luminosity running throughout the year, with both operating in tandem to enhance the LHC’s search for new physics.

Funding injection for SNOLAB https://cerncourier.com/a/funding-injection-for-snolab/ Wed, 15 Feb 2017 09:00:00 +0000

The SNOLAB laboratory in Ontario, Canada, has received a grant of $28.6m to help secure its next three years of operations. The facility is one of 17 research facilities to receive support through Canada’s Major Science Initiative (MSI) fund, which exists to secure state-of-the-art national research facilities.

SNOLAB, which is located in a mine 2 km beneath the surface, specialises in neutrino and dark-matter physics and claims to be the deepest cleanroom facility in the world. Current experiments located there include: PICO and DEAP-3600, which search for dark matter using bubble-chamber and liquid-argon technology, respectively; EXO, which aims to measure the mass and nature of the neutrino; HALO, designed to detect supernovae; and a new neutrino experiment SNO+ based on the existing SNO detector.

The new funds will be used to employ the 96-strong SNOLAB staff and support the operations and maintenance of the lab’s facilities.

ProtoDUNE revealed https://cerncourier.com/a/protodune-revealed/ Wed, 15 Feb 2017 09:00:00 +0000

This 11 m-high structure with thick steel walls will soon contain a prototype detector for the Deep Underground Neutrino Experiment (DUNE), a major international project based in the US for studying neutrinos and proton decay. It is being assembled in conjunction with CERN’s Neutrino Platform, which was established in 2014 to support neutrino experiments hosted in Japan and the US (CERN Courier July/August 2016 p21), and is pictured here in December as the roof of the structure was lowered into place. Another almost identical structure is under construction nearby and will house a second prototype detector for DUNE. Both are being built at CERN’s new “EHN1” test facility, which was completed last year at the north area of the laboratory’s Prévessin site.

DUNE, which is due to start operations in the next decade, will address key outstanding questions about neutrinos. In addition to determining the ordering of the neutrino masses, it will search for leptonic CP violation by precisely measuring differences between the oscillations of muon-type neutrinos and antineutrinos into electron-type neutrinos and antineutrinos, respectively (CERN Courier December 2015 p19). To do so, DUNE will consist of two advanced detectors placed in an intense neutrino beam produced at Fermilab’s Long-Baseline Neutrino Facility (LBNF). One will record particle interactions near the source of the beam before the neutrinos have had time to oscillate, while a second, much larger detector will be installed deep underground at the Sanford Underground Research Laboratory in Lead, South Dakota, 1300 km away.


In collaboration with CERN, the DUNE team is testing technology for DUNE’s far detector based on large liquid-argon (LAr) time-projection chambers (TPCs). Two different technologies are being considered – single-phase and double-phase LAr TPCs – and the eventual DUNE detectors will comprise four modules, each with a total LAr mass of 17 kt. The single-phase technique is well established, having been deployed in the ICARUS experiment at Gran Sasso, while the double-phase concept offers potential advantages. Both may be used in the final DUNE far detector. Scaling LAr technology to such industrial levels presents several challenges – in particular the very large cryostats required, which has led the DUNE collaboration to use technological solutions inspired by the liquefied-natural-gas (LNG) shipping industry.

The outer structure of the cryostat  (red, pictured at top) for the single-phase protoDUNE module is now complete, and an equivalent structure for the double-phase module is taking shape just a few metres away and is expected to be complete by March. In addition, a smaller technology demonstrator for the double-phase protoDUNE detector is complete and is currently being cooled down at a separate facility on the CERN site (image above). The 3 × 1 × 1 m3 module will allow the CERN and DUNE teams to perfect the double-phase concept, in which a region of gaseous argon situated above the usual liquid phase provides additional signal amplification.

The large protoDUNE modules are planned to be ready for test beam by autumn 2018 at the EHN1 facility using dedicated beams from the Super Proton Synchrotron. Given the intensity of the future LBNF beam, for which Fermilab’s Main Injector recently passed an important milestone by generating a 700 kW, 120 GeV proton beam for a period of more than one hour, the rate and volume of data produced by the DUNE detectors will be substantial. Meanwhile, the DUNE collaboration continues to attract new members and discussions are now under way to share responsibilities for the numerous components of the project’s vast far detectors (see “DUNE collaboration meeting comes to CERN” in this month’s Faces & Places).

Crystal Clear celebrates 25 years of success https://cerncourier.com/a/crystal-clear-celebrates-25-years-of-success/ Fri, 14 Oct 2016 07:00:00 +0000 Advanced scintillating materials have found their way into novel detectors for physics and medicine.

3D PET/CT image

The Crystal Clear (CC) collaboration was approved by CERN’s Detector Research and Development Committee in April 1991 as experiment RD18. Its objective was to develop new inorganic scintillators that would be suitable for electromagnetic calorimeters in future LHC detectors. The main goal was to find dense and radiation-hard scintillating material with a fast light emission that can be produced in large quantities. This challenge required a large multidisciplinary effort involving world experts in different aspects of material sciences – including crystallography, solid-state physics, luminescence and defects in solids.

From 1991 to 1994, the CC collaboration carried out intensive studies to identify the most suitable scintillator material for the LHC experiments. Three candidates were identified and extensively studied: cerium fluoride (CeF3), lead tungstate (PbWO4) and heavy scintillating glass. In 1994, lead tungstate was chosen by the CMS and ALICE experiments as the most cost-effective crystal compliant with the operational conditions at the LHC. Today, 75,848 lead-tungstate crystals are installed in the CMS electromagnetic calorimeter and 17,920 in ALICE. The former contributed to the discovery of the Higgs boson, which was identified in 2012 by CMS and the ATLAS experiment via, among other channels, its decay into two photons. The CC collaboration’s generic R&D on scintillating materials has brought a deep understanding of cerium ions as scintillation activators and has seen the development of lutetium and yttrium aluminium perovskite crystals for both physics and medical applications.

From physics to medicine

In 1997, the CC collaboration made its expertise in scintillators available to industry and society at large. Among the most promising sectors were medical functional imaging and, in particular, positron emission tomography (PET), due to its growing importance in cancer diagnostics and similarities with the functionality of electromagnetic calorimeters (the principle of detecting gamma rays in a PET scanner is identical to that in high-energy physics detectors).

Following this, CC collaboration members developed and constructed several dedicated PET prototypes. The first, which was later commercialised by Raytest GmbH in Germany under the trademark ClearPET, was a small-animal PET machine used for radiopharmaceutical research. At the turn of the millennium, five ClearPET prototypes characterised by a spatial resolution of 1.5 mm were built by the CC collaboration, which represented a major breakthrough in functional imaging at that time. The same crystal modules were also developed by the CC team at Forschungszentrum Jülich, Germany, to image plants in order to study carbon transport. A modified ClearPET geometry was also combined with X-ray single-photon detectors by CC researchers at CPPM Marseille, offering simultaneous PET and computed-tomography (CT) acquisition, and providing the first PET/CT simultaneous images of a mouse in 2015 (see image above). The simultaneous use of CT and PET allows the excellent position resolution of anatomic imaging (providing detailed images of the structure of tissues) to be combined with functional imaging, which is sensitive to the tissue’s metabolic activity.

After the success of ClearPET, in 2002, CC developed a dedicated PET camera for breast imaging called ClearPEM. This system had a spatial resolution of 1.3 mm and represented the first PET imaging based on avalanche photodiodes, which were initially developed for the CMS electromagnetic calorimeter. The machine was installed in Coimbra, Portugal, where clinical trials were performed. In 2005, a second ClearPEM machine combined with 3D ultrasound and elastography was developed with the aim of providing anatomical and metabolic information to allow better identification of tumours. This machine was installed in Hôpital Nord in Marseille, France, in December 2010 for clinical evaluations of 10 patients, and three years later it was moved to the San Girardo hospital in Monza, Italy, to undertake larger clinical trials, which are ongoing.

In 2011, a European FP7 project called EndoTOFPET-US, which was a consortium of three hospitals, three companies and six institutes, began the development of a prototype for a novel bi-modal time-of-flight PET and ultrasound endoscope with a spatial resolution better than 1 mm and a time resolution of 200 ps. This was aimed at the detection of early stage pancreatic or prostatic tumours and the development of new biomarkers for pancreatic and prostatic cancers. Two prototypes have been produced (one for pancreatic and one for prostate cancers) and the first tests on a phantom-prostate prototype were performed in spring 2015 at the CERIMED centre in Marseille. Work is now ongoing to improve the two prototypes, in view of preclinical and clinical operation.

In addition to developing the ClearPET detectors, members of the collaboration have initiated the development of the Monte Carlo software package GATE, a GEANT4-based tool for simulating complete PET detector systems.

Clear impact

In 1992, the CC collaboration organised the first international conference on inorganic scintillators and their applications, which led to a global scientific community of around 300 people. Today, this community comes together every two years at the SCINT conferences, the next instalment of which will take place in Chamonix, France, from 18 to 22 September 2017.

To this day, the CC collaboration continues its investigations into new scintillators and understanding their underlying scintillation mechanisms and radiation-hardness characteristics – in addition to the development of detectors. Among its most recent activities is the investigation of key parameters in scintillating detectors that enable very precise timing information for various applications. These include mitigating the effect of “pile-up” caused by the high event rate at particle accelerators operating at high peak luminosities, and also medical applications in time-of-flight PET imaging. This research requires the study of new materials and processes to identify ultrafast scintillation mechanisms such as “hot intraband luminescence” or quantum-confined excitonic emission with sub-picosecond rise time and sub-nanosecond decay time. It also involves investigating the enhancement of the scintillator light collection by using various surface treatments, such as nano-patterning with photonic crystals. CC recently initiated a European COST Action called Fast Advanced Scintillator Timing (FAST) to bring together European experts from academia and industry to ultimately achieve scintillator-based detectors with a time precision better than 100 ps, which provides an excellent training opportunity for researchers interested in this domain.
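
To illustrate why timing at this level matters for time-of-flight PET (a hedged aside, not part of the original article): the arrival-time difference of the two back-to-back 511 keV annihilation photons localises the decay along the line of response as Δx = cΔt/2, so better coincidence timing confines each event to a shorter segment and improves the image signal-to-noise.

# Hedged illustration (not from the article): localisation along the line of
# response in time-of-flight PET as a function of coincidence-timing precision,
# using delta_x = c * delta_t / 2.
c = 2.998e8                                  # speed of light, m/s
for dt_ps in (500, 200, 100):
    dx_mm = c * (dt_ps * 1e-12) / 2 * 1e3    # localisation segment, mm
    print(f"{dt_ps} ps -> ~{dx_mm:.0f} mm along the line of response")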

Among other recent activities of the CC collaboration are new crystal-production methods. Micro-pulling-down techniques, which allow inorganic scintillating crystals to be grown in the shape of fibres with diameters ranging from 0.3 to 3 mm, open the way to attractive detector designs for future high-energy physics experiments by replacing a block of crystals with a bundle of fibres. A Horizon 2020 European RISE Marie Skłodowska-Curie project called Intelum has been set up by the CC collaboration to explore the cost-effective production of large quantities of fibres. More recently, the development of new PET crystal modules has been launched by CC collaborators. These make use of silicon-photomultiplier photodetectors and offer high spatial resolution (1.5 mm), depth-of-interaction capability (better than 3 mm) and fast timing resolution (better than 200 ps).

Future directions

For the past 25 years, the CC collaboration has actively carried out R&D on scintillating materials, and investigated their use in novel ionising radiation-detecting devices (including read-out electronics and data acquisition) for use in particle-physics and medical-imaging applications. In addition to significant progress made in the understanding of scintillation mechanisms and radiation hardness of different materials, the choice of lead tungstate for the CMS electromagnetic calorimeter and the realisation of various prototypes for medical imaging are among the CC collaboration’s highlights so far. It is now making important contributions to understanding the key parameters for fast-timing detectors.

The various activities of the CC collaboration, which today has 29 institutional members, have resulted in more than 650 publications and 72 PhD theses. The motivation of CC collaboration members and the momentum generated throughout its many projects open up promising perspectives for the future of inorganic scintillators and their use in HEP and other applications.

• An event to celebrate the 25th anniversary of the CC collaboration will take place at CERN on 24 November.

CERN explores opportunities for physics beyond colliders https://cerncourier.com/a/cern-explores-opportunities-for-physics-beyond-colliders/ Fri, 14 Oct 2016 07:00:00 +0000

Our understanding of nature’s fundamental constituents owes much to particle colliders. Notable discoveries include the W and Z bosons at CERN’s Super Proton Synchrotron in the 1980s, the top quark at Fermilab’s Tevatron collider in the 1990s and the Higgs boson at CERN’s LHC in 2012. While colliding particles at ever higher energies is still one of the best ways to search for new phenomena, experiments at lower energies can also address fundamental-physics questions.

The Physics Beyond Colliders kick-off workshop, which was held at CERN on 6–7 September, brought together a wide range of physicists from the theory, experiment and accelerator communities to explore the full range of research opportunities presented by the CERN complex. The considered timescale for such activities reaches as far as 2040, corresponding roughly to the operational lifetime of the LHC and its high-luminosity upgrade. The study group has been charged with pulling together interested parties and exploring the options in appropriate depth, with the aim of providing input to the next update to the European Strategy for Particle Physics towards the end of the decade.

As the name of the workshop and study group suggests, a lot of interesting physics can be tested in experiments that are complementary to colliders. Ideas discussed at the September event ranged from searching for particles with masses far below an eV up to more than 10¹⁵ eV, to prospects for dark matter and even dark-energy studies.

Theoretical motivation

Searches for electric and magnetic dipole moments in elementary particles are a rich experimental playground, and the enormous precision of such experiments allows a wide range of new physics to be tested. The long-standing deviation of the muon magnetic moment (g-2) from the Standard Model prediction could indicate the presence of relatively heavy supersymmetric particles, but also the presence of relatively light “dark photons”, which are also a possible messenger to the dark-matter sector. A confirmation, or not, of the original g-2 measurement and experimental tests of other models will provide important input to this issue.

Electric dipole moments are inherently linked to the violation of charge–parity (CP) symmetry, which is a necessary ingredient to explain the origin of the baryon asymmetry of the universe. While CP violation has been observed in weak interactions, it is notably absent in strong interactions. For example, no electric dipole moment of the neutron has been observed so far. Resolving this so-called strong-CP problem provides significant motivation for hypothesising the existence of a new elementary particle called the axion. Indeed, axion-like particles are not only natural dark-matter candidates but also arise abundantly in well-motivated extensions of the Standard Model, such as string theory. Axions could help to explain a number of astrophysical puzzles. They may also be connected to inflation in the very early universe and to the generation of neutrino masses, and could even be involved in the hierarchy problem.

Neutrinos are the source of a wide range of puzzles, but also of opportunities. Interestingly, essentially all current experiments and observations – including those of dark matter – can be explained by a very minimal extension of the Standard Model: the addition of three right-handed neutrinos. In fact, theorists’ ideas range far beyond that, motivating the existence of whole sectors of weakly coupled particles below the Fermi scale.

Ambitions may even lead to tackling one of the most challenging of questions: dark energy. While the effective couplings between ordinary matter and dark energy must be quite small, there is still significant room for observable effects in low-energy experiments, for example using atom interferometry.

Experimental opportunities

It is clear that CERN’s priority over the coming years is the full exploitation of the LHC – first in its present guise and then, from 2026, as the High-Luminosity LHC (HL-LHC). The HL-LHC places stringent demands on intensity and related characteristics, and a major upgrade of the LHC injectors is planned during Long Shutdown 2 (LS2) beginning in 2019 to provide beams in the HL-LHC era. Despite this, the LHC doesn’t actually use many protons. This leaves the other facilities at CERN open to exploit the considerable beam-production capabilities of the accelerator complex.

CERN already has a diverse and far-sighted experimental programme based on the LHC injectors. This spans the ISOLDE radioactive beam facility, the neutron time-of-flight facility (nTOF), the Antiproton Decelerator (AD), the High-Radiation to Materials (HiRadMat) facility, the plasma-wakefield experiment AWAKE, and the North and East experimental areas. CERN’s proton-production capabilities are already heavily used and will continue to be well-solicited in the coming years. A preliminary forecast shows that there is potential capacity to support one more major SPS experiment after the injector upgrade.

The AD is a classic example of CERN’s existing non-collider-based facilities. This unique antimatter factory has several experiments studying the properties of antiprotons and anti-hydrogen atoms in detail. Here, in the experimental domain, the time constant for technological evolution is much shorter than it is for large high-energy detectors. The AD is currently being upgraded with the ELENA ring, which will increase by two orders of magnitude the trapping efficiency of anti-hydrogen atoms and will allow different experiments to operate in parallel. After LS2, ELENA will serve all AD experiments and will secure CERN’s antimatter research into the next decade. The ISOLDE and nTOF facilities also offer opportunities to investigate fundamental questions such as the unitarity of the quark-mixing matrix, parity violation or the masses of the neutrinos.

The three main experiments of the North Area – NA61, COMPASS and NA62 – have well-defined programmes until the time of LS2 and all have longer term plans. After completion of its search for a QCD critical point, NA61 plans to further study QCD deconfinement with emphasis on charm signals. It will also remain a unique facility to constrain hadron production in primary proton targets for future neutrino beams in the US and Japan. The Common Muon and Proton Apparatus for Structure and Spectroscopy (COMPASS) experiment, meanwhile, intends to further study the hadron structure and spectroscopy with RF-separated beams of higher intensity in order to study fundamental physics linked to quantum chromodynamics.

An independent proposal submitted to the workshop involved using muon beams from the SPS to make precision measurements of μ–e elastic scattering, which could reduce by a factor of two the present theoretical hadronic uncertainty on g-2 for future precision experiments. Once NA62 reaches its intended precision on its measurement of the rare decay K+ → π+νν̄, the collaboration plans comprehensive measurements in the K sector in addition to one year of operation in beam-dump mode to search for heavy neutral leptons such as massive right-handed neutrinos. In the longer term, NA62 aims to study the rare decay K0 → π0νν̄, which would require a similar but expanded apparatus and a high-intensity K0 beam. In general, rare decays might reveal deviations from the Standard Model that indicate the presence of new heavy particles that alter the decay rate.

Fixed ambitions

The September workshop heard proposals for new ambitious fixed-target facilities that would complement existing experiments at CERN. A completely new development at CERN’s North Area is the proposed SPS beam-dump facility (BDF). Beam dump in this context implies a target that absorbs all incident protons and contains most of the cascade generated by the primary-beam interaction. The aim is for a general-purpose fixed-target facility, which in the initial phase will facilitate a general search for weakly interacting “hidden” particles. The Search for HIdden Particles (SHiP) experiment plans to exploit the unique high-energy, high-intensity features of the SPS beam to perform a comprehensive investigation of the dark sector in the few-GeV mass range (CERN Courier March 2016 p25). A complementary approach, based on observing missing energy in the products of high-energy interactions, is currently being explored by NA64 on an electron test beam, and the experiment team has proposed to extend its programme to muon and hadron beams in the future.

From an accelerator perspective, the BDF is a challenging undertaking and will involve the development of a new extraction line and a sophisticated target and target complex with due regard to radiation-protection issues. More generally, the foreseen North Area programme requires high intensity and slow extraction from the SPS, and this poses some serious accelerator challenges. A closer look at these reveals the need for a concerted programme of studies and improvements to minimise extraction beam loss and associated activation of hardware with its attendant risks.

Fixed-target experiments with LHC beams could be carried out using either crystal extraction or an internal gas jet, and initially these might operate in parasitic mode upstream from existing detectors (LHCb or ALICE). Combined with the high LHC beam energy, an internal gas target would open up a new kinematic range to hadron and heavy-ion measurements, while beam extraction using crystals was proposed to measure the magnetic moments of short-lived baryons.

New facilities to complement fixed-target experiments are also under consideration. A small all-electric storage ring would provide a precision measurement of the proton electric dipole moment (EDM) and could test for new physics at the 100 TeV scale, while a mixed electric/magnetic ring would extend such measurements to the deuteron EDM. The physics motivation for these facilities is strong, and from an accelerator standpoint such storage rings are an interesting challenge in their own right (CERN Courier September 2016 p27).

A dedicated gamma factory is another exciting option being explored. Partially stripped ions interacting with photons from a laser have the potential to provide a powerful source of gamma rays. Driven by the LHC, such a facility would increase by seven orders of magnitude the intensity currently achievable in electron-driven gamma-ray beams. The proposed nuSTORM project, meanwhile, would provide well-defined neutrino beams for precise measurements of the neutrino cross-sections and represent an intermediate step towards a neutrino factory or a muon collider.

Last but not least, there are several non-accelerator projects that stand to benefit from CERN’s technological expertise and infrastructure, in line with the existing CAST and OSQAR experiments. CAST (CERN Axion Solar Telescope) uses one of the LHC dipole magnets to search for axions produced in the Sun, while OSQAR attempts to produce axions in the laboratory. Researchers working on IAXO, the next-generation axion helioscope foreseen as a significantly more powerful successor to CAST, have expressed great interest in co-operating with CERN on the design and running of the experiment’s large toroidal magnet. The high-field magnets developed at CERN would also increase the reach of future axion searches in the laboratory as a follow-up of OSQAR at CERN or ALPS at DESY. DARKSIDE, a flagship dark-matter search to be sited in Gran Sasso, also has technological synergies with CERN in the cryogenics, liquid-argon and silicon-photomultiplier domains.

Next steps

Working groups are now being set up to assess the physics case of the proposed projects in a global context, and also their feasibility and possible implementation at CERN or elsewhere. A follow-up Physics Beyond Colliders workshop is foreseen in 2017, and the final deliverable is due towards the end of 2018. It will consist of a summary document that will help the European Strategy update group to define its orientations for non-collider fundamental-particle-physics research in the next decade.

AIDA-2020 calls for breakthrough detector technologies https://cerncourier.com/a/aida-2020-calls-for-breakthrough-detector-technologies/ Fri, 12 Aug 2016 08:00:00 +0000

The European Union project AIDA-2020 (Advanced European Infrastructure for Detectors and Accelerators), in which CERN is a partner, has launched a proof-of-concept fund for breakthrough projects in the field of detector development and testing.

The fund will provide up to €200k in total to support innovative projects with a focus on industry-orientated applications and those beyond high-energy physics. Up to four projects will be funded based on a competitive selection process, and the deadline is 20 October 2016. More information can be found at aida2020.web.cern.ch/content/poc.

Storage ring steps up search for electric dipole moments https://cerncourier.com/a/storage-ring-steps-up-search-for-electric-dipole-moments/ Fri, 12 Aug 2016 07:00:00 +0000

The fact that we and the world around us are made of matter and only minimal amounts of antimatter is one of the fundamental puzzles in modern physics, motivating a variety of theoretical speculations and experimental investigations. The combined standard models of cosmology and particle physics suggest that at the end of the inflation epoch immediately following the Big Bang, the number of particles and antiparticles were almost in precise balance. Yet the laws of physics contrived to act differently on matter and antimatter to generate the apparently large imbalance that we observe today.

One of the necessary mechanisms required for this to happen – namely CP violation – is very small in the Standard Model of particle physics and therefore only able to account for a tiny fraction of the actual imbalance. New sources of CP violation are needed, and one such potential signature would be the appearance of electric dipole moments (EDMs) in fundamental particles.

Electric dipole moments

An EDM originates from a permanent charge separation inside the particle. In its centre-of-mass frame, the ground state of a subatomic particle has no direction at its disposal except its spin, which is an axial vector, while the charge separation (EDM) corresponds to a polar vector (see panel). Therefore, if such a particle with nonzero mass and spin possesses an EDM, it must violate both parity (P) and time-reversal (T) invariance. If the combined CPT symmetry is to be valid, T violation also implies breaking of the combined CP symmetry. The Standard Model predicts the existence of EDMs, but their sizes (in the range of 10⁻³¹ to 10⁻³³ e·cm for nucleons) fall many orders of magnitude below the sensitivity of current measurements and still far below the expected levels of projected experiments. An EDM observation at a much higher value would therefore be a clear and convincing sign of new physics beyond the current Standard Model (BSM).

BSM theories such as supersymmetry (SUSY), technicolour, multi-Higgs models and left–right symmetric models generally predict nucleon EDMs in the range of 10⁻²⁴ to 10⁻²⁸ e·cm (part of the upper region of this range is already excluded by experiment). Although tiny, EDMs of this size would be large enough to be observed by a new generation of highly sensitive accelerator-based experiments with charged particles such as the proton and deuteron. In this respect, EDMs offer a complementary approach to searches for BSM physics at collider experiments, probing scales far beyond the reach of present high-energy machines such as the LHC. For example, in certain SUSY scenarios the present observed EDM limits provide information about physics at the TeV or even PeV scales, depending on the mass scale of the supersymmetric mechanisms and the strength of the CP-violating SUSY phase parameters (figure 1).

Researchers have been searching for EDMs in neutral particles, especially neutrons, for more than 50 years, by trapping and cooling particles in small volumes and using strong electric fields. Despite an enormous improvement in sensitivity, however, these experiments have only produced upper bounds. The current upper limit of approximately 10⁻²⁶ e·cm for the EDM of the neutron is an amazingly accurate result: if we had inflated the neutron so that it had the radius of the Earth, the EDM would correspond to a separation between positive and negative charges of about 1 μm. An upper limit of less than 10⁻²⁹ e·cm has also been reported for a special isotope of mercury, but the Coulomb screening by the atom’s electron cloud makes it difficult to directly relate this number to the permanent EDMs of the neutrons and protons in its nucleus. For the electron, meanwhile, the reported EDM limits on more complicated polar molecules can be used to deduce a bound of about 10⁻²⁸ e·cm – which is even further away from the Standard Model prediction (10⁻³⁸ e·cm) than is the case for the neutron.
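
The Earth-radius analogy can be checked with a rough scaling argument (not from the article); the neutron radius (about 0.8 fm) and Earth radius (about 6400 km) used below are assumed values, not figures quoted in the text.

# Rough check of the Earth-radius analogy (assumed inputs: neutron radius
# ~0.8 fm, Earth radius ~6.4e6 m; neither figure is quoted in the article).
d_limit = 1e-26 * 1e-2      # neutron-EDM limit of 10^-26 e.cm, expressed in e.m
sep_neutron = d_limit       # charge separation per unit charge e, in metres
scale = 6.4e6 / 0.8e-15     # ratio of Earth radius to assumed neutron radius
print(f"scaled charge separation ~ {sep_neutron * scale * 1e6:.1f} micrometres")   # of order 1 um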

Storage-ring solution

Although these experiments provide useful constraints on BSM theories, a new class of experiments based on storage rings is needed to measure the electric dipole moment of charged particles (such as the proton, deuteron or helium-3). These highly sensitive accelerator-based experiments will allow the EDM of charged particles to be inferred from their very slow spin precession in the presence of large electric fields, and promise to reach a sensitivity of 10⁻²⁹ e·cm. This is due mainly to the larger number of particles available in a stored beam compared with the ultra-cold neutrons usually found in trap experiments, and also the potentially longer observation time possible because such experiments are not limited by the particle decay time. Storage-ring experiments would span the range of EDM sizes where new CP violation is expected to lie. Furthermore, the ability to measure EDMs of more than one type of particle will help to constrain the origin of the CP-violating source because not all particles are equally sensitive to the various CP-violating mechanisms.

At the Cooler Synchrotron “COSY” located at the Forschungszentrum Jülich (FZJ), Germany, the JEDI (Jülich Electric Dipole moment Investigations) collaboration is working on a series of feasibility studies for such a measurement using an existing conventional hadron storage ring. COSY, which is able to store both polarised proton and deuteron beams with a momentum up to 3.7 GeV/c, is an ideal machine for the development and commissioning of the necessary technology. This R&D work has recently replaced COSY’s previous hadron-physics programme of particle production and rare decays, although some other service and user activities continue.

A first upper limit for an EDM directly measured in a storage ring was obtained for muons at the (g–2) experiment at Brookhaven National Laboratory (BNL) in the US, but the measurement was not optimised for sensitivity to the EDM. Subsequently, scientists at BNL began to explore what would be needed to fully exploit the potential of a storage-ring experiment. While much initial discussion for an EDM experiment also took place at Brookhaven, commitments to the Relativistic Heavy Ion Collider (RHIC) operation and planning for a potential Electron–Ion Collider have prevented further development of such a project there. Therefore the focus shifted to FZJ and the COSY storage ring in Germany, where the JEDI collaboration was formed in 2011 to address the EDM opportunity.

The measuring principle is straightforward: a radial electric field is applied to an ensemble of particles circulating in a storage ring with their polarisation vector (or spin) initially aligned with their momentum direction. Maintaining the polarisation in this direction requires a storage ring in which the bending elements are a carefully matched set of both vertical magnetic fields and radial electric fields. The field strengths must be chosen such that the precession rate of the polarisation matches the circulation rate of the beam (called the “frozen spin”). For particles such as the proton with a positive gyromagnetic anomaly, this can be achieved by using only electric fields and choosing just the right “magic” momentum value (around 0.7 GeV/c). For deuterons, which have a negative gyromagnetic anomaly, a combination of electric and magnetic fields is required, but in this case the frozen spin condition can be achieved for a wide range of momentum and electric/magnetic field combinations. Such combined fields may also be used for the proton and would allow the experiment to operate at momenta other than the magic value.

The existence of an EDM would generate a torque that slowly rotates the spin out of the plane of the storage ring and into the vertical plane (see panel). This slow change in the vertical polarisation is measured by sampling the beam with elastic scattering off a carbon target and looking for a slowly increasing left–right asymmetry in the scattered particle flux. For an EDM of 10⁻²⁹ e·cm and an electric field of 10 MV/m, this would happen at an angular velocity of 3·10⁻⁹ rad·s⁻¹ (about 1/100th of a degree per day of continuous operations). This requires the measurement to be sensitive at a level never reached before in a storage ring. To obtain a statistically significant result, the polarisation in the ring plane must last for approximately 1000 s during a single fill of the ring, while the scattering asymmetry from the carbon target must reach levels above 10⁻⁶ to be measurable within a year of running.
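
These numbers can be reproduced with a minimal estimate (not from the article), assuming the standard spin-precession relation ω ≈ 2dE/ħ for the EDM-induced rotation of the spin out of the ring plane.

# Minimal numerical check (assumption: EDM-induced precession rate omega ~ 2*d*E/hbar).
e = 1.602e-19            # elementary charge, C
hbar = 1.055e-34         # reduced Planck constant, J*s
d = 1e-29 * e * 1e-2     # EDM of 10^-29 e.cm, converted to C*m
E = 10e6                 # radial electric field, V/m
omega = 2 * d * E / hbar
print(f"omega ~ {omega:.1e} rad/s")   # ~3e-9 rad/s, matching the value quoted above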

Milestones passed

Following the commissioning of a measurement system that stores the clock time of each recorded event in the beam polarimeter with respect to the start of the accelerator cycle, the JEDI collaboration has passed a series of important milestones in recent years. For the deuteron beam at COSY, these time stamps make it possible to unfold for the first time the rapid rotation of the polarisation in the ring plane (which has a frequency of around 120 kHz) that arises from the gyromagnetic anomaly. In a one-second time interval, the number of polarisation revolutions can be counted and the final direction of the polarisation determined to better than 0.1 rad (see figure 2). The magnitude of the polarisation may decline slowly due to decoherence effects in the storage ring, as can be seen in subsequent polarisation measurements within a single fill.
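
The quoted 120 kHz can be cross-checked with the standard relation f_spin = |G|·γ·f_rev for the in-plane precession relative to the momentum (a hedged estimate, not from the article); the deuteron anomaly of about 0.143, a beam momentum of about 0.97 GeV/c and a COSY circumference of about 184 m used below are assumed values not given in the text.

# Hedged cross-check of the ~120 kHz in-plane precession frequency using
# f_spin = |G| * gamma * f_rev (assumed inputs: deuteron anomaly |G| ~ 0.143,
# beam momentum ~0.97 GeV/c, COSY circumference ~184 m).
import math
G, p, m, C = 0.143, 0.97, 1.8756, 184.0   # |anomaly|, GeV/c, GeV/c^2, metres
E = math.hypot(p, m)                      # total energy, GeV
gamma, beta = E / m, p / E
f_rev = beta * 2.998e8 / C                # revolution frequency, Hz
print(f"f_spin ~ {G * gamma * f_rev / 1e3:.0f} kHz")   # ~120 kHz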

Maintaining the polarisation requires the cancellation of effects that may cause the particles in the beam to differ from one another. Bunching and electron cooling serves to remove much of this spurious motion, but particle path lengths around the ring may differ if particles in the beam have transverse oscillations with different amplitudes. Recently, we demonstrated that the effect of these differences on polarisation decoherence can be removed by applying correcting sextupole fields to the ring. As a result, we can now achieve polarisation lifetimes in the horizontal plane of more than 1000 s – as required for the EDM experiment (figure 3). In the past year, the JEDI group has also shown that by determining errors in the polarisation direction and feeding this back to make small changes in the ring’s radio-frequency, the direction of the polarisation may be maintained at the level of 0.1 rad during any chosen time period. This is a further requirement for managing the polarisation in the ring for the EDM measurement.

In early 2016, the European Research Council awarded an advanced research grant to the Jülich group to support further developmental efforts. The five-year grant, starting in October, will support a consortium that also includes RWTH Aachen University in Germany and the University of Ferrara in Italy. The goal of the project is to conduct the first measurement of the deuteron EDM. Since the COSY polarisation cannot be maintained parallel to its velocity (because no combined electric and magnetic bending elements exist), a novel device called a radiofrequency Wien filter will be installed in the ring to slowly accumulate the EDM signal (the filter influences the spin motion without acting on the particle’s orbit). The idea is to exploit the electric fields created in the particle rest system by the magnetic fields of the storage-ring dipoles, which would allow the first ever measurement of the deuteron EDM.

COSY is also an important test facility for many EDM-related technologies, among them new beam-position monitoring, control and feedback systems. High electric fields and combined electric/magnetic deflectors may also find applications in other fields, such as accelerator science. Many checks for systematic errors will be undertaken, and a technical design report for a future dedicated storage ring will be prepared. The most significant challenges will come from small imperfections in the placement and orientation of ring elements, which may cause stray field components that generate the accumulation of an EDM-like signal. The experiment is most sensitive to radial magnetic fields and vertical electric fields. Similar effects may arise through the non-commutativity of spurious rotations within the ring system, and efforts are under way to model these effects via spin tracking supported with beam testing. Eventually, many such effects may be reduced or eliminated by comparing the signal accumulation rates seen with beams travelling in opposite directions in the storage ring. During the next decade, this will allow researchers to approach the design goals of the EDM search using a storage ring, adding a new opportunity to unveil physics beyond the Standard Model.

Electromagnetic gymnastics
 

An electric dipole moment (d) and a magnetic dipole moment (μ) transform differently under P and T. In a fundamental particle, both quantities are proportional to the spin vector (s). Therefore, the interaction term d(s·E) is odd under P and T, whereas μ(s·B) is even under these transformations.

In the final experiment to measure the EDM of a charged particle, a radial electric field is applied to an ensemble of particles circulating in a storage ring with polarisation vector aligned to their momentum. The existence of an EDM would generate a torque that slowly rotates the spin out of the ring plane into the vertical direction.

After rotation into the horizontal plane at COSY, the polarisation vector starts to precess. At a measurement point along the ring, the rapidly rotating polarisation direction of the beam is determined by using the count-rate asymmetry of deuterons elastically scattered from a carbon target.

Belle II super-B factory experiment takes shape at KEK https://cerncourier.com/a/belle-ii-super-b-factory-experiment-takes-shape-at-kek/ Fri, 12 Aug 2016 08:00:00 +0000 Following in the footsteps of Belle at the KEKB facility, a new super-B factory will search for new weak interactions in the flavour sector.

Since CERN’s LHC switched on in the autumn of 2008, no new particle colliders have been built. SuperKEKB, under construction at the KEK laboratory in Tsukuba, Japan, is soon to change that. In contrast to the LHC, which is a proton–proton collider focused on producing the highest energies possible, SuperKEKB is an electron–positron collider that will operate at the intensity frontier to produce enormous quantities of B mesons.

At the intensity frontier, physicists search for signatures of new particles or processes by measuring rare or forbidden reactions, or finding deviations from Standard Model (SM) predictions. The “mass reach” for new-particle searches can be as high as 100 TeV/c², provided the couplings of the particles are large, which is well beyond the reach of direct searches at current colliders. The flavour sector provides a particularly powerful way to address the many deficiencies of the SM: at the cosmological scale, the puzzle of the baryon–antibaryon asymmetry remains unexplained by known sources of CP violation; the SM does not explain why there should be only three generations of elementary fermions or why there is an observed hierarchy in the fermion masses; the theory falls short of accounting for the small neutrino masses, and it is also not clear whether there is only a single Higgs boson.

SuperKEKB follows in the footsteps of its predecessor KEKB, which recorded more than 1000 fb⁻¹ (one inverse attobarn, ab⁻¹) of data and achieved a world record for instantaneous luminosity of 2.1 × 10³⁴ cm⁻² s⁻¹. The goals for SuperKEKB are even more ambitious. Its design luminosity is 8 × 10³⁵ cm⁻² s⁻¹, 40 times that of previous B-factory experiments, and the machine will operate in “factory” mode with the aim of recording an unprecedented data sample of 50 ab⁻¹.
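
A quick consistency check on these figures (not from the article): the design luminosity is indeed roughly 40 times the KEKB record, and even with no downtime it would take about two years of running at design luminosity to collect 50 ab⁻¹, which with realistic duty factors and the luminosity ramp-up translates into a programme lasting many years.

# Quick consistency check (not from the article) of the luminosity figures.
L_kekb = 2.1e34            # KEKB record luminosity, cm^-2 s^-1
L_design = 8e35            # SuperKEKB design luminosity, cm^-2 s^-1
target = 50e3 * 1e39       # 50 ab^-1 in cm^-2 (1 fb^-1 = 1e39 cm^-2)
print(f"luminosity ratio ~ {L_design / L_kekb:.0f}")    # ~38, i.e. the quoted factor of 40
print(f"time at design luminosity ~ {target / L_design / 3.15e7:.1f} yr of continuous running")  # ~2 yr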

The trillions of electron–positron collisions provided by SuperKEKB will be recorded by an upgraded detector called Belle II, which must be able to cope with the much larger beam-related backgrounds resulting from the high-luminosity environment. Belle II, which is the first “super-B factory” experiment, is designed to provide better or comparable performance to that of the previous Belle experiment at KEKB or BaBar at SLAC in Stanford, California. With the SM of weak interactions now well established, Belle II will focus on the search for new physics beyond the SM.

SuperKEKB was formally approved in October 2010, began construction in November 2011 and achieved its “first turns” in February this year (CERN Courier April 2016 p11). By the time of  completion of the initial accelerator commissioning before Belle-II roll-in (so-called “Phase 1”), the machine was storing a current of 1000 mA in its low-energy positron ring (LER) and 870 mA in the high-energy electron ring (HER). As currently scheduled, SuperKEKB will produce its first collisions in late 2017 (Phase 2), and the first physics run with the full detector in place will take place in late 2018 (Phase 3). The experiment will operate until the late 2020s.

B-physics background

The Belle experiment took data at the KEKB accelerator between 1999 and 2010. At roughly the same time, the BaBar experiment operated at SLAC’s PEP-II accelerator. In 2001, these two “B factories” established the first signals of CP violation, therefore revealing matter–antimatter asymmetries, in the B-meson sector. They also provided the experimental foundation for the 2008 Nobel Prize in Physics, which was awarded to theorists Makoto Kobayashi and Toshihide Maskawa for their explanation through complex phases in weak interactions.

In addition to the observation of large CP violation in the low-background “golden” B → J/ψ KS-type decay modes, these B-factory experiments allowed many important measurements of weak interactions involving bottom and charm quarks as well as τ leptons. The B factories also discovered an unexpected crop of new strongly interacting particles known as the X, Y and Z states. Since 2008, a third major B factory, LHCb, has been in the game. One of the four main LHC detectors, LHCb has made a large number of new measurements of B and Bs mesons and b baryons produced in proton–proton collisions. The experiment has tightly constrained new physics phases in the mixing-induced weak decays of Bs mesons, confirmed Belle’s discovery of the four-quark state Z(4430), and discovered the first two clear pentaquark states. Together with LHCb, Belle II is expected to be equally prolific and may discover signals of new physics in the coming decade.

Asymmetric collisions

The accelerator technology underpinning B factories is quite different from that of high-energy hadron colliders. For the coherent production of quantum-mechanically entangled pairs of B and anti-B mesons, measurements of time-dependent CP asymmetries require knowledge of the difference in decay times between the two B mesons. With equal-energy beams, the B mesons travel only tens of microns from their production point and cannot be experimentally resolved in silicon vertex detectors. To allow the B-factory experiments to observe the time difference, or spatial separation, of the B vertices, the beams have asymmetric energies, so that the centre-of-mass system is boosted along the axis of the detector. For example, at PEP-II, 9 GeV electron and 3.1 GeV positron beams were used, while at KEKB the beam energies were 8 GeV and 3.5 GeV.
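
The kinematics can be sketched in a few lines (not from the article): for ultra-relativistic beams the centre-of-mass energy is √s ≈ 2√(E₋E₊), which all of these machines keep at the ϒ(4S) mass of about 10.58 GeV, while the boost of the centre-of-mass frame scales as βγ = (E₋ − E₊)/√s, so the asymmetry sets how far the B mesons fly in the laboratory.

# Kinematics sketch (not from the article): centre-of-mass energy and boost for
# asymmetric e+e- beams, neglecting the electron and positron masses.
import math
for name, e_minus, e_plus in (("PEP-II", 9.0, 3.1), ("KEKB", 8.0, 3.5), ("SuperKEKB", 7.0, 4.0)):
    sqrt_s = 2 * math.sqrt(e_minus * e_plus)     # CM energy, GeV (~Upsilon(4S) mass of 10.58 GeV)
    beta_gamma = (e_minus - e_plus) / sqrt_s     # boost of the centre-of-mass frame
    print(f"{name}: sqrt(s) ~ {sqrt_s:.2f} GeV, beta*gamma ~ {beta_gamma:.2f}")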

Charged particles within a beam undergo thermal motion just like gas molecules: they scatter to generate off-momentum particles at a rate given by the density and the temperature of the beam. Such off-momentum particles reduce the beam lifetime, increase beam sizes and generate detector background. To maximise the beam lifetime and reduce intra-beam scattering, SuperKEKB will collide 7 and 4 GeV electron and positron beams, respectively.

Two strategies were employed at the B factories to separate the incoming and outgoing beams: PEP-II used magnetic separation in a strong dipole magnet near the interaction point, while KEKB used a crossing angle of 22 mrad. SuperKEKB will extend the approach of KEKB with a crossing angle of 83 mrad, with separate beamlines for the two rings and no shared magnets between them. While the beam currents will be somewhat higher at SuperKEKB than they were at KEKB, the most dramatic improvement in luminosity is the result of very flat low-emittance “cool beams” and much stronger focusing at the interaction point. Specifically, SuperKEKB uses the nano-beam scheme inspired by the design of Italian accelerator physicist Pantaleo Raimondi, which promises to reduce the vertical beam size at the interaction point to around 50 nm – 20 times smaller than at KEKB.

Although the former TRISTAN (and KEKB) tunnels were reused for the SuperKEKB facility, many of the other accelerator components are new or upgraded from KEKB. For example, the 3 km-circumference vacuum chamber of the LER is new and is equipped with an antechamber and titanium-nitride coating to fight against the problem of photoelectrons. This process, in which low-energy electrons generated as photoelectrons or by ionisation of the residual gas in the beam pipe are attracted by the positively charged beam to form a cloud around the beam, was a scourge for the B factories and is also a major problem for the LHC. Many of the LER magnets are new, while a significant number of the HER magnets were rearranged to achieve a lower emittance, powered by newly designed high-precision power supplies at the ppm level. The RF system has been rearranged to double the beam current with a new digital-control system, and many beam diagnostics and control systems were rebuilt from scratch.

During Phase 1 commissioning, after many iterations the LER optics were corrected to achieve design emittance. To achieve low-emittance positron beams, a new damping ring has been constructed that will be brought into operation in 2017. To meet the charge and emittance requirements of SuperKEKB, the linac injector complex has been upgraded and includes a new low-emittance electron gun. Key components of the accelerator – including the beam pipe, superconducting magnets, beam feedback and diagnostics – were developed in collaboration with international partners in Italy (INFN Frascati), the US (BNL), and Russia (BINP), and further joint work, which will also involve CERN, is expected.

During Phase 1, intensive efforts were made to tune the machine to minimise the vertical emittances in both rings. This was done via measurements and corrections using orbit-response matrices. The estimated vertical emittances were below 10 pm in both rings, close to the design values. There were, however, discrepancies with the beam sizes measured by X-ray size monitors, especially in the HER, and these are under investigation.

The early days of Belle and BaBar were plagued by problems, with beam-related backgrounds resulting from the then unprecedented beam currents and strong beam focusing. In the case of Belle, the first silicon vertex detector was destroyed by an unexpected synchrotron radiation “fan” produced by an electron beam passing through a steering magnet. Fortunately, the Belle team was able to build a new replacement detector quickly and move on to compete in the race with BaBar to measure CP asymmetries in the B sector. As a result of these past experiences, we have adopted a rather conservative commissioning strategy for the SuperKEKB/Belle-II facility. This year, during the earliest Phase 1 of operation, a special-purpose device called BEAST II consisting of seven types of background measurement devices was installed at the interaction point to characterise the expected Belle-II background.

At the beginning of next year, the Belle-II outer detector will be “rolled in” to the beamline and all components except the vertex detectors will be installed. The complex quadrupole superconducting final-focusing magnets are among the most challenging parts of the accelerator. In autumn 2017, the final-focusing magnets will be integrated with Belle II and the first runs of Phase 2 will commence. A new suite of background detectors will be installed, including a cartridge containing samples of the Belle-II vertex detectors. The first goal of the Phase-2 run is to achieve a luminosity above 10³⁴ cm⁻² s⁻¹ and to verify that the backgrounds are low enough for the vertex detector to be installed.

Belle reborn

With Belle II expected to face beam-related backgrounds 20 times higher than at Belle, the detector has been reborn to achieve the experiment’s main physics goals – namely, to measure rare or forbidden decays of B and D mesons and the τ lepton with better accuracy and sensitivity than before. While Belle II reuses Belle’s spectrometer magnet, many state-of-the-art technologies have been included in the detector upgrade. A new vertex-detector system comprising a two-layer pixel detector (PXD) based on “DEPFET” technology and a four-layer double-sided silicon-strip detector (SVD) will be installed. With the beam-pipe radius of SuperKEKB having been reduced to 10 mm, the first PXD layer can be placed just 14 mm from the interaction point to improve the vertex resolution significantly. The outermost SVD layer is located at a larger radius than the equivalent system at Belle, resulting in higher reconstruction efficiency for Ks mesons, which is important for many CP-violation measurements.

A new central drift chamber (CDC) has been built with smaller cell sizes to be more robust against the higher level of beam background hits. The new CDC has a larger outer radius (1111.4 mm as opposed to 863 mm in Belle) and 56 compared to 50 measurement layers, resulting in improved momentum resolution. Combined with the vertex detectors, Belle II has improved D* meson reconstruction and hence better full-reconstruction efficiency for B mesons, which often include D*s among their weak-interaction decay products.

Because good particle identification is vital for successfully identifying rare processes in the presence of very large background (for example, the measurement of B → Xdγ must contend with B → Xsγ background processes that are an order-of-magnitude larger), two newly developed ring-imaging Cherenkov detectors have been introduced at Belle II. The first, the time-of-propagation (TOP) counter, is installed in the barrel region and consists of a finely polished and optically flat quartz radiator and an array of pixelated micro-channel-plate photomultiplier tubes that can measure the propagation time of internally reflected Cherenkov photons with a resolution of around 50 ps. The second, the aerogel ring-imaging Cherenkov counter (A-RICH), is located in Belle II’s forward endcap region and will detect Cherenkov photons produced in an aerogel radiator with hybrid avalanche photodiode sensors.
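
To get a feel for the scales involved, the sketch below is a rough illustration only: the momentum, flight path and refractive index are assumed values, and the actual TOP measurement also folds in the photon propagation path inside the quartz bar, which is not modelled here. It simply compares the pion and kaon hypotheses at a typical B-decay momentum.

```python
import math

C = 0.299792458  # speed of light in m/ns

def time_of_flight_ns(p_gev, m_gev, path_m):
    """Flight time of a particle with momentum p and mass m over a straight path."""
    beta = p_gev / math.sqrt(p_gev**2 + m_gev**2)
    return path_m / (beta * C)

M_PI, M_K = 0.1396, 0.4937   # GeV, charged-pion and kaon masses
P_GEV  = 2.0                 # GeV/c, a typical momentum for B-decay products (assumed)
PATH_M = 1.2                 # m, illustrative flight distance to the barrel PID system (assumed)
N_QUARTZ = 1.47              # approximate refractive index of the fused-silica radiator

delta_t_ps = abs(time_of_flight_ns(P_GEV, M_K, PATH_M)
                 - time_of_flight_ns(P_GEV, M_PI, PATH_M)) * 1e3
print(f"pi-K time-of-flight difference at {P_GEV} GeV/c: {delta_t_ps:.0f} ps")

for name, m in (("pion", M_PI), ("kaon", M_K)):
    beta = P_GEV / math.sqrt(P_GEV**2 + m**2)
    theta_c = math.degrees(math.acos(1.0 / (N_QUARTZ * beta)))
    print(f"{name}: Cherenkov angle in quartz = {theta_c:.1f} deg")
```

Even the time-of-flight difference alone exceeds 100 ps at this momentum, so single-photon timing at the 50 ps level, combined with the different Cherenkov angles, is enough to separate the two hypotheses.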

The electromagnetic calorimeter (ECL) reuses Belle’s thallium-doped caesium-iodide crystals. New waveform-sampling read-out electronics have been implemented to resolve overlapping signals such that π0 and γ reconstruction is not degraded, even in the high-background environment. The flux return of the Belle-II solenoid magnet, which surrounds the ECL, is instrumented to detect KL mesons and muons (KLM). All of the endcap KLM layers and the innermost two layers of the barrel KLM were replaced with new scintillator-based detectors read out by solid-state photomultipliers. Signals from all of the Belle-II sub-detector components are read out through a common optical-data-transfer system and backend modules. Grid computing distributed over KEK and sites across Asia, Australia, Europe and North America will be used to process the large data volumes produced at Belle II by high-luminosity collisions, which, as at LHCb, are expected to be in the region of 1.8 GB/s.

Construction of the Belle-II experiment is in full swing, with fabrication and installation of sub-detectors progressing from the outer to the inner regions. A recent milestone was the completion of the TOP installation in June, while installation of the CDC, A-RICH and endcap ECL will follow soon. The Belle-II detector will be rolled into the SuperKEKB beamline in early 2017 and beam collisions will start later in the year, marking Phase 2. After verifying the background conditions in beam collisions, Phase 3 will see the installation of the vertex-detector system, after which the first physics run can begin towards the end of 2018.

Unique data set

As a next-generation B factory, Belle II will serve as our most powerful probe yet of new physics in the flavour sector, and may discover new strongly interacting particles such as tetraquarks, molecules or perhaps even hybrid mesons. Collisions at SuperKEKB will be tuned to centre-of-mass energies corresponding to the masses of the Υ resonances, with most data to be collected at the Υ(4S) resonance. This is just above the threshold for producing quantum-correlated B-meson pairs with no fragmentation particles, which are optimal for measuring weak-interaction decays of B mesons.

SuperKEKB is both a super-B factory and a τ-charm factory: it will produce a total of 50 billion bb̄, cc̄ and τ+τ– pairs over a period of eight years, and a team of more than 650 collaborators from 23 countries is already preparing to analyse this unique data set. The key open questions to be addressed include the search for new CP-violating phases in the quark sector, lepton-flavour violation and left–right asymmetries (see panel below).

Rare charged B decays to leptonic final states are the flagship measurements of the Belle-II research programme. The leptonic decay B → τν occurs in the SM via a W-annihilation diagram with an expected branching fraction of (0.82 +0.05/–0.03) × 10−4, which would be modified if a non-standard particle such as a charged Higgs interferes with the W. Since the final state contains multiple neutrinos, it is measurable only in an electron–positron collider experiment where the centre-of-mass energy is precisely known. Belle II should reach a precision of 3% on this measurement, and observe the channel B → μν for tests of lepton-flavour universality.
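
The SM expectation quoted above follows from the textbook formula for a purely leptonic meson decay. The sketch below evaluates it numerically; the decay constant f_B and |V_ub| are assumed input values, chosen to be roughly their current world averages, so the result is illustrative rather than a precise prediction.

```python
import math

# SM width of B -> tau nu (annihilation through a W boson):
#   Gamma = G_F^2/(8*pi) * f_B^2 * |V_ub|^2 * m_B * m_tau^2 * (1 - m_tau^2/m_B^2)^2
G_F   = 1.1664e-5   # GeV^-2, Fermi constant
M_B   = 5.279       # GeV, B+ mass
M_TAU = 1.777       # GeV, tau mass
F_B   = 0.190       # GeV, B-meson decay constant (assumed, lattice-QCD ballpark)
V_UB  = 3.7e-3      # |V_ub| (assumed, roughly the exclusive determination)
TAU_B = 1.638e-12   # s, B+ lifetime
HBAR  = 6.582e-25   # GeV s

width_gev = (G_F**2 / (8 * math.pi) * F_B**2 * V_UB**2 * M_B * M_TAU**2
             * (1 - M_TAU**2 / M_B**2)**2)
branching_fraction = width_gev * TAU_B / HBAR
print(f"BF(B -> tau nu) ~ {branching_fraction:.1e}")  # ~0.8e-4, close to the value quoted above
```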

Perhaps the most interesting searches at Belle II will be the analogous semi-leptonic decays, B → D*τν and B → Dτν, which are similarly sensitive to charged Higgs bosons. Recently, the combined measurements of these processes from BaBar, Belle and LHCb have pointed to a curious 4σ deviation of the decay rates compared to the SM prediction (see figure X). No such deviation is seen in B → τν, however, which makes it difficult to resolve the nature of the potential underlying new physics, and the Belle-II data set will be required to settle the issue.

Another 4σ anomaly persists in B → K*l+l– flavour-changing neutral-current loop processes observed by LHCb, which may be explained by new gauge bosons. By allowing the study of closely related processes, Belle II will be able to confirm whether this really is a sign of new physics and not an artefact of the theoretical predictions. The more precisely calculable inclusive transitions b → sγ and b → sl+l– will be compared to the exclusive ones measured by LHCb. The ultimate data set will also give access to B → K*νν̄ and B → Kνν̄, which are experimentally challenging channels but also the most precise theoretically.

Beyond the Standard Model

There are many reasons to choose Belle II to address these and other puzzles with the SM, and in general the experiment will complement the physics reach of LHCb. The lower-background environment at Belle II compared with LHCb allows researchers to reconstruct final states containing neutral particles, for instance, and to design efficient triggers for the analysis of τ leptons. With asymmetric beam energies, the Lorentz boost of the electron–positron system is ideal for measurements of lifetimes, mixing parameters and CP violation.
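
For orientation, the collision energy and boost follow directly from the two beam energies. The short sketch below assumes SuperKEKB's nominal 7 GeV electron and 4 GeV positron beams (values not stated in the text above) and neglects the particle masses.

```python
import math

E_ELECTRON, E_POSITRON = 7.0, 4.0   # GeV, nominal SuperKEKB beam energies (assumed)

sqrt_s = 2 * math.sqrt(E_ELECTRON * E_POSITRON)   # ~10.58 GeV, at the Upsilon(4S)
beta_gamma = (E_ELECTRON - E_POSITRON) / sqrt_s   # Lorentz boost of the collision system

# With tau_B ~ 1.5 ps, the mean separation of the two B-decay vertices along the
# boost axis is roughly beta*gamma*c*tau ~ 130 micron, resolvable by the vertex detectors.
print(f"sqrt(s) = {sqrt_s:.2f} GeV, boost beta*gamma = {beta_gamma:.2f}")
```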

The B factories established the existence of matter–antimatter asymmetries in the b-quark sector, complementing the CP violation discovered in 1964 in the s-quark sector. They also established that a single irreducible complex phase in the weak interaction is sufficient to explain all CP-violating effects observed to date, completing the SM description of the weak-interaction couplings of quarks. To move beyond this picture, two super-B factories were initially proposed: one at Tor Vergata near Frascati in Italy, and one at KEK in Japan. Although the Italian facility was not funded, the two designs benefited from both synergy and competition. The super-B factory at KEK follows the legacy of the B factories, with Belle II and LHCb both vying to establish the first solid evidence of new physics beyond the SM.

Key physics questions to be addressed by SuperKEKB and Belle II

• Are there new CP-violating phases in the quark sector?
The amount of CP violation (CPV) in the SM quark sector is orders-of-magnitude too small to explain the baryon–antibaryon asymmetry. New insights will come from examining the difference between B0 and B̄0 decay rates, namely via measurements of time-dependent CPV in penguin transitions (second-order W interactions) of b → s and b → d quarks. CPV in charm mixing, which is negligible in the SM, will also provide information on the up-type quark sector. Another key area will be to understand the mechanisms that produced large amounts of CPV in the time-integrated rates of hadronic B decays, such as B → Kπ and B → Kππ, observed by the B factories and LHCb.

• Does nature have multiple Higgs bosons?
Many extensions to the SM predict charged Higgs bosons in addition to the observed neutral SM-like Higgs. Extended Higgs sectors can also introduce extra sources of CP violation. The charged Higgs will be searched for in flavour transitions to τ leptons, including B → τν, as well as B → Dτν and B → D*τν, where 4σ anomalies have already been observed.

• Does nature have a left–right symmetry, and are there flavour-changing neutral currents beyond the SM?
The LHCb experiment finds 4σ evidence for new physics in the decay B → K*μ+μ–, a loop process that is sensitive to the virtual effects of heavy particles, both within and beyond the SM. Left–right symmetry models provide interesting candidates for this anomaly. Such extensions to the SM introduce new heavy bosons that predominantly couple to right-handed fermions, allowing a new pattern of flavour-changing currents, and they can also be used to explain neutrino-mass generation. To further characterise potential new physics, we need to examine processes with reduced theoretical uncertainty, such as inclusive b → sl+l– and b → sνν̄ transitions and time-dependent CPV in radiative B-meson decays. Complementary constraints coming from electroweak precision observables and from direct searches at the LHC have pushed the mass limit for left–right models to several TeV.

• Are there sources of lepton-flavour violation (LFV) beyond the SM?
LFV is a key prediction in many neutrino mass-generation mechanisms, and may lead to τ → μγ enhancement at the level of 10−8. Belle II will analyse τ lepton decays for a number of searches, which include LFV, CP violation and measurements of the electric dipole moment and (g−2) of the τ. The expected sensitivities to τ decays at Belle II will be unrivalled due to correlated production with minimal collision background. The detector will provide sensitivities seven times better than Belle for background-limited modes such as τ → μγ (to about 5 × 10–9) and up to 50 times better for the cleanest searches, such as τ → eee (at the level of 5 × 10–10).

• Is there a dark sector of particle physics at the same mass scale as ordinary matter?
Belle II has unique sensitivity to dark matter via missing energy decays. While most searches for new physics at Belle II are indirect, there are models that predict new particles at the MeV to GeV scale – including weakly and non-weakly interacting massive particles that couple to the SM via new gauge symmetries. These models often predict a rich sector of hidden particles that include dark-matter candidates and gauge bosons. Belle II is implementing a new trigger system to capture these elusive events.

• What is the nature of the strong force in binding hadrons?
The B factories and hadron colliders have discovered a large number of states that are not predicted by the conventional meson interpretation, changing our understanding of QCD in the low-energy regime, so quarkonium spectroscopy is high on the agenda at Belle II. A clean way of studying new particles is to produce them near resonance, which can be achieved by adjusting the machine energy, and Belle II has good detection capabilities for all neutral and charged particles.

IceCube seeks to expand https://cerncourier.com/a/icecube-seeks-to-expand/ Fri, 08 Jul 2016 07:00:00 +0000


The IceCube experiment at the South Pole has been one of the pioneers of the field of neutrino astronomy. During a seven-year-long construction campaign that ended in 2010, the 325-strong IceCube collaboration transformed a cubic kilometre of ultra-transparent Antarctic ice into a giant Cherenkov detector. Today, 5160 optical sensors are suspended deep within the ice to detect Cherenkov light from charged particles produced when high-energy neutrinos from the cosmos interact with nuclei in the detector. So far, IceCube has detected neutrinos with energies in the range 10¹¹–10¹⁶ eV, which include the most energetic neutrinos ever recorded (see image of the proposed Gen2 array). However, we do not yet know where these neutrinos come from. For this reason, the IceCube collaboration is developing designs for an expanded “Gen2” detector.

IceCube observes astrophysical neutrinos in two ways. The first approach selects upgoing events by using the Earth to filter out the large flux of cosmic-ray muons. At low energies (below 100 TeV), the measured flux of muon neutrinos is consistent with an atmospheric origin, whereas at higher energies, a clear excess of events with a significance of 5.6σ is observed. The second approach selects neutrinos that interact inside the detector. A total of 54 cosmic-neutrino events with energies ranging from 30 to 2000 TeV were detected during four years of operation, excluding a purely atmospheric explanation at the level of 6.5σ. Although there is some tension between the results from the two approaches, a combined analysis finds that the data are consistent with an at-Earth flux shared equally among the three neutrino flavours, as is expected for neutrinos originating in cosmic sources.

Towards a new detector

Despite multiple searches for the locations of these sources, however, the IceCube team has yet to find any statistically significant associations. Searches for neutrinos from gamma-ray bursts and some classes of galaxies have also come up empty. Although these observations have disfavoured many promising models of the origin of cosmic rays, the ultimate goal of neutrino astronomy is to detect multiple neutrinos from a single source. This requires many hundreds of events, which would take an array of the scale of IceCube at least 20 years to detect.

To speed up data collection, an expanded IceCube collaboration is planning a greatly enhanced instrument (see image of the proposed Gen2 array) with multiple elements: an enlarged array to search for high-energy astrophysical neutrinos; a dense infill array to determine the neutrino properties (PINGU); a larger surface air-shower array to veto downgoing atmospheric neutrinos; and possibly an array of radio detectors targeting neutrinos with energies above 1017 eV. Most importantly, thanks to the clarity of the Antarctic ice, we would be able to increase the instrumented volume of this next-generation array by a factor of 10 without a corresponding increase in the number of deployed sensors – or in the cost. The Gen2 proposal would therefore see an instrumented volume of approximately 10 km3 comprising strings of optical modules, but with improved hardware and deployment methods compared with IceCube.

For the in-ice component PINGU (Precision IceCube Next Generation Upgrade), the Gen2 collaboration is exploring a number of optimised designs for the optical modules, as well as longer strings deployed with improved drilling methods. Photomultipliers (PMTs) with higher quantum efficiency will be used, as is already the case for DeepCore in IceCube, and pressure spheres with improved glass and optical gel will improve sensitivity by transmitting more ultraviolet Cherenkov light. Some designs include more than one phototube per optical module (see image), while more radical concepts envision the addition of long cylindrical wavelength shifters to improve information about the photon arrival direction. Many-PMT designs were pioneered by the KM3NeT collaboration, which is proposing to build a cubic-kilometre-sized European neutrino Cherenkov telescope in the Mediterranean Sea, but are also attractive to IceCube.

The increased complexity of these approaches would be offset by new electronics, and increased computing power will allow the use of more sophisticated software algorithms that better account for the positional dependence of the optical properties of the ice and the stochastic nature of muon energy loss. This will result in improved pointing and energy resolution of both tracks and showers and better identification of tau neutrinos. IceCube has produced a white paper for the Gen2 proposal (arXiv:1412.5106) that fits well with the US National Science Foundation’s recent identification of multi-wavelength astronomy as one of six future priorities, and a formal proposal will be completed in the next few years.

Physics in order

PINGU will build on the success of DeepCore in measuring atmospheric neutrino-oscillation parameters. It consists of a dense infill array in the centre of DeepCore with a threshold of a few GeV, allowing the ordering of the neutrino masses to be determined by matter-induced oscillations of the atmospheric neutrino flux. By precisely measuring the oscillation probability as a function of neutrino energy and zenith angle, PINGU will be able to determine which neutrino is lightest.
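
The L/E dependence that PINGU samples can be illustrated with the two-flavour vacuum oscillation formula. Note that the matter effects that actually distinguish the mass ordering are omitted in this simplified sketch, and the oscillation parameters are assumed, approximate values.

```python
import math

def numu_survival(energy_gev, baseline_km, sin2_2theta23=0.99, dm2_ev2=2.5e-3):
    """Two-flavour vacuum nu_mu survival probability (matter effects neglected)."""
    return 1.0 - sin2_2theta23 * math.sin(1.27 * dm2_ev2 * baseline_km / energy_gev)**2

EARTH_DIAMETER_KM = 12742.0   # baseline for vertically up-going atmospheric neutrinos
for e_gev in (5, 10, 15, 25, 40):   # roughly the energy range PINGU targets
    print(f"E = {e_gev:2d} GeV   P(nu_mu -> nu_mu) = {numu_survival(e_gev, EARTH_DIAMETER_KM):.2f}")
```

The survival probability dips strongly for up-going neutrinos of a few tens of GeV, which is why a dense, low-threshold infill array is the right tool for this measurement.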

Like the present IceTop (an air-shower array covering the surface above IceCube), an expanded surface array will tag and veto downgoing atmospheric neutrinos that are accompanied by cosmic-ray air showers. Current Gen2 designs envision a 75 km² surface array that would allow IceCube to collect a clean sample of astrophysical neutrinos over a much larger solid angle, including the galactic centre. It will also result in much improved cosmic-ray studies and more sensitive searches for PeV photons from galactic sources. To study the highest-energy neutrinos (typically above 10¹⁷ eV), Gen2 may also include an array of radio detectors to observe the coherent radio Cherenkov emission from neutrino-induced showers. Radio detection is now pursued by the ARA (the Askaryan Radio Array at the South Pole) and ARIANNA (located on Antarctica’s Ross Ice Shelf) experiments, but coincident observations with IceCube Gen2 would be preferable.

Of course, IceCube is not the only neutrino telescope in town. ANTARES has been taking data in the Mediterranean Sea since 2008 and will be followed by KM3NeT (CERN Courier March 2016 p12), while the Gigaton Volume Detector (Baikal-GVD) is currently being built in Lake Baikal, Russia (CERN Courier July/August 2015 p23). Seawater, lake water and Antarctic ice present different challenges and advantages to cosmic-neutrino observatories, and sites in the Northern Hemisphere benefit because the galactic centre is below the horizon. While we all benefit from friendly competition and from sharing R&D resources, size has undeniable advantages. IceCube-Gen2, should the project go ahead, will be larger than any of the proposed alternatives, and is therefore well placed to write the next chapter in neutrino astronomy.

Futures intertwined https://cerncourier.com/a/viewpoint-futures-intertwined/ Fri, 08 Jul 2016 07:00:00 +0000


CERN and Fermilab have a rich history of scientific accomplishment. Fermilab, which is currently the only US laboratory fully devoted to particle physics, tends to favour fermions: the top and bottom quarks were discovered here, as was the tau neutrino. CERN seems to prefer bosons: the W, Z and Higgs bosons were all discovered at the European lab. Both labs also have ambitious plans for the future that build on a history of close collaboration. A recent example is the successful test of a novel high-field quadrupole superconducting magnet made from Nb3Sn as part of the R&D programme for the High-Luminosity Large Hadron Collider (HL-LHC). The highly successful team behind this technology (the Fermilab-led LHC Accelerator Research Programme, which includes Berkeley and Brookhaven national labs) is also committed to developing 16 T magnets for a high-energy LHC and a possible larger circular collider.

Our laboratories and their global communities are now moving even closer together. At a ceremony held at the White House in Washington, DC in May 2015, representatives from the US Department of Energy (DOE), the US National Science Foundation and CERN signed a co-operation agreement for continued joint research in particle physics and computing, both at CERN and in the US. This was followed by a ceremony at CERN in December, at which the US ambassador to the United Nations and the former CERN Director-General signed five formal agreements that will serve as the framework for future US–CERN collaboration. The new agreements enable US scientists to continue their vital contribution to the LHC and its upgrade programme, while for the first time enabling CERN participation in experiments hosted in the US.

The US physics community and DOE are committed to the success of CERN. Physicists migrated from the US to CERN en masse following the 1993 cancellation of the Superconducting Super Collider. In 2008, a lack of clarity about the future of US particle physics contributed to budget cuts; together, these events brought our field to a low point. These painful periods taught us that a unified scientific community and strong partnerships are vital to success.

Fortunately, the tides have now turned, in particular thanks to two important planning reports. The first was the 2013 European Strategy Report, which for the first time recommended that CERN support physics programmes, particularly regarding neutrinos, outside of its laboratory. The following year, this bold proposal led the US Particle Physics Project Prioritisation Panel to strongly recommend a continued partnership with CERN on the LHC and to pursue an ambitious long-baseline neutrino programme hosted by Fermilab, for which international participation and contributions are vital.

CERN’s support and European leadership are critical to the success of the ambitious Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE) being hosted by Fermilab. In partnership with the Italian Institute for Nuclear Physics, CERN is also upgrading the ICARUS detector for our short-baseline neutrino programme. Thanks largely to this partnership with CERN, the US particle-physics community is now enjoying a sense of optimism and increasing budgets.

Fermilab and CERN have always worked together at some level, but the high-level agreements between CERN and the DOE will reach decades into the future. CERN recognises the extensive technical capability of Fermilab and the US community, which are currently working to help upgrade CMS and ATLAS as well as accelerator magnets for the HL-LHC, while the US recognises CERN’s leadership in high-energy collider physics, and more than 1000 US physicists call CERN their scientific home.

Yet, not everyone agrees that our laboratories should be intertwined. Some in the US think too much money is sent abroad and believe that the funds could be used for particle physics at “home”, or for other uses entirely. On the other side of the Atlantic, some might wonder why they should work outside of CERN or, worse, outside of Europe. These views are short-sighted: the best science is achieved through collaborative global partnerships. For this reason, CERN and Fermilab will be intertwined for a long time to come.

Particle flow in CMS https://cerncourier.com/a/particle-flow-in-cms/ Fri, 20 May 2016 07:00:00 +0000


In hadron-collider experiments, jets are traditionally reconstructed by clustering photon and hadron energy deposits in the calorimeters. As the information from the inner tracking system is completely ignored in the reconstruction of jet momentum, the performance of such calorimeter-based reconstruction algorithms is seriously limited. In particular, the energy deposits of all jet particles are clustered together, and the jet energy resolution is driven by the calorimeter resolution for hadrons – typically 100%/√E in CMS – and by the non-linear calorimeter response. Also, because the trajectories of low-energy charged hadrons are bent away from the jet axis in the 3.8 T field of the CMS magnet, their energy deposits in the calorimeters are often not clustered into the jets. Finally, low-energy hadrons may even be invisible if their energies lie below the calorimeter detection thresholds.

In contrast, in lepton-collider experiments, particles are identified individually through their characteristic interaction pattern in all detector layers, which allows the reconstruction of their properties (energy, direction, origin) in an optimal manner, even in highly boosted jets at the TeV scale. This approach was first introduced at LEP with great success, before being adopted as the baseline for the design of future detectors for the ILC, CLIC and the FCC-ee. The same ambitious approach has been adopted by the CMS experiment, for the first time at a hadron collider. For example, the presence of a charged hadron is signalled by a track connected to calorimeter energy deposits. The direction of the particle is indicated by the track before any deviation in the field, and its energy is calculated as a weighted average of the track momentum and the associated calorimeter energy. These particles, which typically carry about 65% of the energy of a jet, are therefore reconstructed with the best possible energy resolution. Calorimeter energy deposits not connected to a track are either identified as a photon or as a neutral hadron. Photons, which represent typically 25% of the jet energy, are reconstructed with the excellent energy resolution of the CMS electromagnetic calorimeter. Consequently, only 10% of the jet energy – the average fraction carried by neutral hadrons – needs to be reconstructed solely using the hadron calorimeter, with its 100%/√E resolution. In addition to these types of particles, the algorithm identifies and reconstructs leptons with improved efficiency and purity, especially in the busy jet environment.
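
A toy calculation shows why measuring roughly 90% of the jet with the tracker and electromagnetic calorimeter pays off. The energy fractions below are those quoted above; the single-component resolutions are illustrative assumptions, not measured CMS performance figures.

```python
import math

def calo_only_resolution(e_jet):
    """Calorimeter-driven jet resolution, ~100%/sqrt(E) as quoted above."""
    return 1.0 / math.sqrt(e_jet)

def particle_flow_resolution(e_jet):
    """Toy particle-flow resolution: each component measured by its best-suited detector."""
    f_ch, f_gamma, f_nh = 0.65, 0.25, 0.10            # average energy fractions (from the text)
    sigma_ch    = 0.01                                # tracker: ~1% on charged hadrons (assumed)
    sigma_gamma = 0.03 / math.sqrt(f_gamma * e_jet)   # ECAL: ~3%/sqrt(E) on photons (assumed)
    sigma_nh    = 1.00 / math.sqrt(f_nh * e_jet)      # HCAL: ~100%/sqrt(E) on neutral hadrons
    # add the absolute uncertainties of the three components in quadrature
    return math.sqrt((f_ch * sigma_ch)**2 + (f_gamma * sigma_gamma)**2 + (f_nh * sigma_nh)**2)

for e in (30, 100, 300):   # GeV
    print(f"E = {e:3d} GeV   calorimeter only: {calo_only_resolution(e):5.1%}   "
          f"particle flow: {particle_flow_resolution(e):5.1%}")
```

In this simplified picture the resolution is dominated by the 10% of the jet energy still measured by the hadron calorimeter, which is consistent with the large improvement described in the text.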

Key ingredients for the success of particle flow are excellent tracking efficiency and purity, the ability to resolve the calorimeter energy deposits of neighbouring particles, and unambiguous matching of charged-particle tracks to calorimeter deposits. The CMS detector, while not designed for this purpose, turned out to be well-suited for particle flow. Charged-particle tracks are reconstructed with efficiency greater than 90% and a rate of false track reconstruction at the per cent level down to a transverse momentum of 500 MeV. Excellent separation of charged hadron and photon energy deposits is provided by the granular electromagnetic calorimeter and large magnetic-field strength. Finally, the two calorimeters are placed inside of the magnet coil, which minimises the probability for a charged particle to generate a shower before reaching the calorimeters, and therefore facilitates the matching between tracks and calorimeter deposits.

After particle flow, the list of reconstructed particles resembles that provided by an event generator. It can be used directly to reconstruct jets and the missing transverse momentum, to identify hadronic tau decays, and to quantify lepton isolation. Figure 1 illustrates, in a given event, the accuracy of the particle reconstruction by comparing the jets of reconstructed particles to the jets of generated particles. Figure 2 further demonstrates the dramatic improvement in jet-energy resolution with respect to the calorimeter-based measurement. In addition, the particle flow improves the jet angular resolution by a factor of three and reduces the systematic uncertainty in the jet energy scale by a factor of two. The influence of particle flow, however, is far from restricted to jets: for example, the missing-transverse-momentum reconstruction improves similarly, and the tau-identification background rate is reduced by a factor of three. This new approach to reconstruction also paved the way for particle-level pile-up mitigation methods such as the identification and masking of charged hadrons from pile-up before clustering jets or estimating lepton isolation, and the use of machine learning to estimate the contribution of pile-up to the missing transverse momentum.
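
The charged-hadron masking mentioned above can be sketched schematically as follows; this is a minimal illustration of the idea, not the actual CMS implementation.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    pt: float                 # transverse momentum in GeV
    is_charged_hadron: bool
    vertex_id: int            # primary vertex the track points back to; -1 if there is no track

def charged_hadron_subtraction(particles, signal_vertex_id=0):
    """Drop charged hadrons whose tracks are associated with a pile-up vertex."""
    return [p for p in particles
            if not (p.is_charged_hadron and p.vertex_id not in (signal_vertex_id, -1))]

# Toy event: three charged hadrons (one from a pile-up vertex) and one photon.
event = [Particle(20.0, True, 0), Particle(5.0, True, 3),
         Particle(8.0, True, 0), Particle(12.0, False, -1)]
kept = charged_hadron_subtraction(event)   # jets would then be clustered from this list
print(f"kept {len(kept)} of {len(event)} particles, sum pt = {sum(p.pt for p in kept):.0f} GeV")
```

Neutral particles cannot be associated with a vertex in this way, which is why the residual neutral pile-up contribution still has to be estimated statistically or, as noted above, with machine learning.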

The algorithm, optimised before the start of LHC Run I in 2009, remains essentially unchanged for Run II, because the reduced bunch spacing of 25 ns could be accommodated by a simple reduction of the time windows for the detector hits. The future CMS upgrades have been planned towards optimal conditions for particle flow (and therefore physics) performance. In the first phase of the upgrade programme, a new pixel layer will reduce the rate of false charged-particle tracks, while the read-out of multiple layers with low noise photodetectors in the hadron calorimeter will improve the neutral hadron measurement that limits the jet-energy resolution. The second phase includes extended tracking allowing for full particle-flow reconstruction in the forward region, and a new high-granularity endcap calorimeter with extended particle-flow capabilities. The future is therefore bright for the CMS particle-flow reconstruction concept.

• CMS Collaboration, “Particle flow and global event description in CMS”, in preparation.

AugerPrime looks to the highest energies https://cerncourier.com/a/augerprime-looks-to-the-highest-energies/ Fri, 20 May 2016 07:00:00 +0000 The world’s largest cosmic-ray experiment, the Pierre Auger Observatory, is embarking on its next phase, named AugerPrime.


Since the start of its operations in 2004, the Auger Observatory has illuminated many of the open questions in cosmic-ray science. For example, it confirmed with high precision the suppression of the primary cosmic-ray energy spectrum for energies exceeding 5 × 10¹⁹ eV, as predicted by Kenneth Greisen, Georgiy Zatsepin and Vadim Kuzmin (the “GZK effect”). The collaboration has searched for possible extragalactic point sources of the highest-energy cosmic-ray particles ever observed, as well as for large-scale anisotropy of arrival directions in the sky (CERN Courier December 2007 p5). It has also published unexpected results about the specific particle types that reach the Earth from remote galaxies, referred to as the “mass composition” of the primary particles. The observatory has set the world’s most stringent upper limits on the flux of neutrinos and photons with EeV energies (1 EeV = 10¹⁸ eV). Furthermore, it contributes to our understanding of hadronic showers and interactions at centre-of-mass energies well above those accessible at the LHC, such as in its measurement of the proton–proton inelastic cross-section at √s = 57 TeV (CERN Courier September 2012 p6).

The current Auger Observatory

The Auger Observatory learns about high-energy cosmic rays from the extensive air showers they create in the atmosphere (CERN Courier July/August 2006 p12). These showers consist of billions of subatomic particles that rain down on the Earth’s surface, spread over a footprint of tens of square kilometres. Each air shower carries information about the primary cosmic-ray particle’s arrival direction, energy and particle type. An array of 1600 water-Cherenkov surface detectors, placed on a 1500 m grid covering 3000 km2, samples some of these particles, while fluorescence detectors around the observatory’s perimeter observe the faint ultraviolet light the shower creates by exciting the air molecules it passes through. The surface detectors operate 24 hours a day, and are joined by fluorescence-detector measurements on clear moonless nights. The duty cycle for the fluorescence detectors is about 10% that of the surface detectors. An additional 60 surface detectors in a region with a reduced 750 m spacing, known as the infill array, focus on detecting lower-energy air showers whose footprint is smaller than that of showers at the highest energies. Each surface-detector station (see image above) is self-powered by a solar panel, which charges batteries in a box attached to the tank (at left in the image), enabling the detectors to operate day and night. An array of 153 radio antennas, named AERA and spread over a 17 km2 area, complements the surface detectors and fluorescence detectors. The antennas are sensitive to coherent radiation emitted in the frequency range 30–80 MHz by air-shower electrons and positrons deflected in the Earth’s magnetic field.

The motivation for AugerPrime and its detector upgrades

The primary motivation for the AugerPrime detector upgrades is to understand how the suppressed energy spectrum and the mass composition of the primary cosmic-ray particles at the highest energies are related. Different primary particles, such as γ-rays, neutrinos, protons or heavier nuclei, create air showers with different average characteristics. To date, the observatory has deduced the average primary-particle mass at a given energy from measurements provided by the fluorescence detectors. These detectors are sensitive to the number of air-shower particles versus depth in the atmosphere through the varying intensity of the ultraviolet light emitted along the path of the shower. The atmospheric depth of the shower’s maximum number of particles, a quantity known as Xmax, is deeper in the atmosphere for proton-induced air showers relative to showers induced by heavier nuclei, such as iron, at a given primary energy. Owing to the 10% duty cycle of the fluorescence detectors, the mass-composition measurements using the Xmax technique do not currently extend into the energy region E > 5 × 10¹⁹ eV where the flux suppression is observed. AugerPrime will capitalise on another feature of air showers induced by different primary-mass particles, namely, the different abundances of muons, photons and electrons at the Earth’s surface. The main goal of AugerPrime is to measure the relative numbers of these shower particles to obtain a more precise handle on the primary cosmic-ray composition with increased statistics at the highest energies. This knowledge should reveal whether the flux suppression at the highest energies is a result of a GZK-like propagation effect or of astrophysical sources reaching a limit in their ability to accelerate the highest-energy primary particles.
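
A rough way to see the expected proton–iron difference in Xmax is the superposition model, in which a nucleus of mass number A and energy E develops like A independent proton showers of energy E/A. With an elongation rate of about 60 g/cm² per decade of energy (an assumed, approximate value), the shift for iron comes out to roughly 100 g/cm².

```python
import math

ELONGATION_RATE = 60.0   # g/cm^2 per decade of energy, approximate value (assumed)

def xmax_shift_gcm2(mass_number):
    """Superposition model: <Xmax>(A, E) = <Xmax>(proton, E) - D * log10(A)."""
    return ELONGATION_RATE * math.log10(mass_number)

for nucleus, a in (("proton", 1), ("helium", 4), ("nitrogen", 14), ("iron", 56)):
    print(f"{nucleus:8s} (A = {a:2d}): Xmax shallower than a proton shower by ~{xmax_shift_gcm2(a):3.0f} g/cm^2")
```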

The key to differentiating the ground-level air-shower particles lies in improving the detection capabilities of the surface array. AugerPrime will cover each of the 1660 water-Cherenkov surface detectors with planes of plastic-scintillator detectors measuring 4 m2. Surface-detector stations with scintillators above the Cherenkov detectors will allow the Auger team to determine the electron/photon versus muon abundances of air showers more precisely compared with using the Cherenkov detectors alone. The scintillator planes will be housed in light-tight, weatherproof enclosures, attached to the existing water tanks with a sturdy support frame, as shown above. The scintillator light will be read out with wavelength-shifting fibres inserted into straight extruded holes in the scintillator planes, which are bundled and attached to photomultiplier tubes. Also above, an image shows how the green wavelength-shifting fibres emerge from the scintillator planes and are grouped into bundles. Because the surface detectors operate 24 hours a day, the AugerPrime upgrade will yield mass-composition information for the full data set collected in the future.
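
The way two detector signals with different sensitivities can be combined is, in its simplest form, a 2 × 2 unmixing problem. The response coefficients below are purely illustrative assumptions, not AugerPrime calibration constants.

```python
import numpy as np

# Each upgraded station returns two signals that are different linear mixtures of
# the electromagnetic (EM) and muonic shower components:
#   S_scint = a_em * N_em + a_mu * N_mu
#   S_wcd   = b_em * N_em + b_mu * N_mu
# Two independent measurements allow the two components to be solved for.
response = np.array([[1.0, 0.4],    # scintillator: responds mostly to the EM component (illustrative)
                     [0.5, 1.0]])   # water Cherenkov: relatively more sensitive to muons (illustrative)

signals = np.array([420.0, 380.0])  # toy signals recorded by one station
n_em, n_mu = np.linalg.solve(response, signals)
print(f"EM component ~ {n_em:.0f}, muonic component ~ {n_mu:.0f} (arbitrary units)")
```

In practice the coefficients depend on zenith angle and shower geometry and are obtained from simulation, but the principle of disentangling the two components from two complementary measurements is the same.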

The AugerPrime project also includes other detector improvements. The dynamic range of the Cherenkov detectors will be extended with the addition of a fourth photomultiplier tube. Its gain will be adjusted so that particle densities can be accurately measured close to the core of the highest-energy air showers. New electronics with faster sampling of the photomultiplier-tube signals will better identify the narrow peaks created by muons. New GPS receivers at each surface-detector station will provide better timing accuracy and calibration. A subproject of AugerPrime called AMIGA will consist of scintillator planes buried 1.3 m under the 60 surface detectors of the infill array. The AMIGA detectors are directly sensitive to the muon content of air showers, because the electromagnetic components are largely absorbed by the overburden.

The AugerPrime Symposium

In November 2015, the Auger scientists combined their biannual collaboration meeting in Malargüe, Argentina, with a meeting of its International Finance Board and dignitaries from many of its collaborating countries, to begin the new phase of the experiment in an AugerPrime Symposium. The Finance Board endorsed the development and construction of the AugerPrime detector upgrades, and a renewed international agreement was signed in a formal ceremony for continued operation of the experiment for an additional 10 years. The observatory’s spokesperson, Karl-Heinz Kampert from the University of Wuppertal, said: “The symposium marks a turning point for the observatory and we look forward to the exciting science that AugerPrime will enable us to pursue.”

While continuing to collect extensive air-shower data with its current detector configuration and publishing new results, the Auger Collaboration is focused on finalising the design for the upgraded AugerPrime detectors and making the transition to the construction phase at the many collaborating institutions worldwide. Subsequent installation of the new detector components on the Pampa Amarilla is no small task, with the 1660 surface detectors spread across such a large area. Each station must be accessed with all-terrain vehicles moving carefully on rough desert roads. But the collaboration is up to the challenge, and AugerPrime is foreseen to be completed in 2018 with essentially no interruption to current data-taking operations.

• For more information, see auger.org/augerprime.

ALICE selects gas electron multipliers for its new TPC https://cerncourier.com/a/alice-selects-gas-electron-multipliers-for-its-new-tpc/ Fri, 18 Mar 2016 09:00:00 +0000


The ALICE experiment is devoted to the study of strongly interacting matter, where temperatures are sufficiently high to overcome hadronic confinement, and the effective degrees of freedom are governed by quasi-free quarks and gluons. This type of matter, known as quark–gluon plasma (QGP), has been produced in collisions of lead ions at the LHC since 2010. The detectors of the ALICE central barrel aim to provide a complete reconstruction of the final state of Pb–Pb collisions, including charged-particle tracking and particle identification (PID). The latter is done by measuring the specific ionisation energy loss, dE/dx.

The main tracking and PID device is the ALICE time projection chamber (TPC). With an active volume of almost 90 m³, the ALICE TPC is the largest detector of its type ever built. During the LHC’s Runs 1 and 2, the TPC reached or even exceeded its design specifications in terms of track reconstruction, momentum resolution and PID capabilities.

ALICE is planning a substantial detector upgrade during the LHC’s second long shutdown, including a new inner tracking system and an upgrade of the TPC. This upgrade will allow the experiment to overcome the TPC’s essential limitation, which is the intrinsic dead time imposed by an active ion-gating scheme. In essence, the event rate with the upgraded TPC in LHC Run 3 will exceed the present one by about a factor of 100.

The rate limitation of the current ALICE TPC arises from the use of a so-called gating grid (GG) – a plane of wires installed in the MWPC-based read-out chambers. The GG is switched by an external pulser system from opaque to transparent mode and back. In the presence of an event trigger, the GG opens for a time window of 100 μs, which allows all ionisation electrons from the drift volume to enter the amplification region. On the other hand, slow-moving ions produced in the avalanche process head back into the drift volume. Therefore, after each event, the GG has to stay closed for 300–500 μs to keep the drift volume free of large space-charge accumulations, which would create massive drift-field distortions. This leads to an intrinsic read-out rate limitation of a few kHz for the current TPC. However, it should be noted that the read-out rate in Pb–Pb collisions is currently limited by the bandwidth of the TPC read-out electronics to a few hundred Hz.

In Run 3, the LHC is expected to deliver Pb–Pb collision rates of about 50 kHz, implying average pile-up of about five collision events within the drift time window of the TPC. Moreover, many of the key physics observables are on low-transverse-momentum scales, implying small signal-over-background ratios, which make conventional triggering schemes inappropriate. Hence, the upgrade of the TPC aims at a triggerless, continuous read-out of all collision events. Operating the TPC in such a video-like mode makes it necessary to exchange the present MWPC-based read-out chambers for a different technology that eliminates the need for active ion gating; this also entails a complete replacement of the front-end electronics and read-out system.
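
The rates quoted above follow from simple duty-cycle arithmetic, as the sketch below shows; the drift time is taken to be roughly 100 μs, an assumed round number close to the full drift time of the ALICE TPC.

```python
GATE_OPEN_US   = 100.0     # gating-grid open window per trigger (from the text)
GATE_CLOSED_US = 400.0     # typical closed time to block back-drifting ions (300-500 us)
DRIFT_TIME_US  = 100.0     # approximate full electron drift time of the TPC (assumed)
COLLISION_RATE_HZ = 50e3   # expected Pb-Pb interaction rate in Run 3

max_gated_rate_khz = 1e3 / (GATE_OPEN_US + GATE_CLOSED_US)
average_pileup = COLLISION_RATE_HZ * DRIFT_TIME_US * 1e-6

print(f"maximum triggered read-out rate with an active gate: ~{max_gated_rate_khz:.0f} kHz")
print(f"collisions overlapping in one drift window at 50 kHz: ~{average_pileup:.0f}")
```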

The main challenge for the new read-out chambers is the requirement of large opacity for back-drifting ions, combined with high efficiency to collect ionisation electrons from the drift volume into the amplification region, to maintain the necessary energy resolution. To allow for continuous operation without gating, both requirements must be fulfilled at the same potential setting. In an extensive R&D effort, conducted in close co-operation with CERN’s RD51 collaboration, it was demonstrated that these specific requirements can be reached in an amplification scheme that employs four layers of gas electron multiplier (GEM) foils, a technology that was put forward by Fabio Sauli and collaborators in the 1990s.

A schematic view of a 4-GEM stack is shown in figure 1. Optimal performance is reached in a setting where the amplification voltages ∆V across the GEMs increase from layer 1 to 4. This maximises the average number of GEMs that the produced ions have to pass on their way towards the drift volume, hence giving rise to minimal ion-escape probability. Moreover, the electron transparency and ion opacity can be optimised by a suitable combination of high and low transfer fields ET. Finally, the hole pitch of the GEM foils has proven to be an important parameter for the electron and ion transport properties, leading to a solution where two so-called standard-pitch GEMs (S, hole-pitch 140 μm) in layers 1 and 4 sandwich two GEMs with larger pitch (LP, hole-pitch 280 μm) in layers 2 and 3.
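
As a minimal sketch of how the stack amplifies, the overall gain can be approximated as the product of the effective gains (avalanche gain times electron transfer efficiency) of the four foils; the per-foil values below are illustrative assumptions, not the ALICE operating parameters.

```python
# Per-foil effective gains (avalanche gain times electron transfer efficiency),
# rising towards GEM 4 as described above. The values are illustrative only.
effective_gain_per_foil = [3.0, 5.0, 8.0, 17.0]

total_gain = 1.0
for gain in effective_gain_per_foil:
    total_gain *= gain

print(f"total effective gain of the 4-GEM stack: ~{total_gain:.0f}")  # a few thousand
```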

After being developed in small-size prototype tests in the laboratory, a full-size TPC inner read-out chamber (IROC) with 4-GEM read-out was built and tested in beams at the PS and SPS. To this end, large-size GEM foils were produced at the CERN PH-DT Micro-Pattern Technologies Workshop, in so-called single-mask technology (figure 2). As a main result of the test-beam campaigns, the dE/dx performance of the 4-GEM IROC was demonstrated to be the same as for the existing MWPC IROCs, and the stability against discharge is well suited for operation at the LHC in Run 3 and beyond.

After approval of the Technical Design Report by the LHC Experiments Committee, and an in-depth Engineering Design Review of the new read-out chambers in 2015, the TPC upgrade project is presently in its pre-production phase, aiming to start mass production this summer.

ATLAS and CMS upgrade proceeds to the next stage https://cerncourier.com/a/atlas-and-cms-upgrade-proceeds-to-the-next-stage/ Fri, 15 Jan 2016 09:00:00 +0000

Résumé

The ATLAS and CMS upgrade programme proceeds to the next stage

The High-Luminosity LHC project will also extend the experiments’ physics programme. Key detector components will have to be replaced so that the instruments can cope with the pile-up of proton–proton interactions – 140 to 200 on average per bunch crossing. In October 2015, the CERN Resource Review Board confirmed that the collaborations can now go ahead and prepare Technical Design Reports (TDRs). Passing this first step of the approval process is a major success for the ATLAS and CMS experiments.

At the end of the third operational period in 2023, the LHC will have delivered 300 fb⁻¹, and the final focussing magnets, installed at the collision points in each of the four interaction regions of the LHC, will need to be replaced. By redesigning these magnets and improving the beam optics, the luminosity can be greatly increased. The High Luminosity LHC (HL-LHC) project aims to deliver 10 times the original design integrated luminosity (number of collisions) of the LHC (CERN Courier December 2015 p7). This will extend the physics programme and open a new window of discovery. But key components of the experiments will also have to be replaced to cope with the pile-up of 140–200 proton–proton interactions occurring, on average, per bunch crossing of the beams. In October 2015, the ATLAS and CMS collaborations met a major milestone in preparing these so-called Phase II detector upgrades for operation at the HL-LHC, when it was agreed at the CERN Resource Review Board that they proceed to prepare Technical Design Reports.

New physics at the HL-LHC

The headline result of the first operation period of the LHC was the observation of a new boson in 2012. With the present data set, this boson is fully consistent with being the Higgs boson of the Standard Model of particle physics. Its couplings (interaction strengths) with other particles in the dominant decay modes are measured with an uncertainty of 15–30% by each experiment, and scale with mass as predicted (see figure 1). With the full 3000 fb–1 of the HL-LHC, the dominant couplings can be measured with a precision of 2–5%; this potential improvement is also shown in figure 1. What’s more, rare production processes and decay modes can be observed. Of particular interest is to find evidence for the production of a pair of Higgs bosons, which depends on the strength of the interaction between the Higgs bosons themselves. This will be complemented by precise measurements of other Standard Model processes and any deviations from the theoretical predictions will be indirect evidence for a new type of physics.

In parallel, the direct search for physics beyond the Standard Model will continue. The theory of supersymmetry (SUSY) introduces a heavy partner for each ordinary particle. This is very attractive in that it solves the problem of how the Higgs boson can remain relatively light, with a mass of 125 GeV, despite its interactions with heavy particles; in particular, SUSY can cancel the large corrections to the Higgs mass from the 173 GeV top quark. According to SUSY, the contributions from ordinary particles are cancelled by the contributions from the supersymmetric partners. The presence of the lightest SUSY particle can also explain the dark matter in the universe. Figure 2 compares the results achievable at the LHC and HL-LHC in a search for electroweak SUSY particles. Electroweak SUSY production has a relatively low rate, and benefits from the factor 10 increase in luminosity. Particles decaying via a W boson and a Z boson give final states with three leptons and missing transverse momentum.

Other “exotic” models will also be accessible, including those that introduce extra dimensions to explain why gravity is so weak compared with the other fundamental forces.

If a signal for new particles or new interactions begins to emerge – and this might happen in the ongoing second period of LHC operation, which is running at higher energy than the first – the experiments will have to be able to measure the new phenomena precisely at the HL-LHC to distinguish between different theoretical explanations.

Experimental challenges

To achieve the physics goals, ATLAS and CMS must continue to be able to reconstruct all of the final-state particles with high efficiency and low fake rates, and to identify which ones come from the collision of interest and which come from the 140–200 additional events in the same bunch crossing. Along with this greatly increased event complexity, at the HL-LHC the detectors will suffer from unprecedented instantaneous particle flows and integrated radiation doses.

Detailed simulations of these effects were carried out to identify the sub-systems that will either not survive the high luminosity environment or not function efficiently because of the increased data rates. Entirely new tracking systems to measure charged particles will be required at the centre of the detectors, and the energy-measuring calorimeters will also need partial replacement, in the endcap region for CMS and possibly in the more forward region for ATLAS.

The possibility of efficiently selecting good events and the ability to record higher rates of data demand new triggering and data-acquisition capabilities. The main innovation will be to implement tracking information at the hardware level of the trigger decision, to provide sufficient rejection of the background signals. The new tracking devices will use silicon-sensor technology, with strips at the outer radii and pixels closer to the interaction point. The crucial role of the tracker systems in matching signals to the different collisions is illustrated in figure 3, where the event display shows the reconstruction of an interaction producing a pair of top quarks among 200 other collisions. The granularity will be increased by about a factor of five to produce a similar level of occupancies as with the current detectors and operating conditions. With reduced pixel sizes and strip pitches, the detector resolution will be improved. New thinner sensor techniques and deeper submicron technologies for the front-end read-out chips will be used to sustain the high radiation doses. And to further improve the measurements, the quantity and mass of the materials will be substantially reduced by employing lighter mechanical structures and materials, as well as new techniques for the cooling and powering schemes. The forward regions of the experiments suffer most from the high pile-up of collisions, and the tracker coverage will therefore be extended to better match the calorimetry measurements. Associating energy deposits in the calorimeters with the charged tracks over the full coverage will substantially improve jet identification and missing transverse energy measurements. The event display in figure 4 shows the example of a Higgs boson produced by the vector boson fusion (VBF) process and decaying to a pair of τ leptons.

The calorimeters in ATLAS and CMS use different technologies and require different upgrades. ATLAS is considering replacing the liquid-argon forward calorimeter with a similar detector, but with higher granularity. For further mitigation of pile-up effects, a high-granularity timing detector with a precision of a few tens of picoseconds may be added in front of the endcap LAr calorimeters. In CMS, a new high-granularity endcap calorimeter will be implemented. The detector will comprise 40 layers of silicon-pad sensors interleaved with W/Cu and brass or steel absorber to form the electromagnetic and hadronic sections, respectively. The hadronic-energy measurement will be completed with a scintillating tile section similar to the current detector. This high-granularity design introduces shower-pointing ability and high timing precision. Additionally, CMS is investigating the potential benefits of a system that is able to measure precisely the arrival time of minimum ionising particles to further improve the vertex identification for all physics objects.

The muon detectors in ATLAS and CMS are expected to survive the full HL-LHC period; however, new chambers and read-out electronics will be added to improve the trigger capabilities and to increase the robustness of the existing systems. ATLAS will add new resistive plate chambers (RPC) and small monitored drift tube chambers to the innermost layer of the barrel. The endcap trigger chambers will be replaced with small-strip thin gap chambers. CMS will complete the coverage of the current RPCs in the endcaps with high-rate capability chambers in gas electron multipliers in the front stations and RPCs in the last ones. Both experiments will install a muon tagger to benefit from the extended tracker coverage.

The trigger systems will require increased latency to allow sufficient time for the hardware track reconstruction and will also have larger throughput capability. This will require the replacement of front-end and back-end electronics for several of the calorimeter and/or muon systems that will otherwise not be replaced. Additionally, these upgrades will allow the full granularity of the detector information to be exploited at the first stage of the event selection.

Towards Technical Design Reports

To reach the first milestone in the approval process agreed with the CERN scientific committees, ATLAS and CMS prepared detailed documentation describing the entire Phase II “reference” upgrade scope and the preliminary planning and cost evaluations. This documentation includes scientific motivations for the upgrades, demonstrated through studies of the performance reach for several physics benchmarks and examined under pile-up conditions of 140 and 200 collisions per bunch crossing. The performance degradation in two reduced-cost scenarios, in which the upgrades are descoped or downgraded, was also investigated. After reviewing this material, the CERN LHC Committee and the Upgrade Cost Group reported to the CERN Research Board and the Resource Review Board, concluding that: “For both experiments, the reference scenario provides well-performing detectors capable of addressing the physics at the HL-LHC.”

With this first step of the approval process successfully completed, the ATLAS and CMS collaborations are now eager to proceed with the necessary R&D and detector designs to prepare Technical Design Reports over the next two years.

• For further details, visit https://cds.cern.ch/record/2055248, https://cds.cern.ch/record/2020886 and https://cds.cern.ch/record/2055167/files/LHCC-G-165.pdf.

What’s next for OPERA’s emulsion-detection technology? https://cerncourier.com/a/whats-next-for-operas-emulsion-detection-technology/ Wed, 28 Oct 2015 09:00:00 +0000

Developed in the late 1990s, the OPERA detector design was based on a hybrid technology, using both real-time detectors and nuclear emulsions. The construction of the detector at the Gran Sasso underground laboratory in Italy started in 2003 and was completed in 2007 – a giant detector of around 4000 tonnes, with 2000 m³ volume and nine million photographic films, arranged in around 150,000 target units, the so-called bricks. The emulsion films in the bricks act as tracking devices with micrometric accuracy, and are interleaved with lead plates acting as neutrino targets. The longitudinal size of a brick is around 10 radiation lengths, allowing for the detection of electron showers and the momentum measurement through the detection of multiple Coulomb scattering. The experiment took data for five years, from June 2008 until December 2012, integrating 1.8 × 10²⁰ protons on target.
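
As an illustration of how multiple Coulomb scattering yields a momentum estimate, the sketch below uses the standard Highland parametrisation; the brick thickness in radiation lengths and the test momentum are assumed values chosen only to show the method.

```python
import math

# Hedged sketch: relate the rms multiple-scattering angle measured across an
# emulsion/lead brick to the track momentum via the Highland formula.
# The numbers are illustrative, not OPERA analysis values.

def highland_theta0(p_mev, x_over_X0, beta=1.0, z=1):
    """RMS projected scattering angle (rad) for momentum p (MeV/c)."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * \
           (1.0 + 0.038 * math.log(x_over_X0))

def momentum_from_scattering(theta0, x_over_X0, beta=1.0, z=1):
    """Invert the Highland formula to estimate p (MeV/c) from theta0 (rad)."""
    return (13.6 / (beta * theta0)) * z * math.sqrt(x_over_X0) * \
           (1.0 + 0.038 * math.log(x_over_X0))

theta0 = highland_theta0(2000.0, 3.0)   # 2 GeV/c track over ~3 radiation lengths
print(f"expected theta0 ~ {theta0 * 1e3:.1f} mrad")
print(f"recovered momentum ~ {momentum_from_scattering(theta0, 3.0):.0f} MeV/c")
```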

The aim of the experiment was to perform the direct observation of the transition from muon to tau neutrinos in the neutrino beam from CERN. The distance from CERN to Gran Sasso and the SPS beam energy were just appropriate for tau-neutrino detection. In 1999, intense discussions took place between CERN management and Council delegations about the opportunity of building the CERN Neutrino to Gran Sasso (CNGS) beam facility and the way to fund it. The Italian National Institute for Nuclear Physics (INFN) was far-sighted in offering a sizable contribution. Many delegations supported the idea, and the CNGS beam was approved in December 1999. Commissioning was performed in 2006, when OPERA (at that time not fully equipped yet) detected the first muon-neutrino interactions.

With the CNGS programme, CERN was joining the global experimental effort to observe and study neutrino oscillations. The first experimental hints of neutrino oscillations were gathered from solar neutrinos in the 1970s. According to theory, neutrino oscillations originate from the fact that mass and weak-interaction eigenstates do not coincide and that neutrino masses are non-degenerate. Neutrino mixing and oscillations were introduced by Pontecorvo and by the Sakata group, assuming the existence of two sorts (flavours) of neutrinos. Neutrino oscillations with three flavours including CP and CPT violation were discussed by Cabibbo and by Bilenky and Pontecorvo, after the discovery of the tau lepton in 1975. The mixing of the three flavours of neutrinos can be described by the 3 × 3 Pontecorvo–Maki–Nakagawa–Sakata matrix with three angles – that have since been measured – and a CP-violating phase, which remains unknown at present. Two additional parameters (mass-squared differences) are needed to describe the oscillation probabilities.
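
For orientation, the νμ → ντ appearance probability at the CERN–Gran Sasso baseline can be estimated in the simple two-flavour approximation; the mass-squared difference, mixing and mean beam energy used below are assumed round values, not a fit to data.

```python
import math

# Two-flavour approximation of P(nu_mu -> nu_tau); CNGS-like numbers assumed.

def p_mu_to_tau(L_km, E_GeV, dm2_eV2=2.4e-3, sin2_2theta=1.0):
    return sin2_2theta * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

print(f"P(730 km, 17 GeV) ~ {p_mu_to_tau(730.0, 17.0):.3f}")   # roughly 2 per cent
```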

Several experiments on solar, atmospheric, reactor and accelerator neutrinos have contributed to the understanding of neutrino oscillations. In the atmospheric sector, the strong deficit of muon neutrinos reported by the Super-Kamiokande experiment in 1998 was the first compelling observation of neutrino oscillations. Given that the deficit of muon neutrinos was not accompanied by an increase of electron neutrinos, the result was interpreted in terms of νμ → ντ oscillations, although in 1998 the tau neutrino had not yet been observed. The first direct evidence for tau neutrinos was announced by Fermilab’s DONuT experiment in 2000, with four reported events. In 2008, the DONuT collaboration presented its final results, reporting nine observed events and an expected background of 1.5. The Super-Kamiokande result was later confirmed by the K2K and MINOS experiments with terrestrial beams. However, for an unambiguous confirmation of three-flavour neutrino oscillations, the appearance of tau neutrinos in νμ → ντ oscillations was required.

OPERA comes into play

OPERA reported the observation of the first tau-neutrino candidate in 2010. The tau neutrino was detected by the production and decay of a τ in one of the lead targets, where τ → ρντ. A second candidate, in the τ → π⁻π⁺π⁻ντ channel, was found in 2012, followed in 2013 by a candidate in the fully leptonic τ → μνμντ decay. A fourth event was found in 2014 in the τ → hντ channel (where h is a pion or a kaon), and a fifth one was reported a few months ago in the same channel. Given the extremely low expected background of 0.25±0.05 events, the direct transition from muon to tau neutrinos has now been established with the 5σ statistical significance conventionally required to firmly establish an observation, confirming the oscillation mechanism.
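
A naive single-bin counting estimate already shows how unlikely five candidates are over such a small background; the published analysis weights the individual decay channels and their backgrounds to reach the quoted significance, so the sketch below is only an order-of-magnitude cross-check.

```python
import math

# Probability of seeing >= 5 events from a background of 0.25 (Poisson).
# A crude single-bin estimate; the full OPERA analysis is channel-by-channel.

def poisson_p_value(n_obs, b):
    return 1.0 - sum(math.exp(-b) * b**k / math.factorial(k) for k in range(n_obs))

print(f"p-value ~ {poisson_p_value(5, 0.25):.1e}")   # a few 1e-6, beyond 4 sigma
```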

The extremely accurate detection technique provided by OPERA relies on the micrometric resolution of its nuclear emulsions, which are capable of resolving the neutrino-interaction point and the vertex-decay location of the tau lepton, a few hundred micrometres away. The tau-neutrino identification is first topological, then kinematical cuts are applied to suppress the residual background, thus giving a signal-to-noise ratio larger than 10. In general, the detection of tau neutrinos is extremely difficult, due to two conflicting requirements: a huge, massive detector and the micrometric accuracy. The concept of the OPERA detector was developed in the late 1990s with relevant contributions from Nagoya – the emulsion group led by Kimio Niwa – and from Naples, under the leadership of Paolo Strolin, who led the initial phase of the project.

The future of nuclear emulsions

Three years after the end of the CNGS programme, the OPERA collaboration – about 150 physicists from 26 research institutions in 11 countries – is finalising the analysis of the collected data. After the discovery of the appearance of tau neutrinos from the oscillation of muon neutrinos, the collaboration now plans to further exploit the capability of the emulsion detector to observe all three neutrino flavours at once. This unique feature will allow OPERA to constrain the oscillation matrix by measuring tau and electron appearance together with muon-neutrino disappearance.

An extensive development of fully automated optical microscopes for the scanning of nuclear-emulsion films was carried out along with the preparation and running of the OPERA experiment. These achievements pave the way for using the emulsion technologies in forthcoming experiments, including SHiP (Search for Hidden Particles), a new facility that was recently proposed to CERN. If approved, SHiP will not only search for hidden particles in the GeV mass range, but also study tau-neutrino physics and perform the first direct observation of tau antineutrinos. The tau-neutrino detector of the SHiP apparatus is designed to use nuclear emulsions similar to those used by OPERA. The detector will be able to identify all three neutrino flavours, while the study of muon-neutrino scattering with large statistics is expected to provide additional insights into the strange-quark content of the proton, through the measurement of neutrino-induced charmed hadron production.

Currently, the R&D work on emulsions continues mainly in Italy and Japan. Teams at Nagoya University have successfully produced emulsions with AgBr crystals of about 40 nm diameter – one order of magnitude smaller than those used in OPERA. In parallel, significant developments of fully automated optical-scanning systems, carried out in Italy and Japan with innovative analysis technologies, have overcome the intrinsic optical limit and achieved the unprecedented position resolution of 10 nm. Both achievements make it possible to use emulsions for the detection of sub-micrometric tracks, such as those left by nuclear recoils induced by dark-matter particles (Weakly Interacting Massive Particles, WIMPs). This paves the way for the first large-scale dark-matter experiment with directional information. The NEWS experiment (Nuclear Emulsions for WIMP Search) plans to carry out this search at the Gran Sasso underground laboratory.

Thanks to their extreme accuracy and capability of identifying particles, nuclear emulsions are also successfully employed in fields beyond particle physics. Exploiting the cosmic-ray muon radiography technique, sandwiches of OPERA-like emulsion films and passive materials were used to image the shallow-density structure beneath the Asama Volcano in Japan and, more recently, to image the crater structure of the Stromboli volcano in Italy. Detectors based on nuclear emulsions are also used in hadron therapy to characterize the carbon-ion beams and their secondary interactions in human tissues. The high detection accuracy provided by emulsions allows experts to better understand the secondary effects of radiation, and to monitor the released dose with the aim of optimizing the planning of medical treatments.

• For more information, visit http://operaweb.lngs.infn.it/.

LHCb improves trigger in Run 2 https://cerncourier.com/a/lhcb-improves-trigger-in-run-2/ Fri, 25 Sep 2015 08:00:00 +0000

LHCb has significantly improved the trigger for the experiment during Run 2 of the LHC. The detector is now calibrated in real time, allowing the best possible event reconstruction in the trigger, with the same performance as the Run 2 offline reconstruction. The improved trigger allows event selection at a higher rate and with better information than in Run 1, providing a significant advantage in the hunt for new physics in Run 2.

The trigger consists of a hardware stage that reduces the 40 MHz bunch-crossing rate to 1 MHz, followed by a software high-level trigger running in two steps, HLT1 and HLT2 (figure 1). In HLT1, a quick reconstruction is performed before further event selection. Here, dedicated inclusive triggers for heavy-flavour physics use multivariate approaches. HLT1 also selects an inclusive muon sample, and exclusive lines select specific decays. This trigger typically takes 35 ms/event and writes out events at about 150 kHz.
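
The quoted rates and processing time fix the rough scale of the computing resources involved; the arithmetic below is a simple illustration, not LHCb's actual resource accounting.

```python
# Back-of-the-envelope sizing from the numbers quoted above (illustrative).

input_rate_hz  = 1.0e6    # events/s entering HLT1
time_per_event = 35e-3    # s of processing per event
output_rate_hz = 150e3    # events/s written out by HLT1

concurrent_slots = input_rate_hz * time_per_event   # events in flight at once
rejection = input_rate_hz / output_rate_hz

print(f"~{concurrent_slots:.0f} events must be processed in parallel")
print(f"HLT1 keeps roughly 1 event in {rejection:.0f}")
```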

In Run 1, 20% of events were deferred and processed with the HLT between fills. For Run 2, all events that pass HLT1 are deferred while a real-time alignment is run, so minimizing the time spent using sub-optimal conditions. The spatial alignments of the vertex detector – the VELO – and the tracker systems are evaluated in a few minutes at the beginning of each fill. The VELO is reinserted for stable collisions in each fill, so the alignment could vary from one fill to another; figure 2 shows the variation for the first fills of Run 2. In addition, the calibration of the Cherenkov detectors and the outer tracker are evaluated for each run. The quality of the calibration allows the offline performance, including the offline track reconstruction, to be replicated in the trigger, thus reducing systematic uncertainties in LHCb’s results.

The second stage of the software trigger, HLT2, now writes out events for offline storage at about 12.5 kHz (compared to 5 kHz in Run 1). There are nearly 400 trigger lines. Beauty decays are typically found using multivariate analysis of displaced vertices. There is also an inclusive trigger for D* decays, and many lines for specific decays. Events containing leptons with a significant transverse momentum are also selected.

A new trigger stream – the “turbo” stream – allows candidates to be written out without further processing. Raw event data are not stored for these candidates, reducing disk usage. All of this enables a very quick data analysis. LHCb has already used data from this stream for a preliminary measurement of the J/ψ cross-section in √s = 13 TeV collisions (CERN Courier September 2015 p11).

RD51 and the rise of micro-pattern gas detectors https://cerncourier.com/a/rd51-and-the-rise-of-micro-pattern-gas-detectors/ Fri, 25 Sep 2015 08:00:00 +0000

Summary

RD51 and the rise of micro-pattern gas detectors

The RD51 collaboration was established at CERN in 2008, in response to the need to develop and exploit the innovative micro-pattern gas detector (MPGD) technologies. While many of these technologies were adopted before RD51 was created, other techniques have since emerged or become accessible, new detection concepts are being adopted, and existing techniques are undergoing substantial improvement. In parallel, the deployment of MPGD detectors in running experiments has grown considerably. Today, RD51 serves a broad user community, overseeing the MPGD field and any commercial applications that may arise.

Improvements in detector technology often come from capitalizing on industrial progress. Over the past two decades, advances in photolithography, microelectronics and printed circuits have opened the way for the production of micro-structured gas-amplification devices. By 2008, interest in the development and use of the novel micro-pattern gaseous detector (MPGD) technologies led to the establishment at CERN of the RD51 collaboration. Originally created for a five-year term, RD51 was later extended for another five years beyond 2013. While many of the MPGD technologies were introduced before RD51 was founded (figure 1), new detection concepts are still being introduced as further techniques become available or affordable, and existing ones are being substantially improved.

In the late 1980s, the development of the micro-strip gas chamber (MSGC) created great interest because of its intrinsic rate-capability, which was orders of magnitude higher than in wire chambers, and its position resolution of a few tens of micrometres at particle fluxes exceeding about 1 MHz/mm². Developed for projects at high-luminosity colliders, MSGCs promised to fill a gap between the high-performance but expensive solid-state detectors, and cheap but rate-limited traditional wire chambers. However, detailed studies of their long-term behaviour at high rates and in hadron beams revealed two possible weaknesses of the MSGC technology: the formation of deposits on the electrodes, affecting gain and performance (“ageing effects”), and spark-induced damage to electrodes in the presence of highly ionizing particles.

These initial ideas have since led to more robust MPGD structures, in general using modern photolithographic processes on thin insulating supports. In particular, ease of manufacturing, operational stability and superior performances for charged-particle tracking, muon detection and triggering have given rise to two main designs: the gas electron-multiplier (GEM) and the micro-mesh gaseous structure (Micromegas). By using a pitch size of a few hundred micrometres, both devices exhibit intrinsic high-rate capability (> 1 MHz/mm²), excellent spatial and multi-track resolution (around 30 μm and 500 μm, respectively), and time resolution for single photoelectrons in the sub-nanosecond range.
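
The apparent mismatch between a pitch of a few hundred micrometres and a spatial resolution of around 30 μm is resolved by charge sharing between neighbouring strips. The toy Monte Carlo below compares the binary limit, pitch/√12, with a simple centre-of-gravity estimate; the pitch, charge-cloud width and noise level are assumptions, and diffusion and gain fluctuations are ignored.

```python
import math, random

# Toy illustration only: strip charges are approximated by the Gaussian charge
# density at the strip centre, plus noise; all parameters are assumptions.

pitch  = 400.0    # um, read-out strip pitch
spread = 250.0    # um, assumed rms width of the charge cloud
noise  = 0.05     # assumed noise, as a fraction of the peak strip signal
random.seed(1)

def centroid_residual():
    x_true = random.uniform(-pitch / 2, pitch / 2)
    strips = (-1, 0, 1)                      # three strips around the seed
    charge = [math.exp(-0.5 * ((s * pitch - x_true) / spread) ** 2)
              + random.gauss(0.0, noise) for s in strips]
    x_rec = sum(s * pitch * q for s, q in zip(strips, charge)) / sum(charge)
    return x_rec - x_true

n = 20000
rms = math.sqrt(sum(centroid_residual() ** 2 for _ in range(n)) / n)
print(f"binary limit (pitch/sqrt 12): {pitch / math.sqrt(12):.0f} um")
print(f"charge-sharing centroid rms:  {rms:.0f} um")
```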

Coupling the microelectronics industry and advanced PCB technology has been important for the development of gas detectors with increasingly smaller pitch size. An elegant example is the use of a CMOS pixel ASIC, assembled directly below the GEM or Micromegas amplification structure. Modern “wafer post-processing technology” allows for the integration of a Micromegas grid directly on top of a Medipix or Timepix chip, thus forming integrated read-out of a gaseous detector (InGrid). Using this approach, MPGD-based detectors can reach the level of integration, compactness and resolving power typical of solid-state pixel devices. For applications requiring imaging detectors with large-area coverage and moderate spatial resolution (e.g. ring-imaging Cherenkov (RICH) counters), coarser macro-patterned structures offer an interesting economic solution with relatively low mass and easy construction – thanks to the intrinsic robustness of the PCB electrodes. Such detectors are the thick GEM (THGEM), large electron multiplier (LEM), patterned resistive thick GEM (RETGEM) and the resistive-plate WELL (RPWELL).

RD51 and its working groups

The main objective of RD51 is to advance the technological development and application of MPGDs. While a number of activities have emerged related to the LHC upgrade, most importantly, RD51 serves as an access point to MPGD “know-how” for the worldwide community – a platform for sharing information, results and experience – and optimizes the cost of R&D through the sharing of resources and the creation of common projects and infrastructure. All partners are already pursuing either basic- or application-oriented R&D involving MPGD concepts. Figure 1 shows the organization of seven Working Groups (WG) that cover all of the relevant aspects of MPGD-related R&D.

WG1 Technological Aspects and Development of New Detector Structures. The objectives of WG1 are to improve the performance of existing detector structures, optimize fabrication methods, and develop new multiplier geometries and techniques. One of the most prominent activities is the development of large-area GEM, Micromegas and THGEM detectors. Only one decade ago, the largest MPGDs were around 40 × 40 cm², limited by existing tools and materials. A big step towards the industrial manufacturing of MPGDs with a size around a square metre came with new fabrication methods – the single-mask GEM, “bulk” Micromegas, and the novel Micromegas construction scheme with a “floating mesh”. While in “bulk” Micromegas, the metallic mesh is integrated into the PCB read-out, in the “floating-mesh” scheme it is integrated in the panel containing drift electrodes and placed on pillars when the chamber is closed. The single-mask GEM technique overcomes the cumbersome practice of alignment of two masks between top and bottom films, which limits the achievable lateral size to 50 cm. This technology, together with the novel “self-stretching technique” for assembling GEMs without glue and spacers, simplifies the fabrication process to such an extent that, especially for large-volume production, the cost per unit area drops by orders of magnitude.

Another breakthrough came with the development of Micromegas with resistive electrodes for discharge mitigation. The resistive strips match the pattern of the read-out strips geometrically, but are electrically insulated from them. Large-area resistive electrodes to prevent sparks have been developed using two different techniques: screen printing and carbon sputtering. The technology of the THGEM detectors is well established in small prototypes; the major challenge is the industrial production of high-quality large-size boards. A novel MPGD-based hybrid architecture, consisting of double THGEM and Micromegas, has been developed for photon detection; the latter allows a significant reduction in the ion backflow to the photocathode. A spark-protected version of THGEM (RETGEM), where the copper-clad conductive electrodes are replaced by resistive materials, and the RPWELL detector, consisting of a single-sided THGEM coupled to the read-out electrode through a sheet of large bulk resistivity, have also been manufactured and studied. To reduce discharge probability, a micro-pixel gas chamber (μ-PIC) with resistive electrodes using sputtered carbon has been developed; this technology is easily extendable for the production of large areas up to a few square metres.

To reduce costs, further work is needed for developing radiation-hard read-out and reinventing mainstream technologies under a new paradigm of integration of electronics and detectors, as well as integration of functionality, e.g. integrating read-out electronics directly into the MPGD structure. A breakthrough here is the development of a time-projection chamber (TPC) read-out with a total of 160 InGrid detectors, each 2 cm², corresponding to 10.5 million pixels. Despite the enormous challenges, this has demonstrated for the first time the feasibility of extending the Timepix CMOS read-out of MPGDs to large areas.

WG2 Detector Physics and Performance. The goal of WG2 is to improve understanding of the basic physics phenomena in gases, to define common test standards, which allow comparison and eventually selection among different technologies for a particular application, and to study the main physics processes that limit MPGD performance, such as sparking, charging-up effects and ageing.

Primary ionization and electron multiplication in avalanches are statistical processes that set limits to the spatial, energy and timing resolution, and so affect the overall performance of a detector. Exploiting the ability of Micromegas and GEM detectors to measure both the position and arrival time of the charge deposited in the drift gap, a novel method – the μTPC – has been developed for the case of inclined tracks, allowing for a precise segment reconstruction using a single detection plane, and significantly improving spatial resolution (well below 100 μm, even at large track angles). Excellent energy resolution is routinely achieved with “microbulk” Micromegas and InGrid devices, differing only slightly from the accuracy obtained with gaseous scintillation proportional counters and limited by the Fano factor. Moreover, “microbulk” detectors have very low levels of intrinsic radioactivity. Other recent studies have revealed that Micromegas could act as a photodetector coupled to a Cherenkov-radiator front window, in a set-up that produces a sufficient number of UV photons to convert single-photoelectron time jitter of a few hundred picoseconds into an incident-particle timing response of the order of 50 ps.
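
A minimal sketch of the μTPC idea follows: each strip's arrival time is converted into a height within the drift gap using the drift velocity, and a straight-line fit to the resulting points gives the local track segment from a single plane. The drift velocity, strip pitch, track angle and time jitter are assumed values, not those of a specific detector.

```python
import numpy as np

# muTPC toy: strip position + arrival time -> local track segment.

pitch      = 0.04     # cm (400 um strips)
v_drift    = 4.7e-3   # cm/ns, assumed drift velocity for the gas mixture
true_angle = 30.0     # degrees, assumed track inclination w.r.t. the normal

rng = np.random.default_rng(0)
x = np.arange(8) * pitch                               # strip positions (cm)
z_true = x / np.tan(np.radians(true_angle))            # track height (cm)
t = z_true / v_drift + rng.normal(0.0, 5.0, x.size)    # measured times (ns)

z = t * v_drift                                        # time -> height
slope, intercept = np.polyfit(x, z, 1)                 # straight-line fit
print(f"reconstructed track angle: {np.degrees(np.arctan(1.0 / slope)):.1f} deg")
```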

One of the central topics of WG2 is the development of effective protection against discharges in the presence of heavily ionizing particles. The limitation caused by occasional sparking is now being lifted by the use of resistive electrodes, but at the price of current-dependent charging-up effects that cause a reduction in gain. Systematic studies are needed to optimize the electrical and geometrical characteristics of resistive Micromegas in terms of the maximum particle rate. Recent ageing studies performed in view of the High-Luminosity LHC upgrades confirmed that the radiation hardness of MPGDs is comparable with solid-state sensors in harsh radiation environments. Nevertheless, it is important to develop and validate materials with resistance to ageing and radiation damage.

Many of the advances involve the use of new materials and concepts – for example, a GEM made out of crystallized glass, and a “glass piggyback” Micromegas that separates the Micromegas from the actual read-out by a ceramic layer, so that the signal is read by capacitive coupling and the read-out is immune to discharges. A completely new approach is the study of charge-transfer properties through graphene for applications in gaseous detectors.

Working at cryogenic temperatures – or even within the cryogenic liquid itself – requires optimization to achieve simultaneously high gas gain and long-term stability. Two ideas have been pursued for future large-scale noble-liquid detectors: dual-phase TPCs with cryogenic large-area gaseous photomultipliers (GPMs) and single-phase TPCs with MPGDs immersed in the noble liquid. Studies have demonstrated that the copious light yields in liquid xenon, and the resulting good energy resolution, are a result of electroluminescence occurring within xenon-gas bubbles trapped under the hole electrode.

WG3 Applications, Training and Dissemination. WG3 concentrates on the application of MPGDs and on how to optimize detectors for particularly demanding cases. Since the pioneering use of GEM and Micromegas by the COMPASS experiment at CERN – the first large-scale use of MPGDs in particle physics – they have spread to colliders. Their use in mega-projects at accelerators is very important to engage people with science and to receive public recognition. During the past five years, there have been major developments of Micromegas and GEMs for various upgrades for ATLAS, CMS and ALICE at the LHC, as well as THGEMs for the upgrade of the COMPASS RICH. Although normally used as flat detectors, MPGDs can be bent to form cylindrically curved, ultralight tracking systems as used in inner-tracker and vertex applications. Examples are cylindrical GEMs for the KLOE2 experiment at the DAFNE e⁺e⁻ collider and resistive Micromegas for CLAS12 at Jefferson Lab. MPGD technology can also fulfil the most stringent constraints imposed by future facilities, from the Facility for Antiproton and Ion Research to the International Linear Collider and Future Circular Collider.

MPGDs have also found numerous applications in other fields of fundamental research. They are being used or considered, for example, for X-ray and neutron imaging, neutrino–nucleus scattering experiments, dark-matter and astrophysics experiments, plasma diagnostics, material sciences, radioactive-waste monitoring and security applications, medical physics and hadron therapy.

To help in further disseminating MPGD applications beyond fundamental physics, academia–industry matching events were introduced when the continuation of RD51 was discussed in 2013. Since then, three events have been organized by RD51 in collaboration with the HEPTech network (CERN Courier April 2015 p17), covering MPGD applications in neutron and photon detection. The events provided a platform where academic institutions, potential users and industry could meet to foster collaboration with people interested in MPGD technology. In the case of neutron detection, there is tangible mutual interest between the high-energy physics and neutron-scattering communities to advance the technology of MPGDs; GEM-based solutions for thermal-neutron detection at spallation sources, novel high-resolution neutron devices for macromolecular crystallography, and fast neutron MPGD detectors in fusion research represent a new frontier for future developments.

WG4 Modelling of Physics Processes and Software Tools. Fast and accurate simulation has become increasingly important as the complexity of instrumentation has increased. RD51’s activity on software tools and the modelling of physics processes that make MPGDs function provides an entry point for institutes that have a strong theoretical background, but do not yet have the facilities to do experimental work. One example is the development of a nearly exact boundary-element solver, which is in most aspects superior to the finite-element method for gas-detector simulations. Another example is the dedicated measurement campaign and data analysis programme that was undertaken to understand avalanche statistics and determine the Penning transfer-rates in numerous gas mixtures.

The main difference between traditional wire-based devices and MPGDs is that the electrode size of order 10 μm in MPGDs is comparable to the collision mean free path. Microscopic tracking algorithms (Garfield++) developed within WG4 have shed light on the effects of surface and space charge in GEMs, as well as on the transparency of meshes in Micromegas. The microscopic tracking technique has also led to better understanding of the avalanche-size statistics, clarifying in particular why light noble gases perform better than heavier noble gases. Significant effort has also been devoted to modelling the performance of MPGDs for particular applications – for example, studies of electron losses in Micromegas with different mesh specifications, and of GEM electron transparency, charging-up and ion-backflow processes, for the ATLAS and ALICE upgrades.
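
One way to picture the avalanche-size statistics mentioned above is the Polya distribution – a Gamma distribution with shape parameter 1 + θ – whose relative width is 1/√(1 + θ): the larger θ, the narrower the single-electron gain fluctuations and the better the efficiency and energy resolution. The θ values in the sketch below are illustrative only, chosen to contrast a broad and a narrow case rather than to represent specific gases.

```python
import numpy as np

# Polya (Gamma) model of single-electron avalanche sizes; theta values are
# illustrative assumptions, not measured numbers for particular gas mixtures.

rng = np.random.default_rng(42)
mean_gain = 1.0e4

for theta in (0.2, 1.5):
    shape = 1.0 + theta
    gains = rng.gamma(shape, mean_gain / shape, size=200_000)
    print(f"theta = {theta}: relative gain spread {gains.std() / gains.mean():.2f} "
          f"(analytic 1/sqrt(1+theta) = {1.0 / np.sqrt(shape):.2f})")
```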

WG5 MPGD-Related Electronics. Initiated in WG5 in 2009 as a basic multichannel read-out system for MPGDs, the scalable read-out system (SRS) electronics has evolved into a popular RD51 standard for MPGDs. Many groups contribute to SRS hardware, firmware, software and applications, and the system has already extended beyond RD51. SRS is generally considered to be an “easy-to-use” portable system from detector to data analysis, with read-out software that can be installed on a laptop for small laboratory set-ups. Its scalability principle allows systems of 100,000 channels and more to be built through the simple addition of more electronic SRS slices, and operated at very high bandwidth using the online software of the LHC experiments. The front-end adapter concept of SRS represents another degree of freedom, because basically any sensor technology typically implemented in multi-channel ASICs may be used. So far, five different ASICs have been implemented on SRS hybrids as plug-ins for MPGDs: APV25, VFAT, Beetle, VMM2 and Timepix.

The number of SRS systems deployed is now nearing 100, with more than 300,000 APV channels, corresponding to a total volume of SRS sales of around CHF1 million. SRS has been ported for the read-out of photon detectors and tracking detectors, and is being used in several of the upgrades for ALICE, ATLAS, CMS and TOTEM at the LHC. Meanwhile, CERN’s Technology Transfer group has granted SRS reproduction licences to several companies. Since 2013, SRS has been re-designed according to the ATCA industry standard, which allows for much higher channel density and output bandwidth.

WG6 Production and Industrialization. A key point that must be solved in WG6 to advance cost-effective MPGDs is the manufacturing of large-size detectors and their production by industrial processes. The CERN PCB workshop is a unique MPGD production facility, where generic R&D, detector-component production and quality control take place. Today, GEM and Micromegas detectors can reach areas of 1 m2 in a single unit and nearly 2 m2 by patching some elements inside the detectors. Thanks to the completion of the upgrade to its infrastructure in 2012, CERN is still leading in the MPGD domain in terms of maximum detector size; however, more than 10 companies are already producing detector parts of reasonable size. WG6 serves as a reference point for companies interested in MPGD manufacturing and helps them to reach the required level of competences. Contacts with some have strengthened to the extent that they have signed licence agreements and engaged in a technology-transfer programme co-ordinated within WG6. As an example, the ATLAS New Small Wheel (NSW) upgrade will be the first detector mass produced in industry using a large high-granularity MPGD, with a detecting area around 1300 m2 divided into 2 m × 0.5 m detectors.

WG7 Common Test Facilities. The development of robust and efficient MPGDs entails understanding of their performance and implies a significant investment for laboratory measurements and detector test-beam activities to study prototypes and qualify final designs. Maintenance of the RD51 lab at CERN and test-beam facilities plays a key role among the objectives of WG7. A semi-permanent common test-beam infrastructure has been installed at the H4 test-beam area at CERN’s Super Proton Synchrotron for the needs of the RD51 community. It includes three high-precision beam telescopes made of Micromegas and GEM detectors, data acquisition, services, and gas-distribution systems. One advantage of the H4 area is the “Goliath” magnet (around 1.5 T over a large area), allowing tests of MPGDs in a magnetic field. RD51 users can also use the instrumentation, services and infrastructures of the Gas Detector Development (GDD) laboratory at CERN, and clean rooms are accessible for assembly, modification and inspection of detectors. More than 30 groups use the general RD51 infrastructure every year as a part of the WG7 activities; three annual test-beam campaigns attract on average three to seven RD51 groups at a time, working in parallel.

The RD51 collaboration also advances the MPGD domain with scientific, technological and educational initiatives. Thanks to RD51’s interdisciplinary and inter-institutional co-operation, the University Antonio Nariño in Bogota has built a detector laboratory where doctoral students and researchers are trained in the science and technology of MPGDs. With this new infrastructure and international support, the university is leveraging co-operation with other Latin American institutes to build a critical mass around MPGDs in this part of the world.

Given the ever-growing interest in MPGDs, RD51 re-established an international conference series on the detectors. The first meeting in the new series took place in Crete in 2009, followed by Kobe in 2011 and Zaragoza in 2013 (CERN Courier November 2013 p33). This year, the collaboration is looking forward to holding the fourth MPGD conference in Trieste, on 12–15 October.

The vitality of the MPGD community resides in the relatively large number of young scientists, so educational events constitute an important activity. A series of specialized schools, comprising lectures and hands-on training for students, engineers and physicists from RD51 institutes, has been organized at CERN covering the assembly of MPGDs (2009), software and simulation tools (2011), and electronics (2014). This is particularly important for young people who are seeking meaningful and rewarding work in research and industry. Last year, RD51 co-organized the MPGD lecture series and the IWAD conference in Kolkata, the Danube School on Instrumentation in Novi Sad, and the special “Charpak Event” in Lviv, organized in the context of CERN’s 60th anniversary programme “60 Years of Science for Peace” (CERN Courier November 2014 p38). The latter was organized at a particularly fragile time for Ukraine, to enhance the role of science diplomacy to tackle global challenges via the development of novel technologies.

In conclusion

During the past 10 years, the deployment of MPGDs in operational experiments has increased enormously, and RD51 now serves a broad user community, driving the MPGD domain and any potential commercial applications that may arise. Because of a growing interest in the benefits of MPGDs in many fields of research, technologies are being optimized for a broad range of applications, demonstrating the capabilities of this class of detector. Today, RD51 is continuing to grow, and now has more than 90 institutes and 450 participants from more than 30 countries in Europe, America, Asia and Africa. Last year, six new institutes from Spain, Croatia, Brazil, Korea, Japan and India joined the collaboration, further enhancing the geographical diversity and expertise of the MPGD community. Since its foundation, RD51 has provided a fundamental boost from isolated developers to a world-wide MPGD network, as illustrated by collaboration-spotting software (figure 2). Many opportunities are still to be exploited, and RD51 will remain committed to the quest to help shape the future of MPGD technologies and pave the way for novel applications.

• For more information about RD51, visit http://rd51-public.web.cern.ch/RD51-Public/.

AIDA-2020 offers support to use facilities for detector development in Europe https://cerncourier.com/a/aida-2020-offers-support-to-use-facilities-for-detector-development-in-europe/ Wed, 26 Aug 2015 09:00:00 +0000

The Advanced European Infrastructures for Detectors and Accelerators (AIDA-2020) – the largest European-funded project for joint detector development – is making financial support available for small development teams to carry out experiments and tests at one of 10 participating European facilities. The project, which started on 1 May, will run for four years. Its main goal is to bring the community together and push detector technologies beyond current limits by sharing high-quality infrastructures provided by 57 partners from 34 countries, from Europe to Asia.

Building on the experience gained with the original AIDA project (CERN Courier April 2011 p6), the transnational access (TA) activities in AIDA-2020 are to enable financial support for teams to travel from one facility to another, to share existing infrastructures for efficient and reliable detector development. The support is organized around three different themes, providing access to a range of infrastructures: the Proton Synchrotron and Super Proton Synchrotron test beams, the IRRAD proton facility and the Gamma Irradiation Facility (GIF++) at CERN; the DESY II test beam; the TRIGA reactor at the Jožef Stefan Institute; the Karlsruhe Compact Cyclotron (KAZ); the Centre de Recherches du Cyclotron at the Université catholique de Louvain (UCLouvain); the MC40 Cyclotron at the University of Birmingham; the Rudjer Boskovic Institute Accelerator Facility (RBI-AF); and the electromagnetic compatibility facility (EMClab) at the Instituto Tecnológico de Aragón (ITAINNOVA).

Access to high-energy particle beams (TA1) at CERN and DESY enables the use of test beams free-of-charge. Here the main goal is to attract more researchers to participate in beam tests, in particular supporting PhD students and postdoctoral researchers to carry out beam tests of detectors.

With the access to irradiation sources (TA2), the goal is to cover the range of particle sources needed for detector qualification for the High Luminosity LHC (HL-LHC) project. These include proton, neutron and mixed-field sources, as well as gamma irradiation. Through IRRAD, TRIGA, KAZ and MC40, it provides both the extreme fluences of up to 10¹⁷ neq/cm² required for the forward region in HL-LHC experiments, and the lower fluences of 10¹⁵ neq/cm² on 10 cm² objects for the outer layers of trackers. GIF++ covers irradiation of large-scale objects such as muon chambers, while the Heavy Ion Irradiation Facility at UCLouvain is available for single-event-effects tests of electronics.

The third theme provides access to new detector-testing facilities (TA3). Semiconductor detectors will be one of the main challenges at the HL-LHC. Studying their behaviour with micro-ion beams at RBI will enhance the understanding of these detectors. Electromagnetic compatibility is a key issue when detectors have to be integrated in an experiment, and prior tests in a dedicated facility such as the EMClab at ITAINNOVA will make the commissioning of detectors more efficient.

• For more details on each facility and eligibility criteria, visit aida2020.web.cern.ch/content/transnational-access.

Stable beams at 13 TeV https://cerncourier.com/a/stable-beams-at-13-tev-2/ Wed, 22 Jul 2015 08:00:00 +0000 The LHC is back in business.

At 10.40 a.m. on 3 June, the LHC operators declared “stable beams” for the first time at a beam energy of 6.5 TeV. It was the signal for the LHC experiments to start taking physics data for Run 2, this time at a collision energy of 13 TeV – nearly double the 7 TeV with which Run 1 began in March 2010. After a shutdown of almost two years and several months re-commissioning without and with beam, the world’s largest particle accelerator was back in business. Under the gaze of the world via a live webcast and blog, the LHC’s two counter-circulating beams, each with three bunches of nominal intensity (about 10¹¹ protons per bunch), were taken through the full cycle from injection to collisions. This was followed by the declaration of stable beams and the start of Run 2 data taking.

The occasion marked the nominal end of an intense eight weeks of beam commissioning (CERN Courier May 2015 p5 and June 2015 p5) and came just two weeks after the first test collisions at the new record-breaking energy. On 20 May at around 10.30 p.m., protons collided in the LHC at 13 TeV for the first time. These test collisions were to set up various systems, in particular the collimators, and were established with beams that were “de-squeezed” to make them larger at the interaction points than during standard operation. This set-up was in preparation for a special run for the LHCf experiment (“LHCf makes the most of a special run”), and for luminosity calibration measurements by the experiments where the beams are scanned across each other – the so-called “van der Meer scans”.
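
The principle behind those van der Meer luminosity-calibration scans can be summarised in a few lines: the collision rate measured as one beam is stepped across the other yields effective overlap widths Σx and Σy, which, together with the bunch intensities and revolution frequency, give the absolute luminosity. The beam sizes and bunch numbers below are assumed round values for an early, de-squeezed fill, not measured parameters.

```python
import math

# van der Meer sketch: L = f * n_b * N1 * N2 / (2 pi Sigma_x Sigma_y),
# with Sigma the effective width extracted from the separation scan.
# All input numbers are illustrative assumptions.

f_rev   = 11245.0           # LHC revolution frequency (Hz)
n_bunch = 3                 # colliding bunch pairs in this early fill
N1 = N2 = 1.0e11            # protons per bunch (nominal intensity)
sigma_x = sigma_y = 100e-4  # cm, assumed single-beam width at the IP

# For Gaussian beams the scan width is the convolution of the two beams:
Sigma_x = math.sqrt(2) * sigma_x
Sigma_y = math.sqrt(2) * sigma_y

L = f_rev * n_bunch * N1 * N2 / (2 * math.pi * Sigma_x * Sigma_y)
print(f"L ~ {L:.1e} cm^-2 s^-1")
```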

Progress was also made on the beam-intensity front, with up to 50 nominal bunches per beam brought into stable beams by mid-June. There were some concerns that an unidentified obstacle in the beam pipe of a dipole in sector 8-1 could be affected by the higher beam currents. This proved not to be the case – at least so far. No unusual beam losses were observed at the location of the obstacle, and the steps towards the first sustained physics run continued.

The final stages of preparation for collisions involved setting up the tertiary collimators (CERN Courier September 2013 p37). These are situated on the incoming beam about 120–140 m from the interaction points, where the beams are still in separate beam pipes. The local orbit changes in this region both during the “squeeze” to decrease the beam size at the interaction points and after the removal of the “separation bumps” (produced by corrector magnets to keep the beams separated at the interaction points during the ramp and squeeze). This means that the tertiary collimators must be set up with respect to the beam, both at the end of the squeeze and with colliding beams. In contrast, the orbit and optics at the main collimator groupings in the beam-cleaning sections at points 7 and 3 are kept constant during the squeeze and during collisions, so their set-up remains valid throughout all of the high-energy phases.

By the morning of 3 June, all was ready for the planned attempt for the first “stable beams” of Run 2, with three bunches of protons at nominal intensity per beam. At 8.25 a.m., the injection of beams of protons from the Super Proton Synchrotron to the LHC was complete, and the ramp to increase the energy of each beam to 6.5 TeV began. However, the beams were soon dumped in the ramp by the software interlock system. The interlock was related to a technical issue with the interlocked beam-position monitor system, but this was rapidly resolved. About an hour later, at 9.46 a.m., three nominal bunches were once more circulating in each beam and the ramp to 6.5 TeV had begun again.

At 10.06 a.m., the beams had reached their top energy of 6.5 TeV and the “flat top” at the end of the ramp. The next step was the “squeeze”, using quadrupole magnets on both sides of each experiment to decrease the size of the beams at the interaction point. With this successfully completed by 10.29 a.m., it was time to adjust the beam orbits to ensure an optimal interaction at the collision points. Then at 10.34 a.m., monitors showed that the two beams were colliding at a total energy of 13 TeV inside the ATLAS and CMS detectors; collisions in LHCb and ALICE followed a few minutes later.

At 10.42 a.m., the moment everyone had been waiting for arrived – the declaration of stable beams – accompanied by applause and smiles all round in the CERN Control Centre. “Congratulations to everybody, here and outside,” CERN’s director-general, Rolf Heuer, said as he spoke with evident emotion following the announcement. “We should remember this was two years of teamwork. A fantastic achievement. I am touched. I hope you are also touched. Thanks to everybody. And now time for new physics. Great work!”

The eight weeks of beam commissioning had seen a sustained effort by many teams working nights, weekends and holidays to push the programme through. Their work involved optics measurements and corrections, injection and beam-dump set-up, collimation set-up, wrestling with various types of beam instrumentation, optimization of the magnetic model, magnet aperture measurements, etc. The operations team had also tackled the intricacies of manipulating the beams through the various steps, from injection through ramp and squeeze to collision. All of this was backed up by the full validation of the various components of the machine-protection system by the groups concerned. The execution of the programme was also made possible by good machine availability and the support of other teams working on the injector complex, cryogenics, survey, technical infrastructure, access, and radiation protection.

Over the two-year shutdown, the four large experiments ALICE, ATLAS, CMS and LHCb also went through an important programme of maintenance and improvements in preparation for the new energy frontier.

Among the consolidation and improvements to 19 subdetectors, the ALICE collaboration installed a new dijet calorimeter to extend the range covered by the electromagnetic calorimeter, allowing measurement of the energy of the photons and electrons over a larger angle (CERN Courier May 2015 p35). The transition-radiation detector that detects particle tracks and identifies electrons has also been completed with the addition of five more modules.

A major step during the long shutdown for the ATLAS collaboration was the insertion of a fourth and innermost layer in the pixel detector, to provide the experiment with better precision in vertex identification (CERN Courier June 2015 p21). The collaboration also used the shutdown to improve the general ATLAS infrastructure, including electrical power, cryogenic and cooling systems. The gas system of the transition-radiation tracker, which contributes to the identification of electrons as well as to track reconstruction, was modified significantly to minimize losses. In addition, new chambers were added to the muon spectrometer, the calorimeter read-out was consolidated, the forward detectors were upgraded to provide a better measurement of the LHC luminosity, and a new aluminium beam pipe was installed to reduce the background.

To deal with the increased collision rate that will occur in Run 2 – which presents a challenge for all of the experiments – ATLAS improved the whole read-out system to be able to run at 100 kHz and re-engineered all of the data acquisition software and monitoring applications. The trigger system was redesigned, going from three levels to two, while implementing smarter and faster selection-algorithms. It was also necessary to reduce the time needed to reconstruct ATLAS events, despite the additional activity in the detector. In addition, an ambitious upgrade of simulation, reconstruction and analysis software was completed, and a new generation of data-management tools on the Grid was implemented.

The biggest priority for CMS was to mitigate the effects of radiation on the performance of the tracker, by equipping it to operate at low temperatures (down to –20 °C). This required changes to the cooling plant and extensive work on the environment control of the detector and cooling distribution to prevent condensation or icing (CERN Courier May 2015 p28). The central beam pipe was replaced by a narrower one, in preparation for the installation in 2016–2017 of a new pixel tracker that will allow better measurements of the momenta and points of origin of charged particles. Also during the shutdown, CMS added a fourth measuring station to each muon endcap, to maintain discrimination between low-momentum muons and background as the LHC beam intensity increases. Complementary to this was the installation at each end of the detector of a 125 tonne composite shielding wall to reduce neutron backgrounds. A luminosity-measuring device, the pixel luminosity telescope, was installed on either side of the collision point around the beam pipe.

Other major activities for CMS included replacing photodetectors in the hadron calorimeter with better-performing designs, moving the muon read-out to more accessible locations for maintenance, installation of the first stage of a new hardware triggering system, and consolidation of the solenoid magnet’s cryogenic system and of the power distribution. The software and computing systems underwent a significant overhaul during the shutdown to reduce the time needed to produce analysis data sets.

To make the most of the 13 TeV collisions, the LHCb collaboration installed the new HeRSCheL detector – High Rapidity Shower Counters for LHCb. This consists of a system of scintillators installed along the beamline up to 114 m from the interaction point, to define forward rapidity gaps. In addition, one section of the beryllium beam pipe was replaced and the new beam pipe support-structure is now much lighter.

The CERN Data Centre has also been preparing for the torrent of data expected from collisions at 13 TeV. The Information Technology department purchased and installed almost 60,000 new cores and more than 100 PB of additional disk storage to cope with the increased amount of data that is expected from the experiments during Run 2. Significant upgrades have also been made to the networking infrastructure, including the installation of new uninterruptible power supplies.

First stable beams was an important step for LHC Run 2, but there is still a long way to go before this year’s target of around 2500 bunches per beam is reached and the LHC starts delivering some serious integrated luminosity to the experiments. The LHC and the experiments will now run around the clock for the next three years, opening up a new frontier in high-energy particle physics.

• Compiled from articles in CERN’s Bulletin and other material on CERN’s website. To keep up to date with progress with the LHC and the experiments, follow the news at bulletin.cern.ch or visit www.cern.ch.

LHC and Planck: where two ends meet https://cerncourier.com/a/lhc-and-planck-where-two-ends-meet/ Wed, 22 Jul 2015 08:00:00 +0000 Links between research at opposite ends of the distance scale.

Over the past decade and more, cosmology on one side and particle physics on the other have approached what looks like a critical turning point. The theoretical models that for many years have been the backbone of research carried out in both fields – the Standard Model for particle physics and the Lambda cold dark matter (ΛCDM) model for cosmology – are proving insufficient to describe more recent observations, including those of dark matter and dark energy. Moreover, the most important “experiment” that ever happened, the Big Bang, remains unexplained. Physicists working at both extremes of the scale – the infinitesimally small and the infinitely large – face the same problem: they know that there is much to search for, but their arms seem too short to reach still further distances. So, while researchers in the two fields maintain their specific interests and continue to build on their respective areas of expertise, they are also looking increasingly at each other’s findings to reconstitute the common mosaic.

Studies on the nature of dark matter are the most natural common ground between cosmology and particle physics. Run 2 of the LHC, which has just begun, is expected to shed some light on this area. Indeed, while the main outcome of Run 1 was undoubtedly the widely anticipated discovery of a Higgs boson, Run 2 is opening the door to uncharted territory. In practical and experimental terms, exploring the properties and the behaviour of nature at high energy consists in understanding possible signals that include “missing energy”. In the Standard Model, this energy discrepancy is associated with neutrinos, but in physics beyond the Standard Model, the missing energy could also be the signature of many undiscovered particles, including the weakly interacting massive particles (WIMPs) that are among the leading candidates for dark matter. If WIMPs exist, the LHC’s collisions at 13 TeV may reveal them, and this will be another huge breakthrough. Because supersymmetry has not yet been ruled out, the high-energy collisions might also eventually unveil the supersymmetric partners of the known particles, at least the lighter ones. Missing energy could also account for the escape of a graviton into extra dimensions, or a variety of other possibilities. Thanks to the LHC’s Run 1 and other recent studies, the Standard Model is so well known that future observation of an unknown source of missing energy could be confidently linked to new physics.

Besides the search for dark matter, another area where cosmology and particle physics meet is in neutrino physics. The most recent result that collider experiments have published for the number of standard (light) neutrino types is Nν = 2.984 ± 0.008 (ALEPH et al. 2006). While the search for a fourth right-handed neutrino is continuing with ground-based experiments, satellite experiments have shown that they can also have their say. Indeed, recent results from ESA’s Planck mission yield Neff = 3.04 ± 0.18 for the effective number of relativistic degrees of freedom, and the sum of neutrino masses is constrained to Σmν < 0.17 eV. These values, derived from Planck’s measurements of CMB temperature and polarization anisotropies in combination with data on baryonic acoustic oscillations, are consistent with standard cosmological and particle-physics predictions in the neutrino sector (Planck Collaboration 2015a). Although these values do not completely rule out a sterile neutrino, especially if thermalized at a different background temperature, its existence is disfavoured by the Planck data (figure 1).


Working out absolute neutrino masses is no easy task. Ground-based experiments have observed the direct oscillation of neutrinos, which proves that these elusive particles have a nonzero mass. However, no measurement of absolute masses has been performed yet, and the strongest upper limit on their sum (about one order of magnitude more stringent than direct-detection measurements) comes from cosmology. Because neutrinos are the most abundant particles with mass in the universe, their absolute mass influences the formation of cosmic structure as strongly as it shapes many of the processes observed at small scales. The picture in the present Standard Model might suggest (perhaps naively) that the mass distribution among the neutrinos could be similar to the mass distribution among the other particles and their families, but only experiments such as KATRIN – the Karlsruhe Tritium Neutrino experiment – are expected to shed some light on this topic.
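The leverage of cosmology on the absolute mass scale comes from the familiar relation between the summed neutrino mass and the neutrinos’ contribution to today’s energy density. As a rough guide (the denominator is conventionally quoted as about 93–94 eV, so treat the digits as approximate):

```latex
\Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.14\ \mathrm{eV}}
\qquad\Rightarrow\qquad
\sum m_\nu < 0.17\ \mathrm{eV}
\;\;\text{implies}\;\;
\Omega_\nu h^2 \lesssim 1.8\times 10^{-3},
```

i.e. massive neutrinos make up only a small fraction of the matter budget, yet free-stream enough to leave a measurable imprint on structure formation.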

In recent years, cosmologists and particle physicists have shown a common interest in testing Lorentz and CPT invariances. The topic seems to be particularly relevant for theorists working on string theories, which sometimes involve mechanisms that lead to a spontaneous breaking of these symmetries. To find possible clues, satellite experiments are probing the cosmic microwave background (CMB) to investigate the universe’s birefringence, which would be a clear signature of Lorentz-invariance violation and, therefore, of CPT violation. So far, the CMB experiments WMAP, QUaD and BICEP1 have found a value of α – the rotation angle of the photon-polarization plane – consistent with zero. Results from Planck on the full set of observations are expected later this year.

Since its discovery in 2012, the Higgs boson found at the LHC has been in the spotlight for physicists studying both extremes of the scale. Indeed, in addition to its confirmed role in the mass mechanism, recent papers have discussed its possible role in the inflation of the universe. Could a single particle be the Holy Grail for cosmologists and particle physicists alike? It is a fascinating question, and many studies have been published about the particle’s possible role in shaping the early history of the universe, but the theoretical situation is far from clear. On one side, the Higgs boson and the inflaton share some basic features, but on the other side, the Standard Model interactions do not seem sufficient to generate inflation unless there is an anomalously strong coupling between the Higgs boson and gravity. Such strong coupling is a highly debated point among theoreticians. Also in this case, the CMB data could help to rule out or disentangle models. Recent full mission data from Planck clearly disfavour natural inflation compared with models that predict a smaller tensor-to-scalar ratio, such as the Higgs inflationary model (Planck Collaboration 2015b). However, the question remains open, and subject to new information coming from the LHC’s future runs and from new cosmological missions.


In the meantime, astroparticle physics is positioning itself as the area where both cosmology and particle physics could find answers to the open questions. An event at CERN in April provided a showcase for experiments on cosmic rays and dark matter, in particular the latest results from the Alpha Magnetic Spectrometer (AMS) collaboration on the antiproton-to-proton ratio in cosmic rays and on the proton and helium fluxes. Following earlier measurements by PAMELA – the Payload for Antimatter Matter Exploration and Light nuclei Astrophysics – which took data in 2006–2011, AMS now has results based on more than 6 × 10¹⁰ cosmic-ray events (electrons, positrons, protons and antiprotons, as well as nuclei of helium, lithium, boron, carbon, oxygen…) collected during the first four years of AMS-02 on board the International Space Station. With events at energies up to many tera-electron-volts, and with unprecedented accuracy, the AMS data provide systematic information on the deepest nature of cosmic rays. The antiproton-to-proton ratio measured by AMS in the energy range 0–500 GeV shows a clear discrepancy with existing models (figure 2). Anomalies are also visible in the behaviour of the fluxes of electrons, positrons, protons, helium and other nuclei. However, although a large part of the scientific community tends to interpret these observations as a new signature of dark matter, the origin of such unexpected behaviour cannot be easily identified, and discussions are still ongoing within the community.

It may seem that the universe is playing hide-and-seek with cosmologists and particle physicists alike as they probe both ends of the distance scale. However, the two research communities have a new smart move up their sleeves to unveil its secrets – collaboration. Bringing together the two ends of the scales probed by the LHC and by Planck will soon bear fruit. Watch this space!

The post LHC and Planck: where two ends meet appeared first on CERN Courier.

]]>
https://cerncourier.com/a/lhc-and-planck-where-two-ends-meet/feed/ 0 Feature Links between research at opposite ends of the distance scale. https://cerncourier.com/wp-content/uploads/2015/07/CCpla1_06_15.jpg
Frontier detectors for the future https://cerncourier.com/a/frontier-detectors-for-the-future/ https://cerncourier.com/a/frontier-detectors-for-the-future/#respond Wed, 22 Jul 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/frontier-detectors-for-the-future/ The last week of May saw a gathering of 390 physicists from 27 countries and four continents on the island of Elba.

The post Frontier detectors for the future appeared first on CERN Courier.

]]>

The last week of May saw a gathering of 390 physicists from 27 countries and four continents on the island of Elba. The 13th edition of the Pisa Meeting on Advanced Detectors for Frontier Physics took place in the secluded Biodola area. The conference, which takes place every three years, follows a well-established format aimed at an interdisciplinary exchange of ideas: all sessions are plenary, with a round table on a topic of interest (CERN Courier July/August 2006 p31). The programme for this year was built on a record number of contributions (more than 400), out of which 327 were selected for either oral (66) or poster presentations. Eight companies were present throughout the meeting, with stands to display their products and to discuss ongoing and future R&D projects.

The opening session saw an introductory talk by Toni Pich of Valencia that described the situation in frontier physics today. The discovery of a particle associated with the Brout–Englert–Higgs mechanism has opened a whole new field of investigation to explore, in addition to the “known unknowns”. Among these, revealing the nature of dark matter and of neutrino masses is the main priority. In the following talk, CERN’s Michelangelo Mangano discussed the search for supersymmetry, as well as different possibilities for signals of new physics that will be explored with high priority from the start of Run 2 at the LHC.

A key event was the round table organized on the second day of the meeting, with 13 people representing nine laboratories (CERN, the Institute of High Energy Physics (IHEP) in Beijing, Fermilab, PSI, TRIUMF, the European Spallation Source, KEK and the Japan Proton Accelerator Research Complex) and four funding agencies (the US Department of Energy, the Institut national de physique nucléaire et de physique des particules (IN2P3), the Istituto Nazionale di Fisica Nucleare (INFN) and the UK’s Science and Technology Facilities Council). The topic for discussion was “Synergies and complementarity among laboratories”, in view of the challenges of the coming decades and of the growing role of CERN as the place where the energy frontier will be explored. The presentation about the future of high-energy physics in China by Gang Chen of IHEP was particularly enlightening, giving a perspective and an impressive plan extending to the middle of this century. Representatives of the funding agencies discussed the nearer future, where – besides the High Luminosity LHC project – a strong neutrino programme is foreseen. The lively exchange among the scientists at the table and participants on the floor left everyone with a vivid perception that what Sergio Bertolucci, CERN’s director for research and computing, defined as “co-opetition” among different institutions in high-energy physics must move forward and become part of the texture of daily work. Several participants stressed that although CERN is central, regional laboratories have an important role because they relate directly to the host nations. Demonstrating the societal impact of research in high-energy physics to politicians and to the public at large is a key point in obtaining support for the whole field.

Each Pisa meeting has a number of standard sessions on gas and solid-state detectors, particle and photon identification, calorimetry and advanced electronics, astroparticle physics, and the application of high-energy-physics techniques in other fields. The presentations, both oral and with posters, demonstrated that significant improvements in existing detectors and current techniques are still possible. The topics presented covered dedicated R&D as well as novel ideas, some developed in a beneficial crossover with other areas, ranging from materials science to nanotechnology and chemistry. In a dedicated session, speakers from the LHC experiments noted that the detectors are now performing well and are ready to help harvest the physics at 13 TeV that will come from the LHC’s Run 2.

As the field keeps changing, so does the conference. This year, a new session was introduced to offer adequate space to applied superconductivity. The technique is now fundamental, not just to provide stronger magnetic fields for accelerators and spectrometers, but also in specialized detectors. The review talk by Akira Yamamoto of KEK and CERN outlined the new frontier of superconducting magnets, both in terms of achievable field and of stored energy/mass ratio. Emanuela Barzi and Alexander Zlobin presented the R&D programme for high-field superconducting magnets at Fermilab. The laboratory that pioneered the use of superconducting magnets in accelerators now aims to be able to build magnets suitable for the Future Circular Collider design study (CERN Courier April 2014 p16). The use of superconducting materials to detect photons was discussed in two talks, by Martino Calvo of CNRS Grenoble and Roberto Leoni of IFN-CNR, Rome. The use of cryogenic detectors – bolometers, kinetic-inductance detectors, transition-edge sensors, to name but a few – was discussed by Flavio Gatti of INFN Genova, in a review of the large number of posters on the subject presented at the conference.

The meeting saw the awarding of the first Aldo Menzione Prize. Among his many activities, Aldo was one of the founders of the Pisa meeting and recipient of the W K H Panofsky Prize in 2009. He passed away in December 2012 (CERN Courier April 2013 p37), and to honour his memory, the Frontier Detectors for Frontier Physics (FDFP) association that organizes the conference series established an award to be assigned at each meeting to “a distinguished scientist who has contributed to the development of detector techniques”. The recipients of the prize on this first occasion were David Nygren, now of the University of Texas at Arlington, for the invention of the time-projection chamber, and Fabio Sauli, now of the TERA Foundation, for the invention of the gas electron multiplier, or GEM. The prizes were presented by Donata Foà, Aldo’s widow, and Angelo Scribano, the president of the FDFP.

At the end of the conference dinner, several awards were also assigned by an international jury. Elsevier established two Elsevier Young Scientist Awards to honour the late Glenn Knoll, who was an editor of Nuclear Instruments and Methods (NIM). These were presented by Fabio Sauli, on behalf of NIM, to Filippo Resnati of CERN and Joana Wirth of the Technische Universität München, respectively, for his talk on the “Charge transfer properties through graphene for applications in gaseous detectors”, and for her poster on “CERBEROS: a tracking system for secondary pion beams at the HADES spectrometer”. Three FDFP awards to “talented young scientists active in the development of detection techniques and contributing, by talk or poster, to the scientific programme” were conferred by Angelo Scribano to Lars Graber of the University of Göttingen for his talk on “A 3D diamond detector for particle tracking”, Roberto Acciarri of Fermilab for a poster on “Experimental study of breakdown electric fields in liquid argon” and Raffaella Donghia of INFN-LNF for her poster on “Time performances and irradiation tests of CsI crystals read-out by MPPC”.

Concluding the conference, the chair, Marco Grassi of INFN-Pisa, provided a few statistics. He remarked that 36% of the participants were below 35 years old and nearly all of them – 96% – contributed to the conference programme with oral presentations or posters. This demonstrates that the field of detector development is attractive and has a strong basis on which it can grow, as long as, at a national level, institutes can continue to recruit these young scientists. This, as Catherine Clerc from IN2P3 reminded everybody during the round table, is the most pressing challenge in many European countries.

• For further information, visit the conference website https://agenda.infn.it/conferenceDisplay.py?confId=8397, where all of the presentations (oral and posters) are available.

The post Frontier detectors for the future appeared first on CERN Courier.

]]>
https://cerncourier.com/a/frontier-detectors-for-the-future/feed/ 0 Feature The last week of May saw a gathering of 390 physicists from 27 countries and four continents on the island of Elba. https://cerncourier.com/wp-content/uploads/2015/07/CCpis1_06_15.jpg
New possibilities for particle physics with IceCube https://cerncourier.com/a/new-possibilities-for-particle-physics-with-icecube/ https://cerncourier.com/a/new-possibilities-for-particle-physics-with-icecube/#respond Mon, 27 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/new-possibilities-for-particle-physics-with-icecube/ The IceCube Neutrino Observatory has measured neutrino oscillations via atmospheric muon-neutrino disappearance.

The post New possibilities for particle physics with IceCube appeared first on CERN Courier.

]]>
The IceCube Neutrino Observatory has measured neutrino oscillations via atmospheric muon-neutrino disappearance. This opens up new possibilities for particle physics with the experiment at the South Pole that was originally designed to detect neutrinos from distant cosmic sources.

IceCube records more than 100,000 atmospheric neutrinos a year, most of them muon neutrinos, and its sub-detector DeepCore allows the detection of neutrinos with energies from 100 GeV down to 10 GeV. These lower-energy neutrinos are key to IceCube’s oscillation studies. Based on current best-fit oscillation parameters, IceCube should see fewer muon neutrinos at energies around 25 GeV reaching the detector after passing through the Earth. Using data taken between May 2011 and April 2014, the analysis selected muon-neutrino candidates in DeepCore with energies in the region of 6–56 GeV. The detector surrounding DeepCore was used as a veto to suppress the atmospheric muon background. Nearly 5200 neutrino candidates were found, compared with the 6800 or so expected in the non-oscillation scenario. The reconstructed energy and arrival direction for these events were used to obtain values for the neutrino-oscillation parameters, Δm²₃₂ = 2.72 (+0.19/−0.20) × 10⁻³ eV² and sin²θ₂₃ = 0.53 (+0.09/−0.12). These results are compatible with, and comparable in precision to, those of dedicated oscillation experiments.
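A rough two-flavour estimate shows why the deficit appears near 25 GeV for neutrinos that cross the Earth. The sketch below (Python) uses the standard vacuum survival probability with the best-fit parameters quoted above and an Earth-diameter baseline; matter effects and the full three-flavour treatment are neglected, so the numbers are indicative only.

```python
import math

# Two-flavour vacuum survival probability for muon neutrinos:
#   P(numu -> numu) = 1 - sin^2(2*theta23) * sin^2(1.267 * dm2 * L / E)
# with dm2 in eV^2, L in km and E in GeV.
dm2 = 2.72e-3        # eV^2, best-fit value quoted in the text
sin2_theta23 = 0.53  # best-fit sin^2(theta23) quoted in the text
L = 12742.0          # km, Earth diameter (vertically up-going neutrinos)

sin2_2theta = 4.0 * sin2_theta23 * (1.0 - sin2_theta23)

def survival(E_GeV):
    """Muon-neutrino survival probability after baseline L."""
    phase = 1.267 * dm2 * L / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# First oscillation minimum: phase = pi/2.
E_dip = 2.0 * 1.267 * dm2 * L / math.pi
print(f"deepest suppression near {E_dip:.0f} GeV")  # roughly 28 GeV
for E in (10, 25, 50, 100):
    print(f"E = {E:3d} GeV  ->  P(numu->numu) = {survival(E):.2f}")
```

For vertically up-going neutrinos this places the strongest suppression in the 25–30 GeV region, which is exactly where DeepCore’s 6–56 GeV sample has its sensitivity.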

The collaboration is currently planning the Precision IceCube Next Generation Upgrade (PINGU), in which a much higher density of optical modules in the whole central region will reduce the energy threshold to a few giga-electron-volts. By carefully measuring how the oscillations are modified by coherent forward scattering of neutrinos off electrons in the Earth (the Mikheyev–Smirnov–Wolfenstein effect), PINGU should allow determination of the neutrino-mass hierarchy – that is, which neutrino mass state is the heaviest.

The post New possibilities for particle physics with IceCube appeared first on CERN Courier.

]]>
https://cerncourier.com/a/new-possibilities-for-particle-physics-with-icecube/feed/ 0 News The IceCube Neutrino Observatory has measured neutrino oscillations via atmospheric muon-neutrino disappearance.
The experiment now known as DUNE https://cerncourier.com/a/the-experiment-now-known-as-dune/ https://cerncourier.com/a/the-experiment-now-known-as-dune/#respond Mon, 27 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-experiment-now-known-as-dune/ The long-baseline neutrino experiment formerly known as LBNE has a new name: Deep Underground Neutrino Experiment (DUNE). Served by an intense neutrino beam from Fermilab’s Long Baseline Neutrino Facility, DUNE will have near detectors at Fermilab and four 10-kt far detectors at the Sanford Underground Research Facility in South Dakota. In March, the DUNE collaboration […]

The post The experiment now known as DUNE appeared first on CERN Courier.

]]>
The long-baseline neutrino experiment formerly known as LBNE has a new name: Deep Underground Neutrino Experiment (DUNE). Served by an intense neutrino beam from Fermilab’s Long Baseline Neutrino Facility, DUNE will have near detectors at Fermilab and four 10-kt far detectors at the Sanford Underground Research Facility in South Dakota. In March, the DUNE collaboration – now with more than 700 scientists from 148 institutions in 23 countries – elected two new spokespersons: André Rubbia from ETH Zurich, and Mark Thomson from the University of Cambridge. One will serve as spokesperson for two years, the other for three years, to provide continuity in leadership.

The post The experiment now known as DUNE appeared first on CERN Courier.

]]>
https://cerncourier.com/a/the-experiment-now-known-as-dune/feed/ 0 News
ALICE: from LS1 to readiness for Run 2 https://cerncourier.com/a/alice-from-ls1-to-readiness-for-run-2/ https://cerncourier.com/a/alice-from-ls1-to-readiness-for-run-2/#respond Mon, 27 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/alice-from-ls1-to-readiness-for-run-2/   It is nearly two years since the beams in the LHC were switched off and Long Shutdown 1 (LS1) began. Since then, a myriad of scientists and engineers have been repairing and consolidating the accelerator and the experiments for running at the unprecedented energy of 13 TeV (or 6.5 TeV/beam) – almost twice that of 2012. In […]

The post ALICE: from LS1 to readiness for Run 2 appeared first on CERN Courier.

]]>

It is nearly two years since the beams in the LHC were switched off and Long Shutdown 1 (LS1) began. Since then, a myriad of scientists and engineers have been repairing and consolidating the accelerator and the experiments for running at the unprecedented energy of 13 TeV (or 6.5 TeV/beam) – almost twice that of 2012.

In terms of installation work, ALICE is now complete. The remaining five super modules of the transition radiation detector (TRD), which were missing in Run 1, have been produced and installed. At the same time, the low-voltage distribution system for the TRD was re-worked to eliminate intermittent overheating problems that were experienced during the previous operational phase. On the read-out side, the data transmission over the optical links was upgraded to double the throughput to 4 GB/s. The TRD pre-trigger system used in Run 1 – a separate, minimum-bias trigger derived from the ALICE veto (V0) and start-counter (T0) detectors – was replaced by a new, ultrafast (425 ns) level-0 trigger featuring a complete veto and “busy” logic within the ALICE central trigger processor (CTP). This implementation required the relocation of racks hosting the V0 and T0 front-end cards to reduce cable delays to the CTP, together with optimization of the V0 front-end firmware for faster generation of time hits in minimum-bias triggers.

The ALICE electromagnetic calorimeter system was augmented with the installation of eight (six full-size and two one-third-size) super modules of the brand new dijet calorimeter (DCal). This now sits back-to-back with the existing electromagnetic calorimeter (EMCal), and brings the total azimuthal calorimeter coverage to 174° – that is, 107° (EMCal) plus 67° (DCal). One module of the photon spectrometer calorimeter (PHOS) was added to the pre-existing three modules and equipped with one charged-particle veto (CPV) detector module. The CPV is based on multiwire proportional chambers with pad read-out, and is designed to suppress the detection of charged hadrons in the PHOS calorimeter.

The overall PHOS/DCal set-up is located in the bottom part of the ALICE detector, and is now held in place by a completely new support structure. During LS1, the read-out electronics of the three calorimeters was fully upgraded from serial to parallel links, to allow operation at a 48 kHz lead–lead interaction rate with a minimum-bias trigger. The PHOS level-0 and level-1 trigger electronics was also upgraded, the latter being interfaced with the neighbouring DCal modules. This will allow the DCal/PHOS system to be used as a single calorimeter able to produce both shower and jet triggers from its full acceptance.

The gas mixture of the ALICE time-projection chamber (TPC) was changed from Ne(90):CO2(10) to Ar(90):CO2(10), to allow for a more stable response to the high particle fluxes generated during proton–lead and lead–lead running without significant degradation of momentum resolution at the lowest transverse momenta. The read-out electronics for the TPC chambers was fully redesigned, doubling the data lines and introducing more field-programmable gate-array (FPGA) capacity for faster processing and online noise removal. One of the 18 TPC sectors (on one side) is already instrumented with a pre-production series of the new read-out cards, to allow for commissioning before operation with the first proton beams in Run 2. The remaining boards are being produced and will be installed on the TPC during the first LHC Technical Stop (TS1). The increased read-out speed will be exploited fully during the four weeks of lead collisions foreseen for mid November 2015. For lead running, ALICE will operate mainly with minimum-bias triggers at a collision rate of 8 kHz or higher, which will produce a track load in the TPC equivalent to operation at 700 kHz in proton running.

LS1 has also seen the design and installation of a new subsystem – the ALICE diffractive (AD) detector. This consists of two double layers of scintillation counters placed far from the interaction region on both sides, one in the ALICE cavern (at z = 16 m) and one in the LHC tunnel (at z = –19 m). The AD photomultiplier tubes are all accessible from the ALICE cavern, and the collected light is transported via clear optical fibres.

The ALICE muon chambers (MCH) underwent a major hardware consolidation of the low-voltage system in which the bus bars were fully re-soldered to minimize the effects of spurious chamber occupancies. The muon trigger (MTR) gas-distribution system was switched to closed-loop operation, and the gas inlet and outlet “beaks” were replaced with flexible material to avoid cracking from mechanical stress. One of the MTR resistive-plate chambers was instrumented with a pre-production front-end card being developed for the upgrade programme in LS2.

The increased read-out rates of the TPC and TRD have been matched by a complete upgrade (replacement) of both the data-acquisition (DAQ) and high-level trigger (HLT) computer clusters. In addition, the DAQ and HLT read-out/receiver cards have been redesigned, and now feature higher-density parallel optical connectivity on a PCIe-bus interface and a common FPGA design. The ALICE CTP board was also fully redesigned to double the number of trigger classes (logic combinations of primary inputs from trigger detectors) from 50 to 100, and to handle the new, faster level-0 trigger architecture developed to increase the efficiency of the TRD minimum-bias inspection.

Regarding data-taking operations, a full optimization of the DAQ and HLT sequences was performed with the aim of maximizing the running efficiency. All of the detector-initialization procedures were analysed to identify and eliminate bottlenecks, to speed up the start- and end-of-run phases. In addition, an in-run recovery protocol was implemented on both the DAQ/HLT/CTP and the detector sides to allow, in case of hiccups, on-the-fly front-end resets and reconfiguration without the need to stop the ongoing run. The ALICE HLT software framework was in turn modified to discard any possible incomplete events originating during online detector recovery. At the detector level, the leakage of “busy time” between the central barrel and muon-arm read-out detectors has been minimized by implementing multievent buffers on the shared trigger detectors. In addition, the central barrel and the muon-arm triggers can now be paused independently to allow for the execution of the in-run recovery.

Towards routine running

The ALICE control room was renovated completely during LS1, with the removal of the internal walls to create an ergonomic open space with 29 universal workstations. Desks in the front rows face 11 extra-large-format LED screens displaying the LHC and ALICE controls and status. They are reserved for the shift crew and the run-co-ordination team. Four concentric lateral rows of desks are reserved for the work of detector experts. The new ALICE Run Control Centre also includes an access ramp for personnel with reduced mobility. In addition, there are three large windows – one of which can be transformed into a semi-transparent, back-lit touchscreen – for the best visitor experience with minimal disturbance to the ALICE operators.

Following the detector installations and interventions on almost all of the components of the hardware, electronics, and supporting systems, the ALICE teams began an early integration campaign at the end of 2014, allowing the ALICE detector to start routine cosmic running with most of the central-barrel detectors by the end of December. The first weeks of 2015 have seen intensive work on performing track alignment of the central-barrel detectors using cosmic muons under different magnetic-field settings. Hence, ALICE’s solenoid magnet has also been extensively tested – together with the dipole magnet in the muon arm – after almost two years of inactivity. Various special runs, such as TPC and TRD krypton calibrations, have been performed, producing a spectacular 5 PB of raw data in a single week, and providing a challenging stress test for the online systems.

The ALICE detector is located at point 2 of the LHC, and the end of the TI2 transfer line – which injects beam 1 (the clockwise beam) into the LHC from the Super Proton Synchrotron (SPS) – is 300 m from the interaction region. This set-up implies additional vacuum equipment and protection collimators close (80 m) to the ALICE cavern, which are a potential source of background interactions. The LHC teams have refurbished most of these components during LS1 to improve the background conditions during proton operations in Run 2.

ALICE took data during the injection tests in early March when beam from the SPS was injected into the LHC and dumped half way along the ring (CERN Courier April 2015 p5). The tests also produced so-called beam-splash events on the SPS beam dump and the TI2 collimator, which were used by ALICE to perform the time alignment for the trigger detectors and to calibrate the beam-monitoring system. The splash events were recorded using all of the ALICE detectors that could be operated safely in such conditions, including the muon arm.

The LHC sector tests mark the beginning of Run 2. The ALICE collaboration plans to exploit fully the first weeks of LHC running with proton collisions at a luminosity of about 10³¹ Hz/cm². The aim will be to collect rare triggers and switch to a different trigger strategy (an optimized balance of minimum bias and rare triggers) when the LHC finally moves to operation with a proton bunch separation of 25 ns.

Control of ALICE’s operating luminosity during the 25 ns phase will be challenging, because the experiment has to operate with very intense beam currents but relatively low luminosity in the interaction region. This requires using online systems to monitor the luminous beam region continuously, to control its transverse size and ensure proper feedback to the LHC operators. At the same time, optimized trigger algorithms will be employed to reduce the fraction of pile-up events in the detector.

The higher energy of proton collisions of Run 2 will result in a significant increase in the cross-sections for hard probes, and the long-awaited first lead–lead run after LS1 will see ALICE operating at a luminosity of 10²⁷ Hz/cm². However, the ALICE collaboration is already looking into the future with its upgrade plans for LS2, focusing on physics channels that do not exhibit hardware trigger signatures in a high-multiplicity environment like that in lead–lead collisions. At the current event storage rate of 0.5 kHz, the foreseen boost of luminosity from the present 10²⁷ Hz/cm² to more than 6 × 10²⁷ Hz/cm² will increase the collected statistics by a factor of 100. This will require free-running data acquisition and storage of the full data stream to tape for offline analysis.
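The factor of 100 follows from simple rate arithmetic. The sketch below assumes a hadronic lead–lead cross-section of roughly 8 b – a number not given in the text and used here only for illustration – and compares the interaction rate at the upgraded luminosity with the present 0.5 kHz storage rate.

```python
# Back-of-the-envelope ALICE lead-lead rates (cross-section value is an assumption).
sigma_pbpb = 8e-24    # cm^2: ~8 b hadronic Pb-Pb cross-section (assumed)
lumi_run2  = 1e27     # cm^-2 s^-1: Run 2 Pb-Pb luminosity quoted in the text
lumi_ls2   = 6e27     # cm^-2 s^-1: post-LS2 luminosity quoted in the text
storage_rate = 0.5e3  # Hz: current event storage rate quoted in the text

rate_run2 = lumi_run2 * sigma_pbpb  # ~8 kHz, cf. the minimum-bias rate mentioned above
rate_ls2  = lumi_ls2 * sigma_pbpb   # ~48 kHz, to be read out continuously after LS2

print(f"interaction rate, Run 2 : {rate_run2/1e3:.0f} kHz")
print(f"interaction rate, LS2   : {rate_ls2/1e3:.0f} kHz")
print(f"gain over stored sample : x{rate_ls2/storage_rate:.0f}")  # ~factor 100
```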

In this way, the LS2 upgrades will allow ALICE to exploit the full potential of the LHC for a complete characterization of quark–gluon plasma through measurements of unprecedented precision.

The post ALICE: from LS1 to readiness for Run 2 appeared first on CERN Courier.

]]>
https://cerncourier.com/a/alice-from-ls1-to-readiness-for-run-2/feed/ 0 Feature
NUCLEON takes its place in space https://cerncourier.com/a/nucleon-takes-its-place-in-space/ https://cerncourier.com/a/nucleon-takes-its-place-in-space/#respond Thu, 09 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/nucleon-takes-its-place-in-space/ On 13 January, less than three weeks after being launched into space, the NUCLEON satellite experiment was switched on to collect its first cosmic-ray events.

The post NUCLEON takes its place in space appeared first on CERN Courier.

]]>

On 13 January, less than three weeks after being launched into space, the NUCLEON satellite experiment was switched on to collect its first cosmic-ray events. Orbiting the Earth on board the RESURS-P No.2 satellite, NUCLEON has been designed to investigate directly the energy spectrum of cosmic-ray nuclei and their chemical composition from 100 GeV to 1000 TeV (10¹¹–10¹⁵ eV), as well as the cosmic-ray electron spectrum from 20 GeV to 3 TeV. It is well known that the region of the “knee” – 10¹⁴–10¹⁶ eV – is crucial for understanding the origin of cosmic rays, as well as their acceleration and propagation in the Galaxy.

NUCLEON has been produced by a collaboration between the Skobeltsyn Institute of Nuclear Physics of Moscow State University (SINP MSU) as the main partner, together with the Joint Institute for Nuclear Research (JINR) and other Russian scientific and industrial centres. It consists of silicon and scintillator detectors, a carbon target, a tungsten γ-converter and a small electromagnetic calorimeter.


The charge-detection system, which consists of four thin detector layers of 1.5 × 1.5 cm silicon pads, is located in front of the carbon target. It is designed for precision measurement of the primary particle’s charge.

A new technique, based on the generalized kinematical method developed for emulsions, is used to measure the cosmic-ray energy. Avoiding the use of heavy absorbers, the Kinematic Lightweight Energy Meter (KLEM) technique gives an energy resolution of 70% or better, according to simulations. Placed just behind the target, this energy-measurement system consists of silicon microstrip layers with tungsten layers to convert secondary γ-rays to electron–positron pairs. This significantly increases the number of secondary particles and therefore improves the accuracy of the energy determination for a primary particle.

The small electromagnetic calorimeter (six tungsten/silicon-microstrip layers of 180 × 180 mm, weighing about 60 kg owing to satellite limitations) has a thickness of 12 radiation lengths, and will measure the primary cosmic-ray energy for some of the events. The effective geometric factor is more than 0.2 m²sr for the full detector and close to 0.1 m²sr for the calorimeter. The NUCLEON device must allow separation of the electromagnetic and hadronic cosmic-ray components at a rejection level of better than 1 in 10³ for the events in the calorimeter aperture.


The design, production and tests of the trigger system were JINR’s responsibility. The system consists of six multistrip scintillator layers to select useful events by measuring the charged-particle multiplicity crossing the trigger planes. The two-level trigger systems have a duplicated structure for reliability, and will provide more than 10⁸ events with energy above 10¹¹ eV during the planned five years of data taking.

The NUCLEON prototypes were tested many times at CERN’s Super Proton Synchrotron (SPS) with high-energy electron, hadron and heavy-ion beams. The last test at CERN, which took place in 2013 at the H2 heavy-ion beam, was dedicated to testing NUCLEON’s charge-measurement system. The results showed that it provides a charge resolution better than 0.3 charge units in the region up to atomic number Z = 30 (figure 2). The Z < 5 beam particles were suppressed by the NUCLEON trigger system.

In 2013, NUCLEON was installed on the RESURS-P No. 2 satellite platform for combined tests at the Samara-PROGRESS space-qualification workshop, some 1000 km southeast of Moscow. The complex NUCLEON tests were continued in 2014 at the Baikonur spaceport, in conjunction with the satellite and the Soyuz-2.1b rocket, before the successful launch on 26 December. The satellite is now in a Sun-synchronous orbit with inclination 97.276° and a mean altitude of 475 km. The total weight of the NUCLEON apparatus is 375 kg, with a power consumption of 175 W.

The flight tests of the NUCLEON detector were continued during January and February, and the NUCLEON team hopes to present the preliminary results at the summer conferences this year. The next step after this experiment will be the High-Energy cosmic-Ray Observatory (HERO) to study high-energy primary cosmic-ray radiation from space. The first HERO prototype is to be tested at the SPS in autumn.

The post NUCLEON takes its place in space appeared first on CERN Courier.

]]>
https://cerncourier.com/a/nucleon-takes-its-place-in-space/feed/ 0 News On 13 January, less than three weeks after being launched into space, the NUCLEON satellite experiment was switched on to collect its first cosmic-ray events. https://cerncourier.com/wp-content/uploads/2015/04/CCnew2_03_15.jpg
ATLAS sets limits on anomalous quartic-gauge couplings https://cerncourier.com/a/atlas-sets-limits-on-anomalous-quartic-gauge-couplings/ https://cerncourier.com/a/atlas-sets-limits-on-anomalous-quartic-gauge-couplings/#respond Thu, 09 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-sets-limits-on-anomalous-quartic-gauge-couplings/ The ATLAS and CMS collaborations are now looking into deeper levels of Standard Model predictions by probing additional ways in which the gauge bosons interact with each other.

The post ATLAS sets limits on anomalous quartic-gauge couplings appeared first on CERN Courier.

]]>
Experiments at the LHC have been exploring every corner of predictions made by the Standard Model in search of deviations that could point to a more comprehensive description of nature. The LHC detectors have performed superbly, producing measurements that, to date, are consistent with the model in every area tested, the discovery of the Higgs boson with Standard Model properties being a crowning achievement of LHC Run 1 data-taking.

CCnew9_03_15

The ATLAS and CMS collaborations are now looking into deeper levels of Standard Model predictions by probing additional ways in which the gauge bosons (W⁺, W⁻, Z and photon) interact with each other. These self-interactions are at the heart of the model’s electroweak sector. The gauge bosons are predicted to interact through point-like triple and quartic couplings. The triple-gauge couplings have been tested both at the LHC and at Fermilab’s Tevatron, following on from beautiful studies at the Large Electron–Positron collider that demonstrated the existence of these couplings and measured their properties. A new frontier at the LHC is to explore the quartic coupling of four gauge bosons. This can be done through the two-by-two scattering of the bosons, or more directly through the transition of one of the bosons to a final state with three bosons.

The ATLAS experiment has used data collected in 2012 from 8 TeV proton–proton collisions to make a measurement of triple-gauge boson production. The measurement isolates a final state with a W boson decaying to leptonic final states eν or μν plus the production of two photons with transverse energy ET > 20 GeV, and additional kinematic requirements defined by the acceptance of the ATLAS detector and the need to suppress soft photons. This process is sensitive to possible deviations of the quartic-gauge coupling WWγγ from Standard Model predictions.
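Schematically, the selection amounts to requiring one leptonic W candidate plus two energetic, isolated photons. The sketch below is purely illustrative: apart from the 20 GeV photon requirement quoted above, the object names and thresholds are placeholders, not the ATLAS analysis cuts.

```python
# Illustrative W(l nu) + 2-photon candidate selection (not ATLAS analysis code).
def is_wgg_candidate(event):
    """Return True if the event resembles the W(e/mu + nu) + gamma gamma topology."""
    # Exactly one isolated electron or muon from the W decay (placeholder threshold).
    leptons = [l for l in event["leptons"] if l["pt"] > 25.0 and l["isolated"]]
    # At least two isolated photons with E_T > 20 GeV (threshold quoted in the text).
    photons = [g for g in event["photons"] if g["et"] > 20.0 and g["isolated"]]
    # Sizeable missing transverse momentum from the neutrino (placeholder threshold).
    has_met = event["met"] > 25.0
    return len(leptons) == 1 and len(photons) >= 2 and has_met
```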

The rate of Wγγ production is six orders of magnitude lower than that of inclusive W production. The isolation of this signal is a challenge, owing to both the small production rate and competition from similar processes containing a W boson with jets and single photons. The measurement relies upon the ability of the ATLAS electromagnetic calorimeter to distinguish isolated, directly produced photons from those embedded in the more prolific production of hadronic jets. The figure shows the m(γγ) mass distribution from the 110 events that pass the final pp → W(μν) γγ + X selection cuts. The data are compared with the sum of backgrounds plus the Wγγ signal expected from the Standard Model.

These data are used to put limits on deviations of the quartic gauge coupling WWγγ from Standard Model predictions by introducing models for anomalous (non-Standard Model) contributions to pp → Wγγ + X production. These contributions typically enhance events with large invariant mass of the two photons. The anomalous quartic coupling limits are imposed using a subset of the pp → Wγγ + X events with m(γγ) > 300 GeV and no central high-energy jets. The resulting limits on various parameters that introduce non-Standard Model quartic couplings show that they are all consistent with zero (ATLAS Collaboration 2015). Once again, the Standard Model survives a measurement that probes a new aspect of its electroweak predictions.

The post ATLAS sets limits on anomalous quartic-gauge couplings appeared first on CERN Courier.

]]>
https://cerncourier.com/a/atlas-sets-limits-on-anomalous-quartic-gauge-couplings/feed/ 0 News The ATLAS and CMS collaborations are now looking into deeper levels of Standard Model predictions by probing additional ways in which the gauge bosons interact with each other. https://cerncourier.com/wp-content/uploads/2015/04/CCnew9_03_15.jpg
Proto-collaboration formed to promote Hyper-Kamiokande https://cerncourier.com/a/proto-collaboration-formed-to-promote-hyper-kamiokande/ https://cerncourier.com/a/proto-collaboration-formed-to-promote-hyper-kamiokande/#respond Thu, 09 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/proto-collaboration-formed-to-promote-hyper-kamiokande/ The Inaugural Symposium of the Hyper-Kamiokande Proto-Collaboration, took place in Kashiwa, Japan, on 31 January, attended by more than 100 researchers.

The post Proto-collaboration formed to promote Hyper-Kamiokande appeared first on CERN Courier.

]]>

The Inaugural Symposium of the Hyper-Kamiokande Proto-Collaboration took place in Kashiwa, Japan, on 31 January, attended by more than 100 researchers. The aim was to promote the proto-collaboration and the Hyper-Kamiokande project internationally. In addition, a ceremony to mark the signing of an agreement for the promotion of the project between the Institute for Cosmic Ray Research of the University of Tokyo and KEK took place during the symposium.

The Hyper-Kamiokande project aims both to address the mysteries of the origin and evolution of the universe’s matter and to confront theories of elementary-particle unification. To achieve these goals, the project will combine a high-intensity neutrino beam from the Japan Proton Accelerator Research Complex (J-PARC) with a new detector based on precision experimental techniques developed in Japan – a new megaton-class water Cherenkov detector to succeed the highly successful Super-Kamiokande detector.


The Hyper-Kamiokande detector will be about 25 times larger than Super-Kamiokande, the research facility that first found evidence for neutrino mass in 1998. Super-Kamiokande’s discoveries that, in comparison to other elementary particles, neutrinos have extremely small masses, and that the three known types of neutrino mix almost maximally in flight, support the ideas of theories that go beyond the Standard Model to unify the elementary particles and forces.

In particular, the Hyper-Kamiokande project aspires not only to discover CP violation in neutrinos, but to close in on theories of elementary-particle unification by discovering proton decay. By expanding solar, atmospheric, and cosmic neutrino observations, as well as advancing neutrino-interaction research and neutrino astronomy, Hyper-Kamiokande will also provide new knowledge in particle and nuclear physics, cosmology and astronomy.

As an international project, researchers from around the world are working to start the Hyper-Kamiokande experiment in 2025. The Hyper-Kamiokande proto-collaboration now includes an international steering committee and an international board of representatives with members from 13 countries: Brazil, Canada, France, Italy, Japan, Korea, Poland, Portugal (observer state), Russia, Spain, Switzerland, the UK and the US.

The post Proto-collaboration formed to promote Hyper-Kamiokande appeared first on CERN Courier.

]]>
https://cerncourier.com/a/proto-collaboration-formed-to-promote-hyper-kamiokande/feed/ 0 News The Inaugural Symposium of the Hyper-Kamiokande Proto-Collaboration, took place in Kashiwa, Japan, on 31 January, attended by more than 100 researchers. https://cerncourier.com/wp-content/uploads/2015/04/CCnew13_03_15.jpg
Detection techniques for future neutrino observatories https://cerncourier.com/a/detection-techniques-for-future-neutrino-observatories/ https://cerncourier.com/a/detection-techniques-for-future-neutrino-observatories/#respond Mon, 23 Feb 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/detection-techniques-for-future-neutrino-observatories/ The discovery of high-energy astrophysical neutrinos initially announced by IceCube in 2013 provided an added boost to the planning for new, larger facilities that could study the signal in detail and identify its origins. Three large projects – KM3NeT in the Mediterranean Sea, IceCube-Gen2 at the South Pole and the Gigaton Volume Detector (GVD) in […]

The post Detection techniques for future neutrino observatories appeared first on CERN Courier.

]]>

The discovery of high-energy astrophysical neutrinos initially announced by IceCube in 2013 provided an added boost to the planning for new, larger facilities that could study the signal in detail and identify its origins. Three large projects – KM3NeT in the Mediterranean Sea, IceCube-Gen2 at the South Pole and the Gigaton Volume Detector (GVD) in Lake Baikal – are already working together in the framework of the Global Neutrino Network (CERN Courier December 2014 p11).

In December, the RWTH Aachen University hosted a workshop on these projects and their low-energy sub-detectors, ORCA and PINGU, which aim at determination of the neutrino-mass hierarchy through precision measurements of atmospheric-neutrino oscillations. Some 80 participants from 11 different countries came to discuss visionary strategies for detector optimization and technological aspects common to the high-energy neutrino telescopes.

Photodetection techniques, as well as trigger and readout strategies, formed one particular focus. All of the detectors are based on optical modules consisting of photomultiplier tubes (PMTs) housed in a pressure-resistant glass vessel together with their digitization and read-out electronics. Representatives of the experiments shared their experiences on the development, in situ performance and mass-production of the different designs. While the baseline design for IceCube-Gen2 follows the proven IceCube modules closely, KM3NeT has successfully deployed and operated prototypes of a new design consisting of 31 3″ PMTs housed in a single glass sphere, which offer superior timing and intrinsic directional information. Adaptation of this technology for IceCube is under investigation.

New and innovative designs for optical modules were also reviewed, for example a large-area sensor employing wavelength-shifting and light-guiding techniques to collect photons in the blue and UV range and guide them to a small-diameter low-noise PMT. Presentations from Hamamatsu Photonics and Nautilus Marine Service on the latest developments in photosensors and glass housings, respectively, complemented the other talks nicely.

In addition, discussions centred on auxiliary science projects that can be carried out at the planned infrastructures. These can serve as a test bed for completely new detection technologies, such as acoustic neutrino detection, which is possible in water and ice, or radio neutrino detection, which is limited to ice as the target medium. Furthermore, IceCube-Gen2 at the South Pole offers the unique possibility to install detectors on the surface above the telescope deep in the ice, the latter acting as a detector for high-energy muons from cosmic-ray-induced extensive air showers. Indeed, the interest in cosmic-ray detectors on top of an extended IceCube telescope reaches beyond the communities of the three big projects.

The second focus of the workshop addressed the physics potential of cosmic-ray detection on the multi-kilometre scale, and especially the use of a surface array as an air-shower veto for the detection of astrophysical neutrinos from the southern sky at the South Pole. The rationale for surface veto techniques is the fact that the main background to extraterrestrial neutrinos from the upper hemisphere consists of muons and neutrinos produced in the Earth’s atmosphere. These particles are correlated with extensive air showers, which can be tagged by a surface array. While upward-moving neutrinos have to traverse the entire Earth and are absorbed above some 100 TeV energy, downward-moving neutrinos do not suffer from absorption. Therefore a surface veto is especially powerful for catching larger numbers of cosmic neutrinos at the very highest energies.
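The quoted ~100 TeV absorption scale can be checked with a one-line opacity estimate; the cross-section and column depth below are rough textbook values assumed for illustration, not numbers from the workshop.

```python
# Optical depth of the Earth for ~100 TeV neutrinos: tau = column density x sigma.
N_A = 6.022e23     # nucleons per gram
column = 1.1e10    # g/cm^2 through the centre of the Earth (assumed value)
sigma_nuN = 5e-34  # cm^2, rough neutrino-nucleon cross-section at ~100 TeV (assumed)

tau = column * N_A * sigma_nuN
print(f"optical depth through the Earth: {tau:.1f}")  # >1: up-going flux is attenuated
```

With an optical depth above unity, vertically up-going neutrinos at these energies are largely absorbed, whereas neutrinos arriving from above are not – hence the value of a surface veto for the southern sky.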

The capabilities of these surface extensions together with deep-ice components will be evaluated in the near future. Presentations at the workshop on various detection techniques – such as charged-particle detectors, imaging air-Cherenkov telescopes and Cherenkov timing arrays – allowed detailed comparisons of their capabilities. Parameters of interest are duty cycle, energy threshold and the cost for construction and installation. The development of different detectors for applications in harsh environments is already on its way and the first prototypes are scheduled to be tested in 2015.

• The Detector Design and Technology for Next Generation Neutrino Observatories workshop was supported by the Helmholtz Alliance for Astroparticle Physics (HAP), RWTH Aachen University, and Hamamatsu Photonics. For more information, visit hap2014.physik.rwth-aachen.de.

The post Detection techniques for future neutrino observatories appeared first on CERN Courier.

]]>
https://cerncourier.com/a/detection-techniques-for-future-neutrino-observatories/feed/ 0 News
CMS heads towards solving a decades-long quarkonium puzzle https://cerncourier.com/a/cms-heads-towards-solving-a-decades-long-quarkonium-puzzle/ https://cerncourier.com/a/cms-heads-towards-solving-a-decades-long-quarkonium-puzzle/#respond Mon, 23 Feb 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-heads-towards-solving-a-decades-long-quarkonium-puzzle/ Quarkonia – charm or beauty quark/antiquark bound states – are prototypes of elementary systems governed by the strong force.

The post CMS heads towards solving a decades-long quarkonium puzzle appeared first on CERN Courier.

]]>
Quarkonia – charm or beauty quark/antiquark bound states – are prototypes of elementary systems governed by the strong force. Owing to the large masses and small velocities of the quarks, their mutual interaction becomes simpler to describe, offering unique insights into the mechanism of strong interactions. For decades, research in the area of quarkonium production in hadron collisions has been hampered by anomalies and puzzles in theoretical calculations and experimental results, so that, until recently, the studies were stuck at a validation phase. Now, new CMS data are enabling a breakthrough by extending cross-section measurements for quarkonium production to unprecedentedly high values of transverse momentum (pT).

The latest and most persistent “quarkonium puzzle”, lasting for more than 10 years, was the apparent impossibility for theory to reproduce simultaneously the quarkonium yields and polarizations observed in hadronic interactions. Polarization is particularly sensitive to the mechanism of quark–antiquark (qq̄) bound-state formation, because it reveals the quantum properties of the pre-resonance qq̄ pair. For example, if a ³S₁ bound state (J/ψ or Υ) is measured to be unpolarized (isotropic decay distribution), the straightforward interpretation is that it evolved from an initial coloured ¹S₀ qq̄ configuration. To extract this information from differential cross-section measurements requires an additional layer of interpretation, based on perturbative calculations of the pre-resonance qq̄ kinematics in the laboratory reference frame. The fragility of this additional step would reveal itself, a posteriori, as the cause of the puzzle.
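In such analyses the polarization is quantified through the dilepton decay angular distribution. Keeping only the polar term (λθ, not defined in the article, is the usual polar anisotropy parameter):

```latex
\frac{\mathrm{d}N}{\mathrm{d}\cos\theta}\;\propto\;1+\lambda_\theta\cos^{2}\theta ,
```

with λθ = +1 for fully transverse, λθ = –1 for fully longitudinal and λθ ≈ 0 for the isotropic (unpolarized) case referred to above.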


In recent years, CMS provided the first unambiguous evidence that the decays of ³S₁ bottomonia (Υ(1,2,3S)) and charmonia (J/ψ, ψ(2S)) are always approximately isotropic (CMS Collaboration 2013): the pre-resonance qq̄ is a ¹S₀ state neutralizing its colour into the final ³S₁ bound state. This contradicted the idea that quarkonium states are produced mainly from a transversely polarized gluon (coloured ³S₁ pre-resonance), as deduced traditionally from cross-section measurements. After having exposed the polarization problem with high-precision measurements, CMS is now providing the key to its clarification.

The new cross-section measurements allow a theory/data comparison at large values of the ratio pT/mass, where perturbative calculations are more reliable. First attempts to do so, not yet exploiting the exceptional high-pT reach of the newest data, were revealing. With theory calculations restricted to their region of validity, the cross-section measurements are actually found to agree with the polarization data, indicating that the bound-state formation through coloured ¹S₀ pre-resonance is dominant (G Bodwin et al. 2014, K-T Chao et al. 2012, P Faccioli et al. 2014).

Heading towards the solution of a decades-long puzzle, what of the fundamental question: how do quarks and antiquarks interact to form bound states? Future analyses will disclose the complete hierarchy of transitions from pre-resonances with different quantum properties to the family of observed bound states, providing a set of “Kepler” laws for the long-distance interactions between quark and antiquark.

The post CMS heads towards solving a decades-long quarkonium puzzle appeared first on CERN Courier.

]]>
https://cerncourier.com/a/cms-heads-towards-solving-a-decades-long-quarkonium-puzzle/feed/ 0 News Quarkonia – charm or beauty quark/antiquark bound states – are prototypes of elementary systems governed by the strong force. https://cerncourier.com/wp-content/uploads/2015/02/CCnew7_02_15th.jpg
ATLAS gives new limits in the search for dark matter https://cerncourier.com/a/atlas-gives-new-limits-in-the-search-for-dark-matter/ https://cerncourier.com/a/atlas-gives-new-limits-in-the-search-for-dark-matter/#respond Mon, 23 Feb 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-gives-new-limits-in-the-search-for-dark-matter/ There is evidence for dark matter from many astronomical observations, yet so far, dark matter has not been seen in particle-physics experiments, and there is no evidence for non-gravitational interactions between dark matter and Standard Model particles.

The post ATLAS gives new limits in the search for dark matter appeared first on CERN Courier.

]]>
There is evidence for dark matter from many astronomical observations, yet so far, dark matter has not been seen in particle-physics experiments, and there is no evidence for non-gravitational interactions between dark matter and Standard Model particles. If such interactions exist, dark-matter particles could be produced in proton–proton collisions at the LHC. The dark matter would travel unseen through the ATLAS detector, but often one or more Standard Model particles would accompany it, either produced by the dark-matter interaction or radiated from the colliding partons. Observed particles with a large imbalance of momentum in the transverse plane of the detector could therefore signal the production of dark matter.

Because radiation from the colliding partons is most likely a jet, the “monojet” search is a powerful search for dark matter. The ATLAS collaboration now has a new result in this channel and, while it does not show evidence for dark-matter production at the LHC, it does set significantly improved limits on the possible rate for a variety of interactions. The reach of this analysis depends strongly on a precise determination of the background from Z bosons decaying to neutrinos at large boson transverse momentum. By deriving this background from data samples of W and Z bosons decaying to charged leptons, the analysis achieves a total background uncertainty in the result of 3–14%, depending on the transverse momentum.
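The data-driven determination follows the usual transfer-factor logic. The expression below shows the generic structure of such an estimate rather than the exact ATLAS implementation; the symbols Aℓ and εℓ (lepton acceptance and efficiency) are introduced here only for illustration:

```latex
N^{\mathrm{est}}_{Z\to\nu\bar{\nu}} \;=\;
\left( N^{\mathrm{data}}_{Z\to\ell\ell} - N^{\mathrm{bkg}}_{Z\to\ell\ell} \right)
\times \frac{\mathrm{BR}(Z\to\nu\bar{\nu})}{\mathrm{BR}(Z\to\ell\ell)}
\times \frac{1}{A_{\ell}\,\varepsilon_{\ell}} ,
```

i.e. the charged leptons are “removed” from well-measured Z → ℓℓ (and W → ℓν) events to emulate the invisible decay, so that most theoretical and detector uncertainties cancel in the ratio.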


To compare with non-collider searches for weakly interacting massive particle (WIMP) dark matter, the limits from this analysis have been translated via an effective field theory into upper limits on WIMP–nucleon scattering or on WIMP annihilation cross-sections. When the WIMP mass is much smaller than several hundred giga-electron-volts – the kinematic and trigger thresholds used in the analysis – the collider results are approximately independent of the WIMP mass. Therefore, the results play an important role in constraining light dark matter for several types of spin-independent scattering interactions (see figure). Moreover, collider results are insensitive to the Lorentz structure of the interaction. The results shown on spin-dependent interactions are comparable to the spin-independent results and significantly stronger than those of other types of experiments.
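As a hedged illustration of how the translation works, consider the vector contact operator commonly used in such comparisons; the prefactor below assumes equal couplings to up and down quarks and should be taken as indicative only:

```latex
\mathcal{O}_{V} \;=\; \frac{(\bar{\chi}\gamma^{\mu}\chi)(\bar{q}\gamma_{\mu}q)}{M_{*}^{2}}
\qquad\Rightarrow\qquad
\sigma^{\mathrm{SI}}_{\chi N} \;\simeq\; \frac{9\,\mu_{\chi N}^{2}}{\pi\,M_{*}^{4}} ,
```

where μχN is the WIMP–nucleon reduced mass. Because the collider limit on the suppression scale M* is nearly constant for WIMP masses well below the quoted thresholds, and μχN varies only mildly for masses above a few GeV, the translated cross-section limit is approximately flat in the WIMP mass.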

The effective theory is a useful and general way to relate collider results to other dark-matter experiments, but it cannot always be employed safely. One advantage of the searches at the LHC is that partons can collide with enough energy to resolve the mediating interaction directly, opening complementary ways to study it. In this situation, the effective theory breaks down, and simplified models specifying an explicit mediating particle are more appropriate.

The new ATLAS monojet result is sensitive to dark-matter production rates where both effective theory and simplified-model viewpoints are worthwhile. In general, for large couplings of the mediating particles to dark matter and quarks, the mediators are heavy enough to employ the effective theory, whereas for couplings of order unity the mediating particles are too light and the effective theory is an incomplete description of the interaction. The figures use two types of dashed lines to depict the separate ATLAS limits calculated for these two cases. In both, the calculation removes the portion of the signal cross-section that depends on the internal structure of the mediator, recovering a well-defined and general but conservative limit from the effective theory. In addition, the new result presents constraints on dark-matter production within one possible simplified model, where the mediator of the interaction is a Z’-like boson.

While the monojet analysis is generally the most powerful search when the accompanying Standard Model particle is radiated from the colliding partons, ATLAS has also employed other Standard Model particles in similar searches. They are especially important when these particles arise from the dark-matter interaction itself. Taken together, ATLAS has established a broad and robust programme of dark-matter searches that will continue to grow with the upcoming data-taking.

The post ATLAS gives new limits in the search for dark matter appeared first on CERN Courier.

]]>
https://cerncourier.com/a/atlas-gives-new-limits-in-the-search-for-dark-matter/feed/ 0 News There is evidence for dark matter from many astronomical observations, yet so far, dark matter has not been seen in particle-physics experiments, and there is no evidence for non-gravitational interactions between dark matter and Standard Model particles. https://cerncourier.com/wp-content/uploads/2015/02/CCnew11_02_15th.jpg
LHCf detectors are back in the LHC tunnel https://cerncourier.com/a/lhcf-detectors-are-back-in-the-lhc-tunnel/ https://cerncourier.com/a/lhcf-detectors-are-back-in-the-lhc-tunnel/#respond Tue, 27 Jan 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/lhcf-detectors-are-back-in-the-lhc-tunnel/ The Large Hadron Collider forward (LHCf) experiment measures neutral particles emitted around zero degrees of the hadron interactions at the LHC.

The post LHCf detectors are back in the LHC tunnel appeared first on CERN Courier.

]]>
The Large Hadron Collider forward (LHCf) experiment measures neutral particles emitted at around zero degrees with respect to the beam in hadron interactions at the LHC. Because these “very forward” particles carry a large fraction of the collision energy, they are important for understanding the development of the atmospheric air showers produced by high-energy cosmic rays. Two independent detectors, Arm1 and Arm2, are installed in the target neutral absorbers (TANs) 140 m from interaction point 1 (IP1), where the LHC’s single beam pipe splits into two narrow pipes.

After successful physics operation in 2009–2010, the LHCf collaboration promptly removed its detectors from the tunnel in July 2010 to avoid severe radiation damage. The Arm2 detector, on the IP2 side, returned to the tunnel for data-taking with proton–lead collisions in 2013, while Arm1 was being upgraded to a radiation-hard detector using Gd₂SiO₅ scintillators. After completion of the upgrade of both Arm1 and Arm2, the performance of the detectors was tested at a Super Proton Synchrotron fixed-target beam line in Prévessin in October 2014. Both Arm1 and Arm2 were then reinstalled in the LHC tunnel on 17 and 24 November, respectively.

CCnew5_01_15th

The installation went smoothly, thanks to the well-equipped remote-handling system for the TAN instrumentation. During the following days, cabling, commissioning and the geometrical survey of the detectors took place without any serious trouble.

LHCf will resume its activities in early 2015, relaunching the data-acquisition system to be ready for the dedicated run in May 2015, when the LHC will provide low-luminosity, low pile-up, high-β* (20 m) proton–proton collisions. At √s = 13 TeV, these collisions correspond to interactions in the atmosphere of cosmic rays with an energy of 0.9 × 10¹⁷ eV. This is the energy at which the origins of the cosmic rays are believed to switch from galactic to extragalactic, and a sudden change of the primary mass is expected. Cosmic-ray physicists expect to confirm this standard scenario of cosmic rays based on the highest-energy LHC data.
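The quoted equivalent energy follows from simple kinematics: a collider centre-of-mass energy √s corresponds, for a cosmic-ray proton striking a nucleon at rest, to a laboratory energy of roughly s/(2m_p). A minimal check:

# Fixed-target equivalent of a collider centre-of-mass energy: E_lab ~ s / (2 m_p).
m_p    = 0.938272e9   # proton mass in eV
sqrt_s = 13e12        # 13 TeV in eV
E_lab  = sqrt_s**2 / (2.0 * m_p)
print(f"E_lab = {E_lab:.1e} eV")   # ~9e16 eV, i.e. the 0.9 x 10^17 eV quoted above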

Another highlight of the 2015 run will be common data-taking with the ATLAS experiment. LHCf will send trigger signals to ATLAS, and ATLAS will record data after pre-scaling. Based on a preliminary Monte Carlo study using PYTHIA8, which selected events with low central activity in ATLAS, LHCf can select very pure (99%) events produced by diffractive dissociation processes. The identification of the origin of the forward particles will help future developments of hadronic-interaction models.

The post LHCf detectors are back in the LHC tunnel appeared first on CERN Courier.

]]>
https://cerncourier.com/a/lhcf-detectors-are-back-in-the-lhc-tunnel/feed/ 0 News The Large Hadron Collider forward (LHCf) experiment measures neutral particles emitted around zero degrees of the hadron interactions at the LHC. https://cerncourier.com/wp-content/uploads/2015/01/CCnew5_01_15th.jpg
CMS measures the ‘underlying event’ in pp collisions https://cerncourier.com/a/cms-measures-the-underlying-event-in-pp-collisions/ https://cerncourier.com/a/cms-measures-the-underlying-event-in-pp-collisions/#respond Tue, 27 Jan 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-measures-the-underlying-event-in-pp-collisions/ Ever since the earliest experiments with hadron beams, and subsequently during the era of the hadron colliders heralded by CERN’s Intersecting Storage Rings, it has been clear that hadron collisions are highly complicated processes.

The post CMS measures the ‘underlying event’ in pp collisions appeared first on CERN Courier.

]]>
Ever since the earliest experiments with hadron beams, and subsequently during the era of the hadron colliders heralded by CERN’s Intersecting Storage Rings, it has been clear that hadron collisions are highly complicated processes. Indeed, initially it was far from obvious whether it would be possible to do any detailed studies of elementary particle physics with hadron collisions at all.

The question was whether the physics of “interesting” particle production could be distinguished from that of the “background” contribution in hadron collisions. While the former is typically a single parton–parton scattering process at very high transverse momentum (pT), the latter consists of the remnants of the two protons that did not participate in the hard scatter, including the products of any additional soft, multiple-parton interactions. Present in every proton–proton (pp) collision, this soft-physics component is referred to as the “underlying event”, and its understanding is a crucial factor in increasing the precision of physics measurements at high pT. Now, the CMS collaboration has released its latest analysis of the underlying event data at 2.76 TeV at the LHC.

CCnew9_01_15th

The measurement builds on experimental techniques that were developed at Fermilab’s Tevatron and already used at the LHC to perform measurements sensitive to the physics of the underlying event. The main idea is to measure particle production in the region of phase space azimuthally orthogonal to the high-pT process – the “transverse” region relative to the hardest jet. In its latest analysis of the underlying-event data at 2.76 TeV, CMS has measured both the average charged-particle multiplicity and the pT sum of the charged particles. The scale of the hard parton–parton scattering is defined by the pT of the most energetic jet of the event.
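In such analyses the azimuthal plane is conventionally divided, event by event, into “toward”, “transverse” and “away” regions according to the azimuthal angle Δφ of each particle relative to the leading jet, with the transverse region (60° < |Δφ| < 120°) most sensitive to the underlying event. The sketch below implements that standard classification; it is a generic illustration, not CMS analysis code.

import math

def ue_region(dphi):
    """Classify a charged particle by its azimuthal angle (radians) relative to
    the leading jet: 'toward' (<60 deg), 'transverse' (60-120 deg), 'away' (>120 deg)."""
    d = abs(math.remainder(dphi, 2.0 * math.pi))   # fold into [0, pi]
    if d < math.pi / 3.0:
        return "toward"
    if d < 2.0 * math.pi / 3.0:
        return "transverse"
    return "away"

# Example: scalar pT sum in the transverse region for a toy list of particles,
# given as (pT in GeV, delta-phi in radians) pairs (placeholder values).
particles = [(1.2, 0.4), (0.7, 2.0), (3.1, 3.0), (0.5, -1.8)]
sum_pt = sum(pt for pt, dphi in particles if ue_region(dphi) == "transverse")
print(f"sum pT in transverse region: {sum_pt:.1f} GeV")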

The measurements are expected to result in more accurate simulations of pp collisions at the LHC. Because the properties of the underlying event cannot be derived from first principles in QCD, Monte Carlo generators employ phenomenological models with several free parameters that need to be “tuned” to reproduce experimental measurements such as the current one from CMS.

An important part of the studies concerns the evolution of the underlying-event properties with collision energy. CMS has therefore presented measurements at centre-of-mass energies of 0.9, 2.76 and 7 TeV. Soon, there will be new data from Run 2 at the LHC. The centre-of-mass energy of 13 TeV will necessitate further measurements, and provide an opportunity to probe the ever-present underlying event in uncharted territory.

The post CMS measures the ‘underlying event’ in pp collisions appeared first on CERN Courier.

]]>
https://cerncourier.com/a/cms-measures-the-underlying-event-in-pp-collisions/feed/ 0 News Ever since the earliest experiments with hadron beams, and subsequently during the era of the hadron colliders heralded by CERN’s Intersecting Storage Rings, it has been clear that hadron collisions are highly complicated processes. https://cerncourier.com/wp-content/uploads/2015/01/CCnew9_01_15th.jpg
CUORE has the coldest heart in the known universe https://cerncourier.com/a/cuore-has-the-coldest-heart-in-the-known-universe/ https://cerncourier.com/a/cuore-has-the-coldest-heart-in-the-known-universe/#respond Thu, 27 Nov 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cuore-has-the-coldest-heart-in-the-known-universe/ The CUORE collaboration at the INFN Gran Sasso National Laboratory has set a world record by cooling a copper vessel with the volume of a cubic metre to a temperature of 6 mK.

The post CUORE has the coldest heart in the known universe appeared first on CERN Courier.

]]>
CCnew2_10_14

The CUORE collaboration at the INFN Gran Sasso National Laboratory has set a world record by cooling a copper vessel with the volume of a cubic metre to a temperature of 6 mK. It is the first experiment to cool a mass and a volume of this size to a temperature this close to absolute zero. The cooled copper mass, weighing approximately 400 kg, was the coldest cubic metre in the universe for more than 15 days. No experiment on Earth has ever cooled a similar mass or volume to temperatures this low. Similar conditions are also not expected to arise in nature.

CUORE – which stands for Cryogenic Underground Observatory for Rare Events, but is also Italian for heart – is an experiment being built by an international collaboration at Gran Sasso to study the properties of neutrinos and search for rare processes, in particular the hypothesized neutrinoless double-beta decay. The experiment is designed to work in ultra-cold conditions at temperatures of around 10 mK. It consists of tellurium-dioxide crystals serving as bolometers, which measure energy by recording tiny fluctuations in the crystal’s temperature. When complete, CUORE will contain some 1000 instrumented crystals and will be covered by shielding made of ancient Roman lead, which has a particularly low level of intrinsic radioactivity. The mass of material to be held near absolute zero will be almost two tonnes.
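The bolometric principle behind these crystals is simple: an energy deposit E raises the crystal temperature by ΔT = E/C, and at 10 mK the Debye heat capacity C ∝ (T/Θ_D)³ is tiny. The estimate below uses assumed ballpark inputs (a 750 g TeO₂ crystal and a Debye temperature near 230 K), not official CUORE figures, simply to show the order of magnitude of the signal.

# Rough bolometer estimate: temperature rise dT = E / C for a single TeO2 crystal.
# The crystal mass and Debye temperature below are assumed, approximate values.
import math

mass       = 0.75     # kg, one crystal (assumed)
molar_mass = 0.1596   # kg/mol for TeO2
theta_D    = 230.0    # K, assumed Debye temperature
T          = 0.010    # K, operating temperature
R          = 8.314    # J / (mol K)

n_mol = mass / molar_mass
C = (12.0 * math.pi**4 / 5.0) * n_mol * R * (T / theta_D)**3   # Debye heat capacity, J/K

E_dep = 1.0e6 * 1.602e-19   # a 1 MeV energy deposit, in joules
dT = E_dep / C
print(f"C ~ {C:.1e} J/K -> dT ~ {dT*1e3:.2f} mK per MeV")   # of order 0.1 mK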

The cryostat was implemented and funded by INFN, and the University of Milano Bicocca co-ordinated the research team in charge of the design of the cryogenic system. The successful solution to the technological challenge of cooling the entire experimental mass of almost two tonnes to the temperature of a few millikelvin was made possible through collaboration with high-profile industrial partners such as Leiden Cryogenics BV, who designed and built the unique refrigeration system, and Simic SpA, who built the cryostat vessels.

The post CUORE has the coldest heart in the known universe appeared first on CERN Courier.

]]>
https://cerncourier.com/a/cuore-has-the-coldest-heart-in-the-known-universe/feed/ 0 News The CUORE collaboration at the INFN Gran Sasso National Laboratory has set a world record by cooling a copper vessel with the volume of a cubic metre to a temperature of 6 mK. https://cerncourier.com/wp-content/uploads/2014/11/CCnew2_10_14.jpg
NA62 gets going at the SPS https://cerncourier.com/a/na62-gets-going-at-the-sps/ https://cerncourier.com/a/na62-gets-going-at-the-sps/#respond Thu, 27 Nov 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/na62-gets-going-at-the-sps/ With the end in sight for CERN’s Long Shutdown (LS1), the accelerator chain has been gradually restarting. Since early October, the Super Proton Synchrotron (SPS) has been delivering beams of protons to experiments, including NA62, which has now begun a three-year data-taking run.

The post NA62 gets going at the SPS appeared first on CERN Courier.

]]>
CCnew3_10_14

With the end in sight for CERN’s Long Shutdown (LS1), the accelerator chain has been gradually restarting. Since early October, the Super Proton Synchrotron (SPS) has been delivering beams of protons to experiments, including NA62, which has now begun a three-year data-taking run.

NA62’s main aim is to study rare kaon decays, following on from its predecessors NA31 and NA48, which made important contributions to the study of CP violation in the kaon system. To make beams rich in kaons, protons from the SPS strike a beryllium target. The collisions create a secondary beam carrying almost one billion particles per second, about 6% of which are kaons.

After almost eight years of design and construction, NA62 was ready for beam at start-up in October. In early September, the last of the four straw-tracker chambers had been lowered into position in the experiment. The straw tracker is the first of its scale to be placed directly inside the vacuum tank of an experiment, allowing NA62 to measure the direction and momentum of charged particles with high precision. From the first design to the final plug-in and testing, teams at CERN worked in close collaboration with the Joint Institute for Nuclear Research in Dubna, which helped to develop the straw-tracker technology and will participate in running the detector now that construction and installation have been completed.

Each straw-tracker chamber weighs close to 5000 kg and is made up of 16 layers of state-of-the-art, highly fragile straw tubes. Although heavy, the four chambers had to be delicately transported to the SPS North Area at CERN’s Prévessin site, lowered into the experiment cavern and installed to a precision of 0.3 mm. The chambers were then equipped with the necessary gas connections, pipes, cables and dedicated read-out boards, before beam commissioning began in early October to tune the tracker prior to integrating it with the other sub-detectors for data taking.

This unique tracker, placed directly inside the experiment’s vacuum tank, sits alongside a silicon-pixel detector and a detector called CEDAR that determines the types of particles from their Cherenkov radiation. A magnetic spectrometer measures charged tracks from kaon decays, and a ring-imaging Cherenkov detector indicates the identity of each decay particle. A large system of photon and muon detectors rejects unwanted decays. In total, the experiment extends across a length of 270 m, of which 85 m are in a vacuum.

• For more about the installation and construction of NA62, see the CERN Bulletin http://cds.cern.ch/record/1951890.

The post NA62 gets going at the SPS appeared first on CERN Courier.

]]>
https://cerncourier.com/a/na62-gets-going-at-the-sps/feed/ 0 News With the end in sight for CERN’s Long Shutdown (LS1), the accelerator chain has been gradually restarting. Since early October, the Super Proton Synchrotron (SPS) has been delivering beams of protons to experiments, including NA62, which has now begun a three-year data-taking run. https://cerncourier.com/wp-content/uploads/2014/11/CCnew3_10_14.jpg
The Global Neutrino Network takes off https://cerncourier.com/a/the-global-neutrino-network-takes-off/ https://cerncourier.com/a/the-global-neutrino-network-takes-off/#respond Thu, 27 Nov 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-global-neutrino-network-takes-off/ On 20–12 September, CERN hosted the fifth annual Mediterranean-Antarctic Neutrino Telescope Symposium (MANTS). For the first time, the meeting was organized under the GNN umbrella.

The post The Global Neutrino Network takes off appeared first on CERN Courier.

]]>
CCnew15_10_14

On 20–21 September, CERN hosted the fifth annual Mediterranean-Antarctic Neutrino Telescope Symposium (MANTS). For the first time, the meeting was organized under the GNN umbrella.

The idea of linking the various neutrino-telescope projects in water and ice more closely has been discussed for several years in the international community of high-energy neutrino astrophysicists. On 15 October 2013, representatives of the ANTARES, BAIKAL, IceCube and KM3NeT collaborations signed a memorandum of understanding for co-operation within a Global Neutrino Network (GNN). GNN aims for extended exchanges between the collaborations, more coherent strategic planning and the exploitation of the resulting synergies.

No doubt, the evidence for extraterrestrial neutrinos recently reported by IceCube at the South Pole (“Cosmic neutrinos and more: IceCube’s first three years”) has given wings to GNN, and is encouraging the KM3NeT (in the Mediterranean Sea) and GVD (Lake Baikal) collaborations in their efforts to achieve appropriate funding to build northern-hemisphere cubic-kilometre detectors. IceCube is also working towards an extension of its present configuration.

One focus of the MANTS meeting was, naturally, on the most recent results from IceCube and ANTARES, and their relevance for future projects. The initial configurations of KM3NeT (with three to four times the sensitivity of ANTARES) and GVD (with sensitivity similar to ANTARES) could provide additional information on the characteristics of the IceCube signals, first because they look at a complementary part of the sky, and second because water has optical properties that are different from ice. Cross-checks with different systematics are of the highest importance for these detectors in natural media. As an example, KM3NeT will measure down-going muons from cosmic-ray interactions in the atmosphere with superb precision. This could help in determining more precisely the flux of atmospheric neutrinos co-generated with those muons, in particular those from the decay of charmed mesons, which are expected to have particularly high energies and therefore could mimic an extraterrestrial signal.

A large part of the meeting was devoted to finding the best “figures of merit” characterizing the physics capabilities of the detectors. These not only allow comparison of the different projects, but also provide an important tool to optimize future detector configurations. The latter also concerns the two sub-projects that aim to determine the neutrino mass hierarchy using atmospheric neutrinos. These are both small, high-density versions of the huge kilometre-scale arrays: PINGU at the South Pole and ORCA in the Mediterranean Sea. In this effort a particularly close co-operation has emerged during the past year, down to technical details.

Combining data from different detectors is another aspect of GNN. A recent common analysis of IceCube and ANTARES sky maps has provided the best sensitivity ever for point sources in certain regions of the sky, and will be published soon. Further goals of GNN include the co-ordination of alert and multimessenger policies, exchange and mutual checks of software, creation of a common software pool, development of standards for data representation, cross-checks of results with different systematics, and the organization of schools and other forums for exchanging expertise and experts. Mutual representation in the experiments’ science advisory committees is another way to promote close contact and mutual understanding.

Contingent upon availability of funding, the mid 2020s could see one Global Neutrino Observatory, with instrumented volumes of 5–8 km³ in each hemisphere. This would, finally, fully raise the curtain just lifted by IceCube, and provide a rich view on the high-energy neutrino sky.

The post The Global Neutrino Network takes off appeared first on CERN Courier.

]]>
https://cerncourier.com/a/the-global-neutrino-network-takes-off/feed/ 0 News On 20–12 September, CERN hosted the fifth annual Mediterranean-Antarctic Neutrino Telescope Symposium (MANTS). For the first time, the meeting was organized under the GNN umbrella. https://cerncourier.com/wp-content/uploads/2014/11/CCnew15_10_14.jpg
Cosmic neutrinos and more: IceCube’s first three years https://cerncourier.com/a/cosmic-neutrinos-and-more-icecubes-first-three-years/ https://cerncourier.com/a/cosmic-neutrinos-and-more-icecubes-first-three-years/#respond Thu, 27 Nov 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cosmic-neutrinos-and-more-icecubes-first-three-years/ Results from the coldest region of the Earth.

The post Cosmic neutrinos and more: IceCube’s first three years appeared first on CERN Courier.

]]>

For the past four years, the IceCube Neutrino Observatory, located at the South Pole, has been collecting data on some of the most violent collisions in the universe. Fulfilling its pre-construction aspirations, the detector has observed astrophysical neutrinos with energies above 60 TeV, at the “magic” 5σ significance. The most energetic neutrino observed had an energy of about 2 PeV (2 × 10¹⁵ eV) – 250 times higher than the beam energy of the LHC.

These neutrinos are just one highlight of IceCube’s broad physics programme, which encompasses searches for astrophysical neutrinos, searches for neutrinos from dark matter, studies of neutrino oscillations, cosmic-ray physics, and searches for supernovae and a variety of exotica. All of these studies take advantage of a unique detector at a unique location: the South Pole.

IceCube observes the Cherenkov light emitted by charged particles produced in neutrino interactions in 1 km³ of transparent Antarctic ice. The detector is the ice itself, and is read out by 5160 optical sensors. Figure 1 shows how the optical sensors are distributed throughout the 1 km³ of ice, 1.5 km beneath the geographic South Pole. They are deployed 17 m apart, on 86 vertical cables or “strings”. Seventy-eight of the strings are spaced horizontally, 125 m apart in a grid of equilateral triangles forming a hexagonal array across an area of a square kilometre. The remaining eight strings form a more densely instrumented sub-array called DeepCore. In DeepCore, most of the sensors are concentrated in the lower 350 m of the detector.
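These numbers hang together: 5160 sensors on 86 strings is 60 per string, and with 17 m vertical spacing each standard string instruments roughly a kilometre of ice (DeepCore, as noted above, is packed more densely). A trivial consistency check:

# Quick consistency check of the IceCube geometry quoted above.
n_doms, n_strings, dom_spacing = 5160, 86, 17.0          # sensors, strings, metres
doms_per_string     = n_doms / n_strings                 # 60 sensors per string
instrumented_length = (doms_per_string - 1) * dom_spacing
print(f"{doms_per_string:.0f} DOMs/string, ~{instrumented_length:.0f} m instrumented per string")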

Each sensor, or digital optical module (DOM), is like a miniature satellite made up of a 10 inch (25 cm) photomultiplier tube together with data-acquisition and control electronics. These include a custom 300 megasample/s waveform digitizer with 14 bits of dynamic range, plus light sources for calibrations, all consuming a power of less than 5 W. The hardware is protected by a centimetre-thick pressure vessel.

The ice in IceCube formed from compacted snow that fell on Antarctica 100,000 years ago. Its properties vary with depth, with layers reflecting the atmospheric conditions when the snow first fell. Measuring the optical properties of this ice has been one of the major challenges of IceCube, involving custom “dust loggers”, studies with LED “flashers” and cosmic-ray muons. During the past decade, the collaboration has found that the ice is layered, that the layers are not perfectly flat and, most recently, that the light scattering is somewhat anisotropic. Each insight has led to a better understanding of the detector and to smaller systematic uncertainties. Fortunately, advances in computing technology have allowed IceCube’s simulations to keep up, more or less, with the increasingly complex models of light propagation in the ice.

The distributed sensors give IceCube strong pattern-recognition capabilities. The three neutrino flavours – νe, νμ and ντ – each leave different signatures in the detector. Charged-current νμ produce high-energy muons, which leave long tracks. All νe interactions, and all neutral-current interactions, produce hadronic or electromagnetic showers. High-energy ντ produce a characteristic “double-bang” signature – one shower when the ντ interacts and a second when the τ decays. More complex topologies have also been studied, including tracks that start in the detector as well as pairs of parallel tracks.

Despite past doubts, IceCube works and works well. More than 98% of the sensors are fully operational, and another 1% are usable – most of the failures occurred during deployment. The post-deployment attrition rate is a few DOMs per year, so IceCube will be able to operate for as long as required. The “live” times are also impressive – in the range of 99%.

IceCube has excellent reconstruction capabilities. For kilometre-long muon tracks, the angular resolution is better than 0.4°, verified by studying the shadow that the Moon casts in the cosmic-ray flux. For high-energy contained events, the angular resolution is about 15°, and at high energies the visible energy can be determined to better than 15%.

Cosmic neutrinos

The detector’s dynamic range covers from 10 GeV to infinity. The higher the energy of the neutrino, the easier it is to detect. Every six minutes, IceCube records an atmospheric neutrino, from the decay of pions, kaons and heavier particles produced in cosmic-ray air showers. These 100,000 neutrinos collected every year are interesting in their own right, but they are also the background to any search for cosmic neutrinos. On top of this, the detector records about 3000 atmospheric muons every second. This is a painful background for neutrino searches, but a gold mine for cosmic-ray physics.
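Those rates are mutually consistent, as a quick back-of-the-envelope check shows (a sketch only; the exact figures depend on live time and event selection):

# Back-of-the-envelope check of the rates quoted above.
seconds_per_year = 3.156e7
atm_nu_per_year  = seconds_per_year / 360.0     # one atmospheric neutrino per ~6 minutes
atm_mu_per_year  = 3000.0 * seconds_per_year    # ~3 kHz of atmospheric muons
print(f"~{atm_nu_per_year:.1e} neutrinos/yr, ~{atm_mu_per_year:.1e} muons/yr")
# -> roughly 1e5 neutrinos and 1e11 muons per year, in line with the figures in the text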

Although IceCube has an extremely rich physics programme, the centrepiece is clearly the search for cosmic neutrinos. Many signatures have been proposed for these neutrinos: point source searches, a high-energy diffuse flux, identified ντ, and others. IceCube has looked for all of these.

Point-source searches are the simplest strategy conceptually – just create a sky map showing the arrival directions of all of the detected neutrinos. Figure 2 shows the IceCube sky map containing 400,000 events gathered across four years (Aartsen et al. 2014c). In the southern hemisphere, the large background of downgoing muons is only partially counteracted by selecting high-energy muons, which are less likely to be of atmospheric origin. The 177,544 events in the northern-hemisphere sample are mostly from νμ. So far, there is no statistically significant evidence for any hot spots, even in searches for spatially extended sources. IceCube has also looked for variable sources, whether episodic or periodic, with similar results. These limits constrain theoretical models, especially those involving gamma-ray bursts.

If there are enough weak sources in the cosmos, they should be visible as an aggregate, diffuse flux. This diffuse flux is expected to have a harder energy spectrum than do atmospheric neutrinos. Calculations have indicated that IceCube would be more sensitive to this diffuse flux than to point sources, which is indeed the case. Several early searches, using the partially completed detector, turned up intriguing hints of an excess over the expected atmospheric neutrino flux. Then the search diverged from the anticipated script.

One of the first searches for diffuse neutrinos with the complete detector looked for ultra-high-energy cosmogenic neutrinos – neutrinos produced when ultra-high-energy cosmic-ray protons (E > 4 × 10¹⁹ eV) interact with photons of around 10⁻⁴ eV in the cosmic-microwave background, exciting the protons to the Δ⁺ resonance. The decay products of the pion produced in the Δ’s decay include a neutrino with a typical energy of 10¹⁸ eV (1 EeV). The search found two spectacular events, one of which is shown in figure 3. Both events were well contained within the detector – clearly neutrinos. Both had energies around 1 PeV – spectacular, but too low to be produced by cosmic rays interacting with CMB photons. Such events were completely unexpected.
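The kinematics are easy to sketch: for a head-on collision with a photon of energy ε, exciting the Δ⁺ requires a proton energy of roughly (m_Δ² − m_p²)/(4ε), and each neutrino from the subsequent pion decay chain carries only a few per cent of the proton energy. The photon energies below are representative choices; the effective onset sits below the naive single-energy threshold because the CMB spectrum has a high-energy tail and the Δ resonance is broad.

# Head-on threshold for p + gamma -> Delta+ : E_th ~ (m_Delta^2 - m_p^2) / (4 * eps).
m_delta = 1.232e9   # eV
m_p     = 0.938e9   # eV

def E_threshold(eps_eV):
    return (m_delta**2 - m_p**2) / (4.0 * eps_eV)

for eps in (6e-4, 2e-3):   # a typical CMB photon and one on the Wien tail (assumed values)
    print(f"eps = {eps:.0e} eV -> E_th ~ {E_threshold(eps):.1e} eV")

# Each neutrino from the pi+ -> mu+ nu, mu+ -> e+ nu nu chain carries ~5% of the
# proton energy, giving typical cosmogenic-neutrino energies of order 10^18 eV.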

Inspired by these events, the IceCube collaboration instigated a follow-up search that used two powerful techniques (Aartsen et al. 2013). The first was a filter to identify neutrino interactions that originate inside the detector, as distinct from events originating outside it. The filter divides the instrumented volume into an outer-veto shield and a 420 megatonne inner active volume. Figure 4 shows how this veto works: by rejecting events with significant in-time energy deposition in the veto region, neutrino interactions within the detector’s fiducial volume can be separated from backgrounds. For neutrinos that are contained within the instrumented volume of ice, the detector functions as a total absorption calorimeter, measuring energy with 15% resolution. It is flavour-blind, equally sensitive to hadronic or electromagnetic showers and to muon tracks. This veto analysis also used a “tagging” approach to estimate the atmospheric-muon background using the data, rather than relying on simulations. Because of the veto, the analysis could observe neutrinos from all directions in the sky.
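The veto logic can be sketched in a few lines: an event is kept as a “starting” (contained) neutrino candidate only if very little light is recorded in the outer veto layer at or before the time of the first light in the fiducial volume. The thresholds, time window and data layout below are invented for illustration and are not the actual IceCube selection.

# Schematic containment veto for "starting" events. Hits are (charge_pe, time_ns,
# in_veto_layer) tuples; the 3 pe threshold and 50 ns window are invented values.

def passes_veto(hits, q_veto_max=3.0, window_ns=50.0):
    t_first_fid = min((t for q, t, in_veto in hits if not in_veto), default=None)
    if t_first_fid is None:
        return False                       # no light in the fiducial volume at all
    q_veto = sum(q for q, t, in_veto in hits
                 if in_veto and t <= t_first_fid + window_ns)
    return q_veto < q_veto_max

event = [(0.8, 100.0, True), (45.0, 120.0, False), (30.0, 140.0, False)]
print(passes_veto(event))   # True: essentially no early light in the veto layer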

The second innovation was to take advantage of the fact that downgoing atmospheric neutrinos should be accompanied by a cosmic-ray air shower depositing one or more muons inside IceCube. In contrast, cosmic neutrinos should be unaccompanied. A very high-energy, isolated downgoing neutrino is highly likely to be cosmic.

The follow-up search found 26 additional events. Although no new events had an energy near 1 PeV, the analysis produced evidence for cosmic neutrinos at the 4σ level. To clinch the case, the collaboration added a third year of data, pushing the significance above the “magic” 5σ level (Aartsen et al. 2014a). One of the new events had an energy above 2 PeV, making it the most energetic neutrino ever seen.

The observation of a flux of cosmic neutrinos was soon confirmed by the independent and more traditional analysis recording the diffuse flux of muon neutrinos penetrating the Earth. Both observations are consistent with a diffuse flux composed equally of the three neutrino flavours. No statistically significant hot spots were seen. The observed flux is consistent with that expected from cosmic accelerators producing equal energies in gamma rays, neutrinos and, possibly, cosmic rays.

Newer studies are shedding more light on these events, extending contained-event studies down to lower energies and adding flavour identification. At energies above 10 TeV, the astrophysical neutrino flux can be fit by a single power-law spectrum that is significantly harder than the background cosmic-ray muon spectrum:
φν(Eν) = 2.06 (+0.4/−0.3) × 10⁻¹⁸ (Eν/100 TeV)^(−2.46 ± 0.12) GeV⁻¹ cm⁻² sr⁻¹ s⁻¹ (Aartsen et al. 2014d).
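Taking the central values only, the parametrisation is easy to evaluate, and E²φ gives a feel for the energy flux per decade (a quick illustration, not an analysis):

# Evaluate the best-fit astrophysical flux quoted above (central values only):
# phi(E) = 2.06e-18 * (E / 100 TeV)^(-2.46)  in GeV^-1 cm^-2 sr^-1 s^-1.
def phi(E_GeV):
    return 2.06e-18 * (E_GeV / 1.0e5) ** (-2.46)

for E in (1e4, 1e5, 1e6):   # 10 TeV, 100 TeV, 1 PeV
    print(f"E = {E:.0e} GeV: phi = {phi(E):.2e}, E^2*phi = {E**2 * phi(E):.2e} GeV cm^-2 sr^-1 s^-1")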

Within the limited statistics, the flux appears isotropic and consistent with the νe:νμ:ντ ratio of 1:1:1 that is expected for cosmic neutrinos. The majority of the events appear to be extragalactic. Some might originate in the Galaxy, but there is no compelling statistical evidence for that at this point.

Many explanations have been proposed for the IceCube observations, ranging from the relativistic particle jets emitted by active galactic nuclei and gamma-ray bursts to starburst galaxies and magnetars. IceCube’s dedicated searches do, however, disfavour gamma-ray bursts as the source. A spectral index of –2 (dNν/dE ~ E⁻²), predicted by Fermi shock-acceleration models, is also disfavoured, but many other scenarios are possible. Of course, the answer is clear: more data are needed.

Other physics

The 100,000 neutrinos and 85 × 10⁹ cosmic-ray events recorded each year provide ample opportunities to search for dark matter and to study cosmic rays as well as neutrinos themselves. IceCube has measured the cosmic-ray spectrum and composition, and has observed anisotropies in the cosmic-ray arrival directions at the 10⁻⁴ level that have so far defied explanation. It has also studied atypical events, such as the muon-free showers expected from photons with peta-electron-volt energies produced in the Galaxy, and has investigated isolated muons produced in air showers. The lateral separations of the latter shift from an exponential fall-off to a power-law spectrum, as predicted by perturbative QCD.

IceCube observes atmospheric neutrinos across an energy range from 10 GeV to 100 TeV – at higher energies, the atmospheric flux is swamped by the flux of cosmic neutrinos. As figure 5 shows, the flux is consistent with expectations across a large energy range. Lower-energy neutrinos are of particular interest because they are sensitive to neutrino oscillations. For neutrinos passing vertically through the Earth, the νμ flux develops a first minimum at 28 GeV.

Figure 6 shows the observed νμ flux, seen in one year of data, using well-reconstructed events contained within DeepCore. The change in flux with distance travelled/energy (L/E) is consistent with neutrino oscillations and inconsistent with a no-oscillation scenario. IceCube constraints on the mixing angle θ₂₃ and |Δm²₃₂| are comparable to constraints from other experiments.
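In the two-flavour vacuum approximation the νμ survival probability is P = 1 − sin²2θ₂₃ sin²(1.27 Δm²₃₂[eV²] L[km]/E[GeV]), which for vertically up-going neutrinos (L ≈ one Earth diameter) puts the first disappearance minimum in the 25–30 GeV range; the exact value quoted above depends on the oscillation parameters and on matter effects, which this simple sketch ignores.

import math

# Two-flavour vacuum nu_mu survival probability (matter effects and the third
# flavour are ignored in this sketch; parameters are representative values).
def p_survival(E_GeV, L_km=12742.0, dm2_eV2=2.4e-3, sin2_2theta=1.0):
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Energy of the first disappearance minimum for vertically up-going neutrinos:
# 1.27 * dm2 * L / E = pi/2.
E_min = 1.27 * 2.4e-3 * 12742.0 / (math.pi / 2.0)
print(f"first minimum near {E_min:.0f} GeV; survival there = {p_survival(E_min):.2f}")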

IceCube has also searched for neutrinos from dark-matter annihilation. Dark matter can be captured gravitationally by the Earth or the Sun, or accumulate in the centre or halo of the Galaxy; the accumulated dark-matter particles then annihilate, producing neutrinos. IceCube has searched for signatures of this annihilation and has set limits. The Sun is a particularly interesting target, because a dark-matter signal from its direction could not be explained by any conventional astrophysical scenario. The Sun is also composed mostly of protons (hydrogen), allowing IceCube to set the world’s best limits on the spin-dependent cross-section for the interaction of dark-matter particles with ordinary matter.

The collaboration has also looked for even more exotic signatures, such as magnetic monopoles and pairs of upgoing particles. One particularly spectacular and interesting signature could come from the next supernova in the Galaxy. These explosions produce a blast of neutrinos with 10–50 MeV energy. This energy level is far too low to trigger IceCube directly, but the neutrinos would be visible as a collective increase in the singles rate in the buried IceCube photomultipliers. Moreover, IceCube has a huge effective area, which will allow measurements of the time structure of the supernova-neutrino pulse with millisecond precision.

IceCube is still a novel instrument unlikely to have exhausted its discovery potential. However, at high energies, it might not be big enough. Doing neutrino astronomy could require samples of 1000 or more high-energy neutrino events. In addition, some key physics questions require a detector with a lower energy threshold. These two considerations are driving two different upgrade projects.

DeepCore has demonstrated that IceCube is capable of making precise measurements of neutrino-oscillation parameters. If precision studies can be extended to neutrino energies below 10 GeV, it will be possible to determine the neutrino-mass hierarchy. Neutrinos passing through the Earth interact coherently with matter electrons, modifying the oscillation pattern in a way that differs for normal and inverted hierarchies. In addition to a threshold of a few giga-electron-volts, this measurement requires improved control of systematic uncertainties. An expanded collaboration has come together to pursue the construction of a high-density infill array called Precision In Ice Next-Generation Upgrade, or PINGU (Aartsen et al. 2014b). The present design consists of 40 additional high-sensitivity strings equipped with improved calibration devices. PINGU should be able to determine the mass hierarchy with 3σ significance within about three years, independent of the value of the CP-violation phase.
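The reason a few-GeV threshold is the key requirement can be seen from the matter (MSW) resonance condition, E_res = Δm²₃₁ cos2θ₁₃/(2√2 G_F Ne): for the electron densities of the Earth’s mantle and core this lands at a few GeV, exactly the range PINGU targets. A rough estimate with representative densities (assumed values):

# Rough MSW resonance energy in the Earth for the 3-1 mass splitting, using the
# handy form sqrt(2)*G_F*N_e ~ 7.63e-14 eV * Y_e * rho[g/cm^3].
dm2_eV2 = 2.4e-3    # |Delta m^2_31|, eV^2
cos2t13 = 0.96      # cos(2*theta13), approximate
Y_e     = 0.5       # electrons per nucleon

for rho in (4.5, 11.0):                      # representative mantle and core densities
    V_eV = 7.63e-14 * Y_e * rho              # matter potential, eV
    E_res_GeV = dm2_eV2 * cos2t13 / (2.0 * V_eV) / 1.0e9
    print(f"rho = {rho:5.1f} g/cm^3 -> E_res ~ {E_res_GeV:.1f} GeV")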

The IceCube high-energy extension (IceCube-gen2) aims for a detector with a 10-times-larger instrumented volume, albeit with a higher energy threshold. It will explore the observed cosmic neutrino flux and pin down its origin. With a sample of more than 100 cosmic neutrinos per year, it will be possible to observe multiple neutrinos from the same sources, and so do astronomy. The instrument will also have an improved sensitivity to study the ultra-high-energy neutrinos produced in the interactions of cosmic rays with microwave photons.

Of course, IceCube is not the only collaboration studying high-energy neutrinos. Projects on the cubic-kilometre scale are also being prepared in the Mediterranean Sea (KM3NeT) and in Lake Baikal (GVD), with a field of view complementary to that of IceCube. Within KM3NeT, ORCA, a proposed low-threshold detector, would pursue the same physics as PINGU. And the radio-detection experiments ANITA, ARA, GNO and ARIANNA are beginning to explore the neutrino sky at energies above 10¹⁷ eV.

After a decade of construction, the completed IceCube detector came on line in December 2010. It has achieved the outstanding goal of observing cosmic neutrinos and has produced important results in diverse areas: cosmic-ray physics, dark-matter searches and neutrino oscillations, not to mention its contributions to glaciology and solar physics. The observation of cosmic neutrinos at the peta-electron-volt energy scale has attracted enormous attention, with many suggestions about the location of the requisite cosmic accelerators.

Looking ahead, IceCube anticipates two important extensions: PINGU, which will determine the neutrino-mass hierarchy, and IceCube-gen2, which will expand a discovery instrument into an astronomical telescope.

The post Cosmic neutrinos and more: IceCube’s first three years appeared first on CERN Courier.

]]>
https://cerncourier.com/a/cosmic-neutrinos-and-more-icecubes-first-three-years/feed/ 0 Feature Results from the coldest region of the Earth.
ATLAS closes and prepares for the restart https://cerncourier.com/a/atlas-closes-and-prepares-for-the-restart/ https://cerncourier.com/a/atlas-closes-and-prepares-for-the-restart/#respond Tue, 23 Sep 2014 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-closes-and-prepares-for-the-restart/ On 7 August, the technical teams in charge of closing activities in the ATLAS collaboration started to move the first pieces back into position around the LHC beam pipe.

The post ATLAS closes and prepares for the restart appeared first on CERN Courier.

]]>
CCnew3_08_14

On 7 August, the technical teams in charge of closing activities in the ATLAS collaboration started to move the first pieces back into position around the LHC beam pipe. The subdetectors had been moved out in February 2013, at the beginning of the first LHC Long Shutdown (LS1) – a manoeuvre that was needed to allow access and work on the planned upgrades.

LS1 has seen a great deal of work on the ATLAS detector. In addition to the upgrades carried out on all of the subdetectors, when the next LHC run starts in 2015 the experiment will have a new beam pipe and a new innermost pixel layer, the insertable B-layer (IBL). For the work to be carried out in the cavern, one of the small wheels of the muon system had to be moved to the surface.

The various pieces are moved using an air-pad system on rails, with the exception of the 25-m-diameter big wheel (in the muon system), which moves on bogies. One of the most difficult objects to move is the endcap calorimeter: it weighs about 1000 tonnes and comes with many “satellites”, i.e. electric cables, cryogenic lines and optical fibres for the read-out. Thanks to the air pads, the 1000 tonnes of the calorimeter can be moved by applying a force of only 23 tonnes. During the movement, the calorimeter, with its cryostat filled with liquid argon, remains connected to the flexible lines whose motion is controlled by the motion of the calorimeter.

The inflation of the air pads must be controlled perfectly to avoid any damage to the delicate equipment. This is achieved using two automated control units – one built during LS1 – which perform hydraulic and pneumatic compensation. This year, the ATLAS positioning system has been improved thanks to the installation of a new sensor system on the various subdetectors. This will allow the experts to achieve an accuracy of 300 μm in placing the components in their final position. The position sensors were originally developed by Brandeis University within the ATLAS collaboration, but the positioning system itself was developed with the help of surveyors from CERN, who are now using this precision system in other experiments.

All of the equipment movements in the cavern happen under the strict control of the technical teams and the scientists in charge of the various subdetectors. It takes several hours to move each piece, not only owing to the weight involved, but also because several stops are necessary to perform tests and checks.

The closing activities are scheduled to run until the end of September. By then, the team will have moved a total of 12 pieces, that is, 3300 tonnes of material.

The post ATLAS closes and prepares for the restart appeared first on CERN Courier.

]]>
https://cerncourier.com/a/atlas-closes-and-prepares-for-the-restart/feed/ 0 News On 7 August, the technical teams in charge of closing activities in the ATLAS collaboration started to move the first pieces back into position around the LHC beam pipe. https://cerncourier.com/wp-content/uploads/2014/09/CCnew3_08_14.jpg
A bright future for dark-matter searches https://cerncourier.com/a/a-bright-future-for-dark-matter-searches/ https://cerncourier.com/a/a-bright-future-for-dark-matter-searches/#respond Tue, 23 Sep 2014 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/a-bright-future-for-dark-matter-searches/ The US Department of Energy Office of High Energy Physics and the National Science Foundation Physics Division have announced their joint programme for second-generation dark-matter experiments, aiming at direct detection of the elusive dark-matter particles in Earth-based detectors.

The post A bright future for dark-matter searches appeared first on CERN Courier.

]]>
CCnew10_08_14

The US Department of Energy Office of High Energy Physics and the National Science Foundation Physics Division have announced their joint programme for second-generation dark-matter experiments, aiming at direct detection of the elusive dark-matter particles in Earth-based detectors. It will include ADMX-Gen2 – a microwave cavity searching for axions – and the LUX-Zeplin (LZ) and SuperCDMS-SNOLAB experiments targeted at weakly interacting massive particles (WIMPs). These selections were partially in response to recommendations of the P5 subpanel of the US High-Energy Physics Advisory Panel for a broad second-generation dark-matter direct-detection programme at a funding level significantly above that originally planned.

While ADMX-Gen2 consists mainly of an upgrade of the existing apparatus to reach a lower operation temperature of around 100 mK, and is rather inexpensive, the two WIMP projects are significantly larger. SuperCDMS will initially operate around 50 kg of ultra-pure germanium and silicon crystals at the SNOLAB laboratory in Ontario, for a search focused on WIMPs with low masses, below 10 GeV/c². The detectors will be optimized for low-energy thresholds and for very good particle discrimination. The experiment will be designed such that up to 400 kg of crystals can be installed at a later stage. The massive LZ experiment will employ about 7 tonnes of liquid xenon as a dark-matter target in a dual-phase time-projection chamber (TPC), installed at the Sanford Underground Research Facility in South Dakota. It is targeted mainly towards WIMPs with masses above 10 GeV/c². The timescale for these experiments foresees that the detector construction will start in 2016, with commissioning in 2018. All three experiments need to run for several years to reach their design sensitivities.

Meanwhile, other projects are operational and taking data, and several new second-generation experiments, with target masses beyond the tonne scale, are fully funded and currently being installed. The Canadian–UK project DEAP-3600, installed at SNOLAB, should take its first data with a 3.6-tonne single-phase liquid-argon detector by the end of this year. Its sensitivity goal is a factor 10–25 beyond the current best limit, depending on the WIMP mass. XENON1T, a joint effort by US, European, Swiss and Israeli groups, aims to surpass this goal using 3 tonnes of liquid xenon, of which 2 tonnes will be inside a dual-phase TPC. Construction is progressing fast at the Gran Sasso National Laboratory, and first data are expected by 2015. These experiments and their upgrades, the newly funded US projects, and other efforts around the globe, should open up a bright future for direct-dark-matter searches in the years to come.

The post A bright future for dark-matter searches appeared first on CERN Courier.

]]>
https://cerncourier.com/a/a-bright-future-for-dark-matter-searches/feed/ 0 News The US Department of Energy Office of High Energy Physics and the National Science Foundation Physics Division have announced their joint programme for second-generation dark-matter experiments, aiming at direct detection of the elusive dark-matter particles in Earth-based detectors. https://cerncourier.com/wp-content/uploads/2014/09/CCnew10_08_14.jpg
Semiconductor X-Ray Detectors https://cerncourier.com/a/semiconductor-x-ray-detectors/ Tue, 26 Aug 2014 11:59:36 +0000 https://preview-courier.web.cern.ch/?p=104228 This book provides an up-to-date review of the principles, practical applications, and state-of-the-art of semiconductor X-ray detectors, and describes many of the facets of X-ray detection and measurement using semiconductors – from manufacture to implementation.

The post Semiconductor X-Ray Detectors appeared first on CERN Courier.

]]>
By B G Lowe and R A Sareen
CRC Press
Hardback: £108

9780429088247

The history and development of Si(Li) X-ray detectors are an important supplement to the knowledge required for a full understanding of the workings of SDDs, CCDs and compound semiconductor detectors. This book provides an up-to-date review of the principles, practical applications, and state-of-the-art of semiconductor X-ray detectors, and describes many of the facets of X-ray detection and measurement using semiconductors – from manufacture to implementation. The initial chapters present a self-contained summary of relevant background physics, materials science and engineering aspects. Later chapters compare and contrast the assembly and physical properties of systems and materials currently employed.

The post Semiconductor X-Ray Detectors appeared first on CERN Courier.

]]>
Review This book provides an up-to-date review of the principles, practical applications, and state-of-the-art of semiconductor X-ray detectors, and describes many of the facets of X-ray detection and measurement using semiconductors – from manufacture to implementation. https://cerncourier.com/wp-content/uploads/2022/08/9780429088247-feature.jpg
Countdown to physics https://cerncourier.com/a/countdown-to-physics/ https://cerncourier.com/a/countdown-to-physics/#respond Tue, 26 Aug 2014 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/countdown-to-physics/ Following the restart of the first elements in CERN’s accelerator complex in June, beams are now being delivered to experiments from the Proton Synchrotron (PS) and the PS Booster.

The post Countdown to physics appeared first on CERN Courier.

]]>
CCnew1_07_14

Following the restart of the first elements in CERN’s accelerator complex in June, beams are now being delivered to experiments from the Proton Synchrotron (PS) and the PS Booster.

First in line were experiments in the East Area of the PS, where the T9 and T10 beam lines are up and running. These test beams serve projects such as the Advanced European Infrastructures for Detectors at Accelerators (AIDA), which looks at new detector solutions for future accelerators, and the ALICE collaboration’s tests of components for their inner tracking system. By the evening of 14 July, beam was hitting the East Area’s target and the next day, beams were back in T9 and T10.

CCnew2_07_14

Next to receive beams for physics were experiments at the neutron time-of-flight facility, n_TOF, and the Isotope mass Separator On-Line facility, ISOLDE. On 25 July, detectors measured the first neutron beam in n_TOF’s new Experimental Area 2 (EAR2). It was a low-intensity beam, but it showed that the whole chain – from the spallation target to the experimental hall, including the sweeping magnet and the collimators – is working well. Built about 20 m above the neutron production target, EAR2 is a bunker connected to the underground facilities via a vertical flight path through a duct 80 cm in diameter, where the beamline is installed. At n_TOF, neutron-induced reactions are studied with high accuracy, thanks to the high instantaneous neutron flux that the facility provides. The first experiments will be installed in EAR2 this autumn and the schedule is full until the end of 2015.

A week later, on 1 August, ISOLDE restarted its physics programme with beams from the PS Booster, after a shutdown of almost a year and a half during which many improvements were made. One of the main projects was the installation of new robots for handling the targets that become very radioactive. The previous robots were more than 20 years old and beginning to suffer from the effects of radiation. The long shutdown of CERN’s accelerator complex, LS1, provided the perfect opportunity to replace them with more modern robots with electronic-sensor feedback. On the civil engineering side, three ISOLDE buildings have been demolished and replaced with a single building that includes a new control room, a data-storage room, three laser laboratories, and a biology and materials laboratory. In the ISOLDE hall, new permanent experimental stations have also been installed. Almost 40 experiments are planned for the remainder of 2014.

After the PS, the Super Proton Synchrotron (SPS) will be next to receive beam. On 27 June, the SPS closed its doors to the LS1 engineers, bringing almost 17 months of activities to an end. The machine has now entered the hardware-testing phase, in preparation for a restart in October.

Meanwhile at the LHC, early August saw the start of the cool down of a third sector – sector 1-2. By the end of August, five sectors of the machine should be in the process of cooling down, with one (sector 6-7) already cold. Meanwhile, the copper stabilizer continuity measurements (CSCM) have been completed in the first sector (6-7), with no defect found. CSCM tests are to start in the second sector in mid-August. Elsewhere in the machine, the last pressure tests were carried out on 31 July, and the last short-circuit tests should be complete by mid-August.

The post Countdown to physics appeared first on CERN Courier.

]]>
https://cerncourier.com/a/countdown-to-physics/feed/ 0 News Following the restart of the first elements in CERN’s accelerator complex in June, beams are now being delivered to experiments from the Proton Synchrotron (PS) and the PS Booster. https://cerncourier.com/wp-content/uploads/2014/08/CCnew1_07_14.jpg
MicroBooNE detector is moved into place https://cerncourier.com/a/microboone-detector-is-moved-into-place/ https://cerncourier.com/a/microboone-detector-is-moved-into-place/#respond Tue, 26 Aug 2014 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/microboone-detector-is-moved-into-place/ The particle detector for MicroBooNE, a new short-baseline neutrino experiment at Fermi National Accelerator Laboratory, was gently lowered into place on 23 June. It is expected to detect its first neutrinos this winter.

The post MicroBooNE detector is moved into place appeared first on CERN Courier.

]]>
CCnew9_07_14

The particle detector for MicroBooNE, a new short-baseline neutrino experiment at Fermi National Accelerator Laboratory, was gently lowered into place on 23 June. It is expected to detect its first neutrinos this winter.

The detector – a time-projection chamber surrounded by a 12-m-long cylindrical vessel – was carefully transported by truck across the Fermilab site, from the assembly building where the detector was constructed to the experimental hall nearly 5 km away. The 30-tonne object was then hoisted up by a crane, lowered through the open roof of a new building and placed into its permanent home, directly in the path of Fermilab’s Booster neutrino beamline.

When filled with 170 tonnes of liquid argon, MicroBooNE will look for low-energy neutrino oscillations to help resolve the origin of a mysterious low-energy excess of particle events seen by the MiniBooNE experiment, which used the same beam line and relied on a Cherenkov detector filled with mineral oil.

The MicroBooNE time-projection chamber is the largest ever built in the US and is equipped with 8256 delicate gold-plated wires. The three layers of wires will capture pictures of particle interactions at different points in space and time. The superb resolution of the time-projection chamber will allow scientists to check whether the excess of MiniBooNE events is due to photons or electrons.

Using one of the most sophisticated processing programs ever designed for a neutrino experiment, computers will sift through the thousands of neutrino interactions recorded every day and create 3D images of the most interesting ones. The MicroBooNE team will use that data to learn more about neutrino oscillations and to narrow the search for a hypothesized fourth type of neutrino.

MicroBooNE is a cornerstone of Fermilab’s short-baseline neutrino programme, which could also see the addition of two more neutrino detectors along the Booster neutrino beamline, to refute or confirm hints of a fourth type of neutrino first reported by the LSND collaboration at Los Alamos National Laboratory. In its recent report, the Particle Physics Project Prioritization Panel (P5) expressed strong support for a short-baseline neutrino programme at Fermilab. The report was commissioned by the High Energy Physics Advisory Panel, which advises both the US Department of Energy and the National Science Foundation on funding priorities.

The detector technology used in MicroBooNE will serve as a prototype for a much larger liquid-argon detector that has been proposed as part of a long-baseline neutrino facility to be hosted at Fermilab. The P5 report strongly supports this larger experiment, which will be designed and funded through a global collaboration.

The post MicroBooNE detector is moved into place appeared first on CERN Courier.

]]>
https://cerncourier.com/a/microboone-detector-is-moved-into-place/feed/ 0 News The particle detector for MicroBooNE, a new short-baseline neutrino experiment at Fermi National Accelerator Laboratory, was gently lowered into place on 23 June. It is expected to detect its first neutrinos this winter. https://cerncourier.com/wp-content/uploads/2014/08/CCnew9_07_14.jpg
OECD report praises innovation at CERN https://cerncourier.com/a/oecd-report-praises-innovation-at-cern/ https://cerncourier.com/a/oecd-report-praises-innovation-at-cern/#respond Wed, 23 Jul 2014 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/oecd-report-praises-innovation-at-cern/ In early June, the Organisation for Economic Co-operation and Development (OECD) published their Global Science Forum (GSF) report, “The Impacts of Large Research Infrastructures on Economic Innovation and on Society: Case studies at CERN”. The report praises the culture of innovation at CERN, and finds that the laboratory has “evident links to economic, political, educational […]

The post OECD report praises innovation at CERN appeared first on CERN Courier.

]]>
In early June, the Organisation for Economic Co-operation and Development (OECD) published their Global Science Forum (GSF) report, “The Impacts of Large Research Infrastructures on Economic Innovation and on Society: Case studies at CERN”. The report praises the culture of innovation at CERN, and finds that the laboratory has “evident links to economic, political, educational and social advances of the past half-century”.

Through in-depth, confidential interviews with the people involved directly, the report focuses on two of CERN’s projects: the development of superconducting dipole magnets for the LHC and the organization’s contribution to hadron therapy.

A total of 1232 superconducting dipoles – each 14 m long and weighing 35 tonnes – steer the particle beams in the LHC. Following the R&D phase in the years 1985–2001, a call to tender was issued for the series production of the dipoles. R&D had included building a proof-of-concept prototype, meeting the considerable challenge of designing superconducting cables made of niobium-titanium (NbTi), and designing a complex cryostat system to keep the magnets cold enough to operate under superconducting conditions (CERN Courier October 2006 p28).

The report notes that although innovation at the cutting edge of technology is “inherently difficult, costly, time consuming and risky”, CERN mitigated those risks by keeping direct responsibility, decision-making and control of the project. While almost all of the “intellectual added value” from the project stemmed from CERN, contractors interviewed for the study reported their experience with the organization to be positive. CERN’s flexibility and ability to innovate attract creative, ambitious individuals, such that “success breeds success in innovation”, note the report’s authors.

The second case study covered CERN’s contribution to hadron therapy using beams of protons, or heavier nuclei such as carbon, to kill tumours. The authors attribute CERN’s success in pushing through medical research to its relatively “flat” hierarchy, where students and junior members of staff can share ideas freely with heads of department or management. A key project was the three-year Proton Ion Medical Machine Study, which started in 1996 and submitted a complete accelerator-system design in 1999 (CERN Courier October 1998 p20). CERN’s involvement in hadron therapy is also a story of collaboration – the laboratory retains close links with CNAO, the National Centre for Oncological Hadron Therapy in Italy, and the MedAustron centre in Austria, among others (CERN Courier December 2011 p37).

The report also praises the longevity of CERN, which allows it to “recycle” its infrastructure for new projects, as well as the CERN staff. This manpower is described as a “great asset” for the organization, one that can be deployed in response to strategic “top down” decisions or to initiatives that arise in a “bottom up” mode.

• For the full report, see www.oecd.org/sti/sci-tech/CERN-case-studies.pdf.

The post OECD report praises innovation at CERN appeared first on CERN Courier.

]]>
https://cerncourier.com/a/oecd-report-praises-innovation-at-cern/feed/ 0 News
NOvA experiment sees its first long-distance neutrinos https://cerncourier.com/a/nova-experiment-sees-its-first-long-distance-neutrinos/ https://cerncourier.com/a/nova-experiment-sees-its-first-long-distance-neutrinos/#respond Fri, 28 Mar 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/nova-experiment-sees-its-first-long-distance-neutrinos/ On 11 February, the NOvA collaboration announced the detection of the first neutrinos in the long-baseline experiment’s far detector in northern Minnesota.

The post NOvA experiment sees its first long-distance neutrinos appeared first on CERN Courier.

]]>
On 11 February, the NOvA collaboration announced the detection of the first neutrinos in the long-baseline experiment’s far detector in northern Minnesota. The neutrino beam is generated at Fermilab and sent 800 km through the Earth to the far detector. Once completed, the near and far detectors will weigh 300 and 14,000 tonnes, respectively. Installation of the last module of the far detector is scheduled for early this spring and outfitting of both detectors with electronics should be completed in summer.

The post NOvA experiment sees its first long-distance neutrinos appeared first on CERN Courier.

]]>
https://cerncourier.com/a/nova-experiment-sees-its-first-long-distance-neutrinos/feed/ 0 News On 11 February, the NOvA collaboration announced the detection of the first neutrinos in the long-baseline experiment’s far detector in northern Minnesota. https://cerncourier.com/wp-content/uploads/2014/03/CCnew15_03_14-635x162-feature.jpg
MINERvA searches for wisdom among neutrinos https://cerncourier.com/a/minerva-searches-for-wisdom-among-neutrinos/ https://cerncourier.com/a/minerva-searches-for-wisdom-among-neutrinos/#respond Fri, 28 Mar 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/minerva-searches-for-wisdom-among-neutrinos/ Precise measurements of cross-sections continue a rich history of neutrino physics at Fermilab.

The post MINERvA searches for wisdom among neutrinos appeared first on CERN Courier.

]]>
Neutrino physicists enjoy a challenge, and the members of the MINERvA (Main INjector ExpeRiment for ν-A) collaboration at Fermilab are no exception. MINERvA seeks to make precise measurements of neutrino reactions using the Neutrinos at the Main Injector (NuMI) beam on both light and heavy nuclei. Does this goal reflect the wisdom of the collaboration’s namesake? Current and future accelerator-based neutrino-oscillation experiments must precisely predict neutrino reactions on nuclei if they are to search successfully for CP violation in oscillations. Understanding matter–antimatter asymmetries might in turn lead to a microphysical mechanism to answer the most existential of questions: why are we here? Although MINERvA might provide vital assistance in meeting this worthy goal, neutrinos never yield answers easily. Moreover, using neutrinos to probe the dynamics of reactions on complicated nuclei compounds two challenges.

The history of neutrinos is fraught with theorists underestimating the persistence of experimentalists (Close 2010). Wolfgang Pauli’s quip about the prediction of the neutrino, “I have done a terrible thing. I have postulated a particle that cannot be detected,” is a famous example. Nature rejected Enrico Fermi’s 1933 paper explaining β decay, saying it “contained speculations too remote from reality to be of interest to readers”. Eighty years ago, when Hans Bethe and Rudolf Peierls calculated the first prediction for the neutrino cross-section, they said, “there is no practical way of detecting a neutrino” (p23). But when does practicality ever stop physicists? The theoretical framework developed during the following two decades predicted numerous measurements of great interest using neutrinos, but the technology of the time was not sufficient to enable those measurements. The story of neutrinos across the ensuing decades is that of many dedicated experimentalists overcoming these barriers. Today, the MINERvA experiment continues Fermilab’s rich history of difficult neutrino measurements.

Neutrinos at Fermilab

Fermilab’s research on neutrinos is as old as the lab itself. While it was still being built, the first director, Robert Wilson, said in 1971 that the initial aim of experiments on the accelerator system was to detect a neutrino. “I feel that we then will be in business to do experiments on our accelerator…[Experiment E1A collaborators’] enthusiasm and improvisation gives us a real incentive to provide them with the neutrinos they are waiting for.” The first experiment, E1A, was designed to study the weak interaction using neutrinos, and was one of the first experiments to see evidence of the weak neutral current. In the early years, neutrino detectors at Fermilab included both the “15 foot” (4.6 m) bubble chamber, filled with neon or hydrogen, and coarse-grained calorimeters. As the lab grew, the detector technologies expanded to include emulsion, oil-based Cherenkov detectors, totally active scintillator detectors, and liquid-argon time-projection chambers. The physics programme expanded as well, to include 42 neutrino experiments either completed (37), running (3) or being commissioned (2). The NuTeV experiment collected an unprecedented million high-energy neutrino and antineutrino interactions, of both charged and neutral currents. It provided precise measurements of structure functions and a measurement of the weak mixing angle in an off-shell process with comparable precision to contemporary W-mass measurements (Formaggio and Zeller 2013). Then in 2001, the DONuT experiment observed the τ neutrino – the last of the fundamental fermions to be detected.

While much of the progress of particle physics has come by making proton beams of higher and higher energies, the most recent progress at Fermilab has come from making neutrino beams of lower energies but higher intensities. This shift reflects the new focus on neutrino oscillations, where the small neutrino mass demands low-energy beams sent over long distances. While NuTeV and DONuT used beams of 100 GeV neutrinos in the 1990s, the MiniBooNE experiment, started in 2001, used a 1 GeV neutrino beam to search for oscillations over a short distance. The MINOS experiment, which started in 2005, used 3 GeV neutrinos and measured them both at Fermilab and in a detector 735 km away, to study oscillations that were seen in atmospheric neutrinos. MicroBooNE and NOvA – two experiments completing construction at the time of this article – will place yet more sensitive detectors in these neutrino beamlines. Fermilab is also planning the Long-Baseline Neutrino Experiment, designed to be sensitive enough to resolve CP violation in neutrino oscillations.

A spectrum of interactions

Depending on the energy of the neutrino, different types of interactions will take place (Formaggio and Zeller 2013, Kopeliovich et al. 2012). In low-energy interactions, the neutrino will scatter from the entire nucleus, perhaps ejecting one or more of the constituent nucleons in a process referred to as quasi-elastic scattering. At slightly higher energies, the neutrinos interact with nucleons and can excite a nucleon into a baryon resonance that typically decays to create new final-state hadrons. In the high-energy limit, much of the scattering can be described as neutrinos scattering from individual quarks in the familiar deep-inelastic scattering framework. MINERvA seeks to study this entire spectrum of interactions.

Quasi-elastic scattering is an important channel for measuring CP violation in neutrino-oscillation experiments. In a simple model where the nucleons of the nucleus live in a nuclear binding potential, the reaction rate can be predicted. In addition, an accurate estimate of the energy of the incoming neutrino can be made using only the final-state charged lepton’s energy and angle, which are easy to measure even in a massive neutrino-oscillation experiment. However, the MiniBooNE experiment at Fermilab and the NOMAD experiment at CERN both measured the quasi-elastic cross-section and found contradictory results in the framework of this simple model (Formaggio and Zeller 2013, Kopeliovich et al. 2012).
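
For reference, the estimate referred to here is usually written in a form like the following for the charged-current process in which a muon neutrino converts a neutron into a proton and a muon (a sketch, assuming a struck neutron at rest in a uniform binding potential, as in the simple model just described; exact conventions vary between experiments):

```latex
\[
E_\nu^{\mathrm{QE}} =
  \frac{m_p^{2} - (m_n - E_B)^{2} - m_\mu^{2} + 2\,(m_n - E_B)\,E_\mu}
       {2\left[(m_n - E_B) - E_\mu + p_\mu\cos\theta_\mu\right]}
\]
```

Only the muon’s energy, momentum and angle with respect to the beam enter, precisely the quantities that remain easy to measure in a massive oscillation detector.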

[Figure: the neutrino quasi-elastic cross-section]

One possible explanation of this discrepancy can be found in more sophisticated treatments of the environment in which the interaction occurs (Formaggio and Zeller 2013, Kopeliovich et al. 2012). The simple relativistic Fermi-gas model treats the nucleus as quasi-free independent nucleons with Fermi motion in a uniform binding potential. The spectral-function model includes more correlation among the nucleons in the nucleus. However, more complete models that include the interactions among the many nucleons in the nucleus modify the quasi-elastic reaction significantly. In addition to modelling the effect of the nuclear environment on the initial reaction, final-state interactions of the produced hadrons inside the nucleus must also be modelled. For example, if a pion is created inside the nucleus, it might be absorbed on interacting with other nucleons before leaving the nucleus. Experimentalists must provide sufficient data to distinguish between the models.
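
As a toy illustration of the first of these ingredients (a sketch only, not the event generators actually used by experiments, and with the Fermi momentum of roughly 0.22 GeV/c taken as an assumed, commonly quoted value for carbon), the global relativistic Fermi-gas picture amounts to drawing the struck nucleon’s momentum uniformly from a sphere of radius k_F, which is what smears the reconstructed kinematics event by event:

```python
# Illustrative sketch only (not a MINERvA or generator implementation): draw
# initial-state nucleon momenta from a global relativistic Fermi gas, i.e.
# momenta filled uniformly inside a sphere of radius k_F. k_F ~ 0.22 GeV/c is
# assumed here as a commonly quoted value for carbon.
import numpy as np

def sample_fermi_gas_momenta(n, k_fermi_gev=0.22, rng=np.random.default_rng(1)):
    """Return n nucleon momentum 3-vectors (GeV/c), uniform in a Fermi sphere."""
    # Uniform in volume => |p| = k_F * u**(1/3) with u uniform in [0, 1]
    p_mag = k_fermi_gev * rng.random(n) ** (1.0 / 3.0)
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.column_stack([p_mag * sin_t * np.cos(phi),
                            p_mag * sin_t * np.sin(phi),
                            p_mag * cos_t])

p = sample_fermi_gas_momenta(100_000)
print("mean |p| =", np.linalg.norm(p, axis=1).mean(), "GeV/c")  # ~0.75 * k_F
```

Spectral-function or multi-nucleon treatments replace this flat momentum distribution with correlated ones, which is exactly where the model dependence discussed above enters.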

The ever-elusive neutrino has forced experimentalists to develop clever ways to measure neutrino cross-sections, and this is exactly what MINERvA is designed to do with precision. The experiment uses the NuMI beam – a highly intense neutrino beam. The MINERvA detector is made of finely segmented scintillators, allowing the measurement of the angles and energies of the particles within. Figures 1 and 2 show the detector and a typical event in the nuclear targets. The MINOS near-detector, located just behind MINERvA, is used to measure the momentum and charge of the muons. With this information, MINERvA can measure precise cross-sections of different types of neutrino interactions: quasi-elastic, resonance production, and deep-inelastic scatters, among others.

[Figure: ratio of charged-current cross-sections]

The MINERvA collaboration began by studying quasi-elastic muon-neutrino scattering for both neutrinos (MINERvA 2013b) and antineutrinos (MINERvA 2013a). By measuring the muon kinematics to estimate the neutrino energies, they were able to measure the neutrino and antineutrino cross-sections. The data, shown in figure 3, suggest that the nucleons do spend some time in the nucleus joined together in pairs. When the neutrino interacts with the pair, the pair is kicked out of the nucleus. Using the visible energy around the interaction vertex allowed a search for evidence of the nucleon pair. Experience from electron quasi-elastic scattering leads to an expectation of final-state proton–proton pairs for neutrino quasi-elastic scattering and neutron–neutron pairs for antineutrino scattering. MINERvA’s measurements of the energy around the vertex in both neutrino and antineutrino quasi-elastic scattering support this expectation (figure 3, right).

A 30-year-old puzzle

Another surprise beyond the standard picture in lepton–nucleus scattering emerged 30 years ago in deep-inelastic muon scattering. The European Muon Collaboration (EMC) observed a modification of the structure functions in heavy nuclei that is still theoretically unresolved, in part because there is no other reaction in which an analogous effect is observed. Neutrino and antineutrino deep-inelastic scattering might see related effects with different leptonic currents, and therefore different couplings to the constituents of the nucleus (Gallagher et al. 2010, Kopeliovich et al. 2012). MINERvA has begun this study using large targets of active scintillator and passive graphite, iron and lead (MINERvA 2014). Figure 4 shows the ratio of lead to scintillator and illustrates behaviour that is not in agreement with a model based on charged-lepton scattering modifications of deep-inelastic scattering and the elastic physics described above. Similar behaviour, but with smaller deviations from the model, is observed in the ratio of iron to scintillator. MINERvA’s investigation of this effect will benefit greatly from its current operation in the upgraded NuMI beam for the NOvA experiment, which is more intense and higher in energy on the beamline’s axis. Both features will allow more access to the kinematic regions where deep-inelastic scattering dominates. By including a long period of antineutrino operation needed for NOvA’s oscillation studies, an even more complete survey of the nucleons can be done. The end result of these investigations will be a data set that can offer a new window on the process behind the EMC effect.
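
For orientation, the quantity behind such comparisons is, schematically, the per-nucleon cross-section ratio as a function of Bjorken x (standard definitions quoted here for reference; the exact variables and normalisations used by MINERvA may differ):

```latex
\[
R_{\mathrm{Pb/CH}}(x) =
  \frac{\left(\mathrm{d}\sigma/\mathrm{d}x\right)_{\mathrm{Pb}}/A_{\mathrm{Pb}}}
       {\left(\mathrm{d}\sigma/\mathrm{d}x\right)_{\mathrm{CH}}/A_{\mathrm{CH}}},
\qquad
x = \frac{Q^{2}}{2M\nu}
\]
```

Here Q² is the squared four-momentum transfer, ν the energy transfer and M the nucleon mass; departures of this ratio from the charged-lepton-based expectation are what the figure highlights.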

Initially in the history of the neutrino, theory led experiment by several decades. Now, experiment leads theory. Neutrino physics has repeatedly identified interesting and unexpected physics. Currently, physics is trying to understand how the most abundant particle in the universe interacts in the simplest of situations. MINERvA is just getting started on answering these types of questions and there are many more interactions to study. The collaboration is also looking at what happens when neutrinos make pions or kaons when they hit a nucleus, and how well they can measure the number of times a neutrino scatters off an electron – the only “standard candle” in this business.

Time after time, models fail to predict what is seen in neutrino physics. The MINERvA experiment, among others, has shown that quasi-elastic scattering is a wonderful tool to study the nuclear environment. Perhaps the use of neutrinos, once thought impossible to detect, as a probe of the inside of the nucleus would make Pauli, Fermi, Bethe, Peierls and the rest chuckle.

The post MINERvA searches for wisdom among neutrinos appeared first on CERN Courier.

]]>
https://cerncourier.com/a/minerva-searches-for-wisdom-among-neutrinos/feed/ 0 Feature Precise measurements of cross-sections continue a rich history of neutrino physics at Fermilab. https://cerncourier.com/wp-content/uploads/2014/03/CCmin1_03_14.jpg
Advanced radiation detectors in industry https://cerncourier.com/a/advanced-radiation-detectors-in-industry/ https://cerncourier.com/a/advanced-radiation-detectors-in-industry/#respond Fri, 28 Mar 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/advanced-radiation-detectors-in-industry/   The European Physical Society’s Technology and Innovation Group (EPS-TIG) was set up in 2011 to work at the boundary between basic and applied sciences, with annual workshops organized in collaboration with CERN as its main workhorse (CERN Courier April 2013 p31). The second workshop, organized in conjunction with the department of physics and astronomy […]

The post Advanced radiation detectors in industry appeared first on CERN Courier.

]]>
The European Physical Society’s Technology and Innovation Group (EPS-TIG) was set up in 2011 to work at the boundary between basic and applied sciences, with annual workshops organized in collaboration with CERN as its main workhorse (CERN Courier April 2013 p31). The second workshop, organized in conjunction with the department of physics and astronomy and the “Fondazione Flaminia” of Bologna University, took place in Ravenna on 11–12 November 2013. The subject – advanced radiation detectors for industrial use – brought together experts involved in the research and development of advanced sensors and representatives from related spin-off companies.

The first session, on technology-transfer topics, opened with a keynote speech by Karsten Buse, director of the Fraunhofer Institute for Physical Measurement Technique (IPM), Freiburg. In the spirit of Joseph von Fraunhofer (1787–1826) – a researcher, inventor and entrepreneur – the Fraunhofer Gesellschaft promotes innovation and applied research that is of direct use for industry. Outlining the IPM’s mission and the specific competences and services it provides, Buse presented an impressive overview of technology projects that have been initiated and developed or improved and supported by the institute. He also emphasized the need to build up and secure intellectual property, and explained contract matters. The success stories include the MP3 audio-compression algorithm, white LEDs to replace conventional light bulbs, and all-solid-state widely tunable lasers. Buse concluded by observing that bridging the gap between academia and industry requires some attention, but is less difficult than often thought and also highly rewarding. A lively discussion followed in the audience of students, researchers and partners from industry.

The second talk focused on knowledge transfer (KT) from the perspective of CERN’s KT Group. First, Giovanni Anelli described the KT activities based on CERN’s technology portfolio and on people – that is, students and fellows. In the second part, Manjit Dosanjh presented the organization’s successful and continued transfer to medical applications of advanced technologies in the fields of accelerators, detectors and informatics technologies. Catalysing and facilitating collaborations between medical doctors, physicists and engineers, CERN plays an important role in “physics for health” projects at the European level via conferences and networks such as ENLIGHT, set up to bring medical doctors and physics researchers together (CERN Courier December 2012 p19).

Andrea Vacchi of INFN/Trieste reviewed the INFN’s KT activities. He emphasized that awareness of the value of the technology assets developed inside INFN is growing. In the past, technology transfer between INFN and industry happened mostly through the involvement of suppliers in the development of technologies. In future, INFN will take more proactive measures to encourage technology transfer between INFN research institutions and industry.

From lab to industry

The first afternoon was rounded off by Colin Latimer of the University of Belfast, a member of the EPS Executive Committee. He illustrated the varying timescales between invention and mass-application multi-billion-dollar markets, with a number of example technologies including optical fibres (1928), liquid-crystal displays (1936), magnetic-resonance imaging (MRI) scanners (1945) and lasers (1958), with high-temperature superconductors (1986) and graphene (2004) still waiting to make a major impact. Latimer went on to present results from the recent study commissioned by the EPS from the Centre for Economics and Business Research, which has shown the importance of physics to the European economy (EPS/Cebr 2013).

The second part of the workshop was devoted to sensors and innovation in instrumentation and industrial applications, starting with a series of talks that reviewed the latest developments. This was followed by presentations from industry on various sensor products, application markets and technological developments.

Erik Heijne, a pioneer of silicon and silicon-pixel detectors at CERN, started by discussing innovation in instrumentation through the use of microelectronics technology. Miniaturization to sub-micron silicon technologies allows many functions to be compacted into a small volume. This has led in turn to the integration of sensors and processing electronics in powerful devices, and has opened up new fields of applications (CERN Courier March 2014 p26). In high-energy particle physics, the new experiments at the LHC have been based on sophisticated chips that allow unprecedented event rates of up to 40 MHz. Some of the chips – or at least the underlying ideas – have found applications in materials analysis, medical imaging and other types of industrial equipment. The radiation imaging matrix, for example, based on silicon-pixel and integrated read-out chips, has many applications already.

Detector applications

Julia Jungmann of PSI emphasized the use of active pixel detectors for imaging in mass spectrometry in molecular pathology, in research done at the FOM Institute AMOLF in Amsterdam. The devices have promising features for fast, sensitive ion-imaging with time and space information from the same detector, high spatial resolution, direct imaging acquisition and highly parallel detection. The technique, which is based on the family of Medipix/Timepix devices, provides detailed information on molecular identity and localization – vital, for example in detecting the molecular basis of a pathology without the need to label bio-molecules. Applications include disease studies, drug-distribution studies and forensics. The wish list is now for chips with 100 ps time bins, a 1 ms measurement interval, multi-hit capabilities at the pixel level, higher read-out rates and high fluence tolerance.

In a similar vein, Alberto Del Guerra of the University of Pisa presented the technique of positron-emission tomography (PET) and its applications. Outlining the physics and technology of PET, he showed improved variants of PET systems and applications to molecular imaging, which also allow the visual representation, characterization and quantification of biological processes at the cellular and subcellular levels within living organisms. Clinical systems of hybrid PET and computerized tomography (CT) for application in oncology and neurology, human PET and micro-PET equipment, combined with small-animal CT, are available from industry, and today there are also systems where PET and magnetic resonance imaging (MRI) are combined. Such systems are being used in hadron therapy in Italy for monitoring purposes at the 62 MeV proton cyclotron of the CATANA facility in Catania, and at the proton and carbon synchrotron of the CNAO centre in Pavia. An optimized tri-modality imaging tool for schizophrenia is even being developed, combining PET with MRI and electroencephalography measurements. Del Guerra’s take-home message was that technology transfer in the medical field needs long-term investment – industry can withdraw halfway if a technology is not profitable (for example, Siemens in the case of proton therapy). In future, applications will be multimodal with PET combined with other imaging techniques (CT, MRI, optical projection tomography), for applications to specific organs such as the brain, breast, prostate and more.

The next topic related to recent developments in the silicon drift detector (SDD) and its applications. Chiara Guazzoni, of the Politecnico di Milano and INFN Milan, gave an excellent overview of SDDs, which were invented by Emilio Gatti and Pavel Rehak 30 years ago. These detectors are now widely used in X-ray spectroscopy and are commercially available. Conventional and non-conventional applications include the non-destructive analysis of cultural heritage and biomedical imaging based on X-ray fluorescence, proton-induced X-ray emission studies, gamma-ray imaging and spectroscopy, X-ray scatter imaging, etc. As Gatti and Rehak stated in their first patent, “additional objects and advantages of the invention will become apparent to those skilled in the art,” and Guazzoni hopes that the art will keep “drifting on” towards new horizons.

Moving on to presentations from industry and start-up companies, Jürgen Knobloch of KETEK GmbH in Munich presented new high-throughput, large-area SDDs, starting with a historical review of the work of Josef Kemmer, who in 1970 started to develop planar silicon technology for semiconductor detectors. Collaborating with Rehak and the Max-Planck Institute in Munich, Kemmer went on to produce the first SDDs with a homogeneous entrance window, with depleted field-effect transistor (DEPFET) and MOS-type DEPFET (DEPMOS) technologies. In 1989 he founded the start-up company KETEK, which is now the global commercial market leader in SDD technology. Knobloch presented the range of products from KETEK and concluded with a list of recommendations for better collaboration between research and industry. KETEK’s view on how science and industry can better collaborate includes: workshops of the kind organized by EPS-TIG; meetings between scientists and technology companies to set out practical needs and future requirements; involvement of technology-transfer offices to resolve intellectual-property issues; encouragement of industry to accept longer times for returns in investments; and the strengthening of synergies between basic research and industry R&D.

Knobloch’s colleague at KETEK, Werner Hartinger, then described new silicon photomultipliers (SiPMs) with high photon-detection efficiency, and listed the characteristics of a series of KETEK’s SiPM sensors, which also feature a huge gain (>10⁶) with low excess noise and a low temperature coefficient. KETEK has off-the-shelf SiPM devices and also customizes devices for CERN. The next steps will be continuous noise reduction (in both dark rate and cross-talk) by enhancing the KETEK “trench” technology, enhancement of the pulse shape and timing properties by optimizing parasitic elements and read-out, and the production of chip-size packages and arrays at the package level.

New start-ups

PIXIRAD, a new X-ray imaging system based on chromatic photon-counting technology, was presented by Ronaldo Bellazzini of PIXIRAD Imaging Counters srl – a recently constituted INFN spin-off company. The detector can deliver extremely clear and highly detailed X-ray images for medical, biological, industrial and scientific applications in the energy range 1–100 keV. Photon counting, colour mode and high spatial resolution lead to an optimal ratio of image quality to absorbed dose. Modules with units of 1, 2, 4 and 8 tiles have been built with almost zero dead space between the blocks. A complete X-ray camera based on the PIXIRAD-1 single-module assembly is available for customers in scientific and industrial markets for X-ray diffraction, micro-CT, etc. A dedicated machine to perform X-ray slot-scanning imaging has been designed and built and is currently under test. This system, which uses the PIXIRAD-8 module and is able to produce large-area images with fine position resolution, has been designed for digital mammography, which is one of the most demanding X-ray imaging applications.

CIVIDEC Instrumentation – another start-up company – was founded in 2009 by Erich Griesmayer. He presented several examples of applications of the products, which are based on diamond-detector technology. They have found use at the LHC and other accelerator beamlines as beam-loss and beam-position monitors for time measurements, high-radiation-level measurements, neutron time of flight, and as low-temperature detectors in superconducting quadrupoles. The company provides turn-key solutions that connect via the internet, supplying clients worldwide.

Nicola Tartoni, head of the detector group at the Diamond Light Source, outlined the layout of the facility and its diversified programmes. He presented an overview of the detector development and beamlines of this outstanding user facility in partnership with industry, with diverse R&D projects of increasing complexity.

Last, Carlos Granja, of the Institute of Experimental and Applied Physics (IEAP) at the Czech Technical University (CTU) in Prague, described the research carried out with the European Space Agency (ESA) demonstrating the impressive development in detection and particle tracking of individual radiation quanta in space. This has used the Timepix hybrid semiconductor pixel-detector developed by the Medipix collaboration at CERN. The Timepix-based space-qualified payload, produced by IEAP CTU in collaboration with the CSRC company of the Czech Republic, has been operating continuously on board ESA’s Proba-V satellite in low-Earth orbit at 820 km altitude, since being launched in May 2013. Highly miniaturized devices produced by IEAP CTU are also flying on board the International Space Station for the University of Houston and NASA for high-sensitivity quantum dosimetry of the space-station crew.

In other work, IEAP CTU has developed a micro-tracker particle telescope in which particle tracking and directional sensitivity are enhanced by the stacked layers of the Timepix device. For improved and wide-application radiation imaging, edgeless Timepix sensors developed at VTT and Advacam in Finland, with advanced read-out instrumentation and micrometre-precision tiling technology (available at IEAP CTU and the WIDEPIX spin-off company, of the Czech Republic), enable large sensitive areas up to 14 cm square to be covered by up to 100 Timepix sensors. This development allows the extension of high-resolution X-ray and neutron imaging at the micrometre level to a range of scientific and industrial applications.

• For more about the workshop, visit www.emrg.it/TIG_Workshop_2013/program.php?language=en. For the presentations, see http://indico.cern.ch/event/284070/.

The post Advanced radiation detectors in industry appeared first on CERN Courier.

]]>
https://cerncourier.com/a/advanced-radiation-detectors-in-industry/feed/ 0 Feature
LBNE prototype cryostat exceeds goals https://cerncourier.com/a/lbne-prototype-cryostat-exceeds-goals/ https://cerncourier.com/a/lbne-prototype-cryostat-exceeds-goals/#respond Mon, 24 Feb 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/lbne-prototype-cryostat-exceeds-goals/ Scientists and engineers working on the design of the particle detector for the Long-Baseline Neutrino Experiment (LBNE) celebrated a major success in January.

The post LBNE prototype cryostat exceeds goals appeared first on CERN Courier.

]]>
Scientists and engineers working on the design of the particle detector for the Long-Baseline Neutrino Experiment (LBNE) celebrated a major success in January. They showed that very large cryostats for liquid-argon-based neutrino detectors can be built using industry-standard technology normally employed for the storage of liquefied natural gas. The 35-tonne prototype system satisfies LBNE’s stringent purity requirement on oxygen contamination in argon of less than 200 parts per trillion (ppt) – a level that the team could maintain stably.

The purity of liquid argon is crucial for the proposed LBNE time-projection chamber (TPC), which will feature wire planes that collect electrons from an approximately 3.5 m drift region. Oxygen and other electronegative impurities in the liquid can absorb ionization electrons created by charged particles emerging from neutrino interactions and prevent them from reaching the TPC’s signal wires.

The test results were the outcome of the first phase of operating the LBNE prototype cryostat, which was built at Fermilab and features a membrane designed and supplied by the IHI Corporation of Japan. As part of the test, engineers cooled the system and filled the cryostat with liquid argon without prior evacuation. On 20 December, during a marathon 36 hour session, they cooled the membrane cryostat slowly and smoothly to 110 K, at which point they commenced the transfer of some 20,000 litres of liquid argon, maintained at about 89 K, from Fermilab’s Liquid-Argon Purity Demonstrator to the 35 tonne cryostat. By the end of the session, the team was able to verify that the systems for purifying, recirculating and recondensing the argon were working properly.

The LBNE team then topped off the tank with an additional 6000 litres of liquid argon and began to determine the argon’s purity by measuring the lifetime of ionization electrons travelling through the liquid, accelerated by an electric field of 60 V/cm. The measured electron lifetimes were between 2.5 and 3 ms – corresponding to an oxygen contamination approaching 100 ppt and nearly two times better than LBNE’s minimum requirement of 1.5 ms.
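
The connection between those numbers can be sketched with a simple exponential-attenuation picture (the rule-of-thumb constant and the drift parameters below are assumptions for illustration, not LBNE’s calibration):

```python
# Back-of-the-envelope sketch with assumed numbers, not LBNE analysis code.
# Drifting ionisation charge is attenuated as Q(t) = Q0 * exp(-t/tau), and a
# commonly used rule of thumb relates the electron lifetime to the
# O2-equivalent contamination: tau [ms] ~ 0.3 / concentration [ppb].
import math

def lifetime_ms(o2_equiv_ppb, rule_of_thumb_ms_ppb=0.3):
    return rule_of_thumb_ms_ppb / o2_equiv_ppb

def surviving_fraction(drift_m, tau_ms, drift_velocity_mm_per_us=1.6):
    """Fraction of ionisation electrons that survive a full drift."""
    drift_time_ms = drift_m * 1000.0 / drift_velocity_mm_per_us / 1000.0
    return math.exp(-drift_time_ms / tau_ms)

tau = lifetime_ms(0.1)   # 100 ppt = 0.1 ppb  ->  lifetime of about 3 ms
print(f"lifetime ~ {tau:.1f} ms")
print(f"survival over a 3.5 m drift: {surviving_fraction(3.5, tau):.2f}")
```

Under these assumptions, a 3 ms lifetime lets roughly half of the ionization charge survive the full 3.5 m drift foreseen for the LBNE TPC, which is why purity at the 100 ppt level matters.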

The Phase II testing programme, scheduled to begin at the end of 2014, will focus on the performance of active TPC detector elements submerged in liquid argon. Construction of the LBNE experiment, which will look for CP violation in neutrino oscillations by examining a neutrino beam travelling 1300 km from Fermilab to the Sanford Underground Research Facility, could begin in 2016. More than 450 scientists from 85 institutions collaborate on LBNE.

The post LBNE prototype cryostat exceeds goals appeared first on CERN Courier.

]]>
https://cerncourier.com/a/lbne-prototype-cryostat-exceeds-goals/feed/ 0 News Scientists and engineers working on the design of the particle detector for the Long-Baseline Neutrino Experiment (LBNE) celebrated a major success in January. https://cerncourier.com/wp-content/uploads/2014/02/CCnew10_02_14.jpg
Microelectronics at CERN: from infancy to maturity https://cerncourier.com/a/microelectronics-at-cern-from-infancy-to-maturity/ https://cerncourier.com/a/microelectronics-at-cern-from-infancy-to-maturity/#respond Mon, 24 Feb 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/microelectronics-at-cern-from-infancy-to-maturity/ The start of the LAA project in 1986 propelled electronics at CERN into the era of microelectronics, and laid crucial foundations for the success of the LHC experiments.

The post Microelectronics at CERN: from infancy to maturity appeared first on CERN Courier.

]]>
[Figure: Two decades of microelectronics]

When the project for the Large Electron–Positron (LEP) collider began at CERN in the early 1980s, the programme required the concentration of all available CERN resources, forcing the closure not only of the Intersecting Storage Rings and its experiments, but of all the bubble chambers and several other fixed-target programmes. During this period, the LAA detector R&D project was approved at the CERN Council meeting in December 1986 as “another CERN programme of activities” (see box), opening the door to developments for the future. A particular achievement of the project was to act as an incubator for the development of microelectronics at CERN, together with the design of silicon-strip and pixel detectors – all of which would become essential ingredients for the superb performance of the experiments at the LHC more than two decades later.

The start of the LAA project led directly to the build-up of know-how within CERN’s Experimental Physics Facilities Division, with the recruitment of young and creative electronic engineers. It also enabled the financing of hardware and software tools, as well as the training required to prepare for the future. By 1988, an electronics design group had been set up at CERN, dedicated to the silicon technology that now underlies many of the high-performing detectors at the LHC and in other experiments. Miniaturization to submicrometre scales allowed many functions to be compacted into a small volume in sophisticated, application-specific integrated circuits (ASICS), generally based on complementary metal-oxide-silicon (CMOS) technology. The resulting microchips incorporate analogue or digital memories, so selective read-out of only potentially useful data can be used to reduce the volume of data that is transmitted and analysed. This allows the recording of particle-collision events at unprecedented rates – the LHC experiments register 40 million events per second, continuously.
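
A toy software picture of that selective read-out scheme (with assumed, generic parameters, not a model of any particular CERN chip) is a circular buffer deep enough to cover the trigger latency, from which only accepted crossings are ever extracted:

```python
# Toy illustration with assumed parameters, not any specific chip design.
from collections import deque

BUNCH_PERIOD_NS = 25                     # 40 MHz crossing rate
TRIGGER_LATENCY_NS = 3200                # assumption: a few microseconds
PIPELINE_DEPTH = TRIGGER_LATENCY_NS // BUNCH_PERIOD_NS   # 128 cells

pipeline = deque(maxlen=PIPELINE_DEPTH)  # the on-chip analogue/digital memory
readout = []

for bx in range(10_000):
    pipeline.append((bx, bx % 256))      # a new sample is stored every 25 ns
    trigger_accept = (bx % 1000 == 0)    # toy trigger: keep 1 crossing in 1000
    if trigger_accept and len(pipeline) == PIPELINE_DEPTH:
        # the decision refers to the crossing sampled ~3.2 us earlier,
        # which by now sits at the oldest slot of the buffer
        readout.append(pipeline[0])

print(f"{len(readout)} of 10000 crossings read out")
```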

Last November, 25 years after the chip-design group was set up, some of those involved in the early days of these developments – including Antonino Zichichi, the initiator of LAA – met at CERN to celebrate the project and its vital role in establishing microelectronics at CERN. There were presentations from Erik Heijne and Alessandro Marchioro, who were among the founding members of the group, and from Jim Virdee, who is one of the founding fathers of the CMS experiment at the LHC. Together, they recalled the birth and gradual growth to maturity of microelectronics at CERN.

The beginnings

The story of advanced ASIC design at CERN began around the time of UA1 and UA2, when the Super Proton Synchrotron was operating as a proton–antiproton collider, to supply enough interaction energy for discovery of the W and Z bosons. In 1988, UA2 became, by chance, the first collider experiment to exploit a silicon detector with ASIC read-out. Outer and inner silicon-detector arrays were inserted into the experiment to solve the difficulty of identifying the single electron that comes from a decay of the W boson, close to the primary interaction vertex. The inner silicon-detector array with small pads could be fitted in the 9 mm space around the beam pipe, thanks to the use of the AMPLEX – a fully fledged, 16-channel 3-μm CMOS chip for read-out and signal multiplexing.

The need for such read-out chips was triggered by the introduction of silicon microstrip detectors at CERN in 1980 by Erik Heijne and Pierre Jarron. These highly segmented silicon sensors allow micrometre precision, but the large numbers of parallel sensor elements have to be dealt with by integrated on-chip signal processing. To develop ideas for such detector read-out, in the years 1984–1985 Heijne was seconded to the University of Leuven, where the microelectronics research facility had just become the Interuniversity MicroElectronics Centre (IMEC). It soon became apparent that CMOS technology was the way ahead, and the experience with IMEC led to Jarron’s design of the AMPLEX.

(Earlier, in 1983, a collaboration between SLAC, Stanford University Integrated Circuits Laboratory, the University of Hawaii and Bernard Hyams from CERN had already initiated the design of the “Microplex” – a silicon-microstrip detector read-out chip using nMOS, which was eventually used in the MARK II experiment at SLAC in the summer of 1990. The design was done in Stanford by Sherwood Parker and Terry Walker. A newer iteration of the Microplex design was used in autumn 1989 for the microvertex detector in the DELPHI experiment at LEP.)

The first digital ASIC

Heijne and Jarron were keen to launch chip design at CERN, as was Alessandro Marchioro, who was interested in developing digital microelectronics. However, finances were tight after the approval of LEP. With the appearance of new designs, the tools and methodologies developed in industry had to be adopted. For example, performing simulations was better than the old “try-and-test technique” of wire wrapping, but this required the appropriate software, including licences and training. The LAA project came at just the right time, allowing the chip-design group to start work in the autumn of 1988, with a budget for workstations, design software and analysis equipment – and crucially, up to five positions for chip-design engineers, most of whom remain at CERN to this day.

On the analogue side, there were three lines to the proposed research programme within LAA: silicon-microstrip read-out, a silicon micropattern pixel detector and R&D on chip radiation-hardness. The design of the first silicon-strip read-out chip at CERN – dubbed HARP for Hierarchical Analog Readout Processor – moved ahead quickly. The first four-channel prototypes were already received in 1988, with work such as the final design verification and layout check still being done at IMEC.

The silicon micropattern pixel detector, with small pixels in a 2D matrix, required integration of the sensor matrix and the CMOS read-out chip, either in the same silicon structure (monolithic) or in a hybrid technology with the read-out chip “bump bonded” to each pixel. Such a chip was developed as a prototype at CERN in 1989 in collaboration with Eric Vittoz of the Centre Suisse d’Electronique et de Microtechnique and his colleagues at the École polytechnique fédérale de Lausanne. While it turned out that this first chip could not be bump bonded, it successfully demonstrated the concept. In 1991, the next pixel-read-out chip designed at CERN was used in a three-unit “telescope” to register tracks behind the WA94 heavy-ion experiment in the Omega spectrometer. This test convinced the physicists to propose an improved heavy-ion experiment, WA97, with a larger telescope of seven double planes of pixel detectors. This experiment not only took useful data, but also proved that the new hybrid pixel detectors could be built and exploited.

Research on radiation hardness in chips remained limited within the LAA project, but took off later with the programme of the Detector Research and Development Committee (DRDC) and the design of detectors for the LHC experiments. Initially, it was more urgent to show the implementation of functioning chips in real experiments. Here, the use of AMPLEX in UA2 and later the first pixel chips in WA97 were crucial in convincing the community.

In parallel, components such as time-to-digital converters (TDCs) and other Fastbus digital-interface chips were successfully developed at CERN by the digital team. The new simulation tools purchased through the financial injection from the LAA project were used for modelling real-time event processing in a Fastbus data-acquisition system. This was to lead to high-performance programmable Fastbus ASICs for data acquisition in the early 1990s. Furthermore, a fast digital 8-bit adder-multiplier with a micropipelined architecture for correcting pedestals, based on a 1.2 μm CMOS technology, was designed and used in early 1987. By 1994, the team had designed a 16-channel TDC for the NA48 experiment, with a resolution of 1.56 ns, which could be read out at 40 MHz. The LAA had well and truly propelled the engineers at CERN into the world of microtechnology.
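
In software terms, the per-channel arithmetic of such a pedestal-correcting chip is essentially a subtract-and-scale kept within the 8-bit range (a minimal sketch of the operation, not the original design):

```python
# Minimal software analogue of per-channel pedestal correction (sketch only).
def correct_pedestal(raw_adc, pedestal, gain=1.0):
    """Return gain * (raw - pedestal), clamped to the 8-bit range 0-255."""
    value = int(round(gain * (raw_adc - pedestal)))
    return max(0, min(255, value))

print(correct_pedestal(raw_adc=143, pedestal=32, gain=1.1))   # -> 122
```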

The LAA

The LAA programme, proposed by Antonino Zichichi and financed by the Italian government, was launched as a comprehensive R&D project to study new experimental techniques for the next step in hadron-collider physics at multi-tera-electron-volt energies. The project provided a unique opportunity for Europe to take a leading role in advanced technology for high-energy physics. It was open to all physicists and engineers interested in participating. A total of 40 physicists, engineers and technicians were recruited, and more than 80 associates joined the programme. Later in the 1990s, during the operation of LEP for physics, the programme was complemented by the activities overseen by CERN’s Detector R&D Committee.

The challenge for the LHC

A critical requirement for modern high-energy-physics detectors is to have highly “transparent” detectors, maximizing the interaction of particles with the active part of the sensors while minimizing similar interactions with auxiliary material such as electronics components, cables, cooling and mechanical infrastructure – all while consuming the absolute minimum of power. Detectors with millions of channels can be built only if each channel consumes no more than a few milliwatts of power. In this context, the developments in microelectronics offered a unique opportunity, allowing the read-out system of each detector to be designed to provide optimal signal-to-noise characteristics for minimal power consumption. In addition, auxiliary electronics such as high-speed links and monitoring electronics could be highly optimized to provide the best solution for system builders.

However, none of this was evident when thoughts turned to experiments for the LHC. The first workshop on the prospects for building a large proton collider in the LEP tunnel took place in Lausanne in 1984, the year following the discovery of the W and Z bosons by UA1 and UA2. A prevalent saying at the time was “We think we know how to build a high-energy, high-luminosity hadron collider – we don’t have the technology to build a detector for it.” Over the next six years, several seminal workshops and conferences took place, during the course of which the formidable experimental challenges started to appear manageable, provided that enough R&D work could be carried out, especially on detectors.

The LHC experiments needed special chips with a rate capability compatible with the collider’s 40 MHz/25 ns cycle time and with a fast signal rise time to allow each event to be uniquely identified. (Recall that LEP ran with a 22 μs cycle time.) Thin – typically 0.3 mm – silicon sensors could meet these requirements, having a dead time of less than 15 ns. With sub-micron CMOS technology, front-end amplifiers could also be designed with a recovery time of less than 50 ns, therefore avoiding pile-up problems.

Thanks to the LAA initiative and the launch in 1990 by CERN of R&D for LHC detectors, overseen by the DRDC, technologies were identified and prototyped that could operate well in the harsh conditions of the LHC. In particular, members of the CERN microelectronics group pioneered the use of special full custom-design techniques, which led to the production of chips capable of withstanding the extreme radiation environment of the experiments while using a commercially available CMOS process. The first full-scale chip developed using these techniques is the main building block of the silicon-pixel detector in the ALICE experiment. Furthermore, in the case of CMS, the move to sub-micron 0.25-μm CMOS high-volume commercial technology for producing radiation-hard chips enabled the front-end read-out for the tracker to be both affordable and delivered on time. This technology became the workhorse for the LHC and has been used since for many applications, even where radiation tolerance is not required.

An example of another area that benefited from an early launch, assisted by the LAA project, is optical links. These play a crucial role in transferring large volumes of data, an important example being the transfer from the front ends of detectors that require one end of the link to be radiation hard – again, a new challenge.

Today, applications that require a high number of chips can profit from the increase in wafer size, with many chips per wafer, and low cost in high-volume manufacturing. This high level of integration also opens new perspectives for more complexity and intelligence in detectors, allowing new modes of imaging.

Looking ahead

Many years after Moore’s law was suggested, miniaturization still seems to comply with it. There has been continuous progress in silicon technology, from 10 μm silicon MOS transistors in the 1970s to 20 nm planar silicon-on-insulator transistors today. Extremely complex FinFET devices promise further downscaling to 7 nm transistors. Such devices will allow even more intelligence in detectors. The old dream of having detectors that directly provide physics primitives – namely, essential primary information about the phenomena involved in the interaction of particles with matter – instead of meaningless “ADC counts” or “hits” is now fully within reach. It will no longer be necessary to wait for data to come out of a detector because new technology for chips and high-density interconnections will make it possible to build in direct vertex-identification, particle-momenta evaluation, energy sums and discrimination, and fast particle-flow determination.

Some of the chips developed at CERN – or the underlying ideas – have found applications in materials analysis, medical imaging and various types of industrial equipment that employ radiation. Here, system integration has been key to new functionalities, as well as to cost reduction. The Medipix photon-counting chip developed in 1997 with collaborators in Germany, Italy and the UK is the ancestor of the Timepix chip that is used today, for example, for dosimetry on the International Space Station and in education projects. Pixel-matrix-based radiation imaging also has many applications, such as for X-ray diffraction. Furthermore, some of the techniques that were pioneered and developed at CERN for manufacturing chips sufficiently robust to survive the harsh LHC conditions are now adopted universally in many other fields with similar environments.

Looking ahead to Europe’s top priority for particle physics, exploitation of the LHC’s full potential until 2035 – including the luminosity upgrade – will require not only the maintenance of detector performance but also its steady improvement. This will again require a focused R&D programme, especially in microelectronics because more intelligence can now be incorporated into the front end.

Lessons learnt from the past can be useful guides for the future. The LAA project propelled the CERN electronics group into the new world of microelectronic technology. In the future, a version of the LAA could be envisaged for launching CERN into yet another generation of discovery-enabling detectors exploiting these technologies for new physics and new science.

The post Microelectronics at CERN: from infancy to maturity appeared first on CERN Courier.

]]>
https://cerncourier.com/a/microelectronics-at-cern-from-infancy-to-maturity/feed/ 0 Feature The start of the LAA project in 1986 propelled electronics at CERN into the era of microelectronics, and laid crucial foundations for the success of the LHC experiments. https://cerncourier.com/wp-content/uploads/2014/02/CClaa1_02_14.jpg
Silicon Solid State Devices and Radiation Detection https://cerncourier.com/a/silicon-solid-state-devices-and-radiation-detection/ Wed, 22 Jan 2014 13:52:16 +0000 https://preview-courier.web.cern.ch/?p=104406 Using their many years of experience both in research with silicon detectors and in giving lectures at various levels, Leroy and Rancoita address the fundamental principles of interactions between radiation and matter, together with working principles and the operation of particle detectors based on silicon solid-state devices.

The post Silicon Solid State Devices and Radiation Detection appeared first on CERN Courier.

]]>
By Claude Leroy and Pier-Giorgio Rancoita
World Scientific
Hardback: £89
E-book: £67

Using their many years of experience both in research with silicon detectors and in giving lectures at various levels, Leroy and Rancoita address the fundamental principles of interactions between radiation and matter, together with working principles and the operation of particle detectors based on silicon solid-state devices. They cover a range of fields of application of radiation detectors based on these devices, from low- to high-energy physics experiments, including those in outer space and medicine. Their book also covers state-of-the-art detection techniques in the use of such radiation detectors and their read-out electronics, including the latest developments in pixellated silicon radiation detectors and their applications.

The post Silicon Solid State Devices and Radiation Detection appeared first on CERN Courier.

]]>
Review Using their many years of experience both in research with silicon detectors and in giving lectures at various levels, Leroy and Rancoita address the fundamental principles of interactions between radiation and matter, together with working principles and the operation of particle detectors based on silicon solid-state devices. https://cerncourier.com/wp-content/uploads/2022/08/51R1mvPZvWL._SX342_SY445_QL70_ML2_.jpg
A new focus on forward muons for the ALICE upgrade programme https://cerncourier.com/a/a-new-focus-on-forward-muons-for-the-alice-upgrade-programme/ https://cerncourier.com/a/a-new-focus-on-forward-muons-for-the-alice-upgrade-programme/#respond Wed, 20 Nov 2013 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/a-new-focus-on-forward-muons-for-the-alice-upgrade-programme/ The ALICE experiment, with its state-of-the-art detection systems, produced a wealth of results during Run 1 of the LHC (2009–2013) – driving a new impetus in the field of heavy-ion collisions. While Run 2 (2015–2017) will see the consolidation and completion of the scientific programme for which the experiment was originally approved, the ALICE collaboration has already […]

The post A new focus on forward muons for the ALICE upgrade programme appeared first on CERN Courier.

]]>

The ALICE experiment, with its state-of-the-art detection systems, produced a wealth of results during Run 1 of the LHC (2009–2013) – driving a new impetus in the field of heavy-ion collisions. While Run 2 (2015–2017) will see the consolidation and completion of the scientific programme for which the experiment was originally approved, the ALICE collaboration has already taken up the challenge to make a quantitative leap in the precision of its observations by exploiting the high luminosity anticipated for the LHC in Run 3 (2019–2022). The plan is to upgrade the detector during the LHC’s second long shutdown, just before Run 3. In September, the LHC Committee (LHCC) approved an addendum to the letter of intent for the ALICE upgrade programme concerning the project for the Muon Forward Tracker (MFT) – an assembly of silicon pixel planes serving as internal tracker, in the forward acceptance of ALICE’s muon arm.

The basic idea behind the MFT concept – measuring muons both before and after the hadron absorber, then matching the two pieces of information – is well established in the field of heavy-ion physics, having been exploited both at CERN’s Super Proton Synchrotron and more recently at Brookhaven’s Relativistic Heavy-Ion Collider. Evaluation of the expected scientific impact of such a major upgrade required the preparation of a detailed letter of intent, the first draft of which was submitted to the ALICE collaboration in December 2011. The final document received internal approval in March 2013 and the first discussions with the LHCC started two months later.

There are three main pillars of the MFT’s contribution to the ALICE physics programme: dimuon measurement of the prompt charmonium states J/ψ and ψ´, to study in-medium colour-screening and hadronization mechanisms of cc̄ pairs; measurement of charm and beauty production via single muons and J/ψ particles from B decay, allowing a tomography of the medium via study of the energy loss of heavy quarks; and low-mass dimuon measurements, to study thermal radiation from quark–gluon plasma and search for in-medium modifications of the spectral functions of light vector mesons. The technical feasibility was also demonstrated, as reported in the letter of intent – from the choice of the CMOS pixel technology to aspects related to the detector mechanics and cooling. It is also worth emphasizing that ALICE is the only LHC experiment that is designed to perform precision measurements at forward rapidities in the high-multiplicity environment of heavy-ion collisions.

What will the MFT do for ALICE? Put simply, it will be like wearing a pair of glasses to correct myopia. The MFT will reveal the details of the muon tracks in the vertex region, allowing not only a powerful rejection of background muons but also access to measurements that are not feasible with the existing muon spectrometer. A prime example is the disentanglement of prompt (charm) and displaced (from beauty) J/ψ production (figure 1), which is achievable only by measuring precisely the distance between the primary vertex of the collision and the vertex in which the J/ψ is produced. Because such distances are of the order of a few hundred micrometres, a dedicated vertex detector is needed. Figure 2 illustrates how this measurement becomes possible only with the addition of the MFT to the current muon spectrometer set-up.
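
An order-of-magnitude estimate shows why a few hundred micrometres is the relevant scale (a sketch using the PDG charged-B lifetime, a value not quoted in the article): a J/ψ from a B decay is displaced from the primary vertex by roughly the flight length of its parent B hadron,

```latex
\langle L \rangle \simeq \beta\gamma\, c\tau_{B}, \qquad c\tau_{B^{\pm}} \approx 491~\mu\mathrm{m},
```

so even modest boosts give displacements of a few hundred micrometres – precisely the resolution that the MFT is being designed to provide.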

The team involved with the MFT project must now provide details on all of the technological aspects, producing a Technical Design Report in the second half of 2014 – the final effort before assembling the first pieces of the detector. Then four more years of intense work will be needed before the MFT is installed and becomes operational in the ALICE cavern in 2019.

The post A new focus on forward muons for the ALICE upgrade programme appeared first on CERN Courier.

]]>
TOTEM continues to pin down physics in the very forward region https://cerncourier.com/a/totem-continues-to-pin-down-physics-in-the-very-forward-region/ https://cerncourier.com/a/totem-continues-to-pin-down-physics-in-the-very-forward-region/#respond Wed, 20 Nov 2013 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/totem-continues-to-pin-down-physics-in-the-very-forward-region/ The TOTEM collaboration has made the first measurement of the double diffractive cross-section in the very forward region at the LHC, in a range in pseudorapidity where it has never been measured before.

The post TOTEM continues to pin down physics in the very forward region appeared first on CERN Courier.

]]>
The TOTEM collaboration has made the first measurement of the double diffractive cross-section in the very forward region at the LHC, in a range in pseudorapidity where it has never been measured before.

Double diffraction (DD) is the process in which two colliding hadrons dissociate into clusters of particles, the interaction being mediated by an object with the quantum numbers of the vacuum. Because the exchange is colourless, DD events are typically associated experimentally with a large “rapidity gap” – a range in rapidity that is devoid of particles.

The TOTEM experiment – designed in particular to study diffraction, total cross-sections and elastic scattering at the LHC – has three subdetectors placed symmetrically on both sides of the interaction point. Detectors in Roman pots identify leading protons, while two telescopes detect charged particles in the forward region. These two telescopes, T1 and T2, are the key to the measurement of double diffraction in the very forward region. T2 consists of gas-electron multipliers that detect charged particles with transverse momentum pT > 40 MeV/c at pseudorapidities of 5.3 < |η| < 6.5. The T1 telescope consists of cathode-strip chambers that measure charged particles with pT > 100 MeV/c at 3.1 < |η| < 4.7. (Pseudorapidity, η, is defined as –ln(tan θ/2), where θ is the angle of the outgoing particle relative to the beam axis, so a higher value corresponds to a more forward direction.)
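
Inverting the definition shows just how forward these acceptances are (a quick numerical check, not part of the original analysis):

```latex
\theta = 2\arctan\!\big(e^{-\eta}\big): \qquad \eta = 5.3 \;\Rightarrow\; \theta \approx 10~\mathrm{mrad}\ (0.57^{\circ}), \qquad \eta = 6.5 \;\Rightarrow\; \theta \approx 3~\mathrm{mrad}\ (0.17^{\circ}),
```

so T2 registers particles emitted within roughly 3–10 mrad of the beam axis.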

For this novel measurement, TOTEM selected the DD events by vetoing T1 tracks and requiring tracks in T2. This is equivalent to selecting events that have two diffractive systems with 4.7 < |η|min < 6.5, where ηmin is the minimum pseudorapidity of all of the primary particles produced in the diffractive system. The measurement used data that were collected in October 2011 at a centre-of-mass energy of 7 TeV during a low pile-up run with special β* = 90 m optics and with the T2 minimum-bias trigger. After offline reconstruction, the DD events were selected by requiring tracks in both T2 arms and no tracks in either of the T1 arms. This allowed the extraction of a clean sample of double-diffractive events.

The analysis of these events led to a result for the double diffraction cross-section of σDD = (116 ± 25) μb, for events where both diffractive systems have 4.7 < |η|min < 6.5. The measured values lie between the predictions of the Pythia and Phojet hadron-interaction models for the corresponding ranges in η.

The post TOTEM continues to pin down physics in the very forward region appeared first on CERN Courier.

]]>
First results from LUX on dark matter https://cerncourier.com/a/first-results-from-lux-on-dark-matter/ https://cerncourier.com/a/first-results-from-lux-on-dark-matter/#respond Wed, 20 Nov 2013 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/first-results-from-lux-on-dark-matter/ The collaboration that built and runs the Large Underground Xenon (LUX) experiment, operating in the Sanford Underground Research Laboratory, has released its first results in the search for weakly interacting massive particles (WIMPs) – a favoured candidate for dark matter.

The post First results from LUX on dark matter appeared first on CERN Courier.

]]>
The collaboration that built and runs the Large Underground Xenon (LUX) experiment, operating in the Sanford Underground Research Laboratory, has released its first results in the search for weakly interacting massive particles (WIMPs) – a favoured candidate for dark matter.

The LUX detector holds 370 kg of liquid xenon, with 250 kg actively monitored in a dual-phase (liquid–gas) time-projection chamber measuring 47 cm in diameter and 48 cm in height (cathode-to-gate). If a WIMP scatters off a xenon atom, the recoiling atom collides with other xenon atoms and emits photons and electrons. The electrons are drawn upwards by an electric field and interact with a thin layer of xenon gas at the top of the tank, releasing more photons. Light detectors in the top and bottom of the tank can detect single photons, so the two photon signals – one from the interaction point, the other from the top of the tank – allow the interaction to be pinpointed to within a few millimetres. The energy of the interaction can be measured precisely from the brightness of the signals.

The detector was filled with liquid xenon in February and the first results, for data taken during April to August, represent the analysis of 85.3 live days of data with a fiducial mass of 118 kg. The data are consistent with a background-only hypothesis, allowing 90% confidence limits to be set on spin-independent WIMP–nucleon elastic scattering with a minimum upper limit on the cross-section of 7.6 × 10⁻⁴⁶ cm² at a WIMP mass of 33 GeV/c². The data are in strong disagreement with low-mass WIMP signal interpretations of the results from several recent direct-detection experiments.

The post First results from LUX on dark matter appeared first on CERN Courier.

]]>
Fermilab gears up for an intense future https://cerncourier.com/a/fermilab-gears-up-for-an-intense-future/ https://cerncourier.com/a/fermilab-gears-up-for-an-intense-future/#respond Wed, 20 Nov 2013 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/fermilab-gears-up-for-an-intense-future/ A series of upgrades will deliver many more protons.

The post Fermilab gears up for an intense future appeared first on CERN Courier.

]]>
Satellite view of Fermilab

When a beam of protons passed through Fermilab’s Main Injector at the end of July, it marked the first operation of the accelerator complex since April 2012. The intervening long shutdown had seen significant changes to all of the accelerators to increase the proton-beam intensity that they can deliver and so maximize the scientific reach of Fermilab’s experiments. In August, acceleration of protons to 120 GeV succeeded at the first attempt – a real accomplishment after all of the upgrades that were made – and in September the Main Injector was already delivering 250 kW of proton-beam power. The goal is to reach 700 kW in the next couple of years.

With the end of the Tevatron collider programme in 2011, Fermilab increased its focus on studying neutrinos and rare subatomic processes while continuing its active role in the CMS experiment at CERN. Accelerator-based neutrino experiments, in particular, require intense proton beams. In the spring of 2012, Fermilab’s accelerator complex produced the most intense high-energy beam of neutrinos in the world, delivering a peak power of 350 kW by routinely sending 3.8 × 10¹³ protons/pulse at 120 GeV every 2.067 s to the MINOS and MINERvA neutrino experiments. It also delivered 15 kW of beam power at 8 GeV, sending 4.4 × 10¹² protons/pulse every 0.4 s to the MiniBooNE neutrino experiment.
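
These beam-power figures follow directly from the pulse intensity, beam energy and cycle time (an illustrative cross-check, not Fermilab code; the helper function below is invented for this purpose):

```python
# Average beam power = protons per pulse x energy per proton / cycle time.
E_PER_EV = 1.602e-19                      # joules per electron-volt

def beam_power_kw(protons_per_pulse, energy_gev, cycle_time_s):
    """Average beam power in kilowatts."""
    return protons_per_pulse * energy_gev * 1e9 * E_PER_EV / cycle_time_s / 1e3

print(round(beam_power_kw(3.8e13, 120, 2.067)))   # ~353 kW to MINOS/MINERvA
print(round(beam_power_kw(4.4e12, 8, 0.4)))       # ~14 kW to MiniBooNE
```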

Higher intensities

This level of beam intensity was pushing the capabilities of the Linac, the Booster and the Main Injector. During the shutdown, Fermilab reconfigured its accelerator complex (see figure 1) and upgraded its machines to prepare them for the new NOvA, MicroBooNE and LBNE experiments, which will demand more muon neutrinos. In addition, the planned Muon g-2 and Mu2e experiments will require proton beams for muon production. Because controlling beam losses becomes more important at higher intensities, the recent accelerator upgrades have also greatly improved beam quality and loss mitigation.

Proton throughput in the Booster

Before the shutdown, four machines were involved in delivering protons for neutrino production: the Cockcroft–Walton pre-accelerator, the linear accelerator, the Booster accelerator and the Main Injector. During the past 15 years, the proton requests for the Linac and Booster have gone up by more than an order of magnitude – first in support of MiniBooNE, which received beam from the Booster, and then in support of MINOS, which received beam from the Main Injector. Past upgrades to the accelerator complex ensured that those requests were met. However, during the next 10 years another factor of three is required to meet the goals of the new neutrino experiments. The latest upgrades are a major step towards meeting these goals.

For the first 40 years of the laboratory’s existence, the initial stage of the Fermilab accelerator chain was a caesium-ion source and a Cockcroft–Walton accelerator, which produced a 750 keV H⁻ beam. In August 2012, these were replaced with a new ion source, a radiofrequency quadrupole (RFQ) and an Einzel lens. The RFQ accomplishes transverse focusing, bunching and acceleration in a single compact device, significantly smaller than the room-sized Cockcroft–Walton accelerator. Now the 750 keV beam is already bunched, which improves capture in the following Linac (a drift-tube linear accelerator). The Einzel lens is used as a beam chopper: the transmission of the ions can be turned on and off by varying the voltage on the lens. Since the ion source and RFQ are a continuous-wave system, beam chopping is important to develop notches in the beam to allow for the rise times of the Booster extraction kicker. Chopping at the lowest possible energy minimizes the power loss in other areas of the complex.

Main Injector

The Booster, which receives 400 MeV H⁻ ions from the Linac, uses charge-exchange injection to strip the electrons from the ions and maximize beam current. It then accelerates the protons to 8 GeV. For the first 30 years of Booster operation, the demand for proton pulses was often less than 1 Hz and never higher than about 2 Hz. With the advent of MiniBooNE in 2002 and MINOS in 2005, demand for protons rose dramatically. As figure 2 shows, in 2003 – the first year of full MiniBooNE operation – 1.6 × 10²⁰ protons travelled through the Booster. This number was greater than the total for the previous 10 years.

Booster upgrades

A series of upgrades during the past 10 years enabled this factor of 10 increase in proton throughput. The upgrades improved both the physical infrastructure (e.g. cooling water and transformer power) and accelerator physics (aperture and orbit control).

While the Booster magnet systems resonate at 15 Hz – the maximum number of cycles per second that the machine can deliver – many of the other systems have not had sufficient power or cooling to operate at this frequency. Previous upgrades have pushed the Booster’s performance to about 7.5 Hz but the goal of the current upgrades is to bring the 40-year-old Linac and Booster up to full 15 Hz operation.

Understanding the aperture, orbit, beam tune and beam losses is increasingly important as the beam frequency rises. Beam losses directly result in component activation, which makes maintenance and repair more difficult because of radiation exposure to workers. Upgrades to instrumentation (beam-position monitors and dampers), orbit control (new ramped multipole correctors) and loss control (collimation systems) have led to a decrease in total power loss of a factor of two, even with the factor of 10 increase in total beam throughput.

The injection area

Two ongoing upgrades to the RF systems continued during the recent shutdown. One concerns the replacement of the 20 RF power systems, exchanging the vacuum-tube-based modulators and power amplifiers from the 1970s with a solid-state system. This upgrade was geared towards improving reliability and reducing maintenance. The solid-state designs have been in use in the Main Injector for 15 years and have proved to be reliable. The tube-based power amplifiers were mounted on the RF cavities in the Booster tunnel, a location that exposed maintenance technicians to radiation. The new systems reduce the number of components in the tunnel, therefore reducing radiation exposure and downtime because they can be serviced without entering the accelerator tunnel. The second upgrade is a refurbishment of the cavities, with a focus on the cooling and the ferrite tuners. As operations continue, the refurbishment is done serially so that the Booster always has a minimum number of operational RF cavities. Working on these 40-plus-year-old cavities that have been activated by radiation is a labour-intensive process.

The Main Injector and Recycler

The upgrades to the Main Injector and the reconfiguration of the Recycler storage ring have been driven by the NOvA experiment, which will explore the neutrino-mass hierarchy and investigate the possibility of CP violation in the neutrino sector. With the goal of 3.6 × 10²¹ protons on target and 14 kt of detector mass, a significant region of the phase space for these parameters can be explored. For the six-year duration of the experiment, this requires the Main Injector to deliver 6 × 10²⁰ protons/year. The best previous operation was 3.25 × 10²⁰ protons/year. A doubling of the integrated number of protons is required to meet the goals of the NOvA experiment.

In 2012, just before the shutdown, the Main Injector was delivering 3.8 × 10¹³ protons every 2.067 s to the target for the Neutrinos at the Main Injector (NuMI) facility. This intensity was accomplished by injecting nine batches at 8 GeV from the Booster into the Main Injector, ramping up the Main Injector magnets while accelerating the protons to 120 GeV, sending them to the NuMI target, and ramping the magnets back down to 8 GeV levels – then repeating the process. The injection process took 8/15 of a second (0.533 s) and the ramping up and down of the magnets took 1.533 s.

The refurbished RF cavities

A key goal of the shutdown was to reduce the time of the injection process. To achieve this, Fermilab reconfigured the Recycler, which is an 8 GeV, permanent-magnet storage ring located in the same tunnel as the Main Injector. The machine has the same 3.3 km circumference as the Main Injector. During the Tevatron collider era, it was used for the storage and cooling of antiprotons, achieving a record accumulation of 5 × 10¹² antiprotons with a lifetime in excess of 1000 hours.

In future, the Recycler will be used to slip-stack protons from the Booster and transfer them into the Main Injector. By filling the Recycler with 12 batches (4.9 × 10¹³ protons) from the Booster while the Main Injector is ramping, the injection time can be cut from 0.533 s to 11 μs. Once completed, the upgrades to the magnet power and RF systems will speed up the Main Injector cycle to 1.33 s – a vast improvement compared with the 2.067 s achieved before the shutdown. When the Booster is ready to operate at 15 Hz, the total beam power on target will be 700 kW.
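
The quoted 700 kW follows from the same relation between pulse intensity, energy and cycle time (again an illustrative cross-check rather than an official calculation; note that the 1.33 s cycle relies on the faster magnet ramp as well as on the quicker injection):

```python
# Cycle-time budget before the shutdown, and the projected power afterwards
# (power = protons per pulse x energy per proton / cycle time).
E_PER_EV = 1.602e-19                       # joules per electron-volt

old_cycle = 8 / 15 + 1.533                 # nine-batch injection + magnet ramp
new_cycle = 1.33                           # single-turn Recycler transfer + faster ramp

power_w = 4.9e13 * 120e9 * E_PER_EV / new_cycle
print(round(old_cycle, 2), round(power_w / 1e3))   # ~2.07 s, ~708 kW on target
```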

To use the Recycler for slip-stacking required a reconfiguration of the accelerator complex. A new injection beamline from the Booster to the Recycler had to be built (figure 3), since previously the only way to get protons into the Recycler was via the Main Injector. In addition, a new extraction beamline from the Recycler to the Main Injector was needed, as the aperture of the previous line was designed for the transfer of low-emittance, low-intensity antiproton beams. New 53 MHz RF cavities for the Recycler were installed to capture the protons from the Booster, slip-stack them and then transfer them to the Main Injector. New instrumentation had to be installed and all of the devices for cooling antiproton beams – both stochastic and electron cooling systems – and for beam transfer had to be removed.

New neutrino horn

Figure 4 shows the new injection line from the Booster (figure 5) to the Recycler, together with the upgraded injection line to the Main Injector, the transfer line for the Booster Neutrino Beam programme, and the Main Injector and Recycler rings. During the shutdown, personnel removed more than 100 magnets, all of the stochastic cooling equipment, vacuum components from four transfer lines and antiproton-specific diagnostic equipment. More than 150 magnets, 4 RF cavities and about 500 m of beam pipe for the new transfer lines were installed. Approximately 300 km of cable was pulled to support upgraded beam-position monitoring systems, new vacuum installations, new kicker systems, other new instrumentation and new powered elements. Approximately 450 tonnes of material was moved in or out of the complex at the same time.

The NuMI target

To prepare for a 700 kW beam, the target station for the NuMI facility needed upgrades to handle the increased power. A new target design was developed and fabricated in collaboration with the Institute for High Energy Physics, Protvino, and the Rutherford Appleton Laboratory, UK. A new focusing horn was installed to steer higher-energy neutrinos to the NOvA experiment (figure 6). The horn features a thinner conductor to minimize ohmic heating at the increased pulsing rate. The water-cooling capacity for the target, the focusing horns and the beam absorber was also increased.

With the completion of the shutdown, commissioning of the accelerator complex is underway. Operations have begun using the Main Injector, achieving 250 kW on target for the NuMI beamline and delivering beam to the Fermilab Test Beam Facility. The reconfigured Recycler has circulated protons for the first time and work is underway towards full integration of the machine into Main Injector operations. The neutrino experiments are taking data and the SeaQuest experiment will receive proton beam soon. Intensity and beam power are increasing in all of the machines and the full 700 kW beam power in the Main Injector should be accomplished in 2015.

The post Fermilab gears up for an intense future appeared first on CERN Courier.

]]>
LHCb plans for cool pixel detector https://cerncourier.com/a/lhcb-plans-for-cool-pixel-detector/ https://cerncourier.com/a/lhcb-plans-for-cool-pixel-detector/#respond Mon, 21 Oct 2013 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/lhcb-plans-for-cool-pixel-detector/ As the first long shutdown since the start-up of the LHC continues, many teams at CERN are already preparing for future improvements in performance that were foreseen when the machine restarts after the second long shutdown, in 2019.

The post LHCb plans for cool pixel detector appeared first on CERN Courier.

]]>
As the first long shutdown since the start-up of the LHC continues, many teams at CERN are already preparing for future improvements in performance that were foreseen when the machine restarts after the second long shutdown, in 2019. The LHCb collaboration, for one, has recently approved the choice of technology for the upgrade of its Vertex Locator (VELO), giving the go-ahead for a new pixel detector to replace the current microstrip device.

The collaboration is working towards a major upgrade of the LHCb experiment for the restart of data-taking in 2019. Most of the subdetectors and electronics will be replaced so that the experiment can read out collision events at the full rate of 40 MHz. The upgrade will also allow LHCb to run at higher luminosity and eventually accumulate an order of magnitude more data than was foreseen with the current set-up.

The job of the VELO is to peer closely at the collision region and reconstruct precisely the primary and secondary interaction vertices. The aim of the upgrade of this detector is to reconstruct events with high speed and precision, allowing LHCb to extend its investigations of CP violation and rare phenomena in the world of beauty and charm mesons.

The new detector will contain 40 million pixels, each measuring 55 μm square. The pixels will form 26 planes arranged perpendicularly to the LHC beams over a length of 1 m (see figure). The sensors will come so close to the interaction region that the LHC beams will have to thread their way through an aperture of only 3.5 mm radius.

Operating this close to the beams will expose the VELO to a high flux of particles, requiring new front-end electronics capable of spitting out data at rates of around 2.5 Tbits/s from the whole VELO. To develop suitable electronics, LHCb has been collaborating closely with the Medipix3 collaboration. The groups involved have recently celebrated the successful submission and delivery of the Timepix3 chip. The VeloPix chip planned for the read-out of LHCb’s new pixel detector will use numerous Timepix3 features. The design should be finalized about a year from now.

An additional consequence of the enormous track rate is that the VELO will have to withstand a considerable radiation dose. This means that it requires highly efficient cooling, which must also be extremely lightweight. LHCb has therefore been collaborating with CERN’s PH-DT group and the NA62 collaboration to develop the concept of microchannel cooling for the new pixel detector. Liquid CO₂ will circulate in miniature channels etched into thin silicon plates, evaporating under the sensors and read-out chips to carry the heat away efficiently. The CO₂ will be delivered via novel lightweight connectors that are capable of withstanding the high pressures involved. LHCb will be the first experiment to use evaporative CO₂ cooling in this way, following on from the successful experience with CO₂ cooling delivered via stainless steel pipes in the current VELO.

All of these novel concepts combine to make a “cool” pixel detector, well equipped to do the job for the LHCb upgrade.

The post LHCb plans for cool pixel detector appeared first on CERN Courier.

]]>
Genetic multiplexing: how to read more with less electronics https://cerncourier.com/a/genetic-multiplexing-how-to-read-more-with-less-electronics/ https://cerncourier.com/a/genetic-multiplexing-how-to-read-more-with-less-electronics/#respond Mon, 21 Oct 2013 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/genetic-multiplexing-how-to-read-more-with-less-electronics/ Modern physics experiments often require the detection of particles over large areas with excellent spatial resolution.

The post Genetic multiplexing: how to read more with less electronics appeared first on CERN Courier.

]]>
Modern physics experiments often require the detection of particles over large areas with excellent spatial resolution. This inevitably leads to systems equipped with thousands, if not millions, of read-out elements (strips, pixels) and consequently the same number of electronic channels. In most cases, it increases the total cost of a project significantly and can even be prohibitive for some applications.

In general, the size of the electronics can be reduced considerably by connecting several read-out elements to a single channel through an appropriate multiplexing pattern. However, any grouping implies a certain loss of information and this means that ambiguities can occur. Sébastien Procureur, Raphaël Dupré and Stéphan Aune at CEA Saclay and IPN Orsay have devised a method of multiplexing that overcomes this problem. Starting from the assumption that a particle leaves a signal on at least two neighbouring elements, they built a pattern in which the loss of information coincides exactly with this redundancy of the signal, thereby minimizing the ambiguities of localization. In this pattern, each channel is connected to several strips, but any given pair of channels is wired to neighbouring strips only once in the whole detector. The team has called this pattern “genetic multiplexing” for its analogy with DNA, as a sequence of channels uniquely codes the particle’s position.

Combinatorial considerations indicate that, using a prime number p of channels, a detector can be equipped with at most p(p–1)/2+1 read-out strips. Furthermore, the degree of multiplexing can be adapted easily to the incident flux. Simulations show that a reduction in the electronics by a factor of two can still be achieved at rates up to the order of 10 kHz/cm².

The team has successfully built and tested a large, 50 × 50 cm² Micromegas (micro-pattern gaseous detector) with such a pattern, the 1024 strips being read out with only 61 channels. The prototype showed the same spatial resolution as a non-multiplexed detector (Procureur et al. 2013). A second prototype, based on resistive-strip technology, will be tested soon with the aim of achieving efficiencies close to 100%.
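
One way to realise a pattern with this property – an illustrative reading of the construction rather than the authors’ published algorithm – is to treat the channels as the vertices of a complete graph and walk along its edges without ever reusing one: each step assigns a channel to the next strip, so every pair of channels sits on neighbouring strips at most once, and a full Eulerian circuit uses all p(p–1)/2 edges, giving exactly p(p–1)/2 + 1 strips. A minimal sketch (function names invented for the example):

```python
def multiplexing_pattern(p, n_strips):
    """Map strip index -> channel so that no pair of channels reads out
    neighbouring strips more than once (an Eulerian trail in the complete
    graph K_p). Needs odd p, so every channel has an even number of partners."""
    max_strips = p * (p - 1) // 2 + 1
    assert p % 2 == 1 and n_strips <= max_strips

    adj = {v: set(range(p)) - {v} for v in range(p)}   # unused channel pairs
    stack, trail = [0], []
    while stack:                                       # Hierholzer's algorithm
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].remove(v)
            stack.append(w)
        else:
            trail.append(stack.pop())
    return trail[:n_strips]                            # channel of strip i

# The Saclay/Orsay prototype's numbers: 61 channels suffice for 1024 strips,
# since 61 * 60 / 2 + 1 = 1831 >= 1024.
pattern = multiplexing_pattern(61, 1024)
pairs = [frozenset(pattern[i:i + 2]) for i in range(len(pattern) - 1)]
assert len(set(pairs)) == len(pairs)   # every neighbouring channel pair is unique
```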

The possibility of building large micro-pattern detectors with up to 30 times less electronics opens the door for new applications both within and beyond particle physics. In muon tomography, this multiplexing could be used to image large objects with an unprecedented accuracy, either by deflection (containers, trucks, manufacturing products) or by absorption (geological structures such as volcanoes, large monuments such as a cathedral roof). The reduction of the electronics and power consumption also suggests applications in medical imaging or dosimetry, where light, portable systems are required. Meanwhile, in particle physics this multiplexing could bring a significant reduction in the cost of electronics – after optimizing the number of channels with the incident flux – and simplifications in integration and cooling.

The post Genetic multiplexing: how to read more with less electronics appeared first on CERN Courier.

]]>
Micropattern-detector experts meet in Zaragoza https://cerncourier.com/a/micropattern-detector-experts-meet-in-zaragoza/ https://cerncourier.com/a/micropattern-detector-experts-meet-in-zaragoza/#respond Mon, 21 Oct 2013 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/micropattern-detector-experts-meet-in-zaragoza/ Latest research developments in the technology of MPGDs.

The post Micropattern-detector experts meet in Zaragoza appeared first on CERN Courier.

]]>
The three winners of the Charpak Award.

Micropattern gaseous detectors (MPGDs) are the modern heirs of multiwire proportional counter (MWPC) planes, with the wires replaced by microstructures that are engraved on printed-circuit-like substrates. The idea was first proposed by Anton Oed in 1988, but it was the invention of stable amplification structures such as the micromesh gaseous structure (Micromegas) by Ioannis Giomataris in 1996 and the gas electron multiplier (GEM) by Fabio Sauli in 1997 that triggered a boom in the development and applications of these detectors. It was as a consequence of this increasing activity that the series of international conferences on micropattern gaseous detectors was initiated, with the first taking place in Crete in 2009 followed by the second meeting in Kobe in 2011.

The third conference – MPGD2013 – moved to Spain, bringing more than 125 physicists, engineers and students to the Paraninfo building of the Universidad de Zaragoza during the first week of July. The presentations and discussions took place in the same room that, about a century ago, Santiago Ramón y Cajal – the most prominent Spanish winner of a scientific Nobel prize – studied and taught in. The Paraninfo is the university’s oldest building and its halls, corridors and stairs provided an impressive setting for the conference. The streets, bars and restaurants of Zaragoza – the capital of Aragon – were further subjects for the conference participants to discover. After an intense day of high-quality science, lively discussions often continued into the evening and sometimes late into the night, helped by a variety of tapas and wines.

The wealth of topics and applications that were reviewed at the conference reflected the current exciting era in the field. Indeed, the large amount of information and number of projects that were presented make it difficult to summarize the most relevant ones in a few lines. The following is a personal selection. Readers who would like more detail can browse the presentations that are posted on the conference website, including the excellent and comprehensive conference-summary talk given by Silvia Dalla Torre of INFN/Trieste on the last day.

The meeting started with talks about experiments in high-energy and nuclear physics that are using (or planning to use) MPGDs. Since the pioneering implementation of GEM and Micromegas detectors by the COMPASS collaboration at CERN – the first large-scale use of MPGDs in high-energy physics – they have spread to many more experiments. Now all of the LHC experiment collaborations plan to install MPGDs in their future upgrades. The most impressive examples, in terms of detector area, are the 1200 m² of Micromegas modules to be installed in the muon system of ATLAS and the 1000 m² of GEM modules destined for the forward muon spectrometer of CMS. These examples confirm that MPGDs are the technology of choice when large areas need to be covered with high granularity and occupancy in a cost-effective way. These numbers also imply that transferring the fabrication know-how to industry is a must. A good deal of effort is currently devoted to industrialization of MPGDs and this was also an important topic at the conference.

MPGDs have found application in other fields of fundamental research. Some relevant examples that were discussed at the conference included their use as X-ray or γ detectors or as polarimeters in astrophysics, as neutron or fission-fragment detectors, or in rare-event experiments. Several groups are exploring the possibility of developing MPGD-based light detectors – motivated greatly by the desire to replace large photo-multiplier tube (PMT) arrays in the next generation of rare-event experiments. Working at cryogenic temperatures – or even within the cryogenic liquid itself – is sometimes a requirement. Large-area light detectors are also needed for Cherenkov detectors and in this context presentations at the conference included several nice examples of Cherenkov rings registered by MPGDs. Several talks reported on applications beyond fundamental research, including a review by Fabrizio Murtas of INFN/Frascati and CERN. MPGDs are being used or considered in fields as different as medical imaging, radiotherapy, material science, radioactive-waste monitoring and security inspection, among others.

A picnic lunch at the Canfranc Estación village

An important part of the effort of the community is to improve the overall performance of current MPGDs, in particular regarding issues such as ageing or resilience to discharges. This is leading to modified versions of the established amplification structures of Micromegas and GEMs and to new alternative geometries. Some examples that were mentioned at the conference are variations that go under the names of μ-PIC, THGEM, MHSP or COBRA, as well as configurations that combine several different geometries. In particular, a number of varieties of thick GEM-like (THGEM) detectors (also known as large electron multipliers, or LEM) are being actively developed, as Shikma Bressler of the Weizmann Institute of Science described in her review.

Many of the advances that were presented involve the use of new materials – for example, a GEM made out of glass or Teflon – or the implementation of electrodes with resistive materials, the main goal being to limit the size and rate of discharges and their potential damage. Advances on the GridPix idea – the use of a Micromegas mesh post-processed on top of a Timepix chip – also go in the direction of adding a resistive layer to limit discharges and attracted plenty of interest. Completely new approaches were also presented, such as the “piggyback Micromegas” that separates the Micromegas from the actual readout by a ceramic layer, so that the signal is read by capacitive coupling and the readout is immune to discharges.

The presence of CERN’s Rui de Oliveira to review the technical advances in MPGD fabrication techniques and industrialization is already a tradition at these meetings. Current efforts focus on the challenges presented by projects that require GEM and Micromegas detectors with larger areas. Another tradition is to hear Rob Veenhof of Uludağ University and the RD51 collaboration review the current situation in the simulation of electron diffusion, amplification and signal detection in gas, as well as the corresponding software tools. Current advances are allowing the community to progressively understand the performance of MPGDs at the microphysics level. Finally, although electronics issues were present in many of the talks, the participants especially enjoyed a pedagogical talk by CERN’s Alessandro Marchioro about the trends in microelectronics and how they might affect future detectors in the field. These topics were studied in more detail in the sessions of the RD51 collaboration meeting that came after the conference at the same venue. Fortunately, there was an opportunity to relax before that meeting, with a one-day excursion to the installations of the Canfranc Underground Laboratory in the Spanish Pyrenees and St George’s castle in the town of Jaca.

The vitality of the MPGD community resides in the relatively large number of young researchers who came to Zaragoza eager to present their work as a talk or as one of the 40 posters that were displayed in the coffee and lunch hall during the week. Three of those young researchers – Michael Lupberger of the University of Bonn, Diego González-Díaz of the University of Zaragoza and Takeshi Fujiwara of the University of Tokyo – received the Charpak Prize to reward their work. This award was first presented at MPGD2011 in Japan and the hope is that it becomes formally established in the MPGD community on future occasions.

Time will tell which of the many ideas that are now being put forward will eventually become established but the creativity of the community is remarkable and one of its most important assets. References to this creativity – and to the younger generation of researchers who foster it – were recurrent throughout the conference. At the banquet, by the Ebro riverbank under the shadow of the tall towers of the Basílica del Pilar, several senior members gave advice to the new generation of MPGD researchers and proposed a toast to them – a blessing for the field.

The post Micropattern-detector experts meet in Zaragoza appeared first on CERN Courier.

]]>
ATLAS undergoes some delicate gymnastics https://cerncourier.com/a/atlas-undergoes-some-delicate-gymnastics/ https://cerncourier.com/a/atlas-undergoes-some-delicate-gymnastics/#respond Fri, 27 Sep 2013 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-undergoes-some-delicate-gymnastics/ A huge programme of consolidation and improvements is under way at Point 1.

The post ATLAS undergoes some delicate gymnastics appeared first on CERN Courier.

]]>
The ATLAS detector

The LHC’s Long Shutdown 1 (LS1) is an opportunity that the ATLAS collaboration could not miss to improve the performance of its huge and complex detector. Planning began almost three years ago to be ready for the break and to produce a precise schedule for the multitude of activities that are needed at Point 1 – where ATLAS is located on the LHC. Now, a year after the famous announcement of the discovery of a “Higgs-like boson” on 4 July 2012 and only six months after the start of the shutdown, more than 800 different tasks have already been accomplished in more than 250 work packages. But what is ATLAS doing and why this hectic schedule? The list of activities is long, so only a few examples will be highlighted here.

The inner detector

One of the biggest interventions concerns the insertion of a fourth and innermost layer of the pixel detector – the insertable B-layer (IBL). The ATLAS pixel detector is the largest pixel-based system at the LHC. With about 80 million pixels, until now it has covered a radius from 12 cm down to 5 cm from the interaction point. At its conception, the collaboration already thought that it could be updated after a few years of operation. An additional layer at a radius of about 3 cm would allow for performance consolidation, in view of the effects of radiation damage to the original innermost layer at 5 cm (the b-layer). The decision to turn this idea into reality was taken in 2008, with the aim of installation around 2016. However, fast progress in preparing the detector and moving the long shutdown to the end of 2012 boosted the idea and the installation goal was moved forward by two years.

Making space

To make life more challenging, the collaboration decided to build the IBL using not only well established planar sensor technology but also novel 3D sensors. The resulting highly innovative detector is a tiny cylinder that is about 3 cm in radius and about 70 cm long but it will provide the ATLAS experiment with another 12 million detection channels. Despite its small dimensions, the entire assembly – including the necessary services – will need an installation tool that is nearly 10 m long. This has led to the so-called “big opening” of the ATLAS detector and the need to lift one of the small muon wheels to the surface.

The “big opening” of ATLAS is a special configuration where at one end of the detector one of the big muon wheels is moved as far as possible towards the wall of the cavern, the 400-tonne endcap toroid is moved laterally towards the surrounding path structure, the small muon wheel is moved as far as the already opened big wheel and then the endcap calorimeter is moved out by about 3 m. But that is not the end of the story. To make more space, the small muon wheel must be lifted to the surface to allow the endcap calorimeter to be moved further out against the big wheels.

This opening up – already foreseen for the installation of the IBL – became more worthwhile when the collaboration decided to use LS1 to repair the pixel detector. During the past three years of operation, the number of pixel modules that have stopped being operational has risen continuously from the original 10–15 up to 88 modules, at a worryingly increasing rate. Back in 2010, the first concerns triggered a closer look at the module failures and it was clear that in most of the cases the modules were in a good state but that something in the services had failed. This first impression was then backed up by substantial statistics, with up to 88 modules having failed by mid-2012.

In 2011, the ATLAS pixel community decided to prepare new services for the detector – code-named nSQP for “new service quarter panels”. In January 2013, the collaboration decided to deploy the nSQP not only to fix the failures of the pixel modules and to enhance the future read-out capabilities for two of the three layers but also to ease the task of inserting the IBL into the pixel detector. This decision implied having to extract the pixel detector and take it to the clean-room building on the surface at Point 1 to execute the necessary work. The “big opening” therefore became mandatory.

"Big opening" of ATLAS

The extraction of the pixel detector was an extremely delicate operation but it was performed perfectly and a week in advance of the schedule. Work on both the original pixels and the IBL is now in full swing and preparations are under way to insert the enriched four-layer pixel detector back into ATLAS. The pixel detector will then contain 92 million channels – some 90% of the total number of channels in ATLAS.

But that is not the end of the story for the ATLAS inner detector. Gas leaks appeared last year during operation of the transition radiation tracker (TRT) detector. The opening of the inner detector plates to access the pixel detector was also exploited for a dedicated intervention to cure as many leaks as possible, using techniques that are usually deployed in surgery.

Further improvements

Another important improvement for the silicon detectors concerns the cooling. The evaporative cooling system that was based on a complex compressor plant has performed satisfactorily, even though it has given rise to a variety of problems and interventions. The system allowed operating temperatures to be set to –20 °C with the possibility of going down to –30 °C, although the lower value has not been used so far as radiation damage to the detector is still in its infancy. However, the compressor plant needed continual attention and maintenance. The decision was therefore taken to build a second plant that was based on the thermosyphon concept, where the pressure that is required is obtained without a compressor, using instead the gravity advantage offered by the 90-m-deep ATLAS cavern. The new plant has been built and is now being commissioned, while the original plant has been refurbished and will serve as a redundant (back-up) system. In addition, the IBL cooling is based on CO₂ cooling technology and a new redundant plant is being built to be ready for IBL operations.
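
The gravity argument can be made quantitative with a simple hydrostatic estimate (illustrative only; it assumes the coolant is C3F8, the fluid used by the existing evaporative system, with a liquid density of roughly 1.35 g/cm³ – neither figure is given in the text):

```latex
\Delta p = \rho g h \approx 1350~\mathrm{kg\,m^{-3}} \times 9.81~\mathrm{m\,s^{-2}} \times 90~\mathrm{m} \approx 1.2 \times 10^{6}~\mathrm{Pa} \approx 12~\mathrm{bar},
```

so the liquid column between a condenser at the surface and the cavern can by itself supply a delivery pressure of the order of ten bar, which is why no compressor is needed.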

Both the semiconductor tracker and the pixel detector are also being consolidated. Improvements are being made to the back-end read-out electronics to cope with the higher luminosities that will go beyond twice the LHC design luminosity.

Extracting the pixel detector

Lifting the small muon wheel to the surface – an operation that had never been done before – was a success. The operation was not without difficulties because of the limited amount of space for manoeuvring the 140-tonne object to avoid collisions with other detectors, crates and the walls of the cavern and access shaft. Nevertheless, it was executed perfectly thanks to highly efficient preparation and the skill of the crane drivers and ATLAS engineers, with several dry runs done on the surface. Not to miss the opportunity, the few problematic cathode-strip chambers on the small wheel that was lifted to the surface will be repaired. A specialized tool is being designed and fabricated to perform this operation in the small space that is available between the lifting frame and the detector.

Many other tasks are foreseen for the muon spectrometer. The installation of a final layer of chambers – the endcap extensions – which was staged in 2003 for financial reasons, is now complete: the chambers on one side of the detector were installed during previous mid-winter shutdowns, while those on the other side were installed during the first three months of LS1. In parallel, a big campaign to check for and repair leaks has started on the monitored drift tubes and resistive-plate chambers, with good results so far. As soon as access allows, a few problematic thin-gap chambers on the big wheels will be exchanged. Construction of some 30 new chambers has been under way for a few months and their installation will take place during the coming winter.

At the same time, the ATLAS collaboration is improving the calorimeters. New low-voltage power supplies are being installed for both the liquid-argon and tile calorimeters to give a better performance at higher luminosities and to correct issues that have been encountered during the past three years. In addition, a broad campaign of consolidation of the read-out electronics for the tile calorimeter is ongoing because it is many years since it was constructed. Designing, prototyping, constructing and testing new devices like these has kept the ATLAS calorimeter community busy during the past four years. The results that have been achieved are impressive and life for the calorimeter teams during operation will become much better with these new devices.

Improvements are also under way for the ATLAS forward detectors. The LUCID luminosity monitor is being rebuilt in a simplified way to make it more robust for operations at higher luminosity. All of the four Roman-pot stations for the absolute luminosity monitor, ALFA, located at 240 m from the centre of ATLAS in the LHC tunnel, will soon be in laboratories on the surface. There they will undergo modifications to implement wake-field suppression measures that will fight against the beam-induced increase in temperature that was suffered during operations in 2012. There are other plans for the beam-conditions monitor, the diamond-beam monitor and the zero-degree calorimeters. The activities are non-stop everywhere.

The infrastructure

All of the above might seem to be an enormous programme but it does not touch on the majority of the effort. The consolidation work ranges from the improvements to the evaporative cooling plants that have already been mentioned to all aspects of the electrical infrastructure and more. Here are a few examples from a long list.

Installation of a new uninterruptible power supply is ongoing at Point 1, together with replacement of the existing one. This is to avoid power glitches, which have affected the operation of the ATLAS detector on some occasions. Indeed, the whole electrical installation is being refreshed.

The cryogenic infrastructure is being consolidated and improved to allow completely separate operation of the ATLAS solenoid and toroid magnets. Redundancy is implemented everywhere in the magnet systems to limit downtime. Such downtime has, so far, been small enough to be unnoticeable in ATLAS data-taking but it could create problems in future.

All of the beam pipes will be replaced with new ones. In the inner detector, a new beryllium pipe with a smaller diameter to allow space for the IBL has been constructed and installed already in the IBL support structure. All of the other stainless-steel pipes will be replaced with aluminium ones to improve the level of background everywhere in ATLAS and minimize the adverse effects of activation.

A back-up for the ATLAS cooling towers is being created via a connection to existing cooling towers for the Super Proton Synchrotron. This will allow ATLAS to operate at reduced power, even during maintenance of the main cooling towers. The cooling infrastructure for the counting rooms is also undergoing complete improvement with redundancy measures inserted everywhere. All of these tasks are the result of a robust collaboration between ATLAS and all CERN departments.

LS1 is not, then, a period of rest for the ATLAS collaboration. Many resources are being deployed to consolidate and improve all possible aspects of the detector, with the aim of minimizing downtime and its impact on data-taking efficiency. Additional detectors are being installed to improve ATLAS’s capabilities. Only a few of these have been mentioned here. Others include, for example, even more muon chambers, which are being installed to fill any possible instrumental cracks in the detector.

All of this effort requires the co-ordination and careful planning of a complicated gymnastics of heavy elements in the cavern. ATLAS will be a better detector at the restart of LHC operations, ready to work at higher energies and luminosities for the long period until LS2 – and then the gymnastics will begin again.

The post ATLAS undergoes some delicate gymnastics appeared first on CERN Courier.

]]>
AIDA boosts detector development https://cerncourier.com/a/aida-boosts-detector-development/ https://cerncourier.com/a/aida-boosts-detector-development/#respond Mon, 19 Aug 2013 13:23:44 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/aida-boosts-detector-development/ An EU project is enabling detector solutions for upgraded and future accelerators.

The post AIDA boosts detector development appeared first on CERN Courier.

]]>
Conceptual structure of a pixel detector

Research in high-energy physics at particle accelerators requires highly complex detectors to observe the particles and study their behaviour. In the EU-supported project on Advanced European Infrastructure for Detectors at Accelerators (AIDA), more than 80 institutes from 23 European countries have joined forces to boost detector development for future particle accelerators in line with the European Strategy for Particle Physics. These include the planned upgrade of the LHC, as well as new linear colliders and facilities for neutrino and flavour physics. To fulfil its aims, AIDA is divided into three main activities: networking, joint research and transnational access, all of which are progressing well two years after the project’s launch.

Networking

AIDA’s networking activities fall into three work packages (WPs): the development of common software tools (WP2); microelectronics and detector/electronics integration (WP3); and relations with industry (WP4).

Building on and extending existing software and tools, the WP2 network is creating a generic geometry toolkit for particle physics together with tools for detector-independent reconstruction and alignment. The design of the toolkit is shaped by the experience gained with detector-description systems implemented for the LHC experiments – in particular LHCb – as well as by lessons learnt from various implementations of geometry-description tools that have been developed for the linear-collider community. In this context, the Software Development for Experiments and LHCb Computing groups at CERN have been working together to develop a new generation of software for geometry modellers. These are used to describe the geometry and material composition of the detectors and as the basis for tracking particles through the various detector layers.

This work uses the geometrical models in Geant4 and ROOT to describe the experimental set-ups in simulation or reconstruction programmes and involves the implementation of geometrical solid primitives as building blocks for the description of complex detector arrangements. These include a large collection of 3D primitives, ranging from simple shapes such as boxes, tubes or cones to more complex ones, as well as their Boolean combinations. Some 70–80% of the effort spent on code maintenance in the geometry modeller is devoted to improving the implementation of these primitives. To reduce the effort required for support and maintenance and to converge on a unique solution based on high-quality code, the AIDA initiative has started a project to create a “unified-solids library” of the geometrical primitives.
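
The role of such primitives and their Boolean combinations is easy to illustrate with a toy constructive-solid-geometry sketch (purely illustrative Python; the class names are invented and do not reproduce the Geant4, ROOT or unified-solids APIs):

```python
import math

class Box:
    def __init__(self, hx, hy, hz):        # half-lengths, as is typical for solid primitives
        self.h = (hx, hy, hz)
    def contains(self, p):
        return all(abs(c) <= h for c, h in zip(p, self.h))

class Tube:
    def __init__(self, rmin, rmax, hz):    # hollow (or full) cylinder along z
        self.rmin, self.rmax, self.hz = rmin, rmax, hz
    def contains(self, p):
        r = math.hypot(p[0], p[1])
        return self.rmin <= r <= self.rmax and abs(p[2]) <= self.hz

class Subtraction:                          # Boolean combination: A minus B
    def __init__(self, a, b):
        self.a, self.b = a, b
    def contains(self, p):
        return self.a.contains(p) and not self.b.contains(p)

# A box with a cylindrical beam hole bored along z - the kind of shape a
# detector-description toolkit builds from primitives plus Boolean operations.
envelope = Subtraction(Box(50, 50, 100), Tube(0, 5, 100))
print(envelope.contains((30, 0, 0)))   # True: inside the box, outside the hole
print(envelope.contains((0, 0, 0)))    # False: inside the beam hole
```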

Enabling the community to access the most advanced semiconductor technologies – from nanoscale CMOS to innovative interconnection processes – is an important aim for AIDA. One new technique is 3D integration, which has been developed by the microelectronic industry to overcome limitations of high-frequency microprocessors and high-capacity memories. It involves fabricating devices based on two or more active layers that are bonded together, with vertical interconnections ensuring the communication between them and the external world. The WP3 networking activity is studying 3D integration to design novel tracking and vertexing detectors based on high-granularity pixel sensors.

Interesting results have already emerged from studies with the FE-Ix series of CMOS chips that the ATLAS collaboration has developed for the read-out of high-resistivity pixel sensors – 3D processing is currently in progress on FE-I4 chips. Now, some groups are evaluating the possibility of developing new electronic read-out chips in advanced CMOS technologies, such as 65 nm, and of using these chips in a 3D process with high-density interconnections at the pixel level. Once the feasibility of such a device is demonstrated, physicists should be able to design a pixel detector with highly aggressive and intelligent architectures for sensing, analogue and digital processing, storage and data transmission (figure 1).

The development of detectors using breakthrough technologies calls for the involvement of hi-tech industry. The WP4 networking activity aims to increase industrial involvement in key detector developments in AIDA and to provide follow-up long after completion of the project. To this end, it has developed the concept of workshops tailored to maximize the attendees’ benefits while also strengthening relations with European industry, including small and medium-sized enterprises (SMEs). The approach is to organize “matching events” that address technologies of high relevance for detector systems and gather key experts from industry and academia with a view to establishing high-quality partnerships. WP4 is also developing a tool called “collaboration spotting”, which aims to monitor through publications and patents the industrial and academic organizations that are active in the technologies under focus at a workshop and to identify the key players. The tool was used successfully to invite European companies – including SMEs – to attend the workshop on advanced interconnections for chip packaging in future detectors that took place in April at the Laboratori Nazionali di Frascati of INFN.

Test beams and telescopes

The development, design and construction of detectors for particle-physics experiments are closely linked with the availability of test beams where prototypes can be validated under realistic conditions or production modules can undergo calibration. Through its transnational access and joint research activities, AIDA is not only supporting test-beam facilities and corresponding infrastructures at CERN, DESY and Frascati but is also extending them with new infrastructures. Various sub-tasks cover the detector activities for the LHC and linear collider, as well as a neutrino activity, where a new low-energy beam is being designed at CERN, together with prototype detectors.


One of the highlights of WP8 is the excellent progress made towards two new major irradiation facilities at CERN. These are essential for the selection and qualification of materials, components and full detectors operating in the harsh radiation environments of future experiments. AIDA has strongly supported the initiatives to construct GIF++ – a powerful γ irradiation facility combined with a test beam in the North Area – and EAIRRAD, which will be a powerful proton and mixed-field irradiation facility in the East Area. AIDA is contributing to both projects with common user infrastructure as well as design and construction support. The aim is to start commissioning and operation of both facilities following the LS1 shutdown of CERN’s accelerator complex.

The current shutdown of the test beams at CERN during LS1 has resulted in a huge increase in demand for test beams at the DESY laboratory. The DESY II synchrotron is used mainly as a pre-accelerator for the X-ray source PETRA III but it also delivers electron or positron beams produced at a fixed carbon-fibre target to as many as three test-beam areas. Its ease of use makes the DESY test beam an excellent facility for prototype testing because this typically requires frequent access to the beam area. In 2013 alone, 45 groups from more than 30 countries, comprising about 200 users, have already accessed the DESY test beams. Many of them received travel support from the AIDA Transnational Access Funds and so far AIDA funding has enabled a total of 130 people to participate in test-beam campaigns. The many groups using the beams include those from the ALICE, ATLAS, Belle II, CALICE, CLIC, CMS, Compass, LHCb, LCTPC and Mu3e collaborations.

Combined beam-telescope

About half of the groups using the test beam at DESY have taken advantage of a beam telescope to provide precise measurements of particle tracks. The EUDET project – AIDA’s predecessor in the previous EU framework programme (FP6), which was aimed at detector R&D for an international linear collider – provided the first beam telescope to serve a large user community. For more than five years, this telescope, which was based on Minimum Ionizing Monolithic Active pixel Sensors (MIMOSA), served a large number of groups. Several copies were made – a good indication of success – and AIDA is now providing continued support for the community that uses these telescopes. It is also extending its support to the TimePix telescope developed by institutes involved in the LHCb experiment.

The core of AIDA’s involvement lies in the upgrade and extension of the telescope. For many users who work on LHC applications, a precise reference position is not enough: they also need to know the exact time of arrival of the particle, but it is difficult to find a single system that can provide both position and time at the required precision. Devices with a fast response tend to be less precise in the spatial domain or put too much material in the path of the particle. AIDA therefore combines two technologies: the thin MIMOSA sensors, with their excellent spatial resolution, provide the position, while the ATLAS FEI4 detectors provide time information matched to the LHC bunch structure.

The first beam test in 2012 with a combined MIMOSA-FEI4 telescope was an important breakthrough. Figure 2 shows the components involved in the set-up in the DESY beam. Charged particles from the accelerator – electrons in this case – first traverse three read-out planes of the MIMOSA telescope, followed by the device under test, then the second triplet of MIMOSA planes and then the ATLAS-FEI4 arm. The DEPFET pixel-detector international collaboration was the first group to use the telescope, thus bringing together, within a metre, pixel detectors from three major R&D collaborations.

While combining the precise time information from the ATLAS-FEI4 detector with the excellent spatial resolution of MIMOSA provides the best of both worlds, there is an additional advantage: the FEI4 chip has a self-triggering capability because it can issue a trigger signal based on the response of the pixels. Overlaying the response of the FEI4 pixel matrix with a programmable mask and feeding the resulting signal into the trigger logic allows triggering on a small area and is more flexible than a traditional trigger based on scintillators. To change the trigger definition, all that is needed is to upload a new mask to the device. This turns out to be a useful feature if the prototypes under test cover a small area.
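Conceptually, the mask-based trigger amounts to an overlap test between the hit map and the programmable mask. The sketch below is purely illustrative: the data layout is an assumption and not the actual FE-I4 firmware or read-out interface, although the 80 × 336 matrix dimensions follow the FE-I4 pixel array.

```cpp
#include <array>

// Illustrative only: an FE-I4-like pixel plane of 80 columns x 336 rows,
// represented as a plain boolean matrix for clarity.
constexpr int kCols = 80;
constexpr int kRows = 336;
using PixelPlane = std::array<std::array<bool, kRows>, kCols>;

// Issue a trigger only if at least one hit pixel lies inside the
// programmable mask, i.e. within the small area covered by the prototype.
bool SelfTrigger(const PixelPlane& hits, const PixelPlane& mask)
{
  for (int c = 0; c < kCols; ++c)
    for (int r = 0; r < kRows; ++r)
      if (hits[c][r] && mask[c][r]) return true;  // hit inside the region of interest
  return false;  // nothing inside the masked region: no trigger
}
```

Changing the trigger region then really is just a matter of loading a different mask, as described above.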

CALICE tungsten calorimeter

Calorimeter development in AIDA WP9 is mainly motivated by experiments at possible future electron–positron colliders, as defined in the International Linear Collider and Compact Linear Collider studies. These will demand extremely high-performance calorimetry, which is best achieved using a finely segmented system that reconstructs events using the so-called particle-flow approach to allow the precise reconstruction of jet energies. The technique works best with an optimal combination of tracking and calorimeter information and has already been applied successfully in the CMS experiment. Reconstructing each particle individually requires fine cell granularity in 3D and has spurred the development of novel detection technologies, such as silicon photo-multipliers (SiPMs) mounted on small scintillator tiles or strips, gaseous detectors (micro mesh or resistive plate chambers) with 2D read-out segmentation and large-area arrays of silicon pads.

After tests of sensors developed by the CALICE collaboration in a tungsten stack at CERN (figure 3) – in particular to verify the neutron and timing response at high energy – the focus is now on the realization of fully technological prototypes. These include power-pulsed embedded data-acquisition chips required for the particle-flow-optimized detectors for a future linear collider and they address all of the practical challenges of highly granular devices – compactness, integration, cooling and in situ calibration. Six layers (256 channels each) of a fine-granularity (5 × 5 mm^2) silicon-tungsten electromagnetic calorimeter are being tested in electron beams at DESY this July (figure 4). At the same time, the commissioning of full-featured scintillator hadron-calorimeter units (140 channels each) is progressing at a steady pace. A precision tungsten structure and read-out chips are also being prepared for the forward calorimeters to test the radiation-hard sensors produced by the FCAL R&D collaboration.

Five scintillator HCAL units

The philosophy behind AIDA is to bring together institutes to solve common problems so that once the problem is solved, the solution can be made available to the entire community. Two years on from the project’s start – and halfway through its four-year lifetime – the highlights described here, from software toolkits to a beam-telescope infrastructure to academia-industry matching, illustrate well the progress that is being made. Ensuring long-term user support for all of this equipment will be the main task in a new proposal to be submitted next year to the EC’s Horizon 2020 programme. New innovative activities to be included will be discussed during the autumn within the community at large.

ALICE through a gamma-ray looking glass https://cerncourier.com/a/alice-through-a-gamma-ray-looking-glass/ https://cerncourier.com/a/alice-through-a-gamma-ray-looking-glass/#respond Fri, 19 Jul 2013 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/alice-through-a-gamma-ray-looking-glass/ The ALICE experiment is optimized to perform in the environment of heavy-ion collisions at the LHC, which can produce thousands of particles. Its design combines an excellent vertex resolution with a minimal thickness of material. It has excellent performance for particle identification in a large range of momenta as it employs essentially all of the […]

The ALICE experiment is optimized to perform in the environment of heavy-ion collisions at the LHC, which can produce thousands of particles. Its design combines an excellent vertex resolution with a minimal thickness of material. It has excellent performance for particle identification in a large range of momenta as it employs essentially all of the known relevant techniques. Accurate knowledge of the geometry and chemical composition of the detectors is particularly important for tracking charged particles, for the calculation of energy loss and efficiency corrections, as well as for various physics analyses such as those involving the antiproton–proton ratio, direct photons and electrons from semileptonic decays of heavy-flavour hadrons.

The γ-rays produced in proton–proton collisions at the LHC (mainly from π0 decays), which undergo pair production in the ALICE experiment, provide a precise 3D image of the detector, including the inaccessible innermost parts. The process is almost exactly the same as in 1895 when Wilhelm Röntgen produced an X-ray image of his wife’s hand – the inner parts of the body could be seen for the first time without surgery. The main difference lies in the energy of the radiation – of the order of 100 keV for Röntgen’s X-rays compared with more than 1.02 MeV for the γ-rays from pair conversions. Importantly for the ALICE experiment, it allows the implementation of the detector geometry in GEANT Monte Carlo simulations to be checked.
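The quoted threshold is simply the pair-production requirement that the photon carry at least twice the electron rest energy:

```latex
E_{\gamma} > 2\,m_{e}c^{2} = 2 \times 0.511\ \mathrm{MeV} \approx 1.022\ \mathrm{MeV}
```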

To produce the γ-ray image, photons from pair conversions are reconstructed through the tracking of electron–positron pairs using a secondary-vertex algorithm. Contamination from other secondary particles, such as K^0_S, Λ and Λ̄, is reduced by exploiting ALICE’s excellent capabilities for particle identification. In this case, the analysis uses the specific energy-loss signal in the time-projection chamber (TPC) as well as the signal in the time-of-flight (TOF) detector. Photons from pair conversions provide an accurate position for the conversion vertex, directional information and a momentum resolution given by that for the charged particles – an advantage over calorimeter measurements at low transverse momentum.

Figure 1 shows the γ-ray picture of the ALICE experiment, i.e. the Y-distribution versus X-distribution of the reconstructed photon conversion vertices, compared with the actual arrangement used in the 2012 run. The different layers of the inner tracking system and the TPC, as well as their individual components (ladders, thermal shields, vessels, rods and drift gas), are clearly visible up to a radius of 180 cm. To obtain a quantitative comparison, the radial distribution of reconstructed photon conversion vertices normalized by the number of charged particles in the acceptance is plotted together with the Monte Carlo distributions in figure 2.
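In analysis terms, producing such an image amounts to filling histograms with the reconstructed conversion points. The sketch below uses ROOT histogram classes with an invented vertex container; it is not the actual ALICE analysis code.

```cpp
#include <cmath>
#include <vector>
#include "TH1.h"
#include "TH2.h"

struct ConversionVertex { double x, y; };  // assumed user-level structure, coordinates in cm

// Fill the "gamma-ray picture" (y versus x) and the radial distribution of
// conversion points; normalisation by the number of charged particles in the
// acceptance would be applied afterwards, as described in the text.
void FillConversionMaps(const std::vector<ConversionVertex>& vertices,
                        TH2F& hXY, TH1F& hR)
{
  for (const auto& v : vertices) {
    hXY.Fill(v.x, v.y);
    hR.Fill(std::hypot(v.x, v.y));  // conversion radius R = sqrt(x^2 + y^2)
  }
}
```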

This indicates an excellent knowledge of the material thickness of the ALICE experiment: up to a radius of 180 cm and in the pseudorapidity region |η| < 0.9, the thickness is 11.4±0.5(sys.)% of a radiation length. The systematic error is obtained from a quantitative comparison of the data with the Monte Carlo distributions, after taking into account the limited knowledge of the true photon sample, of the photon reconstruction efficiency and of the geometry and chemical composition of the detectors.
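For orientation, that material budget can be translated into a conversion probability using the standard high-energy approximation; the resulting figure is an illustrative estimate, not a number quoted by the collaboration.

```latex
P_{\mathrm{conv}} \simeq 1 - e^{-7t/9}\,, \qquad t = 0.114\,X_{0} \;\Rightarrow\; P_{\mathrm{conv}} \approx 8.5\%
```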

The accuracy achieved, as well as the full azimuthal acceptance of the central barrel, allows converted photons to be used in physics analyses. So far, photons from pair conversions have been used for the identification of neutral mesons in proton–proton collisions at 7 TeV down to a transverse momentum of 0.3 GeV/c – the first time in a collider experiment. Moreover, a direct photon signal observed in lead–lead collisions at √s_NN = 2.76 TeV has been measured with the photon-conversion method. The latter measurement demonstrates that the quark–gluon plasma formed at the LHC is the hottest matter ever made in the laboratory (CERN Courier December 2012 p6).

The Tevatron’s data continue to excite https://cerncourier.com/a/the-tevatrons-data-continue-to-excite/ https://cerncourier.com/a/the-tevatrons-data-continue-to-excite/#respond Fri, 19 Jul 2013 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-tevatrons-data-continue-to-excite/ Latest results from 10 years of proton–antiproton running.

CDF and DØ

The first collisions occurred in Fermilab’s Tevatron in 1985. Over the following years, both the energy and the luminosity increased and, by the time operations ceased in 2011, the collision luminosity had reached 7 × 10^32 cm^-2 s^-1, more than 350 times the original design value. The Tevatron’s unique feature was its collisions of protons with antiprotons. While it requires substantial technical effort to make antimatter – the Tevatron’s antiproton source was the world’s most powerful producer of antimatter but still incapable by a long way of the destruction imagined in Angels & Demons – the study of proton–antiproton collisions provides the opportunity to study quark–antiquark interactions against low backgrounds. By the final shutdown, a total luminosity of 12 fb^-1 had been delivered to each of the two gigantic Tevatron experiments, CDF and DØ, corresponding to around 5 × 10^14 proton–antiproton interactions at a collision energy of 2 TeV.

Images of the two experiments (figure 1) appeared on the front pages of many magazines, in artworks and on TV shows. These modern engineering marvels were largely innovative and demonstrated, for example, the power of a silicon detector in a hadron-collider environment, multi-level triggering, uranium–liquid-argon calorimetry and the ability to identify b quarks. From the collisions provided, the teams recorded the 2 × 10^10 most interesting events to tape for detailed examination offline. The analysis effort included searches and studies of new particles, such as the Higgs boson, and precision studies of the parameters of the Standard Model. Many of the exciting results obtained before the end of 2011 have already been summarized in CERN Courier. This article presents an update on some of the results obtained by CDF and DØ over the past two years.

The search for the Higgs boson was among the central physics goals of the programme for Tevatron Run II (2001–2011) and the challenge of understanding the origin of mass in the Standard Model attracted world-leading experimentalists to Fermilab. In 2005, the data sets provided by the Tevatron reached the point where the search for a substantial number of Higgs events above backgrounds could start. From then until 2012, the analysis teams not only provided increasingly stringent direct mass exclusions but also indirectly reduced the mass range where the Higgs boson could exist, using highly precise measurements of the masses of the top quark and the W boson (see below). By early 2011, results from the Tevatron and CERN’s Large Electron–Positron collider had reduced the allowed mass range to 125±10 GeV, so the joke among experimentalists at the time was: “We know the mass of the Higgs, we just don’t know if it exists.”

Higgs-boson searches

The CDF and DØ collaborations developed many new experimental methods in their hunt for the Higgs boson, from the combined searches of hundreds of individual channels for the boson’s production and decay to an extremely precise understanding of the backgrounds and a high-efficiency reconstruction of the Higgs-decay objects. The Tevatron’s high luminosity was the key, because only a few events were expected to remain in the signal region following all of the selections. The unique feature of proton–antiproton collisions was critical for the searches, especially in the decay to a pair of b quarks – the most probable channel for Higgs decay at a mass of 125 GeV. While cross-sections for Higgs production increase with energy and are much higher at the LHC, the increase in the main backgrounds is even faster, so the signal-to-background ratio for this main Higgs-decay channel remains favourable at the Tevatron.

By the early summer of 2012, both CDF and DØ had analysed the full Tevatron data set in all sensitive Higgs-decay modes: bb, WW, ττ, γγ and ZZ. The results included not only larger data sets than before but also substantially improved analysis methods. Multivariate analysis was used to take full advantage of the information available in each event, rather than using the more traditional cuts on kinematical parameters. Such techniques optimize the ratio of signal to background in a multi-dimensional phase space and were critical for reaching sensitivity to the Higgs-boson signal.


At the Tevatron, the primary search sensitive to Higgs masses below around 135 GeV comes from the associated production of the Higgs boson with W or Z bosons, with the Higgs decaying to a pair of b quarks. This topology increases the signal-to-background ratio: the decay to a pair of b quarks has the highest probability, while the extra W or Z boson minimizes backgrounds by providing useful features both for triggering and for offline event selection. Nevertheless, reconstructing jets from b quarks – which sometimes consist of hundreds of particles – with high precision is challenging. This is why the expected shape of the Higgs signal is rather wide, with a mass resolution of around 15 GeV, in comparison with searches in the channels where single particles, such as a pair of photons or leptons, are used to reconstruct the mass of the Higgs.

The CDF and DØ collaborations then combined their search results that summer. The excess observed around a mass of 125 GeV, which the experiments had seen for the previous two years, became even more pronounced (figure 2). The significance of the excess was close to 3σ. What became even more exciting was that in the search channels where the Higgs decays to a pair of b quarks only, the significance of the excess exceeded 3σ, indicating evidence for the production and decay of a Higgs boson at 3.1σ (Aaltonen et al. 2012). It was an extremely exciting summer. As the Tevatron passed the baton for Higgs searches (and now studies) to the LHC, its experiments had established evidence of the production and decay of a Higgs boson in the most-probable decay channel to a pair of fermions.

Precise measurements

The Standard Model is one of the most fundamental and accurate theories of nature, so precision measurements of its parameters figure among the major goals and results of the Tevatron’s physics programme. Those perfected over the past two years include the determination of the masses of the top quark and the W boson, both of which are fundamental parameters of the Standard Model.

Since the discovery of the top quark at the Tevatron in 1995, measurements of its mass have improved by more than an order of magnitude. In addition to the larger data sets, from some 10 events in 1995 to many thousands in 2012, the analysis methods have also been improved dramatically. One of the innovations developed for precision determination of the top mass – the matrix-element method – is now used in many other studies in particle physics.

In the channel that allows the most accurate mass measurement, the final decay products of the top-quark pair are: a lepton (electron or muon); missing energy from the escaping neutrino; a pair of light-quark jets from the decay of the W boson; and two b-quark jets. Determination of the energy of the jets is the most challenging task for precision measurement. In addition to using complex methods to determine the jet energy based on energy conservation in di-jet and γ+jet events, the fact that a pair of light jets comes from the decay of a W boson with an extremely well known mass (see below) is critical in obtaining high precision for the top-quark mass.

Using a large fraction of the Tevatron data, CDF and DØ reached a precision in the measurement of the top-quark mass of less than 1 GeV (figure 3), i.e. a relative accuracy of 0.5% (Tevatron Electroweak Working Group 2013). This is based on the combination of results from both experiments in many different channels. All of the results are in good agreement, demonstrating the validity of the methods that were developed and used to measure the top-quark mass at the Tevatron. Analyses of the full Tevatron data set are in progress and these should improve the accuracy by a further 20–30%. Experiment collaborations at both the LHC (ATLAS and CMS) and the Tevatron have formed a group to combine the results of the top-quark mass measurements from all four experiments. Such a combination will have a precision that is substantially better than individual measurements, because many of the uncertainties are not correlated between the experiments.

The measurement of the mass of the W boson requires even higher precision. By the end of the Tevatron’s operation, the combined Tevatron measurement for this particle with a mass of 80 GeV reached 31 MeV, or 0.04%. A precise value of the mass of the W boson is critical for understanding the Standard Model; in addition to being closely related to the masses of the Higgs boson and the top quark, it defines the parameters of many important processes. The main decay channel used to measure the W mass is the decay to a lepton (muon or electron) and a neutrino (“missing energy”). The precision calibration of the lepton energy is obtained from resonances with well known masses, such as the J/ψ or the Z boson, while the measurement of missing energy is calibrated using different methods for cross-checks. The calibration of the lepton energy is the most difficult part of the measurement; larger data sets provide more events and improve the accuracy of the measurement.
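For reference, the observable most commonly fitted in such a measurement is the transverse mass built from the lepton and the missing transverse momentum; the textbook definition is given below (the CDF and DØ fits also use the lepton-pT and missing-energy spectra).

```latex
m_{T} = \sqrt{\,2\,p_{T}^{\ell}\,p_{T}^{\nu}\,\bigl(1-\cos\Delta\phi_{\ell\nu}\bigr)}
```

The sharp kinematic edge of this distribution near the W-boson mass is what the fits exploit, which is why the lepton-energy calibration described above is so critical.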

Measured values of the top-quark mass

With up to around 50% of the Tevatron data set, the combined analysis of CDF and DØ gives the mass of the W boson to be 80.387 GeV with an accuracy of 16 MeV – twice as good as only a year previously (Tevatron Electroweak Working Group 2013). The accuracy is now driven by systematic uncertainties. In order to reduce them, careful work and analysis of more data are needed; a precision of around 10 MeV should be reachable using the full data set. Such accuracies were once thought to be impossible to achieve in a “dirty” hadron-collider environment.

In the Standard Model, the masses of the Higgs boson, W boson and top quark are closely related and a cross-check of the relationship is one of the model’s most stringent tests. Figure 4 shows recent results for the top-quark mass (from the Tevatron), the W-boson mass (dominated by the Tevatron, with a world-average accuracy of 15 MeV vs 16 MeV Tevatron only) and the mass of the Higgs boson, as measured by the LHC experiments. The good agreement demonstrates the validity of the Standard Model with high precision.

At its inception, researchers had not expected the Tevatron to be the precision b factory that it became. However, with the success of the silicon vertex detectors in identifying the vertices of the decays of mesons and baryons containing b quarks, the copious production of these b hadrons and the extremely well understood triggers, detectors and advanced analysis techniques, the Tevatron has proved to be extremely productive in this arena. A large number of new mesons and baryons have been discovered there and the properties of particles containing b quarks have been studied with high precision, including the measurement of the oscillation frequency of the Bs mesons.

Search for rare decay

Studies of particles with b quarks provide an indirect way to look for physics beyond the Standard Model. The rate of the rare decay of the Bs meson to a pair of muons is tiny in the Standard Model but new physics models, including some versions of supersymmetry, predict enhancements. Figure 5 shows how the steady improvements in the Tevatron limits on the decay rate reached around 10^-8 by 2012, as more data and more elaborate analysis methods were developed by CDF and DØ.

In late 2011, the ATLAS collaboration presented results indicating the existence of a new particle, which was interpreted as an excited state of a bb̄ pair, χb(3P). It is always important to confirm observations of a new particle with independent measurements and even more important to see such a particle at another accelerator and detector. Within just a couple of months, the DØ collaboration confirmed the observation by ATLAS (Abazov et al. 2012). This was the first time that a discovery at the LHC was confirmed using data already collected at the Tevatron.

Many important studies performed at the Tevatron measure properties of the strong force, which holds together protons and neutrons in the nucleus and is described by the theory of QCD. These include extremely accurate studies of the production of jets and of W and Z bosons accompanied by jets. The Tevatron articles that provide input for the development of QCD run to tens of pages and contain dozens of plots and tables documenting – with extremely high precision – the details of interactions between strongly interacting particles.

Running of the strong coupling constant

One unusual property of the strong interaction is that, contrary to the electromagnetic and gravitational interactions, where the force increases as objects come closer together, the interaction of quarks becomes stronger as they move apart. The experiments at the Tevatron studied the strength of the strong force versus the distance between quarks (the running of the strong coupling constant) and verified that the strong coupling steadily decreases as the distance shrinks, down to separations of around 5 × 10^-16 cm (figure 6).
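For orientation, the leading-order (one-loop) form of this running is simple enough to evaluate directly; the snippet below uses the textbook formula with the world-average coupling at the Z mass as reference, not the higher-order treatment applied in the Tevatron analyses.

```cpp
#include <cmath>

// One-loop running of the strong coupling, evolved from a reference value at
// the Z-boson mass: alpha_s(Q) = alpha_s(MZ) / (1 + b0 * alpha_s(MZ) * ln(Q^2/MZ^2)).
// Leading-order textbook formula, for illustration only.
double AlphaS(double Q /* GeV */, double alphaSMZ = 0.118,
              double mZ = 91.19, int nf = 5)
{
  const double pi = 3.14159265358979;
  const double b0 = (33.0 - 2.0 * nf) / (12.0 * pi);
  return alphaSMZ / (1.0 + b0 * alphaSMZ * std::log((Q * Q) / (mZ * mZ)));
}
// Example: AlphaS(10.) is roughly 0.17 and AlphaS(200.) roughly 0.11 -- the coupling
// weakens as the momentum transfer grows, i.e. as the distance probed shrinks.
```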

During the last month of the Tevatron run in September 2011, the CDF and DØ experiments collected data at energies below 2 TeV, going all of the way down to 0.3 TeV in the centre of mass. Such data are useful for studies of the energy dependence of the strong interaction and for comparison with results from previous colliders, such as the SppS proton–antiproton collider at CERN. An interesting recent measurement is the energy dependence of the “underlying event” in the hard scattering of the proton and antiproton – that is, everything except the two outgoing jets from the pair of hard-scattered quarks (figure 7).

There are many instances when the course of physics changed because experimental results did not fit the theoretical predictions of the day. Quantum theory and relativity were both born from such “clouds” on the clear horizon of classical physics. Several puzzles remain in the Tevatron data, which are leading to analysis and re-analysis of the full data set. These include the observed anomalous dimuon asymmetry, where the production of negative muon pairs exceeds that of positive pairs, in contradiction with expectations from the Standard Model (Abazov et al. 2011). This result has attracted much attention, because it could relate to the observed matter–antimatter asymmetry in the universe.

CDF data

There is also a puzzling effect in the production of the heaviest known elementary particle, the top quark. When top–antitop pairs are produced, more top quarks follow the direction of the colliding proton than is predicted in the Standard Model (Aaltonen et al. 2013, Abazov et al. 2013). Some of the models of new physics predict such abnormal behaviour.

Both of these “clouds” have a significance of 2–3σ and both are easier to study in the collisions of protons and antiprotons at the Tevatron. Will these measurements point to new physics or will the discrepancies be resolved with the further development of analysis tools or more elaborate theoretical descriptions based on the Standard Model? In any scenario, exciting physics from the Tevatron data is set to continue.

The Tevatron was at the leading edge of the energy frontier in particle-physics research for more than a quarter of a century. More than 1000 students received their doctorates based on data analysis in the Tevatron’s physics programme, which as a result trained generations of particle physicists. So far, in excess of 1000 scientific publications have come out of the programme, helping to shape the understanding of the subnuclear world. Analysis of the Tevatron’s unique data set continues and efforts to preserve the data for future access are in progress. There are sure to be many more exciting results in the coming years.

CMS hunts for low-mass dark matter https://cerncourier.com/a/cms-hunts-for-low-mass-dark-matter/ https://cerncourier.com/a/cms-hunts-for-low-mass-dark-matter/#respond Wed, 22 May 2013 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-hunts-for-low-mass-dark-matter/ Astronomical observations – such as the rotation velocities of galaxies and gravitational lensing – show that more than 80% of the matter in the universe remains invisible.

Astronomical observations – such as the rotation velocities of galaxies and gravitational lensing – show that more than 80% of the matter in the universe remains invisible. Deciphering the nature of this “dark matter” remains one of the most interesting questions in particle physics and astronomy. The CMS collaboration recently conducted a search for the direct production of dark-matter particles (χ), with especially good sensitivity in the low-mass region that has generated much interest among scientists studying dark matter.

Possible hints of a particle that may be a candidate for dark matter have already begun to appear in the direct-detection experiments; most recently the CDMS-II collaboration reported the observation of three candidate events in its silicon detectors with an estimated background of 0.7 events. This result points to low masses, below 10 GeV/c^2, as a region that should be particularly interesting to search. This mass region is where the direct-detection experiments start to lose sensitivity because they rely on measuring the recoil energy imparted to a nucleus by collisions with the dark-matter particles. For a low-mass χ, the kinetic energy transferred to the nucleus in the collision is small, and the detection sensitivity drops as a result.
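The kinematic reason for that loss of sensitivity can be made explicit. The expression below is the standard result for elastic scattering, and the numbers that follow are rough illustrative estimates assuming a silicon target and a typical halo velocity of about 10^-3 c, not figures from the CDMS-II or CMS analyses.

```latex
E_{R}^{\max} = \frac{2\,\mu^{2} v^{2}}{m_{N}}\,, \qquad \mu = \frac{m_{\chi}\, m_{N}}{m_{\chi} + m_{N}}
```

For m_χ = 10 GeV/c^2 on silicon (m_N ≈ 26 GeV/c^2) the maximum recoil is only a few keV, and for m_χ ≈ 1 GeV/c^2 it falls well below 0.1 keV; a collider search based on missing momentum does not suffer from this suppression.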

The CMS collaboration has searched for hints of these elusive particles in “monojet” events, where the dark-matter particles escape undetected, yielding only “missing momentum” in the event. A jet of initial-state radiation can accompany the production of the dark-matter particles, so a search is conducted for an excess of these visible companions compared with the expectation from Standard Model processes. The results are then interpreted within the framework of a simple “effective” theory for their production, where the particle mediating the interaction is assumed to have high mass. An important aspect of the search by CMS is that there is no fall in sensitivity for low masses.


The monojet search requires at least one jet with more than 110 GeV of energy and has the best sensitivity if there is more than 400 GeV of missing momentum. Events with additional leptons or multiple jets are vetoed. After event selection, 3677 events were found in the recent analysis, with an expectation from Standard Model processes of 3663 ± 196 events. The contribution from electroweak processes dominates this expectation, either from pp → Z+jets with the Z decaying to two neutrinos or from pp → W+jets, where the W decays into a lepton and a neutrino, while the lepton escapes detection.
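Schematically, the selection reduces to a handful of requirements; the sketch below encodes the thresholds quoted in the text using invented data structures, not the CMS analysis framework.

```cpp
#include <vector>

// Minimal, illustrative event record (energies and momenta in GeV); not the CMS data format.
struct Event {
  std::vector<double> jetPt;  // jet transverse momenta, assumed ordered by decreasing pT
  double met;                 // missing transverse momentum
  int nLeptons;               // number of identified isolated leptons
};

// Monojet-style selection with the thresholds quoted in the text: a leading jet
// above 110 GeV, large missing momentum, and a lepton veto. (The real analysis
// also restricts additional jet activity.)
bool PassesMonojetSelection(const Event& evt)
{
  if (evt.nLeptons > 0) return false;                                 // lepton veto
  if (evt.met < 400.0) return false;                                  // region of best sensitivity
  if (evt.jetPt.empty() || evt.jetPt.front() < 110.0) return false;   // leading-jet requirement
  return true;
}
```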

With no significant deviation from the Standard Model expectation, CMS has set limits on the production of dark matter, as shown in the figures of the χ–nucleon cross-section versus χ mass. The limits show that CMS has good sensitivity in the low-mass regions of interest, for both spin-dependent and spin-independent interactions.

NOvA detector records first 3D tracks https://cerncourier.com/a/nova-detector-records-first-3d-tracks/ https://cerncourier.com/a/nova-detector-records-first-3d-tracks/#respond Wed, 22 May 2013 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/nova-detector-records-first-3d-tracks/ The NOvA neutrino detector that is currently under construction in northern Minnesota has recorded its first 3D images of particle tracks.

Production of a large shower of energy

The NOvA neutrino detector that is currently under construction in northern Minnesota has recorded its first 3D images of particle tracks. Researchers started up the electronics for a section of the first block of the NOvA detector in March and the experiment was soon catching more than 1000 cosmic rays a second. Once completed in 2014, the NOvA detector will consist of 28 blocks with a total mass of 14,000 tonnes. The blocks are made of PVC tubes filled with scintillating liquid. It will be the largest free-standing plastic structure in the world.

Fermilab, located 810 km south-east of the NOvA site, will start sending neutrinos to Minnesota in the summer. The laboratory is finalizing the upgrades to its Main Injector accelerator, which will provide the protons that produce the neutrino beam. The upgraded accelerator will produce a pulse of muon neutrinos every 1.3 seconds and the goal is to achieve a proton-beam power of 700 kW. A smaller, 330-tonne version of the far detector for NOvA will be built on the Fermilab site to measure the composition of the neutrino beam before it leaves the laboratory.
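As a rough back-of-the-envelope check of what a 700 kW beam delivered every 1.3 seconds implies, the beam power and cycle time translate into the number of protons per pulse; the 120 GeV Main Injector proton energy used below is an assumption not stated in the text.

```latex
N_{p} \approx \frac{P\,T_{\text{cycle}}}{E_{p}}
      = \frac{(7\times10^{5}\ \mathrm{W})\,(1.3\ \mathrm{s})}{120\ \mathrm{GeV}\times 1.602\times10^{-10}\ \mathrm{J/GeV}}
      \approx 4.7\times10^{13}\ \text{protons per pulse}
```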

NOvA detector

The neutrino beam will provide particles for three experiments: MINOS, located 735 km from Fermilab in the Soudan Underground Laboratory, right in the centre of the neutrino beam; NOvA, which is located off axis to probe a specific part of the energy spectrum of the neutrino beam, optimal for studying the oscillation of muon neutrinos into electron neutrinos; and MINERvA, a neutrino experiment located on the Fermilab site.

The NOvA collaboration aims to discover the mass hierarchy of the three known types of neutrino – which type of neutrino is the heaviest and which is the lightest. The answer will shed light on the theoretical framework that has been proposed to describe the behaviour of neutrinos. Their interactions could help to explain the imbalance of matter and antimatter in today’s universe; there is even the possibility that there might be still more types of neutrino.

The NOvA detector will be operated by the University of Minnesota under a co-operative agreement with the Office of Science of the US Department of Energy (DOE). About 180 scientists, technicians and students from 20 universities and laboratories in the US and another 14 institutions around the world are members of the NOvA collaboration. The scientists are funded by the DOE, the US National Science Foundation and funding agencies in the Czech Republic, Greece, India, Russia and the UK.

Imaging gaseous detectors and their applications https://cerncourier.com/a/imaging-gaseous-detectors-and-their-applications/ Fri, 26 Apr 2013 08:30:12 +0000 https://preview-courier.web.cern.ch/?p=104521 Ariella Cattai reviews in 2013 Imaging gaseous detectors and their applications.

By Eugenio Nappi and Vladimir Peskov
Wiley-VCH
Hardback: €139
Paperback: €124.99


For those who belong to the Paleozoic era of R&D on gas detectors, this book evokes nostalgic memories of the hours spent in dark laboratories chasing sparks under black cloths, chasing leaks with screaming “pistols”, taming coronas with red paint and yellow tape and, if you belonged to the crazy ones of Building 28 at CERN, sharing a glass of wine and the incredible maggoty Corsican cheese with Georges Charpak. Subtitle it “The Sorcerer’s Apprentice”, and an innocent student might think they have entered the laboratory of Merlin: creating electrons from each fluttering photon, making magical mixtures of liquids, exotic vapours, funny thin films and all of the strange concoctions that inhabited the era of pioneering R&D and led step-by-step to today’s devices.

The historical memory behind this book recalls all sorts of gaseous detectors that have been dreamt up by visionary scientists over the past 50 years: drift chambers, the ambitious time-projection chamber, resistive plate chambers, ring-imaging Cherenkov counters, parallel-plate avalanche counters, gas electron multipliers, Micromegas, exotic micro-pattern gaseous detectors (MPGDs) and more. All are included, both the ones that behaved and the ones that did not pay off – providing no excuse for anyone to re-make mistakes after reading the book. All of the basic processes that populate gas counters are reviewed and their functioning and limitations are explained in a simple and concise manner offering, to the attentive reader, key secrets and the solutions to obviate hidden traps. From the basic ionization processes to the trickiness of the streamer and breakdown mechanism, from the detection of a single photon to the problems of high rates – only lengthy, hands-on experience supported by a profound understanding of the physics of the detection processes could bring together the material that this book covers. Furthermore, it includes many notable explanations that are crystal clear yet also suitable for the theoretical part of a high-profile educational course.

Coming to more recent times, the use of microelectronics techniques in the manufacturing process of gas counters has paved the road to the new era of MPGDs. The authors follow this route, the detector designs and the most promising future directions and applications, critically but with great expectation, leaving the reader confident of many developments to come.

Each of us will find in this book some corner of our own memory, the significance of our own gaseous detector in recent and current experiments, together with a touch of the new in exploring the many possible applications of gas counters in medicine, biology or homeland security and – when closing the book – the compelling need to stay in the lab. Chapeau!
